hamuraijack 2 days ago [-]
So how would you eval your own claude.md? Each context is unique to the project, team, and personal root claude.md. Do you just take a given task and ask it to redo the same one over and over again against a known solution? Do you just keep using it and "feel" whether or not it's working? How is that different from what everyone is already doing?
sjmaplesec 1 day ago [-]
The review eval tests the language, activation, etc. of a skill. If you're using Tessl, I guess you could quickly move it all into a skill and then run an eval on that. This checks whether the way you write the instructions is being well understood by the agent.
skybrian 3 days ago [-]
Okay, but how would I write evals for my project's agents file? Any good examples out there?
alexhans 2 days ago [-]
I wrote https://ai-evals.io (community site) to make the concept approachable no matter what tools you choose to use.
At first glance this looks like an entire ecosystem full of slop and by running that eval you generate more? I'm looking for something a bit more curated.
sjmaplesec 1 day ago [-]
No, the context can be human-created just as much as it can be LLM-generated. The suggestions are based on Anthropic best practices; they help the agents activate and use the skills better, make the text clearer for the agent, etc.
pavel_lishin 2 days ago [-]
I don't even know what an eval is.
sjmaplesec 1 day ago [-]
An eval is to an LLM as a test is to code.
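A minimal sketch of that analogy in Python. `run_agent` is a hypothetical stand-in for however you actually call your model, hardcoded here so the example runs end to end:

```python
def run_agent(prompt: str) -> str:
    # Hypothetical stand-in for a real model/agent call,
    # hardcoded so this sketch is runnable without an API key.
    return "Use datetime.now(timezone.utc) instead of the deprecated datetime.utcnow()."

def grade(output: str) -> bool:
    # The "eval": like a unit-test assertion, but over model output.
    # Pass iff the answer recommends the non-deprecated API.
    return "datetime.now(timezone.utc)" in output

output = run_agent("How do I get the current UTC time in Python?")
print("pass" if grade(output) else "fail")  # prints "pass"
```

In practice you would run the real model over many such prompt/grader pairs and track the pass rate across edits to your instructions file.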
furyofantares 2 days ago [-]
If it were easy to write evals, I would come at it from that direction.
But since it's not, what I do to avoid working on AGENTS.md blind is I test it on whatever causes me to write it.
I have some prompt, the AI messes it up in some way that I think it shouldn't, maybe it's something I've seen it do before and I'm sick of it. So I update AGENTS.md, revert the changes, /undo in the chat context and re-submit the same prompt.
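That manual loop can be sketched as a tiny regression harness: every prompt the agent once got wrong becomes a replayable case. `run_agent` below is a hypothetical stub (it just reacts to the instructions string) standing in for a real agent invocation:

```python
# Each case pairs a prompt the agent previously botched with a checker
# encoding what "got it right" means.
CASES = [
    ("Get the current UTC time in Python",
     lambda out: "utcnow" not in out),
]

def run_agent(prompt: str, instructions: str) -> str:
    # Hypothetical stub: pretends the AGENTS.md text steers the model.
    # Swap in your actual agent call here.
    if "avoid deprecated APIs" in instructions:
        return "datetime.now(timezone.utc)"
    return "datetime.utcnow()"

def regress(instructions: str) -> bool:
    # Replay every saved prompt against the current instructions file.
    return all(check(run_agent(prompt, instructions))
               for prompt, check in CASES)

print(regress(""))                       # False: baseline still fails
print(regress("avoid deprecated APIs"))  # True: the edit fixed it
```

This is only the "re-submit the same prompt" step made repeatable; real agent output is nondeterministic, so you would typically run each case several times and look at pass rates rather than a single boolean.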
sjmaplesec 1 day ago [-]
Tessl can generate the evals, both to test Anthropic best practices and to run scenarios with and without the skill to check whether it's helping.
stuaxo 2 days ago [-]
I mean... Claude kept putting deprecated APIs into code I was getting it to write, so I adjusted the prompt to say not to, and it seemed to help.
sjmaplesec 1 day ago [-]
You can add this as a skill, or as part of a skill, so you don't need to keep prompting the same things.
You can learn about them by evaluating that repo, https://github.com/Alexhans/eval-ception, and then the pattern should be easy to test on your own thing.
What do you think would resonate with you or with the audience you're thinking about?
That repo also has an illustrative eval for an Agent Skill in Airflow for localization:
https://github.com/Alexhans/eval-ception/tree/main/exams/air...
The question I have is: what are we optimizing for and how do we measure it?
In your own repos, I see you have a fork of safepass, which seems like a nice simple project, but it doesn't have an agents file yet.
It's agents all the way down!
Submit a GitHub repo containing skills to Tessl, and it will generate the evals, run them, and present the results. https://tessl.io/registry/skills/submit
The evals and results are all shown, no login necessary, so you can assess them yourself. e.g. https://tessl.io/registry/skills/github/coreyhaines31/market... (click details to see the eval texts).