I spend a lot of my time improving our agents’ skills and commands. Without the guardrails, agentic coding feels like unleashing the power of 1000 suns at your codebase. You must channel that energy somehow. I didn't know what we were doing was called Harness Engineering, so thanks for that.
I'm about to publish an article on how you can build a test agent (including some code as well!) then next week a more detailed rundown. I can't speak for others, but these are exciting times in software engineering.
Nice write-up, Fran!
Looking forward to reading your article!!
I have the same feeling: AI can be super powerful, but it gets distracted and spends that energy on something unintended.
The term "harness engineering" is still emerging. I've seen both OpenAI and Anthropic start adopting it, so it may well become the standard term for "AI guardrails."
It did sound familiar. I think I read it here first: https://www.anthropic.com/engineering/harness-design-long-running-apps
And thanks, just hit publish. :)