AI Steering: How to get AI to do what you want
Most software engineers feel AI is random and slows down their work. Learn to steer AI with steering docs, spec-style prompts, and iterative loops to ship faster.
Most engineers I talk to treat AI like a slot machine. Some days it writes a good function, other days it hallucinates code and deletes all the tests so the build succeeds (yes, I’ve seen this multiple times). It feels random, so you end up using it because everyone says it’s best practice, but without trusting it, and sometimes it genuinely slows you down.
The truth is simple. Productive engineers do not hope AI gets it right. They steer it. They decide the architecture, the data flow, and the definitions of done. AI only fills in the boring parts at high speed.
You do not need to become an AI researcher for this. You need a different way to think about AI in your daily work: a mental model, a small set of rules, and a workflow you can run on every ticket. That’s what you’ll find in this article (🎁 plus a checklist of tactical actions for paid subs).
In this post, you’ll learn:
How to think about AI
How to write a steering doc
How to design prompts
How to build an iterative loop
Why most engineers feel AI is random
Most engineers paste a vague task into an AI tool, accept the first wall of text, and call it a day. The code compiles, so they move on without review, which later earns a pile of review comments. It’s even worse if you accept the code and keep building on top of it. The next time, the AI output is worse because it picks up bad examples from the previous AI-generated code. They stop trusting AI for serious work, and it becomes a downward spiral.
A teammate wrote a doc that was one hundred percent AI generated. They pasted some metrics and investigations into the prompt and asked AI to write a document with proposals. There was no investigation behind it. No check if the numbers even made sense. The result looked polished and well written, but the proposed solutions were not tackling the problem. It was the same pattern as copy-pasting a Stack Overflow answer for a problem you don’t have.
Productive engineers behave differently. They use AI to accelerate steps they already do. They still read documentation, they still design data flows, and they still decide what a good solution looks like. AI speeds up the typing and some of the research, but it does not replace their judgment.
Your manager will not promote the person who pastes prompts and ships whatever comes back. That is like a developer who forwards every question to another teammate. They’ll promote the engineer who can design, orchestrate, and ship with AI as leverage. Steering AI is now part of the job, and sooner rather than later it will be a performance evaluation criterion.
Change your mental model: you are the orchestrator, not the passenger
The first shift is simple. You are the orchestrator: AI is the fast pair of hands, you’re the thinking head. The model does not keep software design principles in mind unless I push for them. It will happily mix concerns in a single function and add logic in the wrong place. If I do not act as the architect, the code base gets messy.
For example, when I was working with some dependency injection in Java, I tried to hack my way through with AI. I kept asking for fixes and patches based on the terminal output without really understanding what was happening. The result was a lot of trial and error, with no success. Only when I slowed down and read the terminal output myself did I find what was happening. AI cannot fix a mental model you do not have.
Reviewing AI output is another place where the orchestrator mindset matters. I noticed a lazy habit in myself. Once the AI code worked, I did not want to go through it in detail. The tests passed, so my brain wanted to move on. That is a trap. AI is not yet at the level where you can skip review. You need to treat its output like a pull request from a new hire. First check the structure, then the happy path, then edge cases...
Thinking of AI as a junior engineer is a helpful frame. You give them clear tasks with little ambiguity, you define patterns, and you review their work. Productive engineers do the same with AI. They do not outsource judgment. They design the system, then let AI do the repetitive parts inside that system.
Create steering docs and steering rules
Once you accept that you are the orchestrator, you need to communicate rules to the AI. That is where a steering doc comes in. I treat it as a small rules file in the IDE plus some snippets I can paste into my prompts. It defines the task at hand (e.g. write code or brainstorm), the code and file structure, and the quality bar I expect.
When I take time to prepare prompts ahead of time and have a clear idea of how I want the code to look, everything moves faster. Instead of rushing into implementation, I check the flows, gather the right context, and craft the requirements. The hard part is defining what to do, and AI is much better when I give it that up front. When implementing, I tell the AI to write the code first, and only later, when I ask for it, to add metrics or logs. There’s no point in having AI write more code before validating what exists; it only becomes more troublesome.
Guardrails in the steering doc reduce AI “hallucinations”. I set hard limits like “no new dependencies”, “don’t add fake APIs”, “follow existing patterns”. I also add a line saying that if the model does not know something, it must ask clarifying questions first. Finally, it’s important to define your domain entities (add thorough comments in those classes).
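To make it concrete, here is a trimmed sketch of what such a rules file can look like. The exact wording and limits are mine as an illustration; adapt them to your project:

```text
# Steering rules (illustrative sketch, adapt to your project)
- Task mode: implementation. Write code only; no metrics or logs until I ask.
- Follow the existing file and package structure; do not create new top-level folders.
- No new dependencies. Don't add fake or placeholder APIs.
- Follow existing patterns; when in doubt, name the file you took the pattern from.
- If you don't know something, ask clarifying questions before writing code.
- Domain entities are documented in their classes; read those comments first.
```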
This would be a lot of work if you wrote it from scratch every time. That’s why saving this steering doc as a snippet and tweaking it whenever AI does something unexpected is the way to go.
Design prompts like specs, not wishes
Most AI pain comes from vague prompts. You describe an outcome loosely and hope the model fills the gaps, and it often fills them wrong. A better way is to design prompts like specs.
For non-trivial work, I first ask the model for a short step-by-step plan or checklist. I ask it to be brief and high level, otherwise I will not review it. Once I like the plan, I say that we will execute the first step only. Then we move step by step.
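The opening prompt for that plan can be as simple as this (the task in angle brackets is a placeholder):

```text
Before writing any code, give me a brief, high-level plan (5 steps max)
for <task, e.g. adding rate limiting to an endpoint>.
Do not implement anything yet. Once I approve the plan,
we will execute step 1 only.
```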
Schemas are a big part of this. For backend work, I try to define data structures and function signatures first. If you want to put a name to it, call it “data-structure-driven development”. Most services are a mix of CRUD operations and calls to other services. If I define the data flows and structures clearly, adapting the rest of the code is much easier. For text documents, I do the same with headings, creating an outline first.
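As a sketch of what “structures first” means for backend code, this is the kind of skeleton I would write myself and then hand to AI to fill in (the order domain and all names here are invented for illustration):

```java
import java.math.BigDecimal;
import java.time.Instant;
import java.util.List;
import java.util.Map;

// Illustrative sketch: invented domain, structures and signatures only.
// I define these shapes myself, then ask AI to fill in the bodies.
record OrderRequest(String customerId, List<String> itemIds) {}
record OrderResult(String orderId, BigDecimal total, Instant createdAt) {}

interface OrderService {
    // Validate the request, price the items, persist the order.
    OrderResult placeOrder(OrderRequest request);
}

interface PricingClient {
    // Call to the pricing service; returns one price per item id.
    Map<String, BigDecimal> priceFor(List<String> itemIds);
}
```

With the flow pinned down like this, the prompt becomes “implement placeOrder using PricingClient” instead of a vague feature description.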
There is also a downside to heavy spec systems. My team tried spec-driven development in the style of AWS Kiro. The idea is nice, but in practice it was extremely verbose and painful to maintain. I saw pull requests with 22 times more markdown line changes than code changes. AI was not good enough to keep a perfect mirror between code and specs. My takeaway is to keep minimal specs focused on data models and interfaces. Once the code exists, I reference the code itself and delete the markdown.
Context is another key element. I learned this while doing a feature flag cleanup. I rushed in without understanding the context of the changes and just told the AI to remove the unused code. Only on the third revision, after I had a better view of the context, was the cleanup correct.
I now break large tasks into small prompts, each with a specific “definition of done”, and ask the model to list its assumptions before writing code.
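One of those small prompts looks something like this (the flag name is a placeholder):

```text
Task: remove the ENABLE_NEW_CHECKOUT feature flag from the payment module.
Definition of done:
- The flag and every branch reading it are gone; the "flag on" path is kept.
- All existing tests pass; no test is deleted.
- No behavior changes outside the payment module.
Before writing code, list your assumptions about which path is live today.
```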
Build an iterative loop with AI, not a one-shot
The default way to work with AI should be conversational programming, not one-shot generation. Since I use Cursor as my IDE, I notice how much faster I move when I iterate with the model. The value is not in getting the correct code in a single reply. It is in trying different approaches quickly, seeing the diffs, and converging on a better design over a few turns.
To make that loop work, I avoid vague asks like “make this better”. Instead, I request precise changes. For example:
❌ “make it smaller”
✅ “I want the function split into three functions to fetch data, validate it, and store it”
It is the same pattern as a good code review. You do not tell a teammate that you do not like their code. You propose a change.
With AI, the more specific, the better the outcomes.
Examples also matter more than long rules. Few-shot prompts work better than zero-shot. I show the model one example of the exact style of answer I want. For code, I point it to an existing implementation in the repository or to another project that does what I want it to do. Then I ask it to follow that pattern for the next case.
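A few-shot prompt for code can be as short as pointing at one good example (the file names here are hypothetical):

```text
This repo already has the pattern I want: look at UserCreateHandler.java
(validation at the edge, a thin handler, business logic in the service).
Write UserDeleteHandler following exactly that pattern.
```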
Self-checks are an important part of the loop. In my steering rules, I tell AI to check the problems tab in the IDE, compile the code, run the tests, and verify that there are no errors in the logs. Adding a simple checklist with the requirements of this particular feature lets the model self-evaluate whether the work is done and work autonomously for longer periods of time.
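The self-check part of my steering rules is essentially a short loop like this (the exact items depend on your setup):

```text
Before telling me you are done:
1. Check the problems tab in the IDE and fix any errors.
2. Compile the code and run the tests.
3. Run the feature once and check the logs for errors.
4. Go through the requirements checklist for this feature and mark
   each item as done or not done.
```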
My most important takeaway is that during generation time, instead of checking Slack, I use those seconds to plan the next prompt. If I distract myself, I cannot steer the next turn well. Remember, you’re like an orchestra conductor: you can’t check your phone while the musicians are playing, or they’ll fall out of sync.
Conclusion
Some engineers will continue pasting prompts and accepting whatever comes back, the same way some kept copy-pasting from Stack Overflow. Their results will stay random.
Productive engineers will design systems, write steering docs, and treat AI as a power tool that follows their lead.
The engineer who can steer AI well ships more work, learns faster, and leaves a better paper trail of decisions.
Your next step is small. Before writing your next prompt, open your IDE settings, check which AI steering rules you have there, and improve anything that seems off. Keep opening those settings whenever you feel AI is not doing what you want.
🎁 Paid subs can access the checklist with all the techniques discussed in this article here
And subscribe to the newsletter so you can tell me how things went after reading this article.
This article is part of our system’s transition from phase 1 to phase 2: get more time. It also helps you build career capital by becoming the engineer who ships the most code. I’m building this system for paid subscribers. Thanks for your continued support!
👏 Weekly applause
Here are some articles I read during this last week:
- Top 5 Communication Frameworks for Engineers You Must Remember. Communication is often the difference between a stalled project and a launched one.
- Sharding is a powerful technique, but it introduces complexity to your architecture. Saurabh walks through the tradeoffs here.
- Building for high-frequency trading requires a different kind of architecture than a regular web app.
- How the Operating System Manage the Hardware. To write truly performant software, you can’t treat the hardware as a black box.
P.S. This may interest you:
Are you in doubt whether the paid version of the newsletter is for you? Discover the benefits here
Could you take one minute to answer a quick, anonymous survey to help me improve this newsletter? Take the survey here
Are you a brand looking to advertise to engaged engineers and leaders? Book your slot now
Give a like ❤️ to this post if you found it useful, and share it with a friend to get referral rewards
The fancy images are AI-generated, the ones that aren’t fancy are likely done by myself :)