The engineer AI can't replace
Most engineers use AI to ship faster. The ones with taste use it to ship better.
I see people ship technical docs that are 100% AI-generated. Pull data, drop it into a prompt, and ask the model to write the document. No investigation. No judgment. No iteration. Just transforming data with natural language.
Last week, I saw it at work. The doc looked fine on the surface. The structure was there. The sentences were clean. But when I read it carefully, I could feel something was off. The reasoning was shallow. The recommendations were generic. The parts that should have been hard were the ones that felt the easiest.
Something was missing.
I’ve been thinking about that moment a lot. I work with great engineers, and we all feel the same pressure to ship faster with AI tools. The quality of outputs is not about using a different AI. It is about taste. And once I saw it in someone else’s work, I started seeing it in my own too: in pull requests I rushed, in design reviews where I nodded along, in production incidents that surprised us all.
This is the thing most people are getting wrong about AI coding. AI slop is not an AI problem. It is a taste problem. The models are doing exactly what they were asked to do. The question is whether the person on the other end of the prompt knows what “right” looks like before they hit enter.
In this post, you’ll learn
What developer taste is in software engineering, and why it matters more now than at any point in the last decade
How AI slop code shows up in real codebases and what taste mistakes actually look like
Why senior engineers who use AI well have better taste, not faster typing
How to develop a developer's taste as a practitioner working with AI tools every day
What the shift to AI-assisted coding means for your career and who will be left behind
The AI slop problem you already recognize
If you work in software today, you have seen AI slop. You might have shipped some of it yourself. I know I have. It is the pull request that compiles and passes tests, but makes no sense when you actually read it. It is the function that solves a problem nobody asked about. It is the new file that duplicates logic already living three folders away.
Slop is not broken code. That is the trick. Broken code gets caught. Slop is code that works today and quietly makes the next six months of your life worse. It is the extra abstraction nobody will remember adding. It is the test that asserts the wrong thing. It is the migration that ran fine in staging and deleted data in production because the happy path was all anybody cared about.
I remember reviewing a change where AI had wired authentication into a service by copying a pattern from somewhere else in the repo. The pattern was wrong. Not wrong for the other service, wrong for ours. The author had trusted the output because it looked like the rest of the codebase. Nobody paused to ask whether the rest of the codebase was the right reference in the first place. That is slop. It is code written by someone, human or not, who gave little consideration.
The reason this is getting worse is simple. AI tools raised the floor on how fast you can produce code. They did nothing to raise the floor on how carefully you have to think about it. If anything, they made things worse, because companies increased the pressure to deliver fast once AI entered the picture. Most engineers did not change how they think for AI. They just changed what they type: prompts instead of code. The model does the typing now. The thinking got skipped.
Learn why AI slop is the biggest risk for software engineers and the system I use to avoid it
What developer taste actually is
What some people are starting to call “software taste” is really just this: the judgment you bring before you write the first line of code.
Let me give you the definition I’ve been thinking about. Developer taste is the judgment to know what the right thing is, and the discipline to pursue it, before you write a single line of code. That is it. It is not aesthetics. It’s not about preferring tabs to spaces. It’s not about being a nitpicker. It is not how pretty your diffs look in review.
An important distinction is taste versus skill. Taste is what you bring to the problem. Skill is what you do with the problem once you understand it. A lot of engineers have skill without taste. They can write anything you describe, but they cannot tell you whether the thing you described is worth building. They will follow the spec down into the ground and ship exactly the wrong solution, on time, with full test coverage. This is everywhere now.
Taste is also not speed. Speed is how fast you get from idea to merge. Taste is how often the idea was worth merging in the first place. I have worked with engineers who shipped half as much as the people around them and moved the business twice as far, because every single thing they shipped was pointed at something that mattered. They rejected work that would not move a metric. They pushed back on specs that did not add up. They asked the question everyone else was too busy to ask. This is why it’s important to listen when these engineers raise a concern, instead of dismissing them because they are slowing down the initiative.
The simplest way I can describe taste is this. When you look at a piece of code, you feel something before you can explain what. That feeling is the compressed memory of every system you have broken, every bug you have chased at 2 am, every design you have watched rot under real traffic. AI can approximate the surface patterns. It cannot approximate the ache. That ache is the thing that tells you how much this shortcut is going to cost you in a month. It tells you this abstraction is premature. It tells you this test is testing the wrong layer. Taste is your scar tissue. Taste is your intuition. AI does not have this.
What taste looks like in practice with AI tools
I was reviewing a change last month where the model had added a handful of new types to a service's data model. The change compiled. The types were correct in isolation. The problem was that the types belonged in a different file, because the service had two separate API surfaces, and those surfaces were never supposed to share definitions. If you did not know the architecture, you would have approved it. I only caught it because I had been there when the split was made, and I knew why the boundary existed.
That is taste in practice. It is not some magical pattern-matching. It is remembering why things are the way they are. It is the person who read the change, who was in the room for the incident, who walked the whole graph once, and kept the map. The model can see the files. It cannot see the history. If you do not bring the history, nobody does, and the fence gets moved without anyone asking why it was there. Even if you feed the history to AI, you’ll run out of context window. You need to pair with the AI for the best results.
Another one. I had started to build a new feature, and I caught myself about to paste an AI-generated block of code without opening a single other file. I paused. I closed the editor. I wrote down what the feature actually needed to do, end to end, as if I were explaining it to someone else. Then I opened the existing integration tests and worked outside-in, from the external behavior I wanted, down into the code I would have to change. Only then did I go back to the AI to prompt it. The prompt I wrote the second time was about five times longer and produced a change that was four times smaller than what the first prompt would have given me. That is taste in prompting. The quality of the answer is a function of the quality of the context.
This is the pattern I keep coming back to. Engineers with taste use AI to iterate toward a thing they already know is right. Engineers without taste use AI to guess at what right might look like, and then ship whichever guess compiled. These are not the same activity. They look the same from the outside. They produce completely different codebases over the course of a year.
That’s what many non-tech people miss. AI adds value to all of us, but it adds more value when you have taste:
Non-tech people with AI produce better code than non-tech people without AI
Tech people with AI produce better code than tech people without AI
Tech people with AI produce better code than non-tech people with AI
Read more about the difference between prompt engineering and spec engineering and why senior engineers in big tech are moving toward the latter
What taste mistakes look like
These are not bugs. They are the subtle wrongness in code that works today and becomes harder to maintain later. Here are the five I keep seeing.
Treating AI output as final. You ask the model to write a function. It writes a function. You paste it in. You run the test. The test passes. You move on. What you skipped was the part where you read the code and asked whether this is what you would have written. Not word-for-word. Just whether it’s right. If you never ask the question, you are not using AI. You are being used by AI. AI output should be the first draft. Vibe coding (accepting whatever the model produces without reviewing it) is this pattern.
Copying from a secondary source instead of the primary one. The model was learned from other people’s code. Other people’s code is not the spec. The spec is the spec. When I see an engineer implement something by pattern-matching against a similar-looking file in the same repo, I get nervous. When that similar-looking file was also written by a model, I get really nervous. The original source of truth exists somewhere. The docs. The RFC. The design review. Find it. Read it. Reference it for the model. Then come back.
Skipping problem decomposition. This one is a classic, and AI has made it worse. You get a task. The task has three parts. You ask the model to do all three at once. It gives you a plausible answer that is wrong about one of them in a way you cannot see because you never wrote the three parts down separately. Taste says stop. Break the problem into pieces you can reason about. Decide the answer in your head for each piece. Then let the model write the pieces. You still have to own the thinking and orchestration of AI tools.
Shipping the happy path and calling it done. I see so much AI-generated code that solves the case where everything works and says nothing about the case where it does not. No error handling. No edge cases. No tests for the ugly stuff. The model will happily do any of that if you ask. The engineer did not ask because the engineer did not think about it. Taste is the reflex that makes you think about it without being asked. You only find the edge cases when thinking about the problem.
Making code work without making it right. This is the 50-50 rule I use with my team. Getting the code to work is half the job. Getting it right, clean, small, reviewable, shippable, and understandable in six months is the other half. AI is very good at the first half. It can be good at the second half too, but only if you stay on top of it. You can't abdicate responsibility.
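The happy-path mistake is the easiest of these to see in code. Here is a hypothetical sketch (the function, the config format, and the default value are invented for illustration): the first version is what an unreviewed AI draft often looks like, and the second is what taste asks for.

```python
import json

# Happy-path version: what an unreviewed AI draft often looks like.
# It assumes the file exists, the JSON parses, and the key is present.
def load_timeout_v1(path):
    with open(path) as f:
        config = json.load(f)
    return config["timeout_seconds"]

# The version taste asks for: same result on the happy path, but it
# decides what happens when the file is missing, the JSON is malformed,
# or the value is absent or nonsensical.
def load_timeout_v2(path, default=30):
    try:
        with open(path) as f:
            config = json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        return default  # a deliberate choice, not an accident
    value = config.get("timeout_seconds", default)
    if not isinstance(value, (int, float)) or value <= 0:
        return default
    return value
```

The second version is barely longer, but every extra line is a decision the engineer made about failure. The model will happily write any of it, if you think to ask.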
How to develop a developer's taste
I am going to give you the practices that have worked for me. None of them is flashy. Pick the ones that best fit your case.
Work outside-in. Before you write any code, write the test that describes what the feature should do from the outside. Or at least write it down in plain language. Not a unit test. An integration test that pretends you are the user, the other service, or the API caller. This forces you to decide what “done” means before you start. It also gives you a truth function you can run against the AI output later. Outside-in thinking is the single biggest taste accelerator I know.
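A minimal sketch of what that looks like, assuming a hypothetical “invite user” feature (all names and behavior here are invented): the tests describe the feature from the caller's point of view and become the truth function; the implementation below them is just a placeholder so the file runs, and it is the part you would later hand to the model.

```python
# Placeholder implementation so the sketch runs end to end.
# In the real workflow, this body is what AI drafts, and the
# tests below are what it has to satisfy.
def invite_user(email, team):
    if "@" not in email:
        return {"ok": False, "error": "invalid_email"}
    if email in team["members"]:
        return {"ok": False, "error": "already_member"}
    team["members"].append(email)
    return {"ok": True}

# Outside-in: each test states external behavior, not internals.
def test_invite_adds_new_member():
    team = {"members": ["ana@example.com"]}
    assert invite_user("bo@example.com", team) == {"ok": True}
    assert "bo@example.com" in team["members"]

def test_invite_rejects_duplicates():
    team = {"members": ["ana@example.com"]}
    assert invite_user("ana@example.com", team)["error"] == "already_member"

def test_invite_rejects_bad_email():
    assert invite_user("not-an-email", {"members": []})["error"] == "invalid_email"
```

Notice that the tests force you to decide the ugly cases, duplicates and bad input, before a single line of implementation exists.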
Keep commits small and single-purpose. One commit, one responsibility. This sounds like a style preference. It is not. Small commits force you to decide what each change is actually about, which forces you to have an opinion about the change, which is where taste lives. If your diff is 800 lines and three concerns, you have already abdicated the decision to whoever reviews it next.
Read your own code in the review UI before you assign it. Pretend you did not write it. Pretend a junior engineer submitted it to you for approval. What questions would you ask them? What comments would you leave? This is the exercise I wish more engineers did. It is also the single best way to spot AI slop in your own PRs, because AI slop reads differently when you stop being the author and start being the reader.
Go to primary sources. The docs. The standard. The paper the library was based on. The ADR in the repo. When you do not know something, do not rely on the model's training knowledge. Ask the source.
Define the skeleton yourself, let AI fill it in. I call this “Data Structure Driven Development”: think about the data flows first. I decide the types, the function signatures, the module boundaries, and the names. Then I let the model implement the bodies. This inverts the default AI workflow. The default is that the model drafts the shape and you clean up. The better pattern is that you draft the shape and the model fills in the details.
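A hedged sketch of what that skeleton looks like, using an invented billing domain (the types, names, and docstrings are all hypothetical): the shape is decided by a human, and the `NotImplementedError` bodies mark exactly where the model is allowed to work.

```python
from dataclasses import dataclass

# The human decides the shape: types, signatures, boundaries, names.
@dataclass(frozen=True)
class Invoice:
    customer_id: str
    amount_cents: int
    currency: str

@dataclass(frozen=True)
class PaymentResult:
    success: bool
    reason: str = ""

def validate(invoice: Invoice) -> list[str]:
    """Return a list of validation errors; an empty list means valid."""
    raise NotImplementedError  # body delegated to the model

def charge(invoice: Invoice) -> PaymentResult:
    """Charge a validated invoice. Must be safe to retry."""
    raise NotImplementedError  # body delegated to the model
```

Everything that encodes judgment, the frozen types, the signatures, the contract in each docstring, is already fixed before the model writes a line. The model fills in bodies; it does not get to move the boundaries.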
Review more code than you write. Reading other people's code, especially code you think is “bad”, is how you develop a nose for what is wrong. If you only ever look at your own work, you calibrate against yourself. If you look at everyone's work, you calibrate against the full distribution. You learn that your code is not the best, but also that it is not the worst. This is the closest thing I know to a shortcut for taste.
A practical guide to AI steering and getting the model to do what you actually want.
The career implications of a software engineer’s taste
AI is not coming for your job in the way the headlines said it would. It is coming for the part of your job you were already doing on autopilot. If that part was most of your job, you are in trouble. If most of your job was judgment, you just got a superpower to deliver more.
Engineers without taste are becoming executors. They will ship a lot of code. They will look busy. They will hit their sprint metrics. And over time, they will be treated like a fungible resource, because what they are doing can be done by anybody with a prompt box. The market rate for typing is dropping fast. The market rate for knowing what to type is not.
Engineers with taste are becoming orchestrators. They frame the problem. They design the shape. They review the output. They decide what’s raised as a PR. They use AI with clear intent and firm opinions. Their leverage goes up every time the tools get better, because the tools make their judgment multiply faster.
My honest bet is that the next five years are going to be rough for the first group and spectacular for the second, and the thing that separates them is not talent or tenure or which company they work at. It is whether they decided to focus on developing their taste, or kept pretending taste is something you either have or you don't.
You can train it. You just have to start.
Some Common Questions About Developers’ Taste
What’s a developer's taste in software engineering?
Software engineer taste, or developer taste, is the engineering judgment to know what the right solution looks like before you write it, and the discipline to pursue that solution instead of the first one that compiles. It is not aesthetic preference or coding style. It’s not choosing one programming language or another. It is the compressed experience that lets you feel when code is wrong, when a design will not scale, or when a shortcut will cost you later.
Can AI have good taste in code?
No, and that is the point. AI models produce the most likely output given a prompt, which means they default to the average of what they were trained on, unless a post-training layer steers them toward something else. Taste is the ability to reject the average when the average is wrong. Taste is also accepting the AI's code when it is right. A model cannot reject anything. The engineer using it has to, and that act of rejection is where taste lives.
What separates a senior engineer from a junior engineer in the AI era?
The ability to know what “right” looks like before asking AI to get there. Juniors without taste use AI to guess at solutions. Seniors with taste use AI to iterate toward solutions they can already picture. The gap is in framing the problem, not in writing the code.
What is an AI taste mistake?
An AI taste mistake is code that is technically correct today but quietly wrong in a way that hurts you later. Examples include copying a pattern from the wrong file, handling only the happy path, skipping problem decomposition, or duplicating logic because the model did not know the existing code. These mistakes pass review and surface months later as bugs or tech debt.
How do you develop engineering judgment when AI writes most of your code?
You develop it by forcing yourself to think before prompting. Write tests outside-in before implementation. Keep commits small. Read your own PRs as if someone else wrote them. Go to primary sources instead of pattern-matching. Define the data structures yourself and let AI fill in the bodies. Review more code than you write.
Conclusion: Taste is the skill that compounds in the AI era
AI slop code is not an AI problem. It is a taste problem, and taste is the thing you have always been able to develop. Whether the tool in your hand is a terminal, an IDE, or a model that writes code for you, taste is what you bring to it.
The tool changed. The skill underneath did not. If you were the kind of engineer who read the source, asked the hard question, and rejected the easy answer before, you are going to be fine. If you were not, the next few years are going to be harder than you think.
The thing I want you to take from this article is that taste is not a personality trait. It is a practice. You build it the same way you build any other muscle, with small, deliberate reps, done consistently, over a long time.
None of it is glamorous. All of it works.
Key Takeaways
Developer taste is the judgment to know what the right solution looks like before writing any code, and the discipline to pursue it instead of the first output that compiles.
AI slop is a symptom of missing taste, not a problem with the models, because AI produces exactly what the prompt asked for and nothing more.
Taste mistakes are subtle errors in AI-generated code that work today and cost you later, including wrong-file patterns, skipped edge cases, and happy-path-only solutions.
Engineers develop taste through outside-in testing, small single-purpose commits, primary-source research, and reviewing more code than they write.
The AI era is rewarding engineers with taste and punishing engineers who treat AI like a typing shortcut, because the market rate for typing is collapsing while the rate for judgment is climbing.
If you want to go deeper on how senior engineers are actually changing their workflow to work well with AI, read this system to prevent AI slop.