The 17 biggest mental traps costing software engineers time and growth
17 cognitive biases that hurt your estimates, code reviews, and technical decisions. With real examples and how to fix each one.
Software engineers often think of bugs as code-level mistakes. But cognitive biases cause bigger problems. They shape how we estimate tasks, judge others’ work, choose tools, and learn from mistakes.
This post covers 17 cognitive biases and logical fallacies that affect software engineers daily. Some are well known; others may be new to you.
In this post, you'll learn:
Why the planning fallacy leads to bad estimates and how to fix it
How the Dunning-Kruger effect and impostor syndrome distort self-assessment
Why bike-shedding wastes your team’s time in meetings
How confirmation bias and anchoring bias lead to poor technical decisions
Practical examples of each bias in day-to-day engineering work
Planning fallacy in Software Engineering
The planning fallacy is the tendency to underestimate how long a task will take, even when you've done similar tasks before.
Why it matters: This leads to missed deadlines and poor project planning. Engineers commit to tight timelines and end up working under pressure because they believe things will go smoothly.
Practical explanation: You might say a feature will take "a day or two" but forget the edge cases, integration, testing, and unexpected blockers. Even if the last three features overran by 40%, you still think this one will be different.
Projects go off track when teams plan for the best case instead of the typical one. Experience alone doesn't cure this bias. You need deliberate effort: add buffers and base estimates on past data, not gut feeling.
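One deliberate counter is reference-class estimation: scale your gut-feeling estimate by how much similar tasks overran in the past. A minimal sketch in Python, assuming you track estimated versus actual days (the task history below is made-up example data):

```python
from statistics import median

# Hypothetical history of past, similar tasks: estimate vs. reality.
past_tasks = [
    {"estimated_days": 2, "actual_days": 3},
    {"estimated_days": 5, "actual_days": 7},
    {"estimated_days": 1, "actual_days": 1.5},
]

# Median overrun ratio across past tasks (robust to one-off outliers).
overrun = median(t["actual_days"] / t["estimated_days"] for t in past_tasks)

def buffered_estimate(gut_days: float) -> float:
    """Correct a raw gut-feeling estimate using the historical overrun ratio."""
    return gut_days * overrun

print(buffered_estimate(2))  # a "2-day" task, corrected by past data: prints 3.0
```

The point is not the arithmetic but the habit: the buffer comes from your own history, not from optimism.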
The XY problem in Software Engineering
The XY problem happens when someone asks about a solution (Y) instead of the actual problem (X) they’re trying to solve.
Why it matters: Teams waste time solving the wrong problem. Discussions get stuck in unproductive directions.
Practical explanation: A developer asks, “How do I merge two JSON files in bash?” when the real problem is transforming a dataset. They get low-quality help because the question hides the context.
Good engineers ask clarifying questions. Don’t assume the first problem someone presents is the real one. Dig into what they’re trying to do.
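To make the gap concrete, here is a hypothetical sketch in Python: the literal answer to the question asked (Y) next to what the underlying problem (X) might actually require. The function names and the join-on-key framing are assumptions for illustration:

```python
import json

# Y, the question as asked: "how do I merge two JSON files?"
def merge_json(path_a: str, path_b: str) -> dict:
    """Shallow merge of two JSON objects; values in the second file win."""
    with open(path_a) as fa, open(path_b) as fb:
        return {**json.load(fa), **json.load(fb)}

# X, the real problem, might be a data transformation instead,
# e.g. joining two datasets on a shared key -- a different question entirely.
def join_on_key(rows_a: list[dict], rows_b: list[dict], key: str) -> list[dict]:
    """Enrich each row of rows_a with the matching row from rows_b."""
    by_key = {row[key]: row for row in rows_b}
    return [{**row, **by_key.get(row[key], {})} for row in rows_a]
```

A clarifying question ("what are you doing with the merged file?") is what moves the conversation from Y back to X.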
Impostor syndrome in Software Engineering
Impostor syndrome is the belief that you're not as competent as others think you are, despite evidence of your skills.
Why it matters: Engineers hold back ideas, avoid taking lead roles, or hesitate to ask questions. They assume others are smarter and know more.
Practical explanation: You ship production code every week, but after a tough bug or a comment from a senior engineer, you question if you belong. You avoid contributing in design reviews because you're afraid of sounding stupid.
This bias limits growth. People learn slower and miss opportunities because they silence themselves. The truth is, even senior engineers feel this way at times.
Dunning-Kruger effect in Software Engineering
The Dunning-Kruger effect is when people with low ability overestimate their skills, while experts underestimate theirs.
Why it matters: Junior engineers may feel overly confident and make risky decisions. Seniors may doubt themselves too much and avoid stating opinions.
Practical explanation: A new developer might say, “This API should be easy,” and start refactoring core logic. Meanwhile, a more experienced engineer might say, “This part of the system is tricky,” even after working on it for years.
This leads to overconfidence in early stages and self-doubt later. Awareness helps you calibrate your confidence and ask better questions.
Law of the instrument in Software Engineering
The law of the instrument is the tendency to reach for your favorite tool for every problem: when all you have is a hammer, everything looks like a nail.
Why it matters: Engineers miss better solutions because they default to familiar ones.
Practical explanation: You know React well, so you use it for everything, even when a simple server-side page would work better.
Choose tools based on fit, not comfort. Expand your toolbox.
Self-serving bias in Software Engineering
Self-serving bias is the tendency to credit yourself for wins and blame others or luck for failures.
Why it matters: It blocks honest reflection and learning.
Practical explanation: You say the feature shipped fast because of your smart design, but when it’s delayed, you blame meetings or unclear specs.
Accountability helps you grow. Excuses don't.
Fundamental attribution error in Software Engineering
The fundamental attribution error is the tendency to blame people's traits instead of looking at their context.
Why it matters: You misjudge colleagues and miss root causes of issues.
Practical explanation: You think a teammate is careless because of a bug, when the real issue was poor requirements or unclear communication.
Assume people are doing their best. Fix systems, not just individuals.
IKEA effect in Software Engineering
The IKEA effect shows up when software engineers overvalue code they built themselves.
Why it matters: Engineers get attached to code and resist changes, even when refactoring is needed.
Practical explanation: You write a custom build script. Later, someone suggests replacing it with a standard tool. You push back, not because it's better, but because it's yours.
This slows down improvement. Code should be judged on utility, not personal effort.
Bike-shedding in Software Engineering
Bike-shedding is spending too much time on trivial details while ignoring important problems.
Why it matters: Teams get caught up debating easy things instead of solving hard ones.
Practical explanation: You spend 30 minutes debating folder structure in a meeting, but only 5 minutes on designing the data model.
Watch where your team’s time goes. Don’t confuse easy discussions with valuable ones.
Authority bias in Software Engineering
Authority bias in software teams is when you give too much weight to opinions from senior engineers or perceived experts.
Why it matters: Good ideas get ignored if they come from junior voices. Bad ideas get followed if they come from the top.
Practical explanation: A staff engineer suggests a tool they haven’t used in production, and the team goes with it without questioning. No one wants to challenge the authority.
Titles don’t make someone right. Arguments should stand on evidence, not seniority.
Conformity bias in Software Engineering
Conformity bias is the tendency to align with the group, even when you disagree.
Why it matters: Engineers may hold back objections in design reviews or planning sessions, leading to poor decisions.
Practical explanation: A new engineer notices a flaw in the proposed architecture but stays silent because everyone else seems to agree.
Speaking up is uncomfortable, but groupthink is worse. Diversity of thought improves systems.
Not invented here (NIH) in Software Engineering
Not Invented Here is the bias against using external solutions in favor of building your own.
Why it matters: Teams reinvent the wheel and waste time building things that already exist.
Practical explanation: Instead of using a stable open-source library, you build an internal tool with less functionality. You justify it by saying, “It’s tailored to our needs.”
This leads to long-term maintenance cost and slower delivery. Use existing solutions when they work.
Bandwagon effect in Software Engineering
The bandwagon effect hits software engineering teams when they adopt a trend just because everyone else is doing it.
Why it matters: Engineers may push tools or patterns that aren't a good fit, just to follow the crowd.
Practical explanation: You move to microservices because “everyone is doing it,” not because you have a scaling problem.
Trendy tech can work, but not in every context. You need to ask, “Does this solve our real problem?”
Survivorship bias in Software Engineering
Survivorship bias is focusing on the winners while ignoring the projects that failed and vanished from view.
Why it matters: You hear about successful startups using bleeding-edge tech and assume that’s the reason they succeeded.
Practical explanation: You read a blog post where a company scaled with Rust and Kafka. You think you should do the same, ignoring the 50 startups that tried and failed with that stack.
Look beyond success stories. Ask what didn’t work and why. Survivors are not always representative.
Availability heuristic in Software Engineering
The availability heuristic is the tendency of software engineers to judge something based on how easily they can recall examples.
Why it matters: Engineers make decisions based on what’s recent, not what’s relevant.
Practical explanation: If you recently fixed a big caching bug, you might over-prioritize caching concerns in your next project. If a service went down last week, you might now overestimate that risk.
This creates blind spots. Just because something is top of mind doesn't mean it's common or important.
Anchoring bias in Software Engineering
Anchoring bias is when software engineers rely too heavily on the first piece of information they see.
Why it matters: Early estimates, guesses, or suggestions can distort later judgments. Even if they’re wrong, they stick in our heads.
Practical explanation: A team lead says a task should take 3 days. Everyone else starts their thinking from that point, even if it’s based on nothing. It becomes the anchor.
You need to reset expectations with real data. Don't let bad anchors shape your planning or tech choices.
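One simple countermeasure is to collect estimates independently before anyone shares a number, planning-poker style, and only then aggregate. A minimal sketch, with hypothetical names and values:

```python
from statistics import median

# Estimates in days, gathered privately BEFORE any number is said out loud,
# so no single early guess can anchor the group. Values are made up.
blind_estimates = {"ana": 5, "ben": 8, "chris": 6}

# Aggregate after collection; the median resists one extreme guess.
anchor_free_estimate = median(blind_estimates.values())
print(anchor_free_estimate)  # prints 6
```

The ordering is the whole trick: aggregation after independent collection is what keeps the first spoken number from becoming the anchor.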
Confirmation bias in Software Engineering
Confirmation bias is the tendency to seek out evidence that supports your beliefs and ignore evidence that contradicts them.
Why it matters: Engineers may resist valid criticism, cherry-pick benchmarks, or over-trust their own assumptions.
Practical explanation: You believe framework A is faster, so you test only the cases where A wins. You ignore or dismiss the cases where framework B performs better.
This bias leads to poor technical decisions and tribalism. Stay skeptical of your own assumptions, not just others’.
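One antidote is to commit to the full test matrix up front and run every case against both options, including the ones you expect your favorite to lose. A hypothetical sketch, where two string-building functions stand in for "framework A" and "framework B":

```python
import timeit

# Two interchangeable implementations standing in for competing frameworks.
def framework_a_render(items):
    return "".join(f"<li>{i}</li>" for i in items)

def framework_b_render(items):
    parts = []
    for i in items:
        parts.append("<li>" + str(i) + "</li>")
    return "".join(parts)

# The case matrix is fixed BEFORE looking at any results,
# so neither side can cherry-pick favorable inputs afterward.
cases = {
    "tiny": list(range(10)),
    "large": list(range(10_000)),
}

for name, data in cases.items():
    for impl in (framework_a_render, framework_b_render):
        seconds = timeit.timeit(lambda: impl(data), number=200)
        print(f"{name:>5} {impl.__name__}: {seconds:.4f}s")
```

Pre-registering the cases is the debiasing step; the timing loop itself is ordinary.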
Common Questions
What are the most common cognitive biases in software engineering?
The most common ones are the planning fallacy, Dunning-Kruger effect, confirmation bias, and bike-shedding. Each one affects how engineers estimate work, evaluate code, and make technical decisions.
How do cognitive biases affect software project estimates?
The planning fallacy causes engineers to underestimate task duration. Anchoring bias locks teams into the first estimate they hear. Together, they are among the main reasons software projects miss deadlines.
What is bike-shedding in software development?
Bike-shedding is when a team spends too much time debating trivial details while ignoring important problems. In software, this often shows up as long arguments about naming conventions or folder structure while the data model goes unreviewed.
How can software engineers avoid cognitive biases?
The best approach is awareness. Learn the most common biases like planning fallacy, confirmation bias, and anchoring. Then build team habits that counter them. Use historical data for estimates instead of gut feeling. Run blameless postmortems. Encourage dissent in design reviews.
Conclusion
These biases don’t show up in your terminal. They show up in decisions, code reviews, meetings, and planning. Left unchecked, they slow you down and damage your team.
Start noticing them in your own work. Write better estimates. Ask better questions. Challenge your first instinct. Challenge your team with data.
Which of these are new for you?










