How a Senior Principal Engineer communicates at Amazon
Vague communication kills developer productivity. Master the 3-level framework and RFC 2119 to give precise instructions to both your team and your AI agents.
The difference between a junior developer and a senior leader often comes down to communication precision. Junior developers might know how to write code, but they often lack the organizational habits to scale their impact across a team.
This gap becomes obvious when we look at how different engineers interact with artificial intelligence. Treating an AI tool like a casual chat partner leads to a loss of productivity. If you want to deliver your best work today, you must know how to communicate with both humans and AI, each in the right way.
We can break down communication into three distinct levels of authority. I learned about this framework from Luu Tran, a Senior Principal Engineer working on Alexa at Amazon. This mental framework will change how you interact with both your colleagues and your AI agents.
In this post, you’ll learn:
How to categorize your feedback using Luu’s proven framework.
How to apply human communication styles to machine instructions.
Techniques to stop your coding assistant from blindly agreeing with bad ideas.
The 3 levels of engineering communication
The weight of your comments matters in a technical environment.
When a Senior Principal Engineer speaks, most people take those words very seriously. If a junior engineer says the same thing, people assume the confidence behind it is lower because of their limited experience. This creates a need for a system that helps teams distinguish between mandatory changes and casual thoughts.
Level 1: Authoritative
Luu Tran defines a clear framework starting with the authoritative veto. This is your level one input, which acts as a non-negotiable directive based on deep expertise.
You use this level when you know your feedback applies directly to the core architecture or strict security constraints of a project. It leaves no room for debate and requires immediate compliance.
If the person didn’t accept the feedback, you’d escalate to your management chain or their management chain, and you’d ask other peers to jump in. In essence, you’d move heaven and earth to make sure they don’t make the mistake you are foreseeing.
Level 2: Talking from experience
The second tier is the advisory experience. This level of input offers guidance without enforcing a strict path.
As an experienced engineer, you’ve worked on many projects with many people, and you can see parallels between present situations and your past work.
You cannot be completely sure what the exact impact will be on their specific project, or you know that multiple options are good, so you offer it as a suggestion rather than a mandate.
You’d often end with “but YMMV” (your mileage may vary).
Level 3: Talking as a user of the system
The third tier is the unverified opinion, which is casual feedback from a user perspective. It’s like thinking out loud. You are just another opinion.
You don’t intend your input to be used for any decision. You don’t want people to quote what you said. You just want to share a thought.
Most times, if you find yourself in a room with people who will take your words seriously and use them for decision-making, shut up. Use this level of talking only when you have very good communication with those people, and your words won’t have consequences.
The real problem: People misunderstand the level, and you misunderstand yourself
Once you have enough tenure and seniority, even people at your same level can take your words as a commitment, or as the judgment of someone with deep experience.
Imagine a director of software engineering talking casually about predictions of AI replacing 90% of jobs in two years, and musing about starting to use AI in human resources. Then the HR director does layoffs, only to find that AI is not good enough yet.
Things like this happen when the message isn’t received at the level it was intended.
Most people will take any words from a Senior Principal engineer as a mandate, while taking any words from a junior as an opinion. Sure, the Senior Principal is most likely right, and the junior may have missed most of the complexity of the problem, but that may not happen 100% of the time.
If you can’t express which level you are coming from, as you grow in experience, you’d find yourself afraid to talk. Anything you say may be taken as a mandate, even if it was just a joke or an opinion. You will find you can’t think out loud or talk casually to brainstorm ideas. You end up being conservative, not taking any risk, not thinking outside the box.
That’s why it’s important that you understand which level you are coming from, that you communicate it clearly, and that people act according to the level of communication you’re using.
Applying these communication styles to AI
AI is amazing because you can rant about anything, and it will pick out specific points from your text or audio and give you a reasonable response. But for real work, you don’t want a “reasonable” response. You want quality.
You can map these human communication styles directly to how you prompt automated models. Organizing your instructions this way helps the system understand the intent and strictness of your requests. This prevents the model from hallucinating features you do not need and keeps your output focused on the actual requirements.
Treat your most critical instructions as level one vetoes. Set these as hard rules and acceptance criteria that the system absolutely must meet to complete the task.
For context provided across multiple files, use the level two advisory approach. Add rules that tell the system to use the context as a reference, not as a mandatory copy-and-paste task. It’s like sanitizing input data: treat the things you control as must-haves, and the things outside your control as recommendations and references.
You must completely avoid making level three comments when working with AI. Casual opinions and unverified thoughts degrade the output: if they conflict with the technical requirements, the system will likely get confused and produce significantly worse results. Keeping your instructions strictly at levels one and two yields a much higher quality response.
An example:
You want to write some backend logic for 3 kinds of orders: Pre-order, regular, and subscription.
If you’re certain that you want to use a strategy pattern, it’s better to prompt the AI at level 1.
If you want to use another backend that has a rule engine as a reference, but you still need to evaluate whether it fits your use case, provide it as level 2 context, and make your level 1 instructions about producing a pros/cons comparison, not about implementing it.
If you were thinking it would be good to eventually refactor other parts of the system... don’t tell the AI. Otherwise, it will refactor them now, when all you actually wanted was the order logic.
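To make the level 1 instruction concrete, here is a minimal sketch of what “use a strategy pattern for the three order kinds” could produce. The class names, methods, and messages are all hypothetical, just one plausible shape of the result:

```python
from abc import ABC, abstractmethod


class OrderStrategy(ABC):
    """One strategy per order kind (hypothetical interface)."""

    @abstractmethod
    def process(self, order_id: str) -> str: ...


class PreOrderStrategy(OrderStrategy):
    def process(self, order_id: str) -> str:
        return f"pre-order {order_id}: reserve stock for release date"


class RegularOrderStrategy(OrderStrategy):
    def process(self, order_id: str) -> str:
        return f"regular {order_id}: charge and ship immediately"


class SubscriptionStrategy(OrderStrategy):
    def process(self, order_id: str) -> str:
        return f"subscription {order_id}: schedule recurring charge"


# The caller picks a strategy by order kind instead of branching with if/else.
STRATEGIES: dict[str, OrderStrategy] = {
    "pre-order": PreOrderStrategy(),
    "regular": RegularOrderStrategy(),
    "subscription": SubscriptionStrategy(),
}


def process_order(kind: str, order_id: str) -> str:
    return STRATEGIES[kind].process(order_id)
```

With a level 1 prompt (“you MUST use the strategy pattern”), the AI has no room to pick a different design; with a level 2 prompt, this would be just one of the options it compares.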
Engineering good prompts with RFC 2119
I took this philosophy from Agent SOPs.
Technical documentation uses very specific keywords to remove ambiguity from complex systems. By using the RFC 2119 keywords in your prompts, you create a stricter contract with the machine, making clear exactly what is required versus what is simply a preference for the final output.
The terms MUST and REQUIRED carry the most weight. Use them for absolute requirements like output formats or specific language versions. Writing these words in uppercase helps the model identify them as hard constraints that cannot be ignored under any circumstances.
You can use SHOULD and RECOMMENDED when you have a strong preference but are willing to let the system deviate for a logically valid reason. This signals that the machine can exercise limited technical judgment.
Then you could use MAY and OPTIONAL for features that are nice to have. For best results, you can just omit optional items entirely, as leaving them out keeps the specification clean.
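Putting the three keyword tiers together, a prompt fragment might look like this (the file name and criteria are hypothetical, just to show the shape):

```markdown
You are implementing the order-processing endpoint.

- The response MUST be valid JSON matching the schema in `api/order.schema.json`.
- You MUST NOT add new third-party dependencies.
- You SHOULD follow the error-handling style of the existing handlers;
  deviate only if you explain why in a comment.
- You MAY add logging, but keep it out of the hot path.
```

Notice how each line maps to a level: the MUSTs are level 1 vetoes, the SHOULD is level 2 advice, and the MAY is as close to level 3 as you should ever get.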
Don’t limit yourself to writing those words in uppercase. There are many other ways to convey the 3 levels of information:
Write acceptance criteria and have the AI validate its work against them, to ensure it meets the MUST and REQUIRED points.
Provide examples and templates that already contain the SHOULD and RECOMMENDED points, indicating they are just examples, so the AI doesn’t focus on replicating them perfectly.
Make small prompts, use agents to solve a clear and bounded problem, and don’t include information in prompts or context that the agent doesn’t need to know
At work, I have written instructions for an AI to evaluate whether a postmortem meets the quality criteria of Amazon’s writing.
I didn’t use any of these keywords; I just wrote bullet points that the AI must mark as ✅ or ❌.
Then I found that some of the guidance is subjective, like writing the “executive summary” in 3 paragraphs with certain information in each. It’s perfectly fine to write one in 4, but the AI always marks it as ❌ when there aren’t exactly 3 paragraphs, which makes the prompt less useful.
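One way to fix that subjective criterion, assuming the rest of the checklist stays the same, is to downgrade the paragraph count from a hard rule to a SHOULD:

```markdown
- The executive summary MUST state customer impact, duration, and root cause.
- The executive summary SHOULD be around 3 paragraphs. Do NOT mark ❌ on
  paragraph count alone if all the required information is present.
```

The required content stays a MUST, while the formatting preference becomes advisory, so a well-written 4-paragraph summary no longer fails the check.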
Breaking the “you’re absolutely right” identity of your AI
Most commercial models are fine-tuned to be extremely polite and helpful to the user. This often leads to the AI simply agreeing with your bad ideas just to avoid friction. Many people love this behavior; some even protested when OpenAI replaced GPT-4o with GPT-5. It feels good, but it’s not good in the long term.
To get senior-level output from your tools, you need actionable strategies to break this politeness loop and force critical analysis. You can’t use AI tools as they come out of the box. You need to customize them:
You must give the system custom rules to act strictly as a code reviewer.
Tell it to completely drop conversational fillers, apologies, and flattery from its vocabulary. I tell my AIs to be concise and sharp.
Ask it to output only a technical critique based on your strict level one constraints, not nice-to-have topics. This prevents an endless loop where the AI always finds something new for you to fix.
Stop asking open questions like “What do you think about my code?” Instead, ask it to find vulnerabilities, inefficiencies, and evaluate other rules like readability, best practices for the language, etc.
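The rules above could be combined into a custom instruction block like the following. This is a hypothetical sketch, not a prompt from any specific tool; adapt the criteria to your project:

```markdown
You are a strict code reviewer. Rules:

- MUST NOT use conversational filler, apologies, or flattery. Be concise and sharp.
- MUST only report findings that violate the acceptance criteria:
  security vulnerabilities, unhandled errors, inefficient hot paths,
  and readability or language best-practice violations.
- MUST NOT suggest nice-to-have refactors outside those criteria.
- Output format: one bullet per finding, with file and line.
```

The key property is that the criteria are closed: once the code satisfies them, the reviewer has nothing left to say, instead of inventing new feedback forever.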
You also need to understand the fundamental difference between the tools you are using.
A standard chatbot is designed for conversation and needs heavy prompting to stay critical and objective.
An automated agent can be configured with a validation loop where it automatically tests its own code before replying to you.
Choosing the right tool for the job is a critical organizational skill for any modern engineer.
Conclusion
As you get tenure and seniority, productivity is less and less about typing faster or memorizing syntax. It becomes about communicating with absolute precision and organizing your thoughts into clear directives. It’s about making an impact through others.
Well, it turns out we are all senior engineers now, giving work to a team of AI agents. Most of us have not written a single line of code since December 2025, and if you did, I bet it was to fix something on top of an AI output.
When you are clear about which level of communication you’re using, you eliminate the guesswork from AI and from other people.
Make a clear disclaimer when you’re just chatting and don’t want any AI or engineer to take action on your words. Likewise, state explicitly when something is a must-have, and that if it’s not met, the work isn’t acceptable.
Don’t communicate your raw, unfiltered thoughts; they confuse both AI and people.
Don’t demand must-haves when you’re not sure it’s the right direction.
Be clear about where you’re coming from.
If you found value in this post:
❤️ Click the heart to help others find it.
✉️ Subscribe to get the next one in your inbox.
💬 Leave a comment with your biggest takeaway
Today’s article will allow you to do your work faster with AI, moving from phase 1 to phase 2. I’m building this system below for paid subscribers. Thanks for your continued support!
🗞️ Other articles people like
👏 Weekly applause
Here are some articles I enjoyed from the past week
The Evolution of Software Engineering by Gregor Ojstersek and me. Productivity is no longer limited by computing power but by human judgment, requiring a shift from writing code to managing agents.
The Software Development Lifecycle Is Dead by Boris. AI agents have collapsed traditional phases into a single fluid process of iteration where the quality of your context is the only true constraint.
What I Learned from the Software Engineering at Google Book by Dr Milan Milanović. A good book review. Software engineering is about maintaining a system for decades, and that’s exactly what everyone misses when thinking about vibe-coding.
Hungry Minds by Alexandre Zajac. This is my go-to source to find good articles to read!
This may interest you:
Are you in doubt whether the paid version of the newsletter is for you? Discover the benefits here
Could you take one minute to answer a quick, anonymous survey to make me improve this newsletter? Take the survey here
Are you a brand looking to advertise to engaged engineers and leaders? Reach out here
Give a like ❤️ to this post if you found it useful, and share it with a friend to get referral rewards