Teaching LLMs to Code Review Like Senior Developers: A Context-First Approach

The day I taught AI to read code like a Senior Developer

What the article covers

We’d been teaching AI to read code like a fresh bootcamp grad, not a senior developer.

The magic isn’t in fancy ML or bigger models. It’s in mirroring how senior devs think:

  • Context First: We front-load system understanding before diving into code
  • Pattern Matching: Group similar files to spot repeated approaches
  • Impact Analysis: Consider changes in relation to the whole system
  • Historical Understanding: Track why code evolved certain ways

My Thoughts

Overall takeaway

When chatting with people working with LLMs, it’s often apparent that they are using small or even tiny prompts.

These same people complain that the model just doesn’t understand, that it’s not very clever, or that the iteration process is making them question whether it’s worth continuing to invest in learning how to use this new technology.

This type of interaction, where you provide zero examples to the LLM, is called Zero-Shot Prompting.

If you provide one example of the task, it's called Single-Shot Prompting.

If you provide several examples to the LLM, then it's called Few-Shot Prompting.

And this is where most people stop.

What’s been fascinating to me is that very few people go ahead and create lots of examples.

I’ve seen exceedingly few people create more than 5, let alone 10, examples to put into their prompt.
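To make the distinction concrete, here is a minimal sketch in Python of what a few-shot code-review prompt might look like as a plain string. The example diffs and reviews are hypothetical placeholders I made up, not examples from the article.

```python
# Minimal sketch of a few-shot code-review prompt. The worked examples below are
# hypothetical; in practice you would curate many more, drawn from real reviews.

EXAMPLES = [
    {
        "diff": 'def get_user(user_id):\n    return db.query(f"SELECT * FROM users WHERE id={user_id}")',
        "review": "The SQL is built by string interpolation; use a parameterized query to avoid injection.",
    },
    {
        "diff": "async def handler(request):\n    config = open('config.json').read()",
        "review": "Synchronous file I/O inside an async handler blocks the event loop; load the config at startup instead.",
    },
]

def build_few_shot_prompt(new_diff: str) -> str:
    """Show the model several worked examples before presenting the real task."""
    parts = ["You are a senior developer reviewing a pull request."]
    for example in EXAMPLES:
        parts.append(f"Diff:\n{example['diff']}\nReview:\n{example['review']}")
    parts.append(f"Diff:\n{new_diff}\nReview:")
    return "\n\n".join(parts)
```

With only the first worked example this would be single-shot prompting; with none, zero-shot.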

In this article, the way Namanyay Goel found to work better with the coding assistant was to provide “context-aware grouping”:

Instead of dumping files linearly, we built a context-aware grouping system

By grouping the files, you are not only telling the LLM that those files should be looked at and compared together, but also providing it with many examples.
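The article doesn’t show the grouping code itself, so the following is only a rough sketch of the idea, using the top-level directory as a stand-in for “feature”; a real system would use richer signals such as imports, ownership, or change history.

```python
from collections import defaultdict
from pathlib import Path

def group_files_by_feature(paths: list[str]) -> dict[str, list[str]]:
    """Naive context-aware grouping: bucket files by their top-level directory."""
    groups: dict[str, list[str]] = defaultdict(list)
    for path in paths:
        parts = Path(path).parts
        feature = parts[0] if len(parts) > 1 else "root"
        groups[feature].append(path)
    return dict(groups)

changed_files = [
    "auth/session.py", "auth/tokens.py", "auth/middleware.py",
    "billing/invoice.py", "billing/stripe_client.py",
]

for feature, members in group_files_by_feature(changed_files).items():
    # Each group becomes one prompt: related files are reviewed together, not in isolation.
    print(feature, "->", members)
```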

This article serves as a great reminder that, even though LLMs have seen massive amounts of data, it’s helpful to overcommunicate: what you are trying to do, what you expect the model to do, and how the material you’re sharing connects to that goal.

General Prompt Patterns

The article points to several effective LLM prompting patterns:

  1. Context Before Details

    • Traditional approach: “Here’s a file. Analyze it.”
    • Improved approach: “This is part of the auth system; here are the related components.” (see the sketch after this list)
  2. Relationship Mapping

    • Traditional approach: files analyzed in isolation
    • Improved approach: files grouped by feature, with their interconnections mapped
  3. Historical Context

    • Traditional approach: current state only
    • Improved approach: include relevant PR history and how the system has evolved
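As a rough illustration of the first pattern (and of front-loading context generally), here are two hypothetical prompt templates; the wording, placeholder names, and example values are mine, not taken from the article.

```python
# Hypothetical prompt templates contrasting the two approaches.

TRADITIONAL = "Here's a file. Analyze it.\n\n{file_contents}"

CONTEXT_FIRST = """System context:
This file is part of the authentication system; it issues and validates session tokens.

Related components:
{related_files}

Relevant history:
{pr_summaries}

With that context in mind, review the following file:

{file_contents}
"""

prompt = CONTEXT_FIRST.format(
    related_files="- auth/session.py\n- auth/middleware.py",
    pr_summaries="- an earlier PR moved token validation out of the request handler (made-up example)",
    file_contents="def validate_token(token): ...",
)
print(prompt)
```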

These prompting patterns mirror how senior developers generally approach codebases.

Senior developers build a mental model first, then dive into specifics.

These patterns point to a broader strategic insight: LLMs perform best when given clear system context upfront.

Strategic Implications

For technical leaders, the shift towards expanded-context-first prompting has significant implications.

This context-first approach offers several advantages:

  1. Better Analysis Quality: More comprehensive understanding leads to better insights
  2. Reduced Review Time: Context-aware grouping helps focus on what matters
  3. Knowledge Transfer: The system can capture and share senior dev insights
  4. Technical Debt Prevention: Early warning system for architectural issues
  5. Team Alignment: Helps maintain consistent patterns across the codebase

To realize these advantages, teams need a structured implementation approach.

Implementation Framework for AI Teams

Teams implementing this approach should:

  1. Start with System Mapping

    • Document core features and components
    • Identify key integration points
    • Map common patterns
  2. Build Context Layers

    • Group related files
    • Document architectural decisions
    • Track historical changes
  3. Design Smart Prompts

    • Front-load system context
    • Include related components
    • Reference historical decisions
  4. Implement Feedback Loops

    • Track successful insights
    • Document false positives
    • Refine grouping strategies

Note that this framework scales.

Start small with one component and then expand as your team builds confidence with the approach.
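As a minimal sketch of how those context layers might feed a prompt, here is one possible shape, assuming a simple dataclass per feature; the schema and field names are my assumptions, since the article doesn’t prescribe one.

```python
from dataclasses import dataclass, field

@dataclass
class ContextLayer:
    """One 'context layer': a feature, its files, and the history behind them."""
    feature: str
    files: list[str]
    architectural_notes: str = ""
    recent_changes: list[str] = field(default_factory=list)

def build_review_prompt(layer: ContextLayer, diff: str) -> str:
    """Front-load system context, then related components and history, then the diff."""
    return "\n\n".join([
        f"Feature under review: {layer.feature}",
        f"Architectural notes: {layer.architectural_notes}",
        "Related files:\n" + "\n".join(f"- {f}" for f in layer.files),
        "Recent changes:\n" + "\n".join(f"- {c}" for c in layer.recent_changes),
        "Review this diff:\n" + diff,
    ])

payments_layer = ContextLayer(
    feature="payments",
    files=["billing/invoice.py", "billing/stripe_client.py"],
    architectural_notes="All charges go through stripe_client; invoices are immutable once issued.",
    recent_changes=["retry logic added to stripe_client for transient errors (hypothetical)"],
)
print(build_review_prompt(payments_layer, "def refund(invoice_id): ..."))
```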

Key Takeaways for AI Agent Development

The core insight from this experiment extends far beyond code review.

Effective AI assistance is as much about better context delivery as it is about the model itself.

Just as senior developers excel through system understanding rather than raw code knowledge, AI agents need comprehensive context to provide valuable insights.

This pattern likely extends beyond code review to other AI agent applications as well.

The key to getting the most out of these agents is teaching them to think like experts by providing rich, interconnected context rather than isolated inputs.

As we move towards even more sophisticated AI applications, the principle of rich context delivery will likely become a key differentiator between average and terrific AI implementations.