AI-assisted Java modernization: All the heuristics

This page was initially generated by Claude Code, then reviewed and corrected by me.

Over the course of my AI-assisted Java modernization series, I’ve documented practical heuristics for working effectively with AI coding assistants on legacy modernization projects. This post consolidates all those heuristics in one place as a quick reference guide.

These heuristics emerged from an exercise of porting a legacy JEE 6 application to Spring Boot using Claude Code. They’re not rigid rules; they’re patterns that proved useful in practice. Try them, find what works for you, and adapt them to your own workflows and style.


Planning & Strategy

These heuristics help you set up your project for success before diving into code.

πŸ“‹ The Plan-Before-You-Code Heuristic

Before attempting anything complicated, ask the AI to come up with one or more plans. Ask the AI to ask clarifying questions!

Put the AI in plan mode and let it explore the problem space before jumping into implementation. Modern AI tools will often ask clarifying questions spontaneously. Some tools do not have a plan mode; in that case, just say “Let’s plan now, do not code yet, think first!”

Example: Instead of “upgrade to Jakarta EE 11,” ask “Let’s think of a plan to bring this up to the latest version.” The AI will ask about target versions, packaging changes, UI migration approach, etc.

Why it matters: Planning reduces risk, explores trade-offs, and ensures you’re making informed decisions rather than rushing into potentially problematic implementations.

See it in action in Part V

The Ask For Options Heuristic

Start new tasks in plan mode; ask for options to get the model to explore multiple approaches.

Don’t rush to code. Put the AI in plan mode and ask “what are our options?” This triggers deeper reasoning about trade-offs and alternatives.

Example: “We want to port the product page to Spring Boot. What are our options?” makes the AI explain the situation as it sees it, uncovers any misunderstanding or missing context, and might even teach us something, given that the AI has read vastly more programming books than any human could.

Why it matters: AI’s (or a human’s) first idea isn’t always the best. Exploring the solution space leads to better decisions.

Credit: Inspired by Andrej Karpathy’s advice

See it in action in Part IV

πŸ’Ž The Value First Heuristic

Plan modernization projects so that the most valuable parts are ported first.

Don’t follow the “logical” order (infrastructure → foundation → features). Start with the most valuable user journey. Try to apply any desired enhancements while you modernize, since you’re reworking the code anyway.

Example: In an e-commerce app, start with the purchase flow, not the user registration flow. The purchase flow is where the business value is.

Why it matters: Delivers business value early, enables early feedback, and maintains stakeholder engagement throughout the project.

See it in action in Part II

🏈 The Team Sport Heuristic

Legacy modernization is a team sport. Involve the people who normally work with this system.

Don’t rely solely on AI and code analysis. Talk to the humans who understand the business context, the hidden dependencies, and the “why” behind the code.

Example: When analyzing user journeys, the AI can trace code flows, but humans know which features customers actually care about and which are legacy cruft.

Why it matters: Domain knowledge and organizational context aren’t in the codebase. People provide essential context that makes modernization successful.

See it in action in Part II


Development

These heuristics help while coding with AI.

πŸƒ The Run-Locally Heuristic

The first thing to do when starting work on a legacy codebase is to see if you can compile it. The second is to see if you can run it locally.

Running locally makes the development loop much faster for both humans and AI, as opposed to having to deploy to the cloud before you can see the app running.

Example: When working with a Kubernetes-focused legacy app, create a docker-compose.yml file first so you can test changes locally rather than deploying to a cluster each time.
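
What such a docker-compose.yml could look like, as a minimal sketch (service names, ports, and the Postgres database are illustrative assumptions, not details from the series):

```yaml
services:
  app:
    build: .                 # build the application image from the local Dockerfile
    ports:
      - "8080:8080"          # expose the app on localhost:8080
    environment:
      SPRING_DATASOURCE_URL: jdbc:postgresql://db:5432/shop
    depends_on:
      - db
  db:
    image: postgres:16       # hypothetical: whatever engine the app expects
    environment:
      POSTGRES_DB: shop
      POSTGRES_PASSWORD: secret
    ports:
      - "5432:5432"          # optional: lets you inspect the DB from the host
```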

Why it matters: Fast feedback loops are essential for effective development. Local execution enables rapid iteration and debugging without cloud deployment overhead.

See it in action in Part V

🎯 The Goal Heuristic

Give the AI a goal and let it iterate towards that goal.

Instead of giving step-by-step instructions, state the desired outcome and let the AI figure out how to get there. When the first attempt doesn’t work, the AI will debug and iterate until it succeeds. The AI must have a way to get fast feedback on the success or failure of its actions.

Examples: Say “try to build the app” and let it iterate until the build is clean. Say “test the app with Puppeteer” and let it handle any errors it finds.

Why it matters: The AI has lots of energy and enthusiasm; it really wants to achieve the goal we give it. Letting it work autonomously towards a goal is faster than micromanaging each step. Plus, you never know: the AI might find an approach we wouldn’t have thought of if we were micromanaging.

Credit: Federico Feroldi

See it in action in Part I

πŸ”„ The Iteration Heuristic

If you find yourself accepting the AI output without question, you’re losing control.

Don’t just accept the first thing the AI produces. Analyze it critically, ask yourself how it could be improved, then ask the AI to iterate. The most value from AI comes through iteration.

Example: AI writes a repository test using @SpringBootTest. You recognize this will be slow. Ask it to use a lighter-weight approach with manual DataSource configuration.
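
A minimal sketch of that lighter-weight test, assuming a repository built on JdbcTemplate (ProductRepository and the SQL scripts are hypothetical names, not from the series):

```java
import javax.sql.DataSource;

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseBuilder;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType;

import static org.junit.jupiter.api.Assertions.assertEquals;

// No @SpringBootTest: no application context is started, so the test
// runs in milliseconds instead of seconds.
class ProductRepositoryTest {

    private ProductRepository repository; // hypothetical repository under test

    @BeforeEach
    void setUp() {
        // Manual DataSource configuration: an embedded H2 database
        DataSource dataSource = new EmbeddedDatabaseBuilder()
                .setType(EmbeddedDatabaseType.H2)
                .addScript("schema.sql")     // hypothetical DDL script
                .addScript("test-data.sql")  // hypothetical seed data
                .build();
        repository = new ProductRepository(new JdbcTemplate(dataSource));
    }

    @Test
    void findsAllSeededProducts() {
        // Expecting the three rows inserted by test-data.sql
        assertEquals(3, repository.findAll().size());
    }
}
```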

Why it matters: AI produces “average” code based on its training data. Your expertise combined with AI’s implementation speed produces better results than either alone.

Credit: Uberto Barbini

See it in action in Part II and Part IV

πŸ›‘ The Break the Loop Heuristic

Keep an eye on what the AI is doing and stop it if it’s getting lost.

AI can get stuck in loops, trying the same failed approach repeatedly, or pursuing the wrong path. Watch its progress and intervene when you see it losing direction.

Example: AI repeatedly tries to access the wrong URL for the home page, then attempts to rebuild the application. Stop it and provide the correct URL directly.

Why it matters: Context and tokens are precious. Letting the AI thrash wastes both and degrades its performance. Human intervention saves time.

See it in action in Part II

Let the AI Do the Testing Heuristic

When the AI claims it’s done with a task, let it verify using tools or MCP servers.

Don’t manually test what the AI built. Instead, have the AI use browser automation (Puppeteer), run tests, or otherwise verify its own work programmatically.

Example: After porting a product page, the AI uses Puppeteer MCP to navigate to the page, take screenshots, and verify the functionality works as expected.

Why it matters: AI can test faster than humans, catches obvious bugs immediately, and the testing becomes part of the documented workflow.

See it in action in Part IV


Project Hygiene

These heuristics keep your project clean and manageable.

Get The AI To Program Itself Heuristic

Don’t write documentation files directly; tell the AI the effect you want to achieve and let it work for you.

When you need to update CLAUDE.md, README, or other documentation, describe what you want to document rather than editing directly. The AI will likely produce more comprehensive and better-organized documentation.

Example: Instead of editing CLAUDE.md yourself, say “update CLAUDE.md with clear instructions to use make restart when testing the application.”

Why it matters: (1) It results in more effective documentation, (2) it’s less work for you, and (3) it builds your skills in delegating to AI agents.

See it in action in Part IV

πŸ’Ύ The One-Prompt-One-Commit Heuristic

After every successful prompt, commit to version control.

Each time the AI completes a task successfully, commit the changes. This creates a safety net and makes it easy to roll back if the next attempt goes wrong.

Example: AI successfully fixes the Maven build configuration. Commit. AI then updates Docker configuration. Commit. AI fixes database connectivity. Commit.
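
The rhythm is simple enough to sketch as shell commands (the commit messages are illustrative):

```bash
# After "fix the Maven build" succeeds:
git add -A && git commit -m "Fix Maven build configuration"

# After "update the Docker configuration" succeeds:
git add -A && git commit -m "Update Docker configuration"

# If the next prompt goes sideways, discard the AI's changes
# and return to the last good checkpoint:
git reset --hard HEAD
```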

Why it matters: Creates checkpoints you can return to, makes debugging easier, documents the incremental progress, and prevents losing working states.

Credit: Uberto Barbini

See it in action in Part I

πŸ“Š The Manage Context Heuristic

Be aware at all times of how much free context you have. Try to avoid getting close to the limit.

Regularly check context window usage with /context. When context gets tight (>80%), either compact the context or start fresh. Don’t let it fill completely as this degrades AI performance.

Example: Before starting a potentially lengthy debugging session, check /context. If at 82%, clear or compact first rather than starting new work.

Why it matters: Full context windows degrade AI reasoning, cause it to “forget” earlier context, and lead to mistakes. Managing context proactively maintains quality.

See it in action in Part I and Part IV

πŸ”§ The Makefile Heuristic

Provide a Makefile (or equivalent tool) that makes it easy for humans and AI to execute common development tasks.

Wrap the commands for building, testing, starting, and restarting the application in a place that’s well documented and easy to run. Makefiles are very good for this; other viable alternatives exist, e.g. npm scripts. This prevents the AI from guessing or using ineffective commands. Also, when you wrap the commands you can add your own documentation and follow-up actions, which helps the AI know what to do.

Example: Create a Makefile with make help that documents all the other commands, e.g. make restart, which runs mvn clean package && docker-compose down && docker-compose up -d --build and then prints “open the application at http://localhost:8080”.
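
A minimal sketch of such a Makefile (target names, the logs target, and the port are illustrative assumptions):

```makefile
# Note: recipe lines must be indented with a real tab character.
.PHONY: help build restart logs

help:           ## List the available targets with their descriptions
	@grep -E '^[a-z-]+:.*##' $(MAKEFILE_LIST) | sed -e 's/:.*##/ -/'

build:          ## Compile and package the application
	mvn clean package

restart: build  ## Rebuild the images and restart all containers
	docker-compose down
	docker-compose up -d --build
	@echo "Open the application at http://localhost:8080"

logs:           ## Tail the application logs (useful for the AI when debugging)
	docker-compose logs -f app
```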

Why it matters: Reduces ambiguity, prevents wasted time on wrong commands, documents the canonical way to do things, and benefits both humans and AI.

Credit: Armin Ronacher and Shrivu Shankar

See it in action in Part III

πŸ“ The Keep CLAUDE.md Up To Date Heuristic

Periodically ensure that the AI documentation is still accurate. Let the AI do it! But keep an eye on it.

After significant changes to your project, ask the AI to update CLAUDE.md to reflect the current state. Claude tends to leave duplicate information or write “historic” information about “how we got here.” With a fresh context, ask Claude to review CLAUDE.md and remove duplications and inconsistencies.

Example: After completing a major migration, ask “please update CLAUDE.md to reflect the current state of the app” and then later “review CLAUDE.md and remove duplications and inconsistencies.”

Why it matters: Stale documentation misleads both humans and AI in future sessions. Keeping CLAUDE.md trim and accurate ensures the AI works more effectively when it starts new work.

See it in action in Part V


The Series

Want to see these heuristics in action? Read the full series:

Want to leave a comment? Please do so on LinkedIn!

#AI   #Modernization