Maintainable Code Is a Question of Responsibility, Not of AI

"The code isn’t maintainable."

"In the end, it takes longer to fix the AI’s bugs than if you had written it yourself."

I’ve been hearing these comments a lot lately, both within my own team and in conversations with other agencies. And I understand where this skepticism comes from. Developers are proud of their craft. They’ve learned to work cleanly, follow standards, and avoid tech debt. When a language model spits out code that technically works but feels sloppy or contains risks, it feels like confirmation: AI can’t do good engineering.

For me, though, "AI code isn’t maintainable" is not a technical statement. It’s a statement about responsibility.

Why this discussion becomes emotional so quickly

Programming is more than typing. It’s thinking, weighing options, experience. When a tool suddenly produces in seconds what used to take an hour, it understandably strikes at one’s sense of identity.

The most common questions I encounter are therefore very down-to-earth: "Will someone still understand this later?" "Who is responsible?" "Will this still be maintainable in two years?" These are the right questions. The wrong conclusion, however, would be to reject AI altogether because of them.

Because, honestly, unmaintainable code is not an exclusive AI product. We wrote plenty of it even without AI: under time pressure, just before release, with a quick fix that later lives on as a legend in the codebase.

The real problem is not AI, but a lack of responsibility

AI amplifies what is already there. Good standards become more visible, bad habits too. Anyone who treats AI as an autopilot produces chaos. Anyone who uses AI as pair programming can gain quality. The decisive difference lies not in the model, but in the mindset:

AI can generate code. But I am responsible for the software.

This sentence sounds simple, but it is central. It determines whether AI creates new forms of tech debt, or whether AI helps reduce existing ones.

Valid arguments from the skeptics

Skepticism within the team is not unfounded, especially when AI is used unreflectively:

  • Hallucinations and missing context can lead to false assumptions and broken architecture.
  • Blind copy-paste creates a patchwork with no consistent style.
  • Security risks arise when generated solutions are not critically reviewed.

These objections are not resistance to progress. They are a quality filter. As a CEO, it is not my job to ignore skepticism, but to integrate it in a way that enables better software.

Right now, it’s easy for a feeling to arise that everyone else is already further ahead. The tools are evolving rapidly, success stories are everywhere. That creates pressure, and under pressure you rarely make good architectural decisions.

Progress is not a race. There is no benefit in being further ahead if no one understands what is happening in your own system anymore.

AI is a tool that needs to be shaped with clear standards, just as other processes have been in the past.

When AI provides meaningful support for us

I don’t see AI as a replacement for thinking, but as an amplifier for good engineering. It is particularly helpful:

  • for routine tasks that eat up time
  • when refactoring small units
  • when creating and extending tests
  • when understanding unfamiliar or organically grown codebases

Sometimes it produces nonsense or hallucinates. That’s exactly why we treat it like a very fast junior developer: helpful, but not infallible. And always subject to review.

Leadership instead of a question of belief

I would like us to discuss AI not as a question of identity, but as a question of professionalism: how do we deliver better software?

Ignoring AI is not a neutral decision. It will not disappear. It will then simply be used without standards, perhaps secretly, and that’s when things get chaotic.

AI-generated code is not automatically unmaintainable. Unclear distribution of responsibility is.

AI does not replace us. But developers who take responsibility and use AI consciously will shape the future of our profession.

Anja Schirwinski
  • CEO

Co-founder and CEO of undpaul. Project manager, account manager, front-end developer, certified Acquia developer, and author of Drupal 8 Configuration Management (Packt Publishing).