AI-Generated Code: Say Hello to Legacy 2.0

Jerzy Zawadzki - Chief Technology Officer
5-minute read

AI is reshaping software development - not in the future, but right now.

Vibe coding and AI-generated applications are shaping how entire systems are built, often with minimal human input.

And that’s not necessarily a bad thing. I’ve been working with AI assistance for almost 3 years now. I use AI to generate a good chunk of my code. A proper AI assistant can produce working, solid-looking code.

But is such an app truly ours, and is it maintainable? What are the risks when it comes to debugging, scaling, or extending AI-generated software? Even if there’s no technical debt, AI-generated code can still be considered legacy.
This post explores why that is - and what it means for tech leads and CTOs navigating the future of software development.

Table of Contents

  1. Vibe Coding: A New Paradigm or Just a Trend?

  2. AI-Generated Applications and the New Legacy Risk

  3. Scaling and Maintaining AI-Built Software

  4. Debugging AI-Generated Code: Who Takes the Blame?

  5. What Tech Leads and CTOs Should Ask Before Adopting AI in Development

  6. The Future of Coding with AI: Assist, Not Replace

  7. Takeaway for Tech Leaders

Vibe Coding: A New Paradigm or Just a Trend?

A new wave of coding is here - and it doesn’t necessarily start in an IDE. The concept of vibe coding emerged from the growing trend of developers using AI to generate nearly entire applications guided by instincts and natural language prompts.

Industry Snapshot

In a discussion posted on YouTube, Y Combinator managing partner Jared Friedman shared insights on the growing dominance of AI-generated code:

“(...) Every one of these people is highly technical, completely capable of building their own products from scratch. A year ago, they would have built their product from scratch - but now 95% of it is built by an AI,” he said.

Friedman, along with YC CEO Garry Tan, managing partner Harj Taggar, and general partner Diana Hu, emphasized that this shift isn’t about non-technical founders relying on AI - it’s about experienced developers embracing a new way to build.

Expert Insight 

Andrej Karpathy, former head of AI at Tesla, coined the term vibe coding to describe a new style of software creation:  

“There's a new kind of coding I call ‘vibe coding,’ where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.”

“I'm building a project or webapp, but it's not really coding - I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.”

This raw, humorous take perfectly captures the essence of vibe coding: fast, intuitive, low-friction - but also fragile, throwaway, and disconnected from deeper understanding. 

Source: X, Andrej Karpathy, @karpathy

Two years ago, these tools were mostly useful for suggesting the next line of code - more of an autocomplete than a true assistant. But that’s changing rapidly. I’m fully in favor of embracing code-generating tools as broadly as possible. Automation has transformed countless industries over the past centuries; there’s no reason software development should be any different now that we have the means.

That said, the key is to use these tools consciously. You have to understand their limitations, the risks involved, and the trade-offs you're making. Blind reliance can lead to fragile systems, shallow understanding, and a false sense of progress. Used thoughtfully, though, they’re incredibly powerful - they can boost productivity, unlock creativity, and let developers focus on higher-level design rather than boilerplate code.

AI is great at producing correct code, but that doesn’t always mean it’s the right code. It can confidently generate implementations that work but aren’t what your business or product needs. That’s why the human-in-the-loop principle is so essential. The human using AI needs to steer, validate, and make intentional choices.

As in Top Gun: “It’s not the plane, it’s the pilot.” Using AI responsibly means staying in control of the direction - not just letting the autopilot fly. But even when the generated application works and ships fast, there’s a price to pay later.

AI Can Write Your Code - but Can Your Team Own It?

AI-assisted development comes with real benefits - but also with long-term responsibilities. If you’re wondering how to adopt AI tools without compromising code quality, maintainability, or team morale…

AI-Generated Applications and the New Legacy Risk

Just because AI-generated code runs today doesn’t mean it’s fully reliable or maintainable. If no one understands it, it becomes a liability the moment it goes live. 

One of the most common assumptions about AI-generated applications is that they save time. And in the short term, they often do. Sometimes, this time advantage can be genuinely helpful, especially for new businesses - these days, we’re able to ship an MVP in days, not months like just a few years ago.

But in the end, there may be no one who truly understands how the code works. No team behind it. Just hundreds of thousands of lines, generated in hours or days.

Even the person who prompted the AI may not be able to explain how it all fits together - or fix it when something breaks. 

On the surface, AI-generated applications may seem solid - they run, deliver output, and even look well-structured. But underneath, they may lack architectural maturity and cohesion. The problem isn’t readability but whether the architecture makes sense for the product’s evolution. AI can propose an entire structure, but without insight into plans or context, it may miss critical points of flexibility. And when a system isn’t designed with change in mind, it risks becoming legacy from day one.

In traditional systems, we rarely start with legacy. It’s usually a slow process. Sometimes, it’s the result of poor architectural choices early on, but more often, it comes from years of evolving code - decisions layered on top of decisions.

Eventually, the system becomes hard to change, fragile, and slow. Few people understand how things really work. You start to see performance issues, fear of refactoring, and lack of flexibility. Morale drops. The team burns out or gets replaced. And the product slows down.

Now, let’s apply that thinking to what we’re seeing with AI. We can generate hundreds of thousands of lines of code in one go. And even we - the ones using AI - won’t immediately know what’s really in there. There’s no way to trace or justify every decision in a codebase that big, that fast. The same problems may show up because of it. And debugging them might be even harder because no one knows the code.

In both cases - classic legacy systems and AI-generated ones - the issue isn’t about technology. It’s about people. Programming is a human thing. This means soft skills, communication, and cognitive limits matter. A lot.

Just as importantly, we need to reflect on whether it’s possible to build “non-legacy” systems with AI - and what rules or practices would need to be in place to do that.

The key point: The code we get isn’t free. Once we run it, it becomes our liability. 

Scaling and Maintaining AI-Built Software

An MVP built by AI might impress investors. It might even get you to market faster. But what happens six months later - when the feature set doubles, user load triples, and you start hiring developers who didn’t generate the original code?

That’s where the real test begins. Someone will have to maintain it, extend it, debug it.

Maintaining AI-generated software requires much more than knowing how to prompt a model. It requires a deep understanding of system architecture, clean code principles, and the ability to work with complexity that wasn’t created by human logic but by statistical pattern matching. 

AI-generated code might compile, and often, it even looks clean. But what it frequently lacks is context. Why was it written this way? What’s the business need behind this function? What’s likely to change? Without those anchors, even the best-looking code can be hard to reason about or extend. 

In my view, maintainability means being prepared for change. And the ability to prepare for change is what separates a regular developer from a senior one. A regular dev can write working code. A senior knows why it’s written a certain way.

We can’t predict every future change. Trying to do that would result in massive, bloated, unmaintainable code - most of it likely never used. The goal isn’t to handle everything up front but to think deliberately about where to place the “joints” in the system - the moving parts that might change. It’s about segmenting code into closed, modular areas and reducing unnecessary dependencies - all based on real business context. And this takes time.
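To make the “joints” idea concrete, here’s a minimal sketch. The names (`PaymentProvider`, `StripeProvider`, `checkout`) are invented for illustration: the part likely to change - the payment provider - sits behind a small interface, so the rest of the system never depends on the concrete detail directly.

```python
from typing import Protocol

class PaymentProvider(Protocol):
    """The 'joint': one small interface where a likely change is isolated."""
    def charge(self, amount_cents: int) -> bool: ...

class StripeProvider:
    """Concrete detail behind the joint; swappable without touching callers."""
    def charge(self, amount_cents: int) -> bool:
        # a real implementation would call the provider's API here
        return amount_cents > 0

def checkout(cart_total_cents: int, provider: PaymentProvider) -> str:
    # business logic depends only on the joint, never on a specific provider
    return "paid" if provider.charge(cart_total_cents) else "failed"
```

Swapping providers later means writing one new class, not rewriting every caller - exactly the kind of flexibility a prompt-generated codebase rarely places deliberately.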

When we develop products using traditional practices (even if some code is generated), this process happens more naturally - step by step, with each decision anchored in evolving understanding. But when the whole decision-making process is compressed into a few prompts, it becomes much harder to make intentional, well-informed choices.

There’s also the issue of scaling. As soon as an AI-built product hits real-world traction, the technical decisions made (or skipped!) during early development start mattering. And if no one understands the system deeply, scaling it responsibly becomes risky - or even impossible. The same AI that built the system won’t take responsibility for it. 

Debugging AI-Generated Code: Who Takes the Blame?

Shipping fast is great - until something breaks.

In traditional software projects, debugging usually means diving into code written by you or your team. There’s logic, conventions, version history, and shared understanding. But when the bulk of the code is generated by AI, those anchors disappear.

And when a system crashes or exposes a vulnerability, the question becomes: Who actually understands what went wrong?

AI won’t take responsibility; sooner or later, someone will have to dive into this project just like they would with a system written by a team that disappeared overnight.

While AI-generated code can be efficient, it isn’t always safe. Research and real-world incidents have shown that AI-generated code can:

  • introduce hidden bugs or performance bottlenecks, 

  • include security vulnerabilities (especially in input handling or authentication),

  • make incorrect assumptions about business logic, 

  • or simply behave in unpredictable ways under edge cases.
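As a concrete illustration of the security bullet, here’s a query pattern AI assistants sometimes produce - user input concatenated into SQL - next to the parameterized version a reviewer should insist on. The `users` table and the queries are invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # vulnerable: input is spliced into the SQL string, so crafted
    # input like "x' OR '1'='1" becomes part of the query itself
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name: str):
    # parameterized query: input is bound as data, never parsed as SQL
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()
```

Both functions “work” on happy-path input, which is exactly why this class of bug survives a quick glance at generated code and only surfaces in review or in production.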

From my perspective, it’s a bit like this: a junior developer shouldn’t be pushing code copied from Stack Overflow straight to production on their first day. And in the same way, we probably shouldn’t be mindlessly shipping AI-generated code without review.

Every responsible AI usage guide mentions one key principle: the human in the loop. Programming shouldn’t be an exception.

And while debugging AI-generated code, developers often hit these blockers:

  • difficulty tracing logic flow,

  • no clear mapping between requirements and implementation,

  • inconsistent use of patterns or conventions,

  • and increasingly - working with code written in a language or framework they don’t fully know, making it even harder to assess whether the output is reliable. 

In short, AI can generate code and explain what it does in plain language. But it can’t understand that code for us. It won’t assess whether the solution fits our specific business case or plans. That part still requires us.

What Tech Leads and CTOs Should Ask Before Adopting AI in Development

The benefits of AI coding tools are clear: faster prototyping, reduced boilerplate, and easier experimentation. But when used without guardrails, they can just as easily introduce long-term risk - especially at the organizational level. 

As tech leaders, our responsibility is to build systems our team can own, maintain, and scale.

That’s why, before rolling out AI-generated code across your product stack, it’s worth asking a few questions:

  1. Who will be responsible for the AI-generated code six months from now?

  2. Is there enough documentation, architecture, and test coverage to support long-term maintenance?

  3. Are developers trained to read, assess, and debug code they didn’t write - or even prompt?

  4. Do we have a clear strategy for validating AI-generated output before it reaches production?

  5. Are we tracking where and how AI is being used across our codebase?
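Question 5 is the easiest to start automating. Assuming your team adopts a marker-comment convention for AI-generated sections - the `# ai-generated` tag below is an invented convention, not a standard - even a tiny script can report where AI code lives:

```python
MARKER = "# ai-generated"  # hypothetical team convention, not a standard

def ai_tagged(sources: dict[str, str]) -> list[str]:
    """Given {filename: source text}, return the files containing the marker."""
    return sorted(name for name, text in sources.items() if MARKER in text)
```

In CI this would be fed real file contents (e.g. collected with `pathlib.Path.rglob`) and logged per release, giving a running map of which areas of the codebase are AI-generated.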

At Polcode, we’re exploring how to include AI in our workflows without compromising quality. That means team-wide awareness of risks, training on how to assess AI output, and integrating AI usage into our existing processes like code reviews and CI/CD - not replacing them. 

Because the biggest risk with AI in development isn’t that the code will be wrong but that we’ll assume it’s right without understanding how or why it works.

The takeaway? The faster you move, the more intentional you need to be.

The Future of Coding with AI: Assist, Not Replace

AI in software development is here to stay. And that’s a good thing - used wisely, it can free developers from repetitive tasks, unblock teams, and enable faster iteration. The real challenge is whether we’re ready to own it. 

Because AI doesn’t maintain clean architecture. It doesn’t automatically document decisions. It doesn’t debug at 2 AM. 

That’s still on us. 

As tech leads and CTOs, we need to treat AI like we would any junior engineer: powerful and promising but requiring oversight, context, and mentorship.

The future of coding with AI isn’t hands-off - it’s more hands-on than ever, just in a different way.

We don’t need to reject AI-generated applications, but we do need to lead them.

Takeaway for Tech Leaders

AI-generated applications are here to stay. But before replacing your dev team with prompts, make sure you’re not trading short-term velocity for long-term liability. In software, understanding the code is just as important as shipping it. 

On-demand webinar: Moving Forward From Legacy Systems

Want to end legacy codebase misery and learn how to reignite your old IT system? Watch our on-demand webinar hosted by our CTO - Jerzy Zawadzki.

Watch Recording


Curious How to Use AI in Development - Without Turning Your Codebase Into Legacy?

  1. Tell Us About Your Current Setup: Where are you with AI in your dev process?

  2. Get a Quick Expert Take: We’ll review your approach and flag potential risks.

  3. Learn What Works: See how we combine AI with clean, scalable practices.

  4. Make Your Next Move: From pilots to real projects - we’ll help you lead with confidence.