The internet used to be filled with broken ideas.
That was fine. Expected, even. A broken idea teaches you something. It shows you what the market doesn't want, what users won't tolerate, where your assumptions were wrong. Broken ideas are cheap lessons, wrapped in disappointment, that redirect your energy.
Now? Now the internet is filled with broken AI-built ideas — shipped in five minutes, leaking data by Tuesday, and wasting carbon while they wait for someone to notice they exist.
The dream was democratisation. The reality is dilution at scale.
The problem isn't that AI writes code. It's that it skips the thinking.
AI didn't solve product-market fit. It obliterated the incentive to look for it.
Anyone with a ChatGPT Plus subscription and a free weekend can now ship a "startup". They can spin up a landing page, generate a backend, connect a database, deploy to Vercel, and call themselves a founder by Sunday night. No prior experience required. No architecture review. No security audit. No second thought.
The result? A tsunami of weird, half-baked applications cluttering cloud infrastructure and choking systems that were designed for intentional, considered deployments.
These aren't just bad apps. They're dangerous apps.
Broken authentication. Unstable database schemas that collapse under real traffic. GDPR nightmares that would make a privacy lawyer weep. Customer data leaking into the frontend like it's 2006 all over again — except now it's happening at 100x the velocity because the barrier to shipping has collapsed to zero.
We're not talking about a few edge cases. We're talking about a new category of technical debt: AI-generated legacy code that's legacy before it ever had a user.
Products with no purpose, no security architecture, and no defensible place in the world; all they do is burn through GPU cycles and create legal exposure for founders who didn't know what they were signing up for.
The dream was faster iteration. The reality is faster implosion.
Here's what nobody tells you in the "learn to build with AI" courses:
Legal frameworks weren't built for this velocity.
They weren't designed for a world where someone can spin up a data-processing service in an afternoon without understanding consent management, data residency, or retention policies. They weren't built for founders who don't know the difference between hashing and encryption, or who think "auth" means "I used a library I found on npm."
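To be concrete about the first of those gaps: a hash is one-way and is what passwords need; encryption is reversible and is for data you need to read back. A minimal TypeScript sketch of the difference, assuming the bcrypt npm package and Node's built-in crypto module (illustrative choices, not a prescription):

```typescript
import bcrypt from "bcrypt";
import { createCipheriv, createDecipheriv, randomBytes } from "crypto";

// Hashing: one-way. You can verify a password against the stored hash,
// but you can never recover the original password from it.
async function storePassword(plaintext: string): Promise<string> {
  return bcrypt.hash(plaintext, 12); // cost factor 12: slow on purpose
}

async function checkPassword(plaintext: string, storedHash: string): Promise<boolean> {
  return bcrypt.compare(plaintext, storedHash);
}

// Encryption: two-way. Anyone holding the key can read the data back,
// which is exactly why it is the wrong tool for passwords.
function encrypt(data: string, key: Buffer) {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(data, "utf8"), cipher.final()]);
  return { iv, tag: cipher.getAuthTag(), ciphertext };
}

function decrypt(payload: { iv: Buffer; tag: Buffer; ciphertext: Buffer }, key: Buffer): string {
  const decipher = createDecipheriv("aes-256-gcm", key, payload.iv);
  decipher.setAuthTag(payload.tag);
  return Buffer.concat([decipher.update(payload.ciphertext), decipher.final()]).toString("utf8");
}
```

If a "password hash" can be decrypted, it isn't a hash, and a regulator will treat it exactly the way a security researcher does.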
And here's the kicker: AI doesn't have liability. You do.
When your AI co-pilot suggests you store passwords in plaintext because it misunderstood your prompt, the regulator doesn't fine the model. They fine you. When customer data leaks because your generated API didn't sanitise inputs, the lawsuit names you, not OpenAI.
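That sanitisation failure is rarely exotic, either. It's almost always SQL built by string interpolation, because interpolation "works". A hedged sketch of the difference, assuming the pg client and a hypothetical users table:

```typescript
import { Pool } from "pg";

const pool = new Pool(); // connection settings read from environment variables

// What generated code often does: splice user input straight into the SQL.
// An email of  ' OR '1'='1  turns this into "return every row in the table".
async function findUserUnsafe(email: string) {
  return pool.query(`SELECT * FROM users WHERE email = '${email}'`);
}

// The boring fix: parameterised queries. The driver sends the value
// separately from the SQL text, so it can never be executed as SQL.
async function findUserSafe(email: string) {
  return pool.query("SELECT id, email, name FROM users WHERE email = $1", [email]);
}
```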
Data breaches don't care if your dev co-pilot made the mistake. Neither do your users. Neither does the ICO.
Every broken AI-build creates a trail of cleanup work that someone else has to do — and increasingly, that someone is a senior developer who thought they'd be building new things, not playing janitor to a bot's mistakes.
I've been called in to fix these Frankenproducts more times than I can count now.
Most of them are unfixable. The architecture is so fundamentally flawed, so deeply coupled to bad assumptions, that the only viable path forward is to start from scratch. And when you do, you realise the AI "prototype" didn't save time; it cost time, because now you're unwinding decisions that never should have been made in the first place.
The good developers aren't building anymore. They're cleaning up after the AI gold rush.
But here's the thing: the promise is actually real.
I'm not anti-AI. I'm anti-recklessness.
Used properly, AI tooling genuinely does let you test ideas faster, cheaper, and with less upfront commitment. Tools like Lovable, Cursor, and v0 let you run smart, lean experiments before you write production code. They let you validate demand, test messaging, explore UX patterns, and learn what resonates — all without hiring a dev team or burning six months of runway.
That's genuinely powerful.
But here's what matters: tests are not launches.
A test is something you show to fifty people to see if they care. A launch is something you ship to the world with your reputation attached. They are not the same thing, and they don't have the same requirements.
You still need real security. Real architecture. Real legal cover.
You still need someone who knows the difference between "it works on my machine" and "it works under load with hostile input from the public internet."
AI helps you move fast, but you still need someone to know where you're going. And more importantly, when to stop.
The dirty secret of the AI building boom
Speed is not the same as progress.
Shipping is not the same as succeeding.
And access to tools is not the same as competence.
The most dangerous thing about AI-assisted development isn't that it writes bad code. It's that it writes plausible code. Code that looks right. Code that runs. Code that passes the "does this work?" test but fails every other test that actually matters.
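Here's the shape it usually takes: a simplified, hypothetical Express route of the kind a model will happily generate (all names illustrative). It compiles, it returns JSON, the demo works.

```typescript
import express from "express";
import { Pool } from "pg";

const app = express();
const pool = new Pool();

// Plausible, running, demo-ready. Also: no check that the caller is allowed
// to read this record, and SELECT * serialises every column in the row,
// password hash, reset tokens and all, straight to the browser.
app.get("/api/users/:id", async (req, res) => {
  const { rows } = await pool.query("SELECT * FROM users WHERE id = $1", [req.params.id]);
  res.json(rows[0]);
});

app.listen(3000);
```

It passes "does this work?". It fails "who is allowed to ask?" and "what exactly are we sending back?", which are the tests that matter.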
It gives people just enough capability to be dangerous, and not enough wisdom to know when they're out of their depth.
The result is a generation of "founders" who've never had to learn the hard lessons. Who've never had a security researcher show them how trivial it is to exfiltrate their entire user database. Who've never had to explain to a customer why their personal information is now on a forum. Who've never had to unwind a schema migration that broke production because they didn't understand foreign keys.
These lessons used to be unavoidable. You learned them by breaking things in small, contained ways before you broke them in public, catastrophic ways.
Now? You can skip straight to catastrophic. And you won't even know it until someone sends you a proof-of-concept exploit.
What good looks like
If you're serious about building something real, something that matters, something that lasts, here's what you need:
Use AI to explore. Let it help you sketch ideas, generate options, test concepts. Let it be your sparring partner for the messy early phase when you don't know what you're building yet.
Use humans to decide. Someone needs to know what's actually important. What's a real risk versus what's a theoretical one. What's technically sound versus what's technically "good enough for now but will haunt you later."
Use experts to ship. When it's time to go live — when real people are trusting you with real data — bring in someone who's seen what goes wrong. Someone who knows that "it works" and "it's safe" are not the same sentence.
Clean builds win. Dirty data burns.
And the dirtiest secret of all? The market is starting to notice. Users are getting better at sniffing out the AI slop. Investors are getting tired of "I built this with ChatGPT in a weekend" pitches. The novelty is wearing off, and what's left is the fundamentals: does this solve a real problem? Is it built properly? Can I trust it?
Those questions never went away. They were just drowned out by the noise.
If you're serious about building, build with people who'll tell you the truth.
Not just ship what the model suggests.
Not just celebrate velocity for its own sake.
Not just assume that because it deployed, it's done.
The best builders I know use AI constantly. They also know exactly where it ends and their judgment begins. They know when to trust the output and when to rewrite it from scratch. They know the difference between a prototype and a product.
They don't confuse access with expertise.
And they don't mistake shipping fast for shipping well.
The internet doesn't need more broken AI ideas. It needs fewer, better ones, built by people who understand what they're building and why it matters.
Because here's the uncomfortable truth: democratising the tools didn't democratise the outcomes. It just democratised the damage.