AI & “Vibe Coding,” and the Star Trek Tool We Were Never Ready For

Written by Richard Armstrong
March 27, 2026

 

I’ve been thinking about AI and vibe coding in a way that I can’t shake:

 

“AI and vibe coding is actually like a Star Trek medical suturing tool that got dropped by a time traveler from the future into a world not yet ready for it, and it got into the wrong hands.”

 

My analogy is inspired by Scott Adams, who once joked that he personally couldn’t be trusted with a Star Trek-style medical device that instantly heals wounds, because he would “sneak up behind people and seal their asses shut as a practical joke.”

 

It’s funny. It’s crude. It’s a bit frightening. And it’s uncomfortably accurate.

 

Because that’s exactly what’s happening.

 

We didn’t gradually evolve into this capability. We didn’t build the institutional discipline around it first. We didn’t align education, engineering rigor, or operational safeguards.

 

We just… dropped a probabilistic code generator into the hands of millions of people and said: go wild! And then those who can’t read or write a line of code started trying to build systems.

The Core Misunderstanding: What LLMs Actually Are

Let’s strip away the hype.

 

Large Language Models are not reasoning systems. They are not deterministic engines. They do not “understand” software. They cannot “think.”

 

They are probabilistic sequence predictors trained to produce plausible outputs based on patterns in training data.

 

That distinction matters.

 

Because it means:

  • They optimize for plausibility, not correctness
  • They gravitate toward the happy path
  • They degrade rapidly under ambiguity or edge cases
  • They produce confidently wrong outputs (hallucinations)
  • They have no intrinsic model of runtime behavior, system constraints, or production risk

If you don’t already understand software engineering at a deep level, you have no way to detect when the model is wrong.

 

And it will be wrong.

 

The Productivity Multiplier—In the Right Hands

In the hands of a credentialed software engineer, AI is legitimately powerful.

 

Used correctly, it becomes:

  • A scaffolding accelerator
  • A boilerplate reducer
  • A search and synthesis engine
  • A pair programmer that never gets tired

But here’s the key distinction:

Real developers do not “vibe code.”

They constrain the model.

 

They:

  • Define detailed architecture first
  • Control interfaces and contracts
  • Validate outputs against known invariants
  • Test aggressively
  • Reject large portions of generated code

They treat AI as an untrusted contributor in a tightly governed system.
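That governance can be made concrete. Here is a minimal Python sketch of the idea, with a hypothetical `candidate_sort` standing in for model-generated code: the human writes the contract (invariants and test cases) first, and any untrusted implementation must pass it before it is even considered.

```python
def candidate_sort(xs):
    # Pretend this body came back from an LLM; we treat it as untrusted.
    return sorted(xs)

def passes_contract(impl):
    """Validate an untrusted implementation against known invariants."""
    cases = [[], [1], [3, 1, 2], [5, 5, 1], [-2, 0, -1]]
    for xs in cases:
        out = impl(list(xs))
        # Invariant 1: the output must be ordered.
        if any(a > b for a, b in zip(out, out[1:])):
            return False
        # Invariant 2: the output must be a permutation of the input.
        if sorted(xs) != sorted(out):
            return False
    return True

# Only code that survives the contract is accepted into the system.
assert passes_contract(candidate_sort)
```

The names and cases are illustrative, but the shape is the point: the invariants come from the human, and the generated code has to earn its way past them.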

 

Because they understand the fundamental economics here:
The cost of generating code is now near zero with AI.

 

The resulting AI-generated code is riddled with slop and defects, and every line must be inspected and reviewed for completeness and correctness.

 

The cost of verifying code, however, has not been reduced at all.

 

Receiving thousands of lines of sloppy code that must be reviewed leads to burnout, and defects slip through under the sheer volume of code to inspect.

 

The true cost of 100% AI-generated systems is significant, because inspection and review must scale at inhuman speed to keep pace with generation.

 

And that imbalance is where the real danger lives.
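To make the imbalance concrete, a back-of-envelope sketch (every number below is an illustrative assumption, not a measurement):

```python
# Back-of-envelope illustration of the generation/verification gap.
# All figures are made-up assumptions for the sake of the arithmetic.

lines_generated = 5_000               # an afternoon of prompting
seconds_to_generate_per_line = 0.01   # near-zero marginal cost
seconds_to_review_per_line = 30       # careful human inspection

generate_hours = lines_generated * seconds_to_generate_per_line / 3600
review_hours = lines_generated * seconds_to_review_per_line / 3600

print(f"generation: {generate_hours:.2f} h, review: {review_hours:.1f} h")
# Roughly a week of careful review for about a minute of generation.
```

Swap in your own numbers; as long as review stays human-paced while generation is effectively free, the review side dominates the total cost.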

 

Vibe Coding: The Illusion of Progress

“Vibe coding” flips that discipline upside down.

 

Instead of:

  • Designing systems intentionally
  • Writing controlled, testable code
  • Iterating with feedback

It becomes countless hours of:

  • A Hail Mary prompt → generate → paste → doesn’t compile or run → repeat

The result?

  • Verbose, fragile codebases
  • Hidden coupling and inconsistent patterns
  • Missing error handling
  • Undefined edge cases
  • No coherent architecture

And worst of all:

 

More lines of code than any human realistically can or wants to inspect.

 

That last point is critical.

 

Because it introduces a new failure mode:

 

Inspection fatigue.

 

Developers, especially inexperienced ones, become overwhelmed by the volume of generated code and, exhausted, start trusting it by default.

 

That’s how technical debt explodes, and defects get through.

 

Not slowly. Not incrementally.

 

Exponentially.

 

The Multi-Agent Myth: Scaling the Problem

Some try to fix this by adding more AI:

  • “Let’s have multiple agents review each other.”
  • “Let’s build a swarm.”
  • “Let’s run iterative refinement loops overnight.”

This sounds sophisticated. It’s not.

 

It introduces a known systems problem:

 

Correlated failure modes.

 

All these agents:

  • Share similar training distributions
  • Exhibit similar blind spots
  • Reinforce each other’s assumptions

So what you get is not validation.

 

You get consensus without correctness.

 

After just a few iterations, the system converges. Not toward truth, but toward a self-consistent hallucination.
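The underlying math is simple. A toy probability model (miss rates assumed purely for illustration) shows why adding correlated agents buys almost nothing:

```python
# Toy model: each agent misses a given class of bug with probability p.
# If reviewers were truly independent, adding agents would drive the
# chance that *everyone* misses the bug toward zero. If they share the
# same blind spots (fully correlated), adding agents changes nothing.

p_miss = 0.30    # assumed per-agent miss rate
n_agents = 5

p_all_miss_independent = p_miss ** n_agents   # ~0.24%
p_all_miss_correlated = p_miss                # still 30%

print(p_all_miss_independent, p_all_miss_correlated)
```

Real agents sit somewhere between these extremes, but because they share training distributions, they sit far closer to the correlated end than the swarm-of-independent-reviewers pitch implies.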

 

The Star Trek Tool in the Wrong Hands

Now we come back to that analogy.

In the hands of trained professionals, a futuristic surgical tool saves lives.

 

In the hands of someone untrained?

 

It becomes dangerous. Unpredictable. Misused.

 

That’s exactly what we’re seeing with AI.

 

People who:

  • Cannot read code
  • Cannot reason about control flow
  • Cannot debug runtime failures
  • Cannot evaluate architecture

…are now generating entire systems.

And because the output looks sophisticated, they believe they’re succeeding.

 

Until reality catches up:

  • Systems fail under load or scale
  • Edge cases break everything
  • Security holes appear
  • Maintenance becomes impossible

And the gap between perceived capability and actual capability becomes painfully obvious.

 

This Isn’t About Gatekeeping, It’s About Physics

This isn’t about protecting developers.

 

It’s about respecting the constraints of the system.

 

Software engineering has always required:

  • Determinism
  • Explicit logic
  • Verifiable behavior
  • Long-term maintainability

LLMs do not eliminate those requirements.

 

They amplify the consequences of ignoring them.

 

Where This Actually Goes

AI is not replacing engineers.
It is raising the bar.

 

Because now:

  • You can generate more code than ever
  • You can introduce more defects than ever
  • You can accumulate technical debt faster than ever

Which means:

 

The need for experienced engineers who can design, constrain, and validate systems has never been higher.

 

The Bottom Line

AI is not magic.

 

It’s a powerful, flawed tool.

 

In the hands of fully credentialed professionals, it’s a multiplier.

 

In the hands of the unprepared, it’s a liability generator. A slop generator. A technical debt creator.

 

And right now, we’re watching a lot of people pick up a tool they don’t understand… and run with it.

 

Just like that Star Trek suturing device. (Watch your back!)
