How AI Hallucinations Help Visionaries Dream of the New

4 min read
Rick Jewett


In AI circles, “hallucination” has become a dirty word. The term describes cases where an AI model confidently produces fabricated information: details that sound plausible but have no grounding in fact. In high-stakes settings like legal filings, medical diagnoses, or financial advice, hallucinations can cause serious harm (Ji et al., 2023).

But what if we’ve misunderstood hallucinations entirely?

What if — for visionaries, creators, and entrepreneurs — AI hallucinations are not bugs, but features?

Hallucination: The Creative Engine Hiding in Plain Sight

At its core, every hallucination is a prediction made after the facts ran out. A language model doesn’t consult a database of truths; it generates the most statistically plausible next word, and when it’s unsure, it doesn’t stop. It creates. And in that spontaneous act of “error,” it often surfaces novel combinations, unconsidered connections, and entirely new ways of seeing a problem.
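
To see why, here’s a toy illustration in Python. This is not how any production model is implemented, and the candidate words and scores are invented for the example: a model assigns scores to possible next words and samples one. Turn the sampling temperature up, and lower-scored, more speculative continuations start winning.

    import math
    import random

    def sample_next(scores: dict[str, float], temperature: float) -> str:
        """Toy next-word sampler: softmax over scores, scaled by temperature."""
        scaled = [s / temperature for s in scores.values()]
        total = sum(math.exp(s) for s in scaled)
        weights = [math.exp(s) / total for s in scaled]
        return random.choices(list(scores), weights=weights)[0]

    # Invented scores for the word after "The capital of Atlantis is ..."
    scores = {"unknown": 2.0, "Poseidonia": 1.2, "a floating city": 0.8}

    print(sample_next(scores, temperature=0.2))  # almost always "unknown"
    print(sample_next(scores, temperature=1.5))  # speculative options surface

The same dial that engineers turn down to suppress hallucination is one a creator can turn up to invite it.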

For visionaries, this is a kind of idea generator we’ve never had before.

  • Entrepreneurs use AI hallucinations to imagine products that don’t exist (yet).
  • Inventors ask AI to solve unsolved problems — and watch it propose entirely unconventional approaches.
  • Storytellers collaborate with AI hallucinations to break writer’s block and explore fictional worlds.
  • Designers let AI “dream” new visual styles, blending influences no human mind might have combined.

While hallucinations are dangerous when presented as fact, they are incredibly fertile when used as hypothesis fuel.

The Forgotten Role of Fiction in Innovation

History is full of “hallucinated” ideas that changed the world:

  • Jules Verne described submarines and moon landings before either existed.
  • Gene Roddenberry’s Star Trek envisioned personal communicators, replicators, and voice-first computers decades before smartphones, 3D printers, and Alexa.
  • Leonardo da Vinci sketched flying machines centuries before aeronautics.

In every case, visionaries hallucinated — and their hallucinations became blueprints for future reality (Johnson, 2010).

AI now gives us access to this kind of dreaming at industrial scale.

Prompting the Future

The skill, then, is not in trying to eliminate AI hallucinations, but in learning how to prompt them safely, harvest them, and refine their output.

Smart innovators are already developing workflows like:

  • Divergent prompting — intentionally encouraging the AI to “imagine” without constraints (see the sketch after this list).
  • Hallucination capture — saving AI’s speculative outputs for later analysis and refinement.
  • Human-AI co-drafting — treating hallucinated outputs as creative partners to be edited, shaped, or corrected.
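
To ground the first two of these, here is a minimal sketch of divergent prompting plus hallucination capture using the OpenAI Python SDK. The model name, prompt, and temperature are illustrative assumptions, not a recommended recipe; the point is simply to raise sampling randomness, request several independent completions, and save each one for later human refinement.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def harvest_ideas(topic: str, n_drafts: int = 3) -> list[str]:
        """Divergent prompting plus hallucination capture, in one call."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[
                {"role": "system",
                 "content": "You are brainstorming. Wild, unverified ideas are welcome."},
                {"role": "user",
                 "content": f"Imagine three products that do not yet exist for: {topic}"},
            ],
            temperature=1.2,  # push sampling toward divergent territory
            n=n_drafts,       # several independent completions to harvest
        )
        # Capture every speculative draft; refinement happens later, with a human.
        return [choice.message.content for choice in response.choices]

    for draft in harvest_ideas("urban food logistics"):
        print("---")
        print(draft)

The returned drafts then feed the co-drafting step: a human edits, fact-checks, or discards each one, so nothing speculative ever ships as fact.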

The trick isn’t to trust everything the AI says — it’s to recognize when an unexpected answer might open a door.

Hallucinations as Windows to Adjacent Possibles

In complex systems theory, there’s a concept called the adjacent possible — the set of things that could exist next, given what exists today (Kauffman, 2000). AI hallucinations can act as windows into these adjacent possibles.

When an AI proposes something that doesn’t exist, the idea often sits just beyond the edge of what already does: squarely inside the adjacent possible. That’s where visionaries thrive.
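
Here’s a toy sketch of that idea in Python, with parts and pairings invented purely for illustration: given the combinations that already exist, the adjacent possible is every one-step recombination that doesn’t yet.

    from itertools import combinations

    def adjacent_possible(existing: set[frozenset], parts: set[str]) -> set[frozenset]:
        """Toy model: every untried pairing of known parts is adjacent possible."""
        all_pairs = {frozenset(pair) for pair in combinations(sorted(parts), 2)}
        return all_pairs - existing

    # Invented example: components that already combine, and those that could next.
    parts = {"camera", "phone", "drone", "projector"}
    existing = {frozenset({"camera", "phone"}), frozenset({"camera", "drone"})}

    for pair in adjacent_possible(existing, parts):
        print(" + ".join(sorted(pair)))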

In fact, many of the companies that will define the next decade may owe their genesis to an AI hallucination that sparked a human insight.

From Error to Advantage

We must be cautious with AI hallucinations when accuracy matters.

But we must also be courageous enough to see their power when possibility matters.

In a strange twist of fate, the very thing engineers fight to eliminate may become one of the greatest tools ever handed to dreamers.

AI isn’t just a mirror of the world that is. It’s a generator of worlds that could be (Hao, 2023).

And for those willing to collaborate with their machines — hallucinations may be the new frontier of human imagination.

References

  • Ji, Z., Lee, N., Frieske, R., et al. (2023). Survey of Hallucination in Natural Language Generation. ACM Computing Surveys.
  • Johnson, S. (2010). Where Good Ideas Come From: The Natural History of Innovation. Riverhead Books.
  • Kauffman, S. A. (2000). Investigations. Oxford University Press.
  • Hao, K. (2023). Why AI Hallucinations Are Hard to Fix. MIT Technology Review.
  • OpenAI. (2022). Introducing ChatGPT. OpenAI Blog.

Author Bio

Rick Jewett is the founder of ChatSites™ and creator of VoiceMate™, where he helps turn AI’s most misunderstood flaw — hallucination — into a tool for safe, permission-based innovation.