
How AI Hallucinations Help Visionaries Dream of the New

· 4 min read
Rick Jewett


In AI circles, “hallucination” has become a dirty word. The term describes when AI models confidently produce information that’s entirely fabricated — details that sound plausible, but are ungrounded in fact. In high-stakes situations like legal filings, medical diagnoses, or financial advice, hallucinations can cause serious problems (Ji et al., 2023).

But what if we’ve misunderstood hallucinations entirely?

What if — for visionaries, creators, and entrepreneurs — AI hallucinations are not bugs, but features?

Hallucination: The Creative Engine Hiding in Plain Sight

At its core, every hallucination is simply a prediction the AI made when facts ran out. When the model is unsure, it doesn’t stop. It creates. And in that spontaneous act of “error,” it often surfaces novel combinations, unconsidered connections, and entirely new ways of seeing a problem.

For visionaries, this is a kind of idea generator we’ve never had before.

  • Entrepreneurs use AI hallucinations to imagine products that don’t exist (yet).
  • Inventors ask AI to solve unsolved problems — and watch it propose entirely unconventional approaches.
  • Storytellers collaborate with AI hallucinations to break writer’s block and explore fictional worlds.
  • Designers let AI “dream” new visual styles, blending influences no human mind might have combined.

While hallucinations are dangerous when presented as fact, they are incredibly fertile when used as hypothesis fuel.

The Forgotten Role of Fiction in Innovation

History is full of “hallucinated” ideas that changed the world:

  • Jules Verne described submarines and moon landings before either existed.
  • Gene Roddenberry’s Star Trek envisioned personal communicators, replicators, and voice-first computers decades before smartphones, 3D printers, and Alexa.
  • Leonardo da Vinci sketched flying machines centuries before aeronautics.

In every case, visionaries hallucinated — and their hallucinations became blueprints for future reality (Johnson, 2010).

AI now gives us access to this kind of dreaming at industrial scale.

Prompting the Future

The skill, then, is not in trying to eliminate AI hallucinations, but in learning how to prompt them safely, harvest them, and refine their output.

Smart innovators are already developing workflows like:

  • Divergent prompting — intentionally encouraging the AI to “imagine” without constraints.
  • Hallucination capture — saving AI’s speculative outputs for later analysis and refinement.
  • Human-AI co-drafting — treating hallucinated outputs as creative partners to be edited, shaped, or corrected.

The trick isn’t to trust everything the AI says — it’s to recognize when an unexpected answer might open a door.
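In practice, "divergent prompting" is often implemented with the sampling temperature, a standard knob on language models (the essay doesn't name it, so this is an illustrative aside). The sketch below shows, with toy scores, why raising the temperature makes unlikely continuations (the raw material of hallucination) more probable:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw model scores into a probability distribution.
    Higher temperature flattens the distribution, giving unlikely
    (more 'hallucination-prone') continuations a better chance."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy scores for four candidate next words.
logits = [4.0, 2.0, 1.0, 0.5]

cautious = softmax(logits, temperature=0.5)   # sharp: top choice dominates
divergent = softmax(logits, temperature=2.0)  # flat: wilder picks survive

print(max(cautious) > max(divergent))  # True: low temperature is more conservative
```

At temperature 0.5 the top candidate takes nearly all the probability mass; at 2.0 the long tail of "imaginative" candidates stays in play, which is exactly the trade-off a divergent-prompting workflow exploits.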

Hallucinations as Windows to Adjacent Possibles

In complex systems theory, there’s a concept called the adjacent possible — the set of things that could exist next, given what exists today (Kauffman, 2000). AI hallucinations can act as windows into these adjacent possibles.

When an AI proposes something that doesn’t exist, it’s often sitting just beyond the edge of what could exist. That’s where visionaries thrive.

In fact, many of the companies that will define the next decade may owe their genesis to an AI hallucination that sparked a human insight.

From Error to Advantage

We must be cautious with AI hallucinations when accuracy matters.

But we must also be courageous enough to see their power when possibility matters.

In a strange twist of fate, the very thing engineers fight to eliminate may become one of the greatest tools ever handed to dreamers.

AI isn’t just a mirror of the world that is. It’s a generator of worlds that could be (Hao, 2023).

And for those willing to collaborate with their machines — hallucinations may be the new frontier of human imagination.

References

  • Ji, Z., Lee, N., Frieske, R., et al. (2023). Survey of Hallucination in Natural Language Generation. ACM Computing Surveys.
  • Johnson, S. (2010). Where Good Ideas Come From: The Natural History of Innovation. Riverhead Books.
  • Kauffman, S. A. (2000). Investigations. Oxford University Press.
  • Hao, K. (2023). Why AI Hallucinations Are Hard to Fix. MIT Technology Review.
  • OpenAI (2022). Introducing ChatGPT. OpenAI blog.

Author Bio

Rick Jewett is the founder of ChatSites™ and creator of VoiceMate™, where he helps turn AI’s most misunderstood flaw — hallucination — into a tool for safe, permission-based innovation.

The Consent Layer for AI: Why Permission Will Save AI

· 4 min read
Rick Jewett
Founder & Visionary, The Human Channel

The Problem Nobody Wants to Talk About

For years now, we’ve all watched AI grow at an extraordinary pace. ChatGPT writes essays. Gemini summarizes research. Claude drafts contracts. And the more we use these tools, the more we wonder:

Where did all this knowledge come from?

The honest answer: it came from us.

From billions of pages of books, articles, blogs, private conversations, photos, videos, songs, voices — much of it pulled into massive AI models without the knowledge or consent of the people who created it.

It worked. But it came with consequences.

The Wild West Phase of AI

The first generation of AI companies operated like digital prospectors, racing to scrape as much data as possible as quickly as possible. Copyright, consent, permission — these became afterthoughts. The assumption was simple: whoever trained the biggest model first would win.

But now, lawsuits are mounting. Creators are pushing back. Regulators are stepping in. And the public is starting to question the very trustworthiness of these systems.

AI is approaching its Napster moment. The music industry lived through the same arc: unlimited access felt great, until artists, rights holders, and regulators stepped in and forced the industry to evolve.

The Real Issue Is Control

The problem isn’t that AI exists. The problem is how it has been built and deployed.

  • People want AI that helps them, not replaces them.
  • They want AI that works with their permission, not behind their backs.
  • They want AI that respects their work, their identity, and their privacy.

In short: AI must learn to ask first.

This is where the next evolution of AI begins: a Consent Layer.

An infrastructure where individuals, creators, businesses, and governments can safely participate in the AI economy — on their terms.

  • You control what data you share.
  • You decide who can access your content.
  • You authorize how your likeness, voice, or work can be used.
  • You receive compensation where appropriate.
  • You remain fully in control of your identity.

No scraping. No legal ambiguity. No silent exploitation.

Introducing PulseID

One part of this emerging architecture is PulseID.

PulseID serves as an individual’s AI permission key. It is a personal digital identity layer that records what content, data, and likeness you control — and who is allowed to access it.

When an AI system requests access to data connected to you:

  • If you’ve authorized it, permission is granted.
  • If you haven’t, the request is denied.

It’s simple. It’s transparent. It’s fully auditable. And most importantly, it places the human back in control.
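PulseID is patent pending and its actual interface is not public, so the following is only a hypothetical sketch of the default-deny check described above: a personal registry of grants, where any request without an explicit authorization is refused. All names and fields are illustrative, not the real PulseID design.

```python
from dataclasses import dataclass, field

@dataclass
class PulseID:
    """Hypothetical sketch of an AI permission key: records which
    requesters may access which of the owner's assets. Anything not
    explicitly granted is denied (default deny)."""
    owner: str
    grants: dict = field(default_factory=dict)  # asset -> set of requester ids

    def authorize(self, requester: str, asset: str) -> None:
        self.grants.setdefault(asset, set()).add(requester)

    def revoke(self, requester: str, asset: str) -> None:
        self.grants.get(asset, set()).discard(requester)

    def is_allowed(self, requester: str, asset: str) -> bool:
        return requester in self.grants.get(asset, set())

# Usage: an AI system asks for a creator's voice data.
pid = PulseID(owner="creator-42")
pid.authorize("trusted-ai", "voice")
print(pid.is_allowed("trusted-ai", "voice"))   # True: explicit grant
print(pid.is_allowed("scraper-bot", "voice"))  # False: default deny
```

The design choice worth noting is the default: access is denied unless a grant exists, so silence never counts as consent.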

PulseID and the related Smart Packet infrastructure are patent pending.

Why This Is Not The End of AI — But The Beginning of Sustainable AI

There’s a misconception that permission-based AI will slow innovation or weaken the capabilities we’ve grown to rely on.

In reality, the opposite is true.

The reasoning engines behind modern AI are improving rapidly. The core intelligence remains intact. What’s broken is not the reasoning — it’s the way the data was collected.

Without trust, AI faces existential risk:

  • Regulatory shutdowns.
  • Public backlash.
  • Legal collapse.

But with permissioned systems like PulseID and Smart Packets (patent pending), AI remains powerful, but becomes sustainable:

  • Safer.
  • Smarter.
  • Fairer.
  • Fully aligned with creators, regulators, and users.

The Window Is Closing

The world has seen this play out before.

  • Napster collapsed. Spotify emerged.
  • Pirate streaming collapsed. Netflix emerged.
  • Wild web scraping collapsed. Licensed content APIs emerged.

Now, it is AI’s turn to evolve.

The real breakthrough isn’t who can scrape the most data. The real breakthrough is who can build the system that everyone can trust.

The Consent Layer for AI is not an option. It is a necessity.

The only question is: who will lead it?


The Human Channel — Always Human. Always Permissioned. Always Trusted. (Patent pending.)

The Governance Road Ahead

· One min read
Rick Jewett
Founder & Visionary, The Human Channel

We are entering a period of AI governance acceleration. The Human Channel exists to lead that conversation with solutions built for consent, identity, and human trust.

We don't just want to complain about AI. We're building something better.

The Human-AI Partnership: Our Next Great Collaboration

· One min read
Rick Jewett
Founder & Visionary, The Human Channel

The future is not "humans vs. AI."
The future is humans with AI.

The Human Channel exists because we see what's coming:

  • AI will handle knowledge retrieval, analysis, synthesis.
  • Humans will handle trust, judgment, permission, nuance.

We’re entering the age of:

  • AI Amplification — every person equipped with personal cognitive assistants.
  • Human Filters — where relationships, values, and trust become essential.
  • Permission-Based Commerce — where interruption dies, and invitation wins.

The Work Ahead

We will use The Human Channel to explore questions like:

  • How do we train AI to reflect human dignity?
  • How do businesses earn trust in an AI-filtered world?
  • How do we preserve meaningful human interaction when answers are free?

This platform isn't about hype cycles or technical specs.
It's about the practical, human-centered work of coexisting with AI.

The Human Channel is not fighting the future.
We are preparing for it.


Welcome to the partnership.

The Human Channel vs The Noise Economy

· 2 min read
Rick Jewett
Founder & Visionary, The Human Channel

The Noise Economy is Collapsing

For decades, businesses operated by shouting louder.
More ads. More emails. More popups. More DMs.
Consumers are overwhelmed, fatigued, and tuning out.

We call this The Noise Economy — where value is measured by interruption.


Permission is the New Currency

In a world flooded by automated outreach, AI content farms, and infinite recommendations, what people crave most is:

Signal over noise
Relevance over randomness
Trust over transactions

Permission becomes everything.

When someone grants you permission to speak into their life, you’re no longer just fighting for attention. You’re in the privileged space of trust.


The Human Channel Difference

The Human Channel isn’t another tool that tries to yell louder.

It’s built for high-trust, high-permission conversations, where:

  • Messages are delivered with consent.
  • Communication respects time and attention.
  • AI assists without intruding.
  • Businesses serve as trusted advisors, not relentless spammers.

This is Not a Tweak. It's a Reset.

We’re not optimizing the Noise Economy.
We’re building the opposite of it.

The Human Channel is where AI and human interaction intersect — with dignity, permission, and purpose.


The future is clear:
You will either operate inside The Human Channel — or continue competing in the collapsing Noise Economy.

Which side will you build for?

Scarcity Is Dead: The Rise of Human Interaction in the AI Economy

· 3 min read
Rick Jewett
Founder & Visionary, The Human Channel

What if knowledge itself is no longer scarce?

For centuries, economies were built on the scarcity of information. Experts were paid for access. Universities sold degrees. Consultants billed by the hour for specialized knowledge.

But all of that is changing.

In an AI-powered world, knowledge is becoming instantly abundant, commoditized, and virtually free. Anyone can access answers, frameworks, insights, and data in seconds. AI models like ChatGPT, Claude, and Perplexity are compressing entire libraries of expertise into simple conversations.

"Scarcity of knowledge will no longer be an issue.
Therefore, knowledge will become worthless.
Human interaction will become very valuable."
— Unknown Speaker

This single observation cuts to the heart of where we're heading.


The True Scarcity Becomes Human Interaction

As knowledge abundance accelerates, human connection becomes the scarce asset.

  • People crave trusted interaction.
  • Businesses crave permission to interact with their customers.
  • Individuals crave meaningful dialogue, not generic content blasts.

In this emerging world, businesses will no longer be able to differentiate simply by what they know. Instead, they'll differentiate by how they interact — how permissioned, trusted, and personalized their engagement feels.

We are entering what we call:

The Human Channel Economy.


What Is The Human Channel?

The Human Channel is being built for exactly this shift:

Restoring Trust:
Authentic permission-based communication, not forced attention.

Protecting Human Attention:
Shielding people from the endless noise of AI-generated spam, calls, and outbound interruption.

Building Permissioned AI Infrastructure:
AI agents that respect identity, consent, timing, and personal context.

Selling Interaction, Not Just Information:
Businesses will thrive by offering curated, valuable, trust-filled conversations — not by hoarding knowledge.

Smart Packets + Identity Layers:
New digital formats like Smart Packets (SPID Protocol) ensure every interaction is intentional, async, and AI-assisted, while still being human-first.
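The SPID Protocol is not publicly specified, so here is only a hypothetical sketch of the idea: a message envelope that carries its own consent metadata, so a router can verify permission before delivery rather than after the fact. Every field and function name below is an illustrative assumption, not the Smart Packet format.

```python
import time

def make_smart_packet(sender, recipient, body, consent_token):
    """Hypothetical 'smart packet': an envelope whose permission
    metadata travels with the message. Field names are illustrative,
    not the SPID specification."""
    return {
        "sender": sender,
        "recipient": recipient,
        "body": body,
        "consent_token": consent_token,  # proof the recipient opted in
        "created_at": time.time(),
    }

def route(packet, valid_tokens):
    """Deliver only packets bearing a consent token the recipient issued."""
    if packet["consent_token"] in valid_tokens.get(packet["recipient"], set()):
        return "delivered"
    return "rejected"

# Usage: Alice has issued one token; only mail carrying it gets through.
tokens = {"alice": {"tok-123"}}
ok = make_smart_packet("brand-x", "alice", "Hi Alice!", "tok-123")
spam = make_smart_packet("bot-9", "alice", "Buy now!", "forged")
print(route(ok, tokens), route(spam, tokens))  # delivered rejected
```

Because the consent check happens in the routing layer, unsolicited messages are dropped before they ever reach a person's attention, which is the "permission-first" property the section describes.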


The Future Is Not Less AI — It's More AI... With Boundaries

The problem isn't AI itself.

The problem is AI without permission.

Unchecked, AI-driven outreach will overwhelm consumers with bots, cold DMs, unsolicited voice clones, and non-stop requests for attention.

The Human Channel offers an alternative path:

  • Permissioned access
  • Identity-driven routing
  • Async voice interaction models
  • Trust-first engagement

We Are Building the Infrastructure for This Shift

The Human Channel isn't just an idea. It's an infrastructure layer that includes:

  • ✅ The Smart Packet Identity Layer (SPID)
  • ✅ Trust-Routing Protocols for AI agents
  • ✅ Permissioned AI communication models
  • ✅ Asynchronous voice-powered interactions
  • ✅ Full auditability and regulatory alignment

In the AI economy, knowledge is free.
Permissioned human interaction is the new currency.
The Human Channel is being built for this exact future.


Join us as we build the rails for permission-first AI communication.