AI Is Not a Librarian. It’s a Novelist With a Radio Telescope.

The Category Mistake We Keep Making

For seventy years, information technology has helped humans manage information. We store records, retrieve them, sort them, count them, and present them. Databases, CRMs, spreadsheets, search engines — all of them live in the same world: the world where truth already exists somewhere, and the job of software is to fetch it cleanly.

Generative AI is not that.

Treating generative AI like a new kind of information manager is the root of most confusion about AI right now. It’s why people keep saying “it hallucinates,” “it lies,” or “it’s broken.” We’re grading a novelist with the rubric for a librarian.

Fetch and Generate Are Two Different Species

There are two fundamentally different jobs:

Fetch (retrieval):
Pull an existing record from an authoritative store.
The goal is fidelity to what already exists.

Generate:
Create a new candidate solution, argument, image, plan, or explanation based on learned patterns.
The goal is coherent novelty: a plausible new thing under constraints.

A database fetches. A generative model generates.
They aren’t “versions” of each other. They’re different machines.

So asking a model:
“List my last ten customer orders”
is a database job.

Asking a model:
“Draft the best follow-up email to a frustrated customer”
is a generative job.

Every time we reverse those, we get disappointed.

What “Patterns and Prediction” Really Means

Generative AI doesn’t contain a filing cabinet of facts, sentences, or images. It learns the statistical structure of a domain — patterns of how things tend to go together — and then predicts what comes next.

This is not a metaphor. It’s the architecture.
The model’s “knowledge” is a web of learned statistical relationships, not a shelf of stored objects.

That’s why it can create a brand-new paragraph, a brand-new plan, a brand-new image.
Not because it found one.
Because generation is what it is.
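Here’s the idea in miniature. This is a toy bigram model, nothing like a real LLM in scale, but the same move: learn which things tend to follow which, then generate by sampling from those learned patterns. The corpus and function names are illustrative, not any real library’s API.

```python
import random
from collections import Counter, defaultdict

# Toy corpus: the "domain" whose statistical structure we learn.
corpus = "the model learns patterns the model predicts the next word".split()

# Learn the patterns: how often each word follows each other word.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def generate(start, n=5, seed=0):
    """Generate a new sequence by repeatedly predicting what comes next."""
    random.seed(seed)
    out = [start]
    for _ in range(n):
        counts = transitions.get(out[-1])
        if not counts:
            break
        words, weights = zip(*counts.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
```

Nothing in `transitions` is a stored sentence; it is a table of tendencies, and the output sequence is new each time the sampling differs. That is the architecture in one breath: patterns in, prediction out.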

Two Telescopes, One Sky

Here’s a clean way to feel this:

Imagine you and an AI looking at the same night sky.

You’re using a visible-light telescope.
The AI is using a radio telescope.

Same sky.
Totally different patterns.

Neither instrument cares what the other sees.
The radio telescope isn’t trying to help the visible telescope.
It’s just outputting what it detects.

That’s a good picture of human vs AI pattern vision.
We are narrow-band pattern recognizers.
AI is a different-band pattern recognizer.

So when it produces a solution that feels alien or surprising, that’s not a bug. That’s what a different spectrum looks like.

RAG Is Beautiful — If You Stop Expecting It to Create Factual Outputs

Retrieval-Augmented Generation (RAG) is popular because it lets models pull relevant documents and then generate from them. That’s fine. It’s often beautiful.

But turning on RAG doesn’t change the species of the model.
It doesn’t turn a generator into a database.

RAG feeds the novelist.
It doesn’t turn the novelist into a librarian.
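The shape of that pipeline is worth seeing. Below is a minimal sketch of retrieve-then-generate, with a naive word-overlap retriever and a stub generator standing in for real components; the names, corpus, and scoring are all hypothetical, chosen only to show where retrieval ends and generation begins.

```python
# Sketch of the RAG shape: retrieval supplies context,
# but the final step is still generation, not lookup.

def retrieve(query, corpus, k=2):
    # Hypothetical retriever: naive word-overlap scoring stands in
    # for a real vector search over embeddings.
    query_words = set(query.lower().split())
    def score(doc):
        return len(query_words & set(doc.lower().split()))
    return sorted(corpus, key=score, reverse=True)[:k]

def draft(query, context):
    # Hypothetical generator stub: a real system would call a model
    # here, composing a *new* text conditioned on the context.
    return f"Draft answering {query!r}, informed by {len(context)} documents."

corpus = [
    "Refund policy: refunds within 30 days.",
    "Shipping times vary by region.",
    "Frustrated customers should be escalated politely.",
]

context = retrieve("follow-up email to a frustrated customer", corpus)
print(draft("follow-up email to a frustrated customer", context))
```

Notice that `retrieve` is the librarian step and `draft` is the novelist step. Feeding better documents into `draft` improves the novelist’s material; it never changes what `draft` fundamentally does.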

So if you ask a model (with or without RAG) to draft a legal brief, a consulting report, or a strategic analysis, what you’re getting back is an argument. A strategy. A proposal. Not a fact sheet.

When humans read that draft and find a detail that isn’t factual, the reflex today is:
“RAG isn’t strict enough. Tighten the guardrails. Force factuality.”

That reflex is a dead end.

Because the more you force a generator to behave like a truth engine, the more you amputate the very thing you hired it for: strategic imagination.

The most “strictly factual” model will be timid, narrow, and mediocre at solutions. It will start to sound like a database with a voice, which is not what we need.

The Right Collaboration Etiquette

Here’s the better posture:

Let the model generate the best argument it can.
Let it roam in the high-dimensional landscape of patterns.
Let it do its native work.

Then we do ours.

If a precedent doesn’t exist, or an example is imaginary, that doesn’t mean the model “failed.” It means the model is arguing without doing the factual auditing job — because that’s our job.

We don’t scold the novelist for inventing.
We read the draft for the shape of the story.

The question becomes:
“What role is that imaginary precedent playing in the argument?”
“What real precedent plays the same role?”
“What real evidence fits this structure?”

AI gives you the topology of the solution.
Humans reconcile that topology with reality.

That is a powerful partnership precisely because strengths and weaknesses line up:

  • AI will read a 200-page corpus and track nuance almost no human can track, and at a cost almost no human can match.
  • Humans know what the world has actually ratified, what never happened, what must be verified.

A Bright Boundary: Don’t Use AI for Oxen Counts

Suppose you subpoena a million emails in a legal case.

If you want to know exactly how many times the word “oxen” appears, generative AI is a terrible choice. That’s a pure retrieval/counting problem. Parse the emails. Put them in a database. Use deterministic software. You’ll get a precise answer every time.
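For a counting job like that, a few lines of deterministic code beat any generative model. A minimal sketch, assuming the emails have already been parsed into plain-text strings (the sample data here is invented for illustration):

```python
import re

# Deterministic counting: same input, same answer, every time.
# `emails` stands in for the parsed corpus (hypothetical sample data).
emails = [
    "We sold twelve oxen last quarter.",
    "The oxen shipment is delayed. Oxen feed costs rose.",
    "No livestock mentioned here.",
]

# \b word boundaries so "oxen" doesn't match inside another word.
pattern = re.compile(r"\boxen\b", re.IGNORECASE)
count = sum(len(pattern.findall(email)) for email in emails)
print(count)  # → 3
```

Run it a thousand times and you get the same number a thousand times. That reproducibility is exactly what a generator cannot promise, and exactly what this job requires.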

But if you want:
“Given this whole corpus, what is the strongest argument we can make?”
that’s a generative problem. That’s where AI shines.

Same input.
Two different engines.
Two different jobs.

The One Line to Remember

AI’s job is to make the case.
Our job is to reconcile the case with reality.

If we can teach that boundary — and stop demanding factual bookkeeping from a novelist — we’ll finally use generative AI as what it actually is: a peer intelligence that can produce real strategies, real arguments, and real creative solutions that humans alone wouldn’t have seen.

Author: John Rector

Co-founded E2open with a $2.1 billion exit in May 2025. Opened a 3,000 sq ft AI Lab on Clements Ferry Road called "Charleston AI" in January 2026 to help local individuals and organizations understand and use artificial intelligence. Author of four books: World War AI, The Coming AI Subconscious, Robot Noon, and Love, The Cosmic Dance.
