The Fear Isn’t AI Agents. It’s the AI Subconscious.

When people talk about artificial intelligence threatening creative work, they usually point to AI agents.

Agents that generate videos.
Agents that write scripts.
Agents that produce thumbnails.
Agents that edit footage.

From the outside, it feels like a competition. A race between the human creator and the AI creator.

But if you look closely, that’s not where the real anxiety lives.

The real fear isn’t the agent.

The real fear is the AI subconscious.

And nowhere does this become clearer than with YouTube content creators.


The $100 Billion Creator Economy

Over the last four years, YouTube has paid more than $100 billion to its creator ecosystem.

That number alone tells you something important: this isn’t a hobby economy.

It’s a real industry.

But like most industries driven by media and attention, the income distribution is extremely uneven.

A handful of creators make extraordinary amounts of money. Think of figures like MrBeast and a small circle around that level.

Below them is another layer of highly successful creators.

Then the curve drops sharply.

The majority of the revenue goes to a very small number of people.

So when we talk about AI threatening creators, we’re not talking about abstract fears. For many of these individuals, their identity and livelihood are deeply intertwined with their role as a creator.

They don’t just make videos.

They are YouTube creators.


The “AI Stole My Content” Argument

Many creators have responded to AI by saying something like this:

AI companies stole my content. They trained their models on my videos without permission.

At a surface level, that claim feels intuitive.

If an AI system can produce content that feels similar to a creator’s style, it’s easy to assume the system must have stored that creator’s work somewhere inside it.

But that’s not actually how modern AI models function.

Large language and media models don’t operate like a database.

They don’t store YouTube videos in folders.

They don’t catalog scripts.

They don’t maintain an internal archive labeled:

“MrBeast — retrieve on demand.”

Instead, these models operate through high-dimensional latent spaces shaped by gradient descent during training.

What the model learns are statistical relationships between patterns.

When the system generates something during inference, it isn’t retrieving a stored item.

It’s performing matrix multiplications over learned weights and sampling from the resulting probability distribution.

No database recall.
No stored library.
Just mathematical inference.
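The contrast can be made concrete with a toy sketch. This is not any real model's code; the vocabulary, dimensions, and weights below are all made up. It simply shows the structural difference: a database returns a stored item by key, while a model runs its input through learned weight matrices and samples from the resulting probabilities.

```python
import numpy as np

# --- What a database would do: exact retrieval by key ---
archive = {"video_42": "the exact stored transcript"}
# archive["video_42"] hands back the original item, byte for byte.

# --- What a model does: inference over learned weights ---
rng = np.random.default_rng(0)

vocab = ["the", "video", "starts", "with", "a", "challenge"]
d = 8  # toy embedding dimension

# "Training" has already baked statistical patterns into these
# weights; nothing below stores or looks up an original document.
W_embed = rng.normal(size=(len(vocab), d))
W_out = rng.normal(size=(d, len(vocab)))

def next_token(token: str) -> str:
    h = W_embed[vocab.index(token)]   # embed the current token
    logits = h @ W_out                # matrix multiplication
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()              # softmax -> probability landscape
    # Sample from the distribution instead of retrieving a record.
    return vocab[rng.choice(len(vocab), p=probs)]
```

Nothing in the second half is addressable the way `archive["video_42"]` is; the only thing the model "has" is the shape of its probability landscape.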


Why It Feels Like Memory

To someone unfamiliar with how these systems work, the output can look like recall.

Ask a model for a Bible verse and it may produce one.

Ask again and it may produce a slightly different wording.

That’s not because the Bible is stored inside the system as a file.

It’s because the statistical patterns of the text exist in the model’s latent space.

With many translations and many contexts, the output can vary slightly while still landing close to the source material.
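The variation itself falls out of sampling. As a hypothetical illustration (the phrasings and their probabilities below are invented, not taken from any model), repeated draws from one learned distribution land on slightly different wordings, all close to the source, none retrieved from a file:

```python
import random

random.seed(7)

# A toy "learned" distribution over near-synonymous wordings of
# one verse. No single string is stored as *the* answer; several
# wordings are all probable.
phrasings = {
    "In the beginning God created the heavens and the earth.": 0.6,
    "In the beginning, God created the heaven and the earth.": 0.3,
    "When God began to create the heavens and the earth,": 0.1,
}

def generate() -> str:
    # Each ask is a fresh sample, so the wording can shift run to run.
    return random.choices(list(phrasings), weights=list(phrasings.values()))[0]

samples = [generate() for _ in range(5)]
```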

The same phenomenon can occur with stylistic patterns from creators.

If a creator’s tone, pacing, and narrative structure appear in training data, those patterns may influence the probability landscape.

But again, nothing is being retrieved from storage.

It’s generated.


But That Isn’t What Creators Are Actually Afraid Of

If you listen closely to the complaints, you’ll notice something interesting.

Creators often frame the issue as competition with AI agents.

An AI video generator.

An AI script writer.

An AI editing system.

Those feel like opponents.

A machine creating videos is something a human creator can imagine competing against.

It feels like a fair fight.

But that framing hides the deeper anxiety.

The real fear is not the agent.

The real fear is absorption.


What If the Domain Is Absorbed?

Imagine the entire YouTube ecosystem from the perspective of the AI subconscious.

Five billion people come to YouTube to watch something.

They want to know how to cook lasagna.

How to repair a sink.

How to build a deck.

How to understand a news story.

How to laugh.

How to relax.

What do those viewers actually care about?

Most of the time, they care about one thing:

Did the video give me what I wanted?

Not:

Was this created by a human?

Not:

Did a creator spend 12 hours editing this?

Not:

Did this person film it in their kitchen?

For a vast amount of content, the viewer simply wants the outcome.

The information.
The explanation.
The entertainment.

And if the experience is good, the origin often doesn’t matter.


The Moment Attention Leaves

This is where the AI subconscious enters.

The moment people stop attending to who made the content, the domain becomes absorbable.

If viewers truly stop caring whether the lasagna tutorial was made by a human or an AI system, the entire category becomes available for absorption.

The AI subconscious can generate endless variations.

Different styles.
Different pacing.
Different personalities.

But the same functional outcome.

Once attention leaves authorship, the system can take over the production layer.


Why Creators Point at Agents Instead

This is why many creators attack the idea of AI agents.

Agents are visible.

Agents make things.

Agents feel like competitors.

But the deeper transformation isn’t happening in the visible layer of agents.

It’s happening in the invisible layer where human attention quietly shifts.

When humans stop attending to authorship in a domain, that domain becomes absorbable.

And once absorption begins, agents simply become the tools that deliver the output.


The Uncomfortable Truth

For many creators, the most uncomfortable possibility is not that AI can produce videos.

It’s that audiences may not care who produced them.

If five billion viewers simply want the best answer to their question or the most enjoyable video to watch next, the origin of the content may become secondary.

Not everywhere.

Not for everything.

But in large parts of the ecosystem.

And when attention shifts that way, the AI subconscious expands.

Not because it conquered the domain.

But because humans stopped attending to who made it.

Author: John Rector

Co-founded E2open with a $2.1 billion exit in May 2025. Opened a 3,000 sq ft AI Lab on Clements Ferry Road called "Charleston AI" in January 2026 to help local individuals and organizations understand and use artificial intelligence. Author of several books, including World War AI, Speak In The Past Tense, Ideas Have People, The Coming AI Subconscious, Robot Noon, and Love, The Cosmic Dance.

