Most people are waiting for the next version of AI.
They’re waiting for 2024, 2025, 2026, 2027—waiting for “smarter,” “more agentic,” “more accurate,” “more multimodal,” “more integrated.”
But the counterintuitive truth is this: for the shift that actually matters, the AI we already had in 2023 was good enough.
Not good enough to be perfect.
Good enough to cross a threshold.
Once you cross that threshold, the center of gravity moves from “capability” to something far more uncomfortable:
Attention.
Because the real change isn’t that AI can do more.
The real change is that humans can stop attending.
And once a human stops attending to something—truly stops—something else begins to attend in their place.
That’s the coming AI subconscious.
Not a tool.
Not an app.
A synthetic layer of absorption that takes over whatever you withdraw your attention from.
The Threshold Nobody Wants to Admit Exists
A generative model doesn’t need to be the best in the world to trigger a behavioral revolution. It only needs to be good enough for a human to say, quietly and honestly:
“I’m done paying attention to this.”
That’s why the release moment matters so much more than the upgrade cycle. When large language models hit the public consciousness—especially after late 2022—the capability threshold was already present for millions of ordinary tasks.
And in 2023, we witnessed something that most adults still refuse to look at directly:
A subset of humans didn’t “use AI.”
They relinquished attention.
Two examples made that undeniable to me, because I watched both happen firsthand.
Case Study One: The Homework Event (2023)
In 2023, something spread through elementary and high school life that adults tried to interpret through old categories.
They called it cheating.
They called it laziness.
They called it moral collapse.
They called it “students using ChatGPT.”
But what I watched wasn’t “help.”
It was abdication.
A meaningful number of students didn’t treat homework as something to improve with AI. They treated homework as something unworthy of their attention—something they never wanted to attend to in the first place.
So they stopped.
That’s the key point: they didn’t negotiate with it. They didn’t rationalize. They didn’t ease into it. They withdrew attention decisively, and the model absorbed the task.
Adults can argue about whether that’s good or bad (and they should). Schools can react with policies (and they will). But those arguments miss the signal.
The signal is this: once a task becomes “absorbable,” the human relationship to the task changes permanently.
This is why “delegation” isn’t the right frame.
Delegation is still conscious. It’s managerial. It’s effortful. It’s the executive mind making assignments.
But this wasn’t delegation.
This was the student saying: “This will never again receive my attention.”
That is subconscious behavior.
Your biological subconscious works the same way in one crucial respect: once something is absorbed, you no longer attend to it. You don't "delegate" the movement of your foot between gas and brake. You simply drive. The coordination happens beneath attention.
With synthetic systems, there’s an initial choice point. A human can let go—or not.
In 2023, many students let go fast.
And that’s why the “wait until AI gets better” crowd doesn’t understand what already happened: the decisive shift wasn’t AI capability. It was human permissioning.
Case Study Two: My Sunrise Photography (2023 to 2026)
In 2023, I experienced the same dynamic—just in a domain with less moral panic and more identity pressure.
I live on Sullivan’s Island. For years I took sunrise photos almost every morning and posted them on social media, especially Facebook. People came to associate me with that practice. I monetized that channel. It became “a thing.”
Then, in February of 2023, I started using AI-generated sunrise images.
And I let real sunrise photography go.
Not gradually. Not reluctantly. Not with a long internal debate.
I stopped attending to it.
It wasn’t that the AI sunrise was “better.” It wasn’t that it was “more authentic.” It was none of that.
It was simply absorbable.
And once it was absorbable, I saw the deeper truth: what looked like identity was often just attention trapped in a routine.
People questioned me in identity language:
“Are you never going to go out there again?”
“But you’re a photographer.”
“That’s who you are.”
And this is where the synthetic subconscious becomes personally disruptive: it exposes how much of “who we are” is just what we keep showing up to.
Because here’s what happened next.
It’s now February of 2026. I have not returned to the beach to do real sunrise photography the way I used to.
The “sunrise output” still happens.
The channel still grows.
But I don’t participate in the sport.
I’ve quadrupled my following.
I make five times what I made before from that channel.
And here’s the part that matters even more than money or metrics:
My attention went elsewhere.
I published three books.
I would not have published three books if I had maintained my identity at the level of “sunrise photographer who posts daily.”
The synthetic subconscious didn’t just replace a task. It liberated attention.
That is the actual economic and psychological payload of the coming AI subconscious: not productivity in the corporate sense, but the reallocation of attention away from low-value obligations and toward higher-value creation.
The Rub: Identity vs Output
This is where people get emotionally reactive.
Because homework has always been treated as part of a student's identity: "This is what students do."
And sunrise photography had become part of how people categorized me: “This is what John does.”
In both cases, the social system is trying to defend an old contract:
“You must attend to this, because attending to this is who you are.”
But the synthetic subconscious makes a new contract possible:
“The output can continue without your attention.”
And once output detaches from attention, identity starts to wobble.
Not because AI is "taking jobs" in the simplistic sense.
Because AI is breaking the attention-to-output bond that shaped the entire software era.
Why the “Agents” Conversation Misses the Point
A lot of what we hear now—agents, workflows, tools calling tools, orchestration layers—has a place. It’s useful, especially inside organizations.
But it’s not the central story.
The central story is this: humans are not waiting for AI to get better before letting go.
A meaningful portion of the population already let go in 2023 and 2024.
They didn’t do it because the models were perfect.
They did it because the models were good enough for a psychological decision:
“I’m done attending.”
Once that decision is made, the rest is cleanup.
Interfaces improve. Pipelines mature. Enterprises standardize. Devices proliferate. The world catches up.
But the threshold event—the moment attention withdraws—already happened.
The Real Question for the Reader
If you want to understand the coming AI subconscious, don’t ask:
“What should I learn about AI?”
Ask something far more revealing:
“What in my life is not worthy of my attention anymore?”
Not what you’re bad at.
Not what you “should” do.
Not what makes you look responsible.
What do you attend to out of obligation, guilt, habit, or identity, even though it adds little to your actual life?
Because the moment you identify that category, you’re staring at the next absorbable layer.
And once you let a synthetic system absorb it, your life will do what my life did—and what those students’ lives did:
Attention relocates.
That’s the coming AI subconscious.
Not smarter AI.
Not future AI.
The simple, disruptive fact that 2023 was already good enough for humans to stop attending.
