The Agent Is Just a Function

The agent is not the miracle.

The agent is the function.

That distinction matters because the current AI conversation keeps treating “agent” as though it were the next evolutionary stage of intelligence. First we had chatbots. Then we had copilots. Now we have agents.

But in the Reality Equation, the agent is much simpler than that.

Reality = Actual / Expectation

The agent belongs after Reality appears.

The agent is not Actual.

The agent is not Expectation.

The agent is not the real component of Expectation, which is subconscious prediction.

The agent is not the imaginary component of Expectation, which is ideas.

The agent is a function applied to Reality.

That is all.

It may be a very useful function. It may be powerful, automated, fast, persistent, integrated, and commercially valuable. But mathematically, it remains a function.

Submission = f(Reality)

Booking = f(Reality)

Publishing = f(Reality)

Filing = f(Reality)

Calling = f(Reality)

Ordering = f(Reality)

Approving = f(Reality)

Rejecting = f(Reality)

The agent takes Reality as input and produces some action as output.

That is the clean definition.

The problem in many current AI systems is not that the agent is weak. The problem is that the agent is being applied to the wrong input.

Instead of this:

Submission = f(Reality)

We often build this:

Submission = f(Prediction)

That is the architectural mistake.
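Put in code terms, the difference is roughly a difference in input type. A minimal sketch, with hypothetical type and function names, is enough to make the contrast visible; it is not a proposal for how these types should actually be built.

```python
# Illustrative only: these types are placeholders for the essay's terms.
class Prediction: ...   # what the model generated
class Reality: ...      # the quotient: Actual over Expectation
class Submission: ...   # an action that enters history


def submit_as_built_today(p: Prediction) -> Submission:
    """The common architecture: the function is applied to prediction."""
    ...


def submit_as_the_equation_implies(r: Reality) -> Submission:
    """The intended architecture: the function is applied to Reality."""
    ...
```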

A large language model is a synthetic subconscious prediction machine. In the Reality Equation, it belongs in the real component of the denominator. It predicts words, images, code, structures, arguments, classifications, and patterns.

That is extraordinary.

But prediction is not Reality.

Prediction is one component of Expectation.

Expectation also contains the imaginary component, which is ideas. More precisely, it contains the system’s relationship with ideas. That relationship has magnitude and argument. It is the system’s ideational bias-vector.

And above the denominator, there is Actual.

Actual is the numerator.

Reality is the quotient.

Only after that quotient appears should the function act.
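For readers who want notation, one way to write this appears below. It is a notational sketch only, assuming the real and imaginary components are meant literally as the two parts of a complex denominator; the symbols A, P, and I are introduced for convenience and are not the author's formal notation.

```latex
% A = declared Actual (the numerator)
% P = subconscious prediction (the real component of Expectation)
% I = the ideational bias-vector (the imaginary component, with magnitude and argument)
\[
  \mathrm{Expectation} = P + iI, \qquad
  \mathrm{Reality} = \frac{A}{P + iI}, \qquad
  \mathrm{Action} = f(\mathrm{Reality})
\]
```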

This is easy to miss because human beings do not experience the components separately. We do not wake up inside pure prediction. We do not experience pure Actual. We do not inspect our ideational bias-vector as an isolated object before we act.

We receive Reality.

Reality is given.

Then we act.

That is why human agency feels natural. The right-hand side has already resolved before consciousness begins. We do not have to assemble Actual, prediction, and ideas into a quotient. We simply find ourselves inside Reality and apply functions to it.

We decide.

We speak.

We move.

We submit.

We publish.

We call.

We sign.

We send.

Artificial systems are different because we can see the pieces.

We can see the prediction machine.

We can see the dataset pretending to serve as Actual.

We can see the bias-vector, even if we do not yet measure it well.

We can see the tools.

We can see the workflow.

We can see the function.

And because we can see all the parts, we are tempted to wire them together too quickly.

We take a prediction machine.

We attach tools.

We call the whole thing an agent.

Then we are surprised when it submits prediction into history.

But the agent did not necessarily fail.

The function may have worked perfectly.

It took the input it was given and acted on it.

The failure happened before the function.

The input was malformed.

This is why blaming the agent can be misleading. It puts responsibility in the wrong place.

If a system drafts a research paper with false citations and then submits it, the submission function may have done exactly what it was designed to do. It submitted. The deeper failure was that the function was applied to prediction rather than Reality.

The citation did not exist.

The source was not checked.

The declared Actual was weak or missing.

The prediction was allowed to masquerade as the quotient.

That is not an agent problem in the deepest sense.

It is a Reality construction problem.
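A minimal sketch makes the distinction concrete. Here the verified_sources set stands in for a declared Actual, a bibliography that has actually been checked; the names are illustrative and do not belong to any real submission API.

```python
# A sketch only: "verified_sources" stands in for a declared Actual
# (sources that were actually checked), and all names are illustrative.
def submit_paper(predicted_citations: list[str], verified_sources: set[str]) -> str:
    """Act only when every predicted citation has something actual behind it."""
    missing = [c for c in predicted_citations if c not in verified_sources]
    if missing:
        return f"Refusing to submit: {len(missing)} citation(s) have no declared Actual."
    return "Submitted."


# Here the prediction contains a citation that was never checked against anything actual.
print(submit_paper(
    predicted_citations=["source-A", "source-B"],
    verified_sources={"source-A"},
))
```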

The same applies to code.

If an AI system writes Python code and an automation commits it without execution, testing, or inspection, the commit function may work. The file may be saved. The repository may be updated. The deployment may even begin.

But what did the function act on?

Prediction.

Not Reality.

A coding Reality would include the predicted code, the actual runtime, the actual error messages, the actual dependencies, the actual tests, and the actual behavior of the program.

Only then should the commit function act.
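What that ordering could look like in practice is sketched below. The sketch assumes pytest as the test runner and git as the commit tool, and every name in it is illustrative rather than a prescribed pipeline.

```python
# A minimal sketch: build a coding Reality first, then let the commit function act on it.
import subprocess
from dataclasses import dataclass
from pathlib import Path


@dataclass
class CodingReality:
    predicted_code: str    # the real component of Expectation: what the model generated
    tests_passed: bool     # declared Actual: the observed result of the test run
    actual_output: str     # declared Actual: what the runtime actually reported


def build_coding_reality(predicted_code: str, target: Path) -> CodingReality:
    """Run the predicted code against the actual test suite and record what happened."""
    target.write_text(predicted_code)
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return CodingReality(
        predicted_code=predicted_code,
        tests_passed=(result.returncode == 0),
        actual_output=result.stdout + result.stderr,
    )


def commit(reality: CodingReality, message: str) -> None:
    """The agent: a function applied to Reality, not to prediction alone."""
    if not reality.tests_passed:
        raise RuntimeError("Refusing to act: the declared Actual contradicts the prediction.")
    subprocess.run(["git", "add", "-A"], check=True)
    subprocess.run(["git", "commit", "-m", message], check=True)
```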

The agent is not supposed to magically contain all of that.

The agent is just the function.

The missing work is upstream.

This distinction becomes especially important when people say, “We need better AI agents.”

Sometimes that is true.

But often the more precise statement is:

We need better inputs for agents.

Or even better:

We need synthetic Reality before agency.

A weak system says:

The model produced this. Now act.

A stronger system says:

The model produced a prediction. Now construct the quotient. Then act.

That difference changes the design philosophy.

The agent should not be burdened with becoming the entire right-hand side of the Reality Equation. It should not be expected to be Actual, Expectation, prediction, ideas, verification, judgment, and action all at once.

That is too much confusion packed into one word.

The agent should remain simple.

It is the function.

The function should be applied to Reality.

Therefore, the real engineering question is not merely, “What tools does the agent have?”

The real question is, “What is the agent acting on?”

A tool is not Reality.

A browser is not Reality.

An API is not Reality.

A database connection is not Reality.

A file system is not Reality.

A submit button is not Reality.

Tools give the function reach. They do not guarantee that the function is acting on the right input.

This is the great misunderstanding of agentic AI.

People see a system that can use tools and assume it has agency.

But tool use is not agency in the mature sense.

Tool use is action capacity.

Agency requires the function to be applied to Reality.

A system can have many tools and still be acting on prediction.

That is dangerous because tool access increases consequence. A prediction without tools may merely appear on a screen. A prediction with tools can enter history. It can send the email, publish the claim, place the order, delete the file, update the record, or submit the paper.

The danger is not prediction by itself.

The danger is prediction with hands.

That is what immature agents often are: prediction with hands.

The mature agentic system must be different.

It must receive something closer to Reality.

That means the declared Actual must be present. In a laboratory or artificial system, Actual is represented by the actual outcome, the verified dataset, the source material, the label, the measurement, the test result, the record, the observed condition.

This laboratory Actual is never as pure as the Immutable Past. It can be mislabeled, incomplete, stale, biased, or poorly sampled. But it is still the attempt to place something in the numerator.

The prediction must also be present. That is the real component of the denominator. This is where today’s AI systems are strongest.

The ideational bias-vector must also be recognized. That is the imaginary component of the denominator. Every system stands in relationship with ideas. That relationship is not neutral. It has magnitude and argument. A model may lean toward certain forms, styles, assumptions, prohibitions, simplifications, refusals, or patterns. That leaning is not incidental. It is part of the denominator.

Only when these components are brought into relation can we speak of a synthetic Reality-like input.

Then the agent can act.

This gives us a cleaner architecture:

Prediction is not enough.

Prediction plus tools is not enough.

Prediction plus tools plus automation is not enough.

The agent should act on Reality.

If the system does not have Reality, it should act cautiously or not at all.
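A minimal sketch of that ordering, in the essay's own terms, appears below. The field names and the completeness check are assumptions made for illustration, not a specification of what a real Reality layer would contain.

```python
# Illustrative only: the simplest possible gate, checking that something actual
# sits in the numerator before the agent function is allowed to act.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class SyntheticReality:
    declared_actual: Optional[str]  # the numerator: outcome, record, measurement, test result
    prediction: str                 # real component of Expectation: what the model produced
    bias_note: str                  # imaginary component: the system's relationship with ideas


def is_constructed(reality: SyntheticReality) -> bool:
    """A Reality-like input exists only when a declared Actual is present at all."""
    return reality.declared_actual is not None


def act(reality: SyntheticReality, agent_fn: Callable[[SyntheticReality], str]) -> str:
    """Prediction plus tools is not enough; the function should receive the quotient."""
    if not is_constructed(reality):
        return "Declining to act: no declared Actual, so there is no Reality to act on."
    return agent_fn(reality)
```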

This is why creative domains often feel easier. In creative work, the human can sometimes accept prediction as Actual. If the model generates an image and the human uses the image, the prediction becomes the actual artifact inside that workflow.

In that case, the agent may not be very important. The economic value came from prediction being accepted.

But truth-bound domains are different.

A research citation cannot become real because the model predicted it.

A medical condition cannot become benign because the model predicted it.

A contract clause cannot mean something because the model predicted that meaning.

A bank balance cannot become correct because the model generated the number.

In truth-bound domains, Actual must discipline prediction.

That is why agents in these domains require a much stronger upstream Reality layer. The function must not merely submit the predicted artifact. It must act on a quotient that includes declared Actual, prediction, and the system’s relationship with ideas.

The agent is not the source of truth.

The agent is the function applied after truth has been structurally approximated.

This is where the current market language becomes sloppy. “AI agent” makes it sound as though the intelligence and the action are the same thing.

They are not.

AI is often the prediction machine.

The agent is the action function.

Automation is the mechanical movement through a workflow.

Synthetic Reality is the missing layer that should come before meaningful action.

Once we separate these, everything becomes easier to understand.

A model that writes a book is not acting as an agent simply because it produced a long artifact. It predicted a book-shaped output. If the human accepts that output and publishes it, the book enters history through acceptance and publication.

A script that uploads the book to Kindle is not the source of AI value. It is an automation. It may be useful, but it is not the miracle.

A system that verifies a research paper against actual sources before submission is closer to a mature agentic architecture. But even there, the submission function remains just a function. The serious work is constructing the Reality on which it acts.

So we should stop speaking as though agents are magical beings.

They are functions.

Some functions are simple.

Some are complex.

Some are automated.

Some are human.

Some are artificial.

But the category does not change.

A function takes an input and produces an output.

The moral, practical, and commercial question is whether the input is worthy of action.

That is where responsibility begins.

Not at the submit button.

Before it.

Before the function.

Before the agent.

At the construction of Reality.

The next generation of AI systems will not be judged merely by whether they can act. Acting is easy. Tools can act. Scripts can act. Automations can act. Bad agents can act very quickly.

The serious question is whether the system knows what it is acting on.

That is the line between automation and agency.

Automation moves.

Agency acts on Reality.

The agent is just a function.

The future belongs to those who remember what the function is supposed to receive.

Author: John Rector

Co-founded E2open with a $2.1 billion exit in May 2025. Opened a 3,000 sq ft AI Lab on Clements Ferry Road called "Charleston AI" in January 2026 to help local individuals and organizations understand and use artificial intelligence. Authored several books, including World War AI, Speak In The Past Tense, Ideas Have People, The Coming AI Subconscious, Robot Noon, and Love, The Cosmic Dance.
