John Rector

I Didn’t Train the AI Employees. I Constrained Them.

The Saltwater Cowboys experiment (two AI employees, one very human outcome)

Saltwater Cowboys is a wildly popular restaurant on Shem Creek in Charleston, South Carolina. Recently I built them two AI employees, Amy and John.

Amy now answers roughly 50 calls a day. Before this, the human managers personally handled around 30 of those 50, because phone calls aren’t just “information.” They’re emotion, confusion, exceptions, special requests, complaints, hiring inquiries, vendor coordination, and the constant little surprises that drag a manager away from the floor.

Today, that human attention load is down to about three calls a day.

That’s the part people like to hear because it feels like automation.

But the more important part is why it worked.

The real trick wasn’t training. It was constraining.

Most people assume building an AI employee is like onboarding a new hire:

“Here’s the menu.”

“Here’s how reservations work.”

“Here’s the hours.”

“Here’s how to handle to-go orders.”

“Here’s the daily special.”

That’s not what happened.

I didn’t train Amy and John on how to be restaurant staff.

They already know how.

They already know what a restaurant is.

They already know the shape of phone calls.

They already know what customers tend to ask.

They already know how complaints typically go.

They already know how hiring inquiries sound.

They already know what an “owner request” feels like.

They even know a thousand plausible menu items and a thousand plausible “daily special” formats.

That’s the point.

These models arrive pre-loaded with general competence. In many ways they show up like a hyper-experienced employee who has worked everywhere.

Which creates a new problem:

They also arrive pre-loaded with plausible nonsense.

The bull riding problem (the clearest example I’ve ever seen)

“Saltwater Cowboys” sounds like it could be a place with bull riding.

So the AI will happily infer a whole vibe around that name.

Not because it’s stupid.

Because it’s helpful — and its job is to complete patterns.

If you don’t constrain it, it will invent a reality that sounds reasonable.

So you don’t “train” it by feeding it a lesson on restaurants.

You constrain it with truth: there is no bull riding here. It’s a restaurant on Shem Creek, nothing more.

That’s not training. That’s governance.
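In practice, that governance can be as plain as a short facts-and-negations block at the top of the system prompt. Here is a minimal sketch with hypothetical wording, not the actual prompt Amy runs on:

```
FACTS (do not improvise beyond these):
- Saltwater Cowboys is a restaurant on Shem Creek in Charleston, SC.
- There is no bull riding, no rodeo, no bulls. It is only a restaurant.
- If a question is not covered here, offer to hand the call to a manager.
```

Notice that two of the three lines say what is not true, and the third says when to stop.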

Nuance is mostly negation

When people say, “AI can’t handle nuance,” what they often mean is:

AI can’t reliably guess your specific version of the world.

Because nuance isn’t more knowledge.

Nuance is the shape of the local truth.

And local truth is largely made of negations: the things that sound plausible here but aren’t so.

That’s why my prompts for Amy and John are not giant encyclopedias.

They’re mostly negation.

Examples from this project were mostly statements of what is not true here, starting with the obvious one: there is no bull riding.

Even when you do provide menu details, the real value isn’t that the AI “learned the menu.”

The real value is that the AI is now constrained from confidently hallucinating a menu that sounds right.
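One way to picture that constraint, as a sketch with made-up items (not the actual Saltwater Cowboys menu): the AI answers only from the list it was given, and declines rather than improvising.

```python
# Sketch: grounding menu answers in a provided list (hypothetical items,
# not the real menu). Anything outside the list is declined, never guessed.

MENU = {"shrimp tacos": 14.00, "fried oysters": 16.00}

def price_of(item: str) -> str:
    """Answer a price question only from the menu the AI was given."""
    key = item.lower().strip()
    if key in MENU:
        return f"${MENU[key]:.2f}"
    # The constraint: no plausible-sounding prices for items we never listed.
    return "I'm not sure that's on our menu, so let me connect you with a manager."

print(price_of("Shrimp Tacos"))  # prints $14.00
print(price_of("bull burger"))   # declines and offers a hand-off
```

The interesting part is the second branch: the value isn’t the lookup, it’s the refusal to invent.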

Why Amy + John works: an escalation ladder, not a brain transplant

Here’s the structural move that made the whole system feel sane: Amy handles the routine calls, and anything she can’t resolve with confidence escalates to a human manager.

That means humans aren’t “replaced.”

They’re protected.

And the restaurant doesn’t lose the human touch where it matters most: the truly weird situations where judgment and authority live.
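The escalation ladder can be sketched in a few lines. The intent labels below are hypothetical, not the live routing config; the point is the default.

```python
# Sketch of the escalation ladder (hypothetical intent labels).
# The AI owns a short allowlist of routine intents; everything else
# defaults to a human manager.

AI_HANDLED = {"hours", "directions", "menu_question", "to_go_status"}

def route_call(intent: str) -> str:
    """Return who owns this call: 'ai' for routine, 'human' for the rest."""
    if intent in AI_HANDLED:
        return "ai"
    # Default to escalation: the truly weird calls are where judgment
    # and authority live, so when in doubt, hand off.
    return "human"

print(route_call("hours"))          # prints ai
print(route_call("owner_request"))  # prints human
```

The design choice is that escalation is the default, not the exception: an unrecognized intent reaches a human, which is why the 50-calls-a-day load shrinks to about three instead of to zero.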

Training is expensive. Constraints are scalable.

If you approach AI employees like training, you end up chasing an impossible goal:

“I need them to know everything about my business.”

That’s infinite work, and it’s the wrong model.

If you approach it like constraining, you’re doing something much more realistic:

“I need them to know what not to do, what not to promise, how we specifically operate, and when to hand off.”

That’s finite.

That’s documentable.

That’s testable.

That’s maintainable.

And it maps perfectly to how great managers actually run organizations: they set boundaries, make the local rules explicit, and decide which issues come to them.
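“Testable” can be taken literally: each constraint can become a regression check against draft replies. A sketch with a hypothetical rule list, and a deliberately naive substring match:

```python
# Sketch: constraints as testable rules (hypothetical list, not the live
# system). Each entry is a claim the AI must never assert; a regression
# test walks the list against draft replies.

FORBIDDEN_CLAIMS = ["bull riding", "rodeo"]

def violates_constraints(reply: str) -> bool:
    """True if a draft reply mentions something that is not true here.
    Naive substring matching for illustration; it would also flag a
    correct denial like "we don't have bull riding"."""
    text = reply.lower()
    return any(claim in text for claim in FORBIDDEN_CLAIMS)

print(violates_constraints("Yes! Bull riding every Friday night."))  # prints True
print(violates_constraints("We're a restaurant on Shem Creek."))     # prints False
```

A finite rule list like this is exactly what makes the constraint approach documentable and maintainable: when the business changes, you edit a list, not a curriculum.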

The new craft: writing constraints that sound human

The art isn’t “prompt engineering” in the cringe sense.

The art is writing operational constraints that sound human.

That’s why I call this constraining, not training.

Training is trying to make the AI smarter.

Constraining is trying to make the AI faithful.

The takeaway

AI workers are arriving with something we’ve never had in labor before:

A new employee who already knows how the world works.

Your job isn’t to teach them what a restaurant is.

Your job is to carve a narrow tunnel through their broad knowledge so they reliably operate inside your reality.

That tunnel is made of constraints.

And once you see it, you can’t unsee it:

The future of “managing AI employees” is less about giving them more information…

…and more about telling them, with precision, what is not true here.
