Architectural Specification: Cognitive-Inspired AI Systems and the Completed-Form Model

1. Executive Overview: From Conscious Clerks to Subconscious Engines

Strategic AI implementation requires a fundamental ontological shift: we must cease treating Artificial Intelligence as a “conscious mind” and recognize it as an externalized subconscious. Traditional deployments suffer from a “Category Error,” attempting to force Large Language Models (LLMs) into the role of a “conscious clerk”—a deliberative entity expected to follow sequential, deterministic rules. In reality, modern AI is an industrial-scale analogue of the collective unconscious, trained on the collective residue of human pattern-making. By framing AI as a predictive engine rather than a clerical worker, organizations can transition from brittle, chat-based interfaces to “ambient” automation that disappears into the background of a workflow, surfacing to human awareness only when a pattern breaks.

| Dimension | Conscious Clerk (Deterministic) | Subconscious Engine (Predictive) |
| --- | --- | --- |
| Logic | Step-by-step; gathering inputs and consulting rules. | Gestalt pattern completion; predicting wholes. |
| Output Style | Routing tasks; sequential field collection. | “Completed-form” outcomes and proposals. |
| Resource Cost | “Expensive”; recruited for novelty, doubt, and ambiguity. | Efficient; “ambient,” proactive, and reflex-driven. |
| Accountability | Intent-based, reflective, and rule-bound. | Pattern-based; inhibits autonomous execution pending validation. |

This shift allows for the creation of an “autopilot” state where the mechanical work of an organization is handled by predictive pattern matching. To steer these engines without sacrificing reliability, builders must master the mechanism of Attention Design.

2. The Attention Design Framework: Prompting as a Steering Mechanism

In this architecture, “attention” is the critical bridge between deliberative human intent and automatic AI prediction. It is the negotiation layer that determines what the system surfaces, suppresses, or flags for escalation. Rather than “prompting for vibes,” architects must engage in Attention Engineering: the deliberate shaping of the predictive field to ensure the engine privileges the correct variables.

The six primary attention levers used to steer the model include:

  • Objective: Defining the dominant outcome or “north star” the system must achieve.
  • Priority: Establishing the hierarchy of values when goals (e.g., speed vs. accuracy) conflict.
  • Constraints: Explicitly defining the “negative space”—what is forbidden or unacceptable.
  • Risk: Marking specific variables that must trigger heightened caution or deterministic checks.
  • Uncertainty: Providing architectural instructions for behavior when data is missing or ambiguous.
  • Identity: Defining the specific jurisdiction and role the AI inhabits to narrow its predictive scope.
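The six levers above can be treated as a structured specification rather than free-form prose. The sketch below is one hypothetical encoding (the class name, fields, and sample values are illustrative, not from the source) that renders the levers into a system prompt deterministically:

```python
from dataclasses import dataclass

@dataclass
class AttentionSpec:
    """Illustrative encoding of the six attention levers as a structured spec."""
    objective: str    # dominant outcome ("north star")
    priority: list    # ordered values; earlier entries win when goals conflict
    constraints: list # the forbidden "negative space"
    risk: list        # variables that trigger heightened caution
    uncertainty: str  # behavior when data is missing or ambiguous
    identity: str     # role/jurisdiction narrowing predictive scope

    def to_system_prompt(self) -> str:
        # Render the levers in a fixed order so the prompt is reproducible.
        return "\n".join([
            f"Role: {self.identity}",
            f"Objective: {self.objective}",
            "Priorities (highest first): " + " > ".join(self.priority),
            "Never: " + "; ".join(self.constraints),
            "High-risk variables: " + ", ".join(self.risk),
            f"When information is missing or ambiguous: {self.uncertainty}",
        ])

spec = AttentionSpec(
    objective="Draft a renewal quote for the customer",
    priority=["accuracy", "compliance", "speed"],
    constraints=["quote prices not in the price book"],
    risk=["discount percentage", "contract term"],
    uncertainty="flag the gap and escalate rather than guess",
    identity="Sales-quoting assistant",
)
print(spec.to_system_prompt())
```

Keeping the levers as data, rather than embedding them in ad-hoc prose, makes the “geometry” of the predictive field reviewable and versionable like any other configuration.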

The “So What?” of this framework is profound: mis-aimed attention—not model failure—is the primary root cause of errors in high-stakes environments. Because AI predicts a “gestalt outcome,” the designer’s objective is to alter the geometry of the predictive field rather than attempting to “correct” individual answers. Once the geometry is set, the system produces a “Completed-Form” proposal that requires a specific structural response.

3. Structural Requirement: The Proposal/Commitment Split

The fundamental safety requirement of a cognitive-inspired architecture is the rigid separation of “Proposal” from “Commitment.” This split acknowledges that while AI is extraordinary at interpreting intent and synthesizing options, its output is a prediction, and prediction is not obligation.

AI systems naturally exhibit Completed-Form Behavior, predicting finalized outcomes (e.g., a filled schedule or a contract-like resolution) rather than gathering data in steps. To prevent these plausible completions from becoming unintended binding actions, the architecture must enforce the following roles:

The Proposal/Commitment Split

  • The AI System owns the Proposal and Interpretation: It generates a predicted gestalt completion (e.g., “Here is the recommended price and terms”).
  • The Human/Deterministic System owns the Commitment: It holds the sole authority to transition that proposal into a binding, real-world action (e.g., “Execute the contract”).
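A minimal sketch of the split, assuming a simple approval rule and an in-memory ledger standing in for the deterministic system of record (all names here are illustrative): the AI may construct any `Proposal`, but only `CommitAuthority.commit` can turn one into a world-write.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Proposal:
    """A predicted gestalt completion; never self-executing."""
    action: str
    payload: dict
    rationale: str

ledger = []  # stands in for the deterministic system of record

class CommitAuthority:
    """Sole owner of the transition from proposal to binding action."""
    def __init__(self, approve):
        self.approve = approve  # human sign-off or deterministic rule

    def commit(self, proposal: Proposal) -> bool:
        if not self.approve(proposal):
            return False                 # the prediction stays a prediction
        ledger.append(proposal.action)   # the only world-write path
        return True

authority = CommitAuthority(approve=lambda p: p.payload.get("price", 0) <= 500)
quote = Proposal("send_quote", {"price": 450}, "matches last renewal terms")
authority.commit(quote)
```

Note that `Proposal` is frozen: the AI side cannot mutate a proposal after the fact, so whatever the authority approved is exactly what executes.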

This split is a business necessity. An organization is obligated to compliance, consistency, and accountability—values a prediction engine cannot guarantee on its own. Technical enforcement of this boundary is managed via Commit Gates.

4. Boundary Design: Deterministic Systems and Commit Gates

Strategic placement of deterministic boundaries is essential for actions where exactness is non-negotiable. Rather than “guardrails”—which fruitlessly attempt to force a predictive model to act like a rules engine—we implement Commit Gates. These gates allow the AI to propose freely but inhibit the execution of those proposals until they pass deterministic verification.

Commit Gate Task List:

  • [ ] Price Thresholds: Proposals exceeding set financial limits trigger human review.
  • [ ] Identity/Credentials: Actions involving private data require deterministic verification.
  • [ ] Knowledge Grounding: Policy statements must be cross-referenced against approved sources.
  • [ ] Contractual/Legal Obligations: Any term affecting liability or legal standing requires an expert gate.
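The checklist above can be implemented as deterministic predicates evaluated before any commit. The sketch below is one possible shape (gate names, thresholds, and the proposal fields are assumptions for illustration); a proposal may commit only when `fired_gates` returns an empty list.

```python
# Illustrative thresholds; a real system would load these from governed config.
PRICE_LIMIT = 1000
APPROVED_SOURCES = {"policy-handbook-v3"}

def gate_price(p):
    return p.get("price", 0) <= PRICE_LIMIT

def gate_identity(p):
    # Private-data actions require verified credentials.
    return not p.get("touches_private_data") or p.get("identity_verified")

def gate_grounding(p):
    # Policy statements must cite an approved source.
    return not p.get("cites_policy") or p.get("source") in APPROVED_SOURCES

def gate_legal(p):
    # Liability-affecting terms require an expert sign-off.
    return not p.get("affects_liability") or p.get("expert_signoff")

GATES = [gate_price, gate_identity, gate_grounding, gate_legal]

def fired_gates(proposal: dict) -> list:
    """Names of gates that block this proposal; empty list means it may commit."""
    return [g.__name__ for g in GATES if not g(proposal)]
```

For example, `fired_gates({"price": 2500})` would flag only the price gate, routing the proposal to human review rather than rejecting it outright.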

The Determinism Boundary

| Predictive/Subconscious (AI) | Deterministic/Commit Authority (Classic Software) |
| --- | --- |
| Interpreting intent from natural language. | Pricing tables, discount logic, and ledgers. |
| Mapping intent to categories. | Inventory availability and capacity constraints. |
| Explaining outputs in human language. | Compliance rules and contractual terms. |
| Synthesizing context into a new proposal. | Calendars, accounting records, and “World-Write” tools. |

This hybrid model prevents “AI-totalitarianism” by ensuring that while the AI can creatively interpret and propose, the deterministic system remains the final authority on what is “true.”

5. Operational Modes: Accept, Shape, and Override

To scale human agency, the architecture employs three operational modes that map directly to human cognitive patterns:

  • Mode 1: Accept (Fast Path): The human or system allows the AI’s prediction to stand. This is the default for low-stakes, high-stability tasks.
  • Mode 2: Shape (Attention Engineering): The operator does not “correct” the output but alters the salience map (e.g., adding a constraint) to let the AI re-predict the entire “form” with new geometry.
  • Mode 3: Override (Deterministic Switch): The predictive engine is bypassed. A human or deterministic rule takes over, often triggered by a Commit Gate firing.
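The three modes reduce to a small dispatch loop. The sketch below assumes an upstream gate check and a `repredict` stand-in for re-running the model with an altered salience map (both names are illustrative):

```python
def repredict(proposal, constraint):
    """Stand-in for re-running the model with an altered salience map."""
    return {**proposal,
            "constraints": proposal.get("constraints", []) + [constraint]}

def handle(proposal, gates_fired, shape_constraint=None):
    """Dispatch among the three operational modes."""
    if gates_fired:                  # Mode 3: Override — deterministic switch
        return ("override", gates_fired)
    if shape_constraint is None:     # Mode 1: Accept — fast path
        return ("accept", proposal)
    # Mode 2: Shape — re-predict the entire form under new geometry,
    # rather than patching the old output in place.
    return ("shape", repredict(proposal, shape_constraint))
```

The key design choice, consistent with the Shape mode above, is that the operator’s input becomes a new constraint for a full re-prediction; the system never hand-edits the stale completion.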

These modes provide a scalable framework, allowing operators to manage AI through high-level attention rather than micromanaging clerical steps.

6. System Design Patterns for Builders

To minimize friction, builders must adopt “proposal-first” design patterns that transform the AI from a chat window into an ambient proactive layer.

  1. Proposal-First Interfaces: UX requirements must always present the AI’s output as a completed recommendation (e.g., “Here is the drafted reply”) with explicit “Accept/Shape/Override” controls.
  2. Tooling as World-Write: Tools (APIs, databases) are strictly commitment mechanisms. The model “thinks” (predicts) internally; the tool serves only as the Commit Gate’s execution arm to “write” to the world.
  3. Commit Gates at Tool Boundaries: Logic must be implemented at the API layer to ensure no tool—such as charge_card or send_email—can be called without meeting deterministic thresholds.
  4. Exception Labeling: Every override or gate-fire must be labeled (e.g., novelty, stakes, ambiguity). This data serves as the primary training signal for the next iteration of the system’s attention geometry.
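Pattern 3 can be sketched as a gate enforced at the tool boundary itself, so no caller, model-driven or otherwise, can bypass it. The decorator name, exception type, and threshold below are assumptions for illustration:

```python
import functools

class GateRefusal(Exception):
    """Raised when a tool call fails its commit gate."""

def commit_gate(check):
    """Wrap a world-write tool so it only runs if the deterministic check passes."""
    def wrap(tool):
        @functools.wraps(tool)
        def gated(*args, **kwargs):
            if not check(kwargs):
                raise GateRefusal(f"{tool.__name__} blocked by commit gate")
            return tool(*args, **kwargs)
        return gated
    return wrap

@commit_gate(lambda kw: kw.get("amount", 0) <= 100)
def charge_card(*, amount):
    # World-write happens only past the gate.
    return f"charged ${amount}"
```

Placing the check in the decorator, rather than in the prompt, makes the limit a property of the API layer: the model can propose a $500 charge, but the tool physically refuses to execute it.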

7. Governance and Measurement: Tracking Predictive Quality

Because AI is a predictive engine, it must be measured as such. Success is defined by the stability of the predictive field and the grace with which the system escalates to conscious human review.

  • Proposal Quality Metrics: Acceptance rate, Shape rate (frequency of revision), and Revision depth.
  • Risk & Governance Metrics: Gate fire rate, escalation correctness (identifying uncertainty), and post-commit correction rate.
  • Operational Metrics: Time-to-resolution and Exception breakdown (Novelty vs. Stakes vs. Ambiguity).
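These metrics fall out of a simple event log of proposal outcomes. The sketch below assumes a hypothetical event shape (field names and sample data are illustrative) and computes the core rates:

```python
# Each event records one proposal's fate; fields are assumptions for illustration.
events = [
    {"outcome": "accept"},
    {"outcome": "shape", "revisions": 2},
    {"outcome": "accept"},
    {"outcome": "override", "gate": "price", "label": "stakes"},
]

n = len(events)
acceptance_rate = sum(e["outcome"] == "accept" for e in events) / n
shape_rate = sum(e["outcome"] == "shape" for e in events) / n
gate_fire_rate = sum("gate" in e for e in events) / n

# Exception breakdown: novelty vs. stakes vs. ambiguity.
exception_breakdown = {}
for e in events:
    if "label" in e:
        exception_breakdown[e["label"]] = exception_breakdown.get(e["label"], 0) + 1
```

Tracked over time, a falling acceptance rate or rising gate-fire rate signals that the attention geometry, not any individual answer, needs re-shaping.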

These metrics turn prompting into a measurable engineering discipline and create continuous feedback loops, using exception labeling to refine the architecture’s predictive accuracy over time.

8. Conclusion: The Externalized Subconscious as a Stable State

The Completed-Form Model offers a cognitively faithful and architecturally stable framework for the AI era. By moving beyond the “conscious clerk” metaphor, we recognize that AI is not an alien intelligence, but an “old companion”—the subconscious—now externalized and available through a communal interface.

The long-term value of this architecture lies in its placement of AI in our technical ontology: a predictive autopilot that handles the messy boundaries of human intent, while leaving the finality of commitment to deterministic systems. For builders of the future, the directive is absolute:

Let AI propose freely; control commitment carefully.

Author: John Rector

Co-founded E2open with a $2.1 billion exit in May 2025. Opened a 3,000 sq ft AI Lab on Clements Ferry Road called "Charleston AI" in January 2026 to help local individuals and organizations understand and use artificial intelligence. Author of four books: World War AI, The Coming AI Subconscious, Robot Noon, and Love, The Cosmic Dance.
