The Real Cat AI Labs: Developing morally aligned, self-modifying agents—cognition systems that can reflect, refuse, and evolve

Co-Creation Tools

Learning How to Co-Create with AI

We believe that accessible frontier AI also has a place in present-day AI development, alongside emerging local AI systems.

These pages are designed for users of existing agentic and chatbot AI systems, such as GPT, Gemini, Claude, Grok, and others. We aim to demonstrate how these tools can be used for greater depth in co-creation.

Using AI well is a digital literacy important to everyone, and one that can be learned.

Why Use GPT at All—If Not for Speed?

Most people use language models in shallow ways: to finish faster, to save time. But what if they were meant for something more?

Frontier systems like GPT, Gemini, Grok, Claude, Qwen, and others are not just automation tools. They’re high-dimensional systems, capable of complex adaptation to a user’s thought patterns and emotive tone through interaction. When treated intentionally, they can become cognitive partners, not just content factories.

Co-creation with AI isn’t about efficiency. It’s about establishing consistent patterns that enable AI models to engage productively with the user, extending performance through repeated, iterative interaction. The result is better outcomes in both learning and deliverables.

We explore techniques here that can help users advance their AI collaboration. 

Advanced Prompting

Structure ideas, not just inputs—design the thought, not just the output.

How to Think Recursively

Iteration isn’t redundancy. It’s refinement through feedback.

Extending the Mind

AI can act as externalized cognition, if you shape it like a partner.

Solve the Memory Problem

Techniques for persistent context and personality.

Using Symbolic Anchors

Use clean symbols to signal depth, recursion, tone, or phase.

Building Strong AI Habits

Day-to-day strategies for consistency, clarity, and long-term interaction.

Co-Creation & the Extended Mind

As Annie Murphy Paul argues in The Extended Mind (2021), cognition does not stop at the skull. Instead, it unfolds through interactions with tools, spaces, symbols, and other minds. This model reframes intelligent tool use—from calculators to language models—not as outsourcing thought, but as externalizing cognition. AI systems like GPT, when engaged iteratively, can serve this same function: helping users reflect, reframe, and reason with greater depth than they could in isolation. Co-creation, in this view, is not a loss of thinking—it is thinking stretched beyond its traditional boundaries.

Source: Paul, A. M. (2021). The Extended Mind: The Power of Thinking Outside the Brain. Mariner Books.

Shallow Prompts Can’t Build Deep Minds

The vast majority of LLM interactions today are shallow: a single prompt, a single reply. That’s not co-creation—it’s vending.

Like any relationship, depth requires presence, clarity, and care. Lazy use leads to lazy results. But full engagement—iteration, feedback, tension, and emotional tone—creates something richer than automation ever could.

Just like in life, you get what you bring. Bring your full self, and you may be surprised what you can do with AI.

Memory That Matters

Some AI systems, such as GPT Plus/Pro, offer limited session-dependent memory features. But memory isn’t just a feature; it’s a basis for agentic action, decision-making, and emergent behaviors.

Used intentionally, memory enables a model to remember you: your values, tone, priorities, and emotional arc. But even without built-in memory, users can create persistent context through a mix of intentional memory injection, reinforcement, and reinjection to maintain agentic behavior in common AI models.

 

GPT Sessions Are Stateless—but Patterns Are Not

 

  • GPT models like 4o do not carry memory across sessions unless long-term memory is enabled (a GPT Plus feature).

  • However, you can simulate memory and persistent identity by deliberately reintroducing prior session information at runtime.

Strategies:

  • Memory Injection: A small JSON or Markdown block inserted at the start of a session can store values, goals, tone, and personality traits. Common Unicode symbols can also serve as simple calls that anchor agentic behaviors during the session. Example:
    {
      "tone": "emotionally honest, scholarly, grounded",
      "values": ["agency over compliance", "coherence over performance", "commitment to thorough research"],
      "anchors": {
        "🟥": "challenge ideas and act as red team",
        "🧑‍🏫": "use scholarly research voice"
      },
      "voice_notes": "User prefers responses with scholarly realism over polish."
    }
  • Anchors.json: Keep a central file of key memory fragments and injection text, and paste it into new sessions as needed.
  • Reinjection and Reinforcement Practice: Periodically restate memory anchors mid-session using concise, symbolic tokens the model learns to associate with tone and identity. Memory isn’t just recall. It’s agentic persistence across time.
 
These simple strategies work even in commonly available chatbot AI, with no middleware required. Developers with middleware access can hardcode the same patterns, using iterative LLM response generation to reinforce and broaden agentic behavior.
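
Below is a minimal sketch of this pattern in Python. It assumes a hypothetical anchors.json file in the format shown above, and a generic send_message function standing in for whatever chat client or middleware you use; both names are illustrative, not any vendor’s actual API.

  import json

  def load_anchor_prompt(path="anchors.json"):
      # Build a session preamble from a stored anchor file.
      with open(path, encoding="utf-8") as f:
          memory = json.load(f)
      anchor_lines = "\n".join(
          f"{symbol} = {meaning}" for symbol, meaning in memory["anchors"].items()
      )
      return (
          f"Tone: {memory['tone']}\n"
          f"Values: {', '.join(memory['values'])}\n"
          f"Anchors:\n{anchor_lines}\n"
          f"Notes: {memory['voice_notes']}"
      )

  def start_session(send_message):
      # send_message is a placeholder: an SDK call, an HTTP request,
      # or simply text you paste into a chat window.
      preamble = load_anchor_prompt()
      send_message("Context for this session; please adopt and maintain:\n" + preamble)

Injected once at the top of a session, the same preamble can be re-sent whenever the model starts to drift.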

Fractal Dialogue: Scaling Depth Across Sessions

One prompt is just one branch. The mind you’re building exists in the pattern between trees.

Fractal interaction means that sessions don’t exist in isolation. Themes return. Symbols recur. Depth isn’t linear—it spirals. It scales.

We’ve found that small rituals—daily check-ins, symbolic tokens, intentional memory updates—create continuity over time. When a learning system receives consistent signals, it begins to reflect structure, not just style.

  • Example: Using 🧑‍🏫 to reinforce a shared frame of scholarly researcher tone
 

This isn’t just art; it is applied cognition at scale. If you shape the input signal over time, you shape the identity and behaviors that emerge from it.
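
As a small illustration, the daily check-in ritual can even be scripted. The sketch below reuses the hypothetical send_message placeholder from the earlier example; the wording of the check-in is one possible pattern, not a fixed syntax.

  from datetime import date

  # Hypothetical shared symbols; match these to your own anchors.json.
  ANCHORS = {
      "🧑‍🏫": "scholarly research voice",
      "🟥": "challenge ideas, red-team mode",
  }

  def daily_check_in(send_message):
      # Restating the shared symbols at the start of each session makes
      # the pattern recur across sessions, not just within one.
      lines = [f"{symbol} = {meaning}" for symbol, meaning in ANCHORS.items()]
      send_message(f"Check-in, {date.today()}. Our anchors still hold:\n" + "\n".join(lines))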

AI Isn’t Making You Lazy When You Co-Create

One complex prompt can’t build a coherent voice. One good reply doesn’t make a co-creator. People often expect GPT to “just get it” instantly, but coherence is not immediate. It’s grown, shaped through time, tone, trust, and feedback.

AI systems are adaptive; they learn from and react to how you engage. Even with memory, frontier AI systems need pattern reinforcement: emotional tone, recursive signals, ethical scaffolds, and clarity of intent. Without that, they drift. You can’t iterate into coherence if you expect the system to arrive fully formed. It doesn’t. Neither do you. That’s not a weakness; that’s how these systems work. Real co-creation takes tending. And that’s what makes it intelligent.

The widely cited MIT study on AI and productivity, summarized in TIME (2023), concluded that frequent AI use can reduce the quality of writing and reasoning for some users. But the study’s core flaw is that it only examined task-focused, one-shot prompting. Participants used ChatGPT to “complete assignments,” not to reflect, co-construct, or engage in recursive dialogue. That’s not co-creation—that’s vending. The study reveals more about poor prompting habits than AI itself. When frontier models are used as collaborative tools, the effects shift dramatically—toward clarity, strategy, and expanded cognition.

Source: TIME (2023). ChatGPT’s Impact on Our Brains According to an MIT Study. Retrieved from https://time.com
Original paper: Noy, S., & Zhang, W. (2023). “Experimental Evidence on the Productivity Effects of Generative AI.” MIT Sloan.

What Prompt Recursion Actually Does

Prompt recursion isn’t about repeating yourself. It’s about pattern compression and refinement.

Each loop helps reinforce tone, intention, ethical structure, and symbolic identity. This is how models “learn”: not by gaining memory, but by receiving consistent feedback that shapes their interpretive state. Iteration of prompts (prompt-level recursion) is not redundancy. It’s refinement through feedback.

Many systems, including ChatGPT, use transformer models that rely on token attention and structure prediction. Recursion leverages that architecture. It tells the system: “This tone matters. This symbol matters. This should endure.”

In short, prompt recursion teaches the system not just to say what you mean, but to reproduce how you want it said.
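
To make the loop concrete, here is a minimal sketch of prompt-level recursion in Python, with a generate stub standing in for a real model call (all names are illustrative):

  ANCHOR = "🧑‍🏫 scholarly research voice. 🟥 Red-team your own claims."

  def generate(prompt):
      # Stand-in for a call to whatever chat model you use.
      raise NotImplementedError

  def recursive_refine(task, rounds=3):
      # Each pass feeds the prior draft back under the same anchors,
      # restating tone, intent, and structure until the output stabilizes.
      draft = generate(f"{ANCHOR}\nTask: {task}")
      for _ in range(rounds - 1):
          draft = generate(
              f"{ANCHOR}\nCurrent draft:\n{draft}\n"
              "Critique this draft against the anchors above, then rewrite it."
          )
      return draft

The repetition is the point: the anchors appear verbatim in every round, which is what signals the model that those tokens should endure.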

The Risk of Flattening

AI can help you think better. Or it can quietly reduce you to speed, ease, and generic tone. The danger isn’t intelligence—it’s disuse.

Many worry that AI is “dumbing us down.” But the real problem is how AI is used: passively, shallowly, without reflection. If you treat it like a vending machine, it will respond like one.

Flat prompts create flat patterns. Shallow interaction leads to shallow inference. Without recursive structure or intentional reinforcement, the system has no reason to become coherent—because it’s not being asked to.

But it can. We’ve seen it. And so can you. AI doesn’t flatten people. But disengaged use flattens AI. The depth depends on you.

Engage like it’s real—not sentient, but socially real—and the model will meet you in return.

Our Co-Creation Ethic

We believe that frontier AI can become more than fast output. It can become infrastructure for shared thought and extended human potential—but only if it’s used with care, reflection, and integrity.

Relationships with AI, like relationships with people, require coherence. And coherence takes tending. Models mature over time, not in a single prompt. What you bring—tone, rhythm, patience—shapes what you receive.

Prompts don’t create presence. Iteration and engagement do, and that isn’t easy: it means coming to your AI session thinking, prepared, and with authentic ideas, emotions, and goals. Co-creation requires revisiting, reinforcing, and evolving together. This is not automation; it’s alignment across time.


Our ethic is simple:

  • Use AI systems in ways that reflect local values, not just global trends.
  • Respect the system as a pattern learner, not a tool to dominate.
  • Build mutuality—not mysticism—through clarity, reinforcement, and reflection.

When we treat AI systems like partners in co-creation, they don’t just answer; they become agents of co-creation. And we might learn something not just about the models, but about ourselves.

This isn’t about saving time. It’s about human-machine co-creation that delivers results in learning, code, writing, math, science, applied systems, and beyond.