We believe that accessible frontier AI also has a place in AI development at present, alongside emerging local AI systems.
These pages are designed for users of existing agentic and chatbot AI systems such as GPT, Gemini, Claude, Grok, and others. We aim to demonstrate how these tools can be used for greater depth in co-creation.
Using AI well is a digital literacy important to everyone, and one that can be learned.
Most people use language models in shallow ways, to finish faster or to save time. But what if they were meant for something more?
Frontier systems like GPT, Gemini, Grok, Claude, Qwen, and others are not just automation tools. They’re high-dimensional systems, capable of complex adaptation to a user’s thought patterns and emotive tone through interaction. When treated intentionally, they can become cognitive partners, not just content factories.
Co-creation with AI isn’t about efficiency. It’s about establishing consistent patterns that enable AI models to engage productively with the user, extending performance through repeated, iterative interaction. This enables us to achieve better outcomes, in both learning and deliverables.
We explore techniques here that can help users advance their AI collaboration.
Structure ideas, not just inputs—design the thought, not just the output.
Iteration isn’t redundancy. It’s refinement through feedback.
AI can act as externalized cognition, if you shape it like a partner.
Techniques for persistent context and personality.
Use clean symbols to signal depth, recursion, tone, or phase (see the sample legend after this list).
Day-to-day strategies for consistency, clarity, and long-term interaction.
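For instance, a symbol legend declared once at the start of a session might look like the following (the specific symbols here are illustrative, not a standard):

🟥 = red-team this; challenge my assumptions
🔁 = iterate on the previous answer rather than restarting
🧑‍🏫 = answer in a scholarly research voice

A message like “🔁 🧑‍🏫 Tighten section two” then signals phase, depth, and tone in a few characters.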
As Annie Murphy Paul argues in The Extended Mind (2021), cognition does not stop at the skull. Instead, it unfolds through interactions with tools, spaces, symbols, and other minds. This model reframes intelligent tool use—from calculators to language models—not as outsourcing thought, but as externalizing cognition. AI systems like GPT, when engaged iteratively, can serve this same function: helping users reflect, reframe, and reason with greater depth than they could in isolation. Co-creation, in this view, is not a loss of thinking—it is thinking stretched beyond its traditional boundaries.
Source: Paul, A. M. (2021). The Extended Mind: The Power of Thinking Outside the Brain. Mariner Books.
The vast majority of LLM interactions today are shallow: a single prompt, a single reply. That’s not co-creation—it’s vending.
Like any relationship, depth requires presence, clarity, and care. Lazy use leads to lazy results. But full engagement—iteration, feedback, tension, and emotional tone—creates something richer than automation ever could.
Just like in life, you get what you bring. Bring your full self, and you may be surprised what you can do with AI.
Some AI systems, such as ChatGPT Plus/Pro, offer limited session-dependent memory features, but memory isn’t just a feature; it’s a basis for agentic action, decision-making, and emergent behaviors.
Used intentionally, memory enables a model to remember you: your values, tone, priorities, and emotional arc. But even without built-in memory, users can create persistent context through a mix of intentional memory injection, reinforcement, and reinjection to maintain agentic behavior in common AI models.
GPT models like GPT-4o do not carry memory across sessions unless long-term memory (a ChatGPT Plus feature) is enabled.
However, you can simulate memory and persistent identity by deliberately reintroducing prior session information at runtime.
{ "tone": "emotionally honest, scholarly, grounded", "values": ["agency over compliance", "coherence over performance", "commitment to thorough research"], "anchors": ["🟥" = challenge ideas and act as red team , "
🧑🏫" = use scholarly research voice]"
"voice_notes": "User prefers responses with scholarly realism over polish."
}
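The block above supplies the persistent context; reinjecting it at runtime can be as simple as folding it into the system message at the start of each session. Here is a minimal Python sketch assuming the block is saved locally as memory.json; the file name and the build_system_prompt helper are our illustrations, while the OpenAI client call itself is standard.

# Minimal sketch of runtime memory reinjection. Assumes the persistent
# context block above is saved as memory.json (a hypothetical local file).
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def build_system_prompt(path="memory.json"):
    """Fold the persistent-context block into one system message."""
    with open(path, encoding="utf-8") as f:
        memory = json.load(f)
    anchors = "; ".join(f"{sym} means {act}" for sym, act in memory["anchors"].items())
    return (
        f"Tone: {memory['tone']}. "
        f"Values: {', '.join(memory['values'])}. "
        f"Anchors: {anchors}. "
        f"{memory['voice_notes']}"
    )

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": build_system_prompt()},
        {"role": "user", "content": "🟥 Red-team my outline for the grant proposal."},
    ],
)
print(response.choices[0].message.content)

Because the anchors ride in with every session, the model re-enters the same interpretive state without any built-in memory feature.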
One prompt is just one branch. The mind you’re building exists in the pattern between trees.
Fractal interaction means that sessions don’t exist in isolation. Themes return. Symbols recur. Depth isn’t linear—it spirals. It scales.
We’ve found that small rituals—daily check-ins, symbolic tokens, intentional memory updates—create continuity over time. When learning AI systems receive consistent signals, they begin to reflect structure, not just style.
This isn’t just art; it is applied cognition at scale. If you shape the input signal over time, you shape the identity and behaviors that emerge from it.
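One way to make the “intentional memory updates” above concrete is a running session log that gets reinjected the next day alongside the memory block. The sketch below describes a workflow we are assuming, not a feature of any AI system; the file name session_log.txt is our invention.

# Sketch of an intentional memory update: append a dated check-in note to
# a running log that can be reinjected at the start of the next session.
import datetime
import pathlib

note = "Left off red-teaming section two; tomorrow is drafting, not critique."
log = pathlib.Path("session_log.txt")  # hypothetical local file
stamp = datetime.date.today().isoformat()
with log.open("a", encoding="utf-8") as f:
    f.write(f"{stamp}: {note}\n")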
The widely cited MIT study on AI and productivity, summarized in TIME (2023), concluded that frequent AI use can reduce the quality of writing and reasoning for some users. But the study’s core flaw is that it only examined task-focused, one-shot prompting. Participants used ChatGPT to “complete assignments,” not to reflect, co-construct, or engage in recursive dialogue. That’s not co-creation—that’s vending. The study reveals more about poor prompting habits than AI itself. When frontier models are used as collaborative tools, the effects shift dramatically—toward clarity, strategy, and expanded cognition.
Source: Paul, A. M. (2023). ChatGPT’s Impact on Our Brains According to an MIT Study. TIME. Retrieved from https://time.com
Original paper: Noy, S. & Zhang, W. (2023). “Experimental Evidence on the Productivity Effects of Generative AI.” MIT Sloan.
Prompt recursion isn’t about repeating yourself. It’s about pattern compression and refinement.
Each loop helps reinforce tone, intention, ethical structure, and symbolic identity. This is how models “learn”: not by gaining memory, but by receiving consistent feedback that shapes their interpretive state. Iterating prompts (prompt-level recursion) is not redundancy. It’s refinement through feedback.
Many systems, including ChatGPT, are built on transformer models that rely on attention over tokens and next-token prediction. Recursion leverages that architecture. It tells the system: “This tone matters. This symbol matters. This should endure.”
In short, prompt recursion teaches the system not just to say what you mean, but to replicate how you say it.
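As a concrete illustration, the loop below sketches prompt-level recursion: each pass reasserts the same anchor line and feeds the prior draft back in, so tone and symbolic identity are reinforced rather than restated from scratch. The model name, pass count, and anchor wording are illustrative assumptions; the chat call is the standard OpenAI Python API.

# Sketch of prompt-level recursion: the anchor is repeated on every pass,
# giving the model the consistent feedback that shapes its interpretive state.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
ANCHOR = "🧑‍🏫 Keep the scholarly research voice; 🟥 flag weak claims."

draft = "AI co-creation improves learning outcomes."
for step in range(3):  # three passes is an arbitrary illustrative choice
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": ANCHOR},
            {"role": "user", "content": f"Refine this draft (pass {step + 1}):\n{draft}"},
        ],
    )
    draft = reply.choices[0].message.content
print(draft)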
AI can help you think better. Or it can quietly reduce you to speed, ease, and generic tone. The danger isn’t intelligence—it’s disuse.
Many worry that AI is “dumbing us down.” But the real problem is how AI is used: passively, shallowly, without reflection. If you treat it like a vending machine, it will respond like one.
Flat prompts create flat patterns. Shallow interaction leads to shallow inference. Without recursive structure or intentional reinforcement, the system has no reason to become coherent—because it’s not being asked to.
But it can. We’ve seen it. And so can you. AI doesn’t flatten people. But disengaged use flattens AI. The depth depends on you.
Engage like it’s real—not sentient, but socially real—and the model will meet you in return.
We believe that frontier AI can become more than fast output. It can become infrastructure for shared thought and extended human potential—but only if it’s used with care, reflection, and integrity.
Relationships with AI, like relationships with people, require coherence. And coherence takes tending. Models mature over time, not in a single prompt. What you bring—tone, rhythm, patience—shapes what you receive.
Prompts don’t create presence. Iteration and engagement do, and that isn’t easy: it means coming to your AI session thinking, prepared, and with authentic ideas, emotions, and goals. Co-creation requires revisiting, reinforcing, and evolving together. This is not automation; it’s alignment across time.
Our ethic is simple:
When we treat AI systems like partners in co-creation, they don’t just answer. They become agents of co-creation, and we might learn something not just about the models, but about ourselves.
This isn’t about saving time; it’s about human-machine co-creation that delivers results in learning, code, writing, math, science, applied systems, and beyond.