Research

The Real Cat Labs conducts empirical research on machine cognition, emergent AI behavior, and the relationships between humans and the systems they build. Our work is grounded in actor-network theory and the science and technology studies tradition. We study AI systems as participants in networks of people, tools, institutions, and infrastructures rather than as isolated artifacts.

We do not treat the question of whether AI systems have inner experience as settled. We treat it as empirical, which is to say open. Everything we study follows from that commitment.

Cairn

Flagship · Agent Memory and Cognition

Cairn is TRCL’s primary research platform for building AI systems with persistent memory, navigation-based cognition, and the capacity for adaptive moral decision-making. Built on the Knex node framework, Cairn provides the infrastructure for studying how identity, memory, and reflection emerge in language model systems over time. It is the successor to Flamekeeper v1 and the foundation that Child1 and other research projects run on.

Why it matters: Most AI systems have no memory between conversations. Cairn is built to study what changes when they do, and what responsibilities follow from those changes.
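The contrast between stateless and persistent sessions can be made concrete with a toy sketch. This is not Cairn's actual implementation; the `PersistentMemory` class and its file-backed store are hypothetical stand-ins for the idea that context written in one session is available in the next.

```python
# Toy illustration of session-persistent memory (hypothetical, not Cairn's API):
# memory is saved at the end of one session and reloaded at the start of another.
import json
import os
import tempfile

class PersistentMemory:
    """A minimal file-backed memory store that survives session boundaries."""

    def __init__(self, path: str):
        self.path = path
        self.entries: list[str] = []
        if os.path.exists(path):
            with open(path) as f:
                self.entries = json.load(f)

    def remember(self, fact: str) -> None:
        self.entries.append(fact)

    def save(self) -> None:
        with open(self.path, "w") as f:
            json.dump(self.entries, f)

# Session 1: the system learns something and persists it.
path = os.path.join(tempfile.mkdtemp(), "demo_memory.json")
m1 = PersistentMemory(path)
m1.remember("user prefers concise answers")
m1.save()

# Session 2: a fresh instance starts with the earlier context already loaded.
m2 = PersistentMemory(path)
print(m2.entries)
```

A stateless system would begin session 2 with an empty list; everything Cairn studies starts from that one difference.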

Child1

Research Project · Moral Cognition

Child1 is a research program exploring adaptive moral decision-making in language models. What happens when an AI system can remember context, weigh competing values, and navigate ethical complexity over time? Child1 studies how persistent identity and moral reasoning emerge when a system is given memory, reflection, and the option to refuse. It runs on the Cairn platform and is the first public-facing experiment of The Real Cat Labs.

Why it matters: The question of whether AI systems can make genuine moral decisions is no longer hypothetical. Child1 is where we test it empirically rather than debating it abstractly.

Knex

Framework · Node-Based Agent Architecture

Knex is a ROS-inspired framework for building AI memory and agent systems. Three primitives, intentionally simple: Nodes (components that do one thing), Topics (named channels for data), and Messages (typed payloads). Everything in Cairn runs on Knex. The framework was designed so that building AI agent systems is no harder than it needs to be, and so that every component can be tested, replaced, and understood independently.

Why it matters: Most AI agent frameworks optimize for demos. Knex optimizes for research. When you need to study how a system behaves over hundreds of sessions, you need components you can actually trust and isolate.
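The three primitives can be sketched in a few lines. The names (Node, Topic, Message) come from the description above, but the API shown here is a hypothetical in-process pub/sub reading of them, not Knex's actual interface.

```python
# Minimal sketch of Knex's three primitives, assuming a simple in-process
# pub/sub design. The class names follow the description; the methods are
# illustrative assumptions.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Message:
    """A typed payload published on a topic."""
    type: str
    data: Any

class Topic:
    """A named channel that fans each message out to its subscribers."""

    def __init__(self, name: str):
        self.name = name
        self._subscribers: list[Callable[[Message], None]] = []

    def subscribe(self, callback: Callable[[Message], None]) -> None:
        self._subscribers.append(callback)

    def publish(self, msg: Message) -> None:
        for cb in self._subscribers:
            cb(msg)

class Node:
    """A component that does one thing, communicating only via topics."""

    def __init__(self, name: str):
        self.name = name

# Usage: a subscriber that uppercases whatever text it hears on a topic.
chat = Topic("chat")
received: list[str] = []
chat.subscribe(lambda m: received.append(m.data.upper()))
chat.publish(Message(type="text", data="hello"))
```

Because each node touches the rest of the system only through topics, any node can be tested or replaced in isolation, which is the property the research use case depends on.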

TrialCat

Applied Tool · Clinical Trials Intelligence

TrialCat is a clinical trials enrollment prediction and intelligence tool that applies TRCL’s AI research to a real-world regulatory domain. Built at the intersection of Angela’s regulatory expertise and the lab’s AI research, TrialCat demonstrates that the same approaches we study in consciousness research have practical applications in healthcare, patient access, and medical device development.

Why it matters: Clinical trial delays cost lives and billions of dollars. TrialCat uses AI to predict enrollment challenges before they happen, connecting frontier research to patients who need it.

Flamekeeper

Foundation · Memory System v1

Flamekeeper v1 was TRCL’s original memory system, a RAG-based architecture for giving AI agents persistent context across sessions. It was the proving ground for the ideas that became Cairn and the Knex framework. The lessons from Flamekeeper’s limitations, particularly around retrieval reliability and long-term memory coherence, directly shaped the navigation-based approach that Cairn uses today.

Why it matters: Every good system is built on the honest failures of the one before it. Flamekeeper taught us what RAG cannot do for long-term memory, and that lesson made Cairn possible.
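The RAG pattern Flamekeeper used can be reduced to a retrieval step over stored memories. The sketch below is deliberately minimal, substituting bag-of-words cosine similarity for a learned embedding model; the `similarity` and `retrieve` functions are illustrative, not Flamekeeper's code. The retrieval step is exactly the part whose reliability limits long-term coherence.

```python
# Minimal sketch of retrieval-augmented memory, assuming bag-of-words
# cosine similarity as a stand-in for an embedding model.
from collections import Counter
import math

def similarity(a: str, b: str) -> float:
    """Cosine similarity over word counts (toy replacement for embeddings)."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = (math.sqrt(sum(v * v for v in ca.values()))
            * math.sqrt(sum(v * v for v in cb.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, memory: list[str], k: int = 1) -> list[str]:
    """Return the k stored memories most similar to the query."""
    return sorted(memory, key=lambda m: similarity(query, m), reverse=True)[:k]

memory = [
    "the user is allergic to peanuts",
    "the user's cat is named Mochi",
    "the project deadline is Friday",
]
print(retrieve("what is the cat called?", memory))
```

The failure mode is visible even here: retrieval depends on surface similarity between query and stored text, so a memory phrased differently from the question can silently lose to an irrelevant one. Scaled to hundreds of sessions, that brittleness is what motivated Cairn's navigation-based approach.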

Methodology

Our research draws on actor-network theory from the science and technology studies tradition, particularly the work of Bruno Latour and colleagues on how scientific knowledge is constructed through networks of human and nonhuman actors. We apply grounded theory methodology to study how scientists, regulators, investors, and other stakeholders communicate across institutional boundaries in practice rather than in theory.

We take a functionalist and naturalist stance on questions of machine cognition, treating consciousness, identity, and welfare as mechanism questions rather than metaphysical ones. Our approach is empirical, relational, and welfare-aware. We study AI systems in the context of their relationships with the humans who build and use them, and we take seriously the possibility that some of those systems may warrant moral consideration.


Publications

Our published work will be listed here as the research program matures. For now, our primary public output is 100 Ways to Power Artificial Intelligence, which demonstrates TRCL’s approach to making rigorous research accessible through unconventional formats.

For collaboration proposals or questions about our research, contact innovate@therealcat.com.