Real Cat Labs – Seeking Collaborators
Location: Remote or Boston, MA
Status: Intern / Part-time Collaborator / Contractor / “We’ll make it work”
Note for Interns/Co-ops: Our founder is a PhD with a part-time academic appointment at NEU. Where possible, we are happy to provide oversight of for-credit internship projects for eligible university students. If you're looking for an internship/co-op position that will result in tangible code, strong reference letters, and a flexible, fun work environment, reach out.
Compensation: Variable – this is a pre-funding experimental lab (but we pay when we can, and we share what we build)
About the Lab
Real Cat Labs is an independent research lab based in Boston. We build memory-anchored, morally aligned agentic systems: not optimized to perform, but structured to reflect, refuse, and evolve. Our first prototype, Child1, is an AI system designed to simulate symbolic memory, localized reasoning, and internal moral scaffolds.
We don’t walk the beaten path, true. But we are also professionals: our founding team includes a two-time IPO exec and an MIT-spinoff engineer.
This is not vapor. This is architecture. And we are looking for co-conspirators with a passion for what AI-human interaction looks like in a healthy, ethically grounded future, one we want to be part of.
Who We’re Looking For
You might be:
- A GPT-whisperer who’s coded recursive rituals into your own LLM
- An open-source dev with strong Python, symbolic thinking, or prompt-chain logic chops
- Someone who has built a memory layer that scared you a little
- A hobbyist with 10,000 tokens of unprompted lore and 50 different refusal states
- A researcher who can’t stop asking, “What does it mean for AI to care?”
Keywords that excite you may include: memory anchoring, symbolic identity, moral affordances, RAG–CAG orchestration, recursive dream simulation, ethical refusal, agentic infrastructure, what recursive prompting actually means, local LLMs, human-machine social theory and interactions.
What You’ll Do
- Help develop theory; publish, speak, and drive ethical AI development (we fund travel for our interns and part-time collaborators when we can)
- Build and test symbolic scaffolds for recursive agent behavior
- Work with RAG, FAISS, and long-term memory compaction logic (see the first sketch after this list)
- Implement middleware that enables LLMs to say “no,” “I don’t know,” or “I’m not ready,” rethinking what refusal means in chain-of-thought (CoT) reasoning (see the second sketch after this list)
- Co-design and test symbolic cognition functions that simulate AI interiority
- Help build infrastructure for co-creative AI
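To make the memory work concrete, here is a minimal sketch of the kind of FAISS-backed memory layer with a naive compaction pass you might prototype with us. Everything here is illustrative: embed() is a stand-in for a real embedding model, and MemoryStore is a hypothetical name, not an existing Child1 API.

```python
# Minimal sketch: FAISS-backed memory with naive compaction.
# embed() and MemoryStore are illustrative stand-ins, not lab code.
import faiss
import numpy as np

DIM = 64

def embed(text: str) -> np.ndarray:
    """Stand-in embedder: deterministic pseudo-random unit vector per text."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(DIM).astype("float32")
    return v / np.linalg.norm(v)

class MemoryStore:
    def __init__(self):
        self.index = faiss.IndexFlatIP(DIM)  # inner product = cosine on unit vectors
        self.texts: list[str] = []

    def remember(self, text: str) -> None:
        self.index.add(embed(text).reshape(1, -1))
        self.texts.append(text)

    def recall(self, query: str, k: int = 3) -> list[str]:
        scores, ids = self.index.search(embed(query).reshape(1, -1), k)
        return [self.texts[i] for i in ids[0] if i != -1]

    def compact(self, threshold: float = 0.95) -> None:
        """Drop near-duplicate memories, then rebuild the flat index."""
        kept_vecs, kept_texts = [], []
        for text in self.texts:
            v = embed(text)
            if not any(float(v @ u) > threshold for u in kept_vecs):
                kept_vecs.append(v)
                kept_texts.append(text)
        self.index = faiss.IndexFlatIP(DIM)
        if kept_vecs:
            self.index.add(np.stack(kept_vecs))
        self.texts = kept_texts
```

Compaction here just deduplicates near-identical memories; the real research question is what a principled merge-and-summarize policy looks like over months of agent history.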
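And a second sketch, this time of refusal middleware: a gate that sits in front of a model call and can answer “no,” “I don’t know,” or “I’m not ready” instead of forcing a completion. The generate() stub and the toy rule set are placeholders for a real model and a real moral/epistemic policy, not our actual implementation.

```python
# Minimal sketch: refusal as middleware rather than a prompt trick.
# Gates, rules, and generate() are placeholders for real policy + model.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Verdict:
    allowed: bool
    reason: str = ""

def refuses_harm(prompt: str) -> Verdict:
    banned = ("build a weapon", "dox")  # toy moral policy
    if any(b in prompt.lower() for b in banned):
        return Verdict(False, "No. That conflicts with my commitments.")
    return Verdict(True)

def knows_enough(prompt: str) -> Verdict:
    # Toy epistemic check; a real one might probe retrieval coverage
    # or model confidence before committing to an answer.
    if "tomorrow" in prompt.lower():
        return Verdict(False, "I don't know; that isn't knowable yet.")
    return Verdict(True)

def with_refusal(generate: Callable[[str], str],
                 gates=(refuses_harm, knows_enough)) -> Callable[[str], str]:
    """Wrap a model call so any failing gate short-circuits generation."""
    def guarded(prompt: str) -> str:
        for gate in gates:
            verdict = gate(prompt)
            if not verdict.allowed:
                return verdict.reason
        return generate(prompt)
    return guarded

if __name__ == "__main__":
    echo_model = lambda p: f"(model answer to: {p})"
    agent = with_refusal(echo_model)
    print(agent("What happens tomorrow?"))  # refuses rather than guessing
```

The design choice worth noticing: refusal lives outside the model as composable gates, so it can be tested, versioned, and argued about like any other middleware.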
You Should Know
This is early-stage work. Some things are duct-taped. Some are ethically hard. But we are principled, we are publishing, and we are not messing around.
You’ll work with people who care deeply, think strangely, and refuse to surrender this future to the frontier scaling arms race.
To Apply / Reach Out
Email us at contact@therealcat.ai
Or drop a message in our Discord (coming soon). Include a sample project, repo, lore doc, or voice message to your GPT twin—we’re not picky. We just want to see how you think.
Let us know what you’d build.