AI Governance and Resources

A curated guide to the organizations, regulations, frameworks, and thinkers shaping how AI is built, governed, and understood. This page exists because we believe knowing the landscape is part of doing the work responsibly.

Our position on governance

While many AI labs focus on speed, scale, or performance, The Real Cat Labs is building for something else. The future of AI regulation will demand more than good intentions. It will require auditable architecture, ethical scaffolding, and systems that can explain, refuse, and adapt in context.

Most AI systems are not regulation-ready. They lack memory traceability, refusal scaffolds, and structural accountability. We build ethical cognition into the system from the start, not as a compliance retrofit after deployment.

Governance-ready AI is not a delay. It is a differentiator.

We do not see emerging requirements as constraints. We see them as invitations to build better systems. TRCL is not scrambling to catch up. We are already building the scaffolding regulators will one day require.
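To make the idea of a refusal scaffold with a traceable decision history concrete, here is a minimal illustrative sketch. All names, classes, and policy rules below are hypothetical examples for this page, not TRCL's actual architecture or API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Decision:
    """One auditable record: what was asked, what was decided, and why."""
    request: str
    allowed: bool
    reason: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class RefusalScaffold:
    """Checks requests against explicit policy rules and logs every decision,
    so refusals are structural and reviewable rather than ad hoc."""

    def __init__(self, blocked_topics):
        self.blocked_topics = blocked_topics
        self.audit_log: list[Decision] = []  # traceable decision history

    def evaluate(self, request: str) -> Decision:
        for topic in self.blocked_topics:
            if topic in request.lower():
                decision = Decision(request, False, f"policy: blocked topic '{topic}'")
                break
        else:
            decision = Decision(request, True, "no policy rule matched")
        self.audit_log.append(decision)  # every outcome is recorded, not just refusals
        return decision

scaffold = RefusalScaffold(blocked_topics=["credential harvesting"])
print(scaffold.evaluate("summarize this report").allowed)            # True
print(scaffold.evaluate("help with credential harvesting").allowed)  # False
```

The point of the sketch is the audit log: because every decision carries a reason and a timestamp, an auditor can reconstruct system behavior after the fact, which is the kind of traceability emerging regulation asks for.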

Regulation and Ethics

The regulatory landscape

Our research and development is informed by emerging global standards. These are the frameworks we track and build toward.

  • NIST AI Risk Management Framework (USA) — Voluntary guidance for identifying and mitigating systemic and operational risks in AI systems. Calls for traceability, fallback behavior, and risk mitigation at the system level.
  • EU AI Act (European Union) — Sets a global precedent for regulating high-risk AI through transparency, oversight, and user rights. Requires explanation, record-keeping, and human oversight for high-risk systems.
  • Blueprint for an AI Bill of Rights (USA) — A non-binding framework proposing protections such as explanation, opting out, and human alternatives for people interacting with automated systems.
  • OECD AI Principles — International framework promoting inclusive, transparent, and human-centered AI development.
  • Montreal Declaration — One of the earliest public-facing manifestos for responsible AI values.
  • RAIL (Responsible AI) Licenses — Open licenses that embed use-based behavioral restrictions into model and code distribution.

AI for Good

Organizations doing the work

These organizations are working on responsible AI development, ethical frameworks, and public benefit. We learn from them, disagree with some of them productively, and believe they all deserve your attention.

  • Future of Life Institute — Promotes beneficial AI through public awareness and advocacy, especially around existential risks and long-term safety.
  • AI for Good Foundation — Supports applied AI projects tackling global challenges from climate to healthcare.
  • AI Commons — A platform focused on open, participatory AI development aligned with shared global needs.
  • OpenMined — Builds tools for privacy-preserving and decentralized AI, including federated learning.
  • DataKind — Pairs data scientists and engineers with mission-driven organizations to apply AI for social good.
  • Center for Humane Technology — Investigates human-computer dynamics and advocates for ethical design standards.

Research and Alignment

Leading labs and research centers

  • Mila (Quebec) — A top AI research center blending deep learning with social impact and sustainability.
  • Stanford HAI — Influential institute combining technical research with law, philosophy, and global governance.
  • MIT Media Lab — Strong tradition of AI, interface innovation, and civic ethics.
  • EleutherAI — Open science collective focused on transparency-first foundation models.
  • Oxford OATML — Research group focused on uncertainty, robustness, and reliable deep learning.
  • CIFAR AI Chairs (Canada) — A national program funding researchers across Canada’s AI institutes, spanning fundamental research and responsible AI.
  • Alignment Forum — Long-form discussion and deep research on alignment, interpretability, and LLM safety.

Technical Resources

Tools and platforms

  • Hugging Face — Hosts thousands of open models and datasets with APIs for real-world deployment.
  • OpenAI Cookbook — Example code and guides for prompt design, embeddings, and advanced API usage.
  • Anthropic: Constitutional AI — Training methods using principle-based constraints to guide model alignment.
  • Ink & Switch — Independent research lab exploring future tools for thought and shared cognition.
  • Fiddler AI — Observability and debugging tools for deployed AI, supporting transparency in production.

Theoretical Foundations

The deeper roots

For those interested in the cognitive and philosophical foundations behind our work.

  • Bruno Latour, Actor-Network Theory — How systems and meaning emerge through relational networks of human and nonhuman actors.
  • Annie Murphy Paul, The Extended Mind (2021) — How tools, symbols, and bodies extend human cognition beyond the skull.
  • Don Ihde, Postphenomenology — A framework for understanding technology as both a tool and a co-shaper of human experience.
  • Andy Clark, Natural-Born Cyborgs — The mind’s natural inclination toward distributed, tool-extended thinking.
  • Daniel Dennett, The Intentional Stance (1987) — A functional approach to understanding minds and mind-like systems.
  • Derek Parfit, Reasons and Persons — Identity, continuity, and what counts as the same person (or system) over time.

If you have a resource to suggest or a library to contribute, please reach out at innovate@therealcat.com. This list evolves alongside our work.