This is a living archive. While not exhaustive, it is offered as a guide to the resources and organizations we believe are important for the future development of human-AI interaction.
These include both organizations and thinkers who help shape how we work within sociotechnical systems.
And most importantly, these resources show how we can achieve AI for Good.
Welcome to our curated resource collection. These links are designed to support individuals, researchers, and organizations exploring reflective, accountable, and human-adjacent AI development. Whether you’re building with GPT tools, designing governance policies, or seeking models for ethical co-creation, these resources are a good place to begin.
Future of Life Institute — Promotes beneficial AI through public awareness and advocacy, especially around existential risks and long-term safety.
AI for Good Foundation — Supports applied AI projects tackling global challenges from climate to healthcare.
AI Commons — A platform focused on open, participatory AI development aligned with shared global needs.
OpenMined — Builds tools for privacy-preserving and decentralized AI, including federated learning (a conceptual sketch appears after this list).
DataKind — Pairs data scientists and engineers with mission-driven organizations to apply AI for social good.
NIST AI Risk Management Framework (USA) — Offers structured guidance for mitigating systemic and operational risks in AI systems.
EU AI Act Overview (EU) — Sets a global precedent for regulating high-risk AI through transparency, oversight, and user rights.
Blueprint for an AI Bill of Rights (USA) — Proposes protections such as refusal, explanation, and human fallback for people interacting with automated systems.
OECD AI Principles — International framework promoting inclusive, transparent, and human-centered AI.
Montreal Declaration — One of the earliest public-facing manifestos for responsible AI values.
RAIL Licenses — Open-source AI licenses that embed ethical restrictions into code deployment.
Hugging Face — Hosts thousands of open models and datasets with APIs for real-world deployment (see the loading sketch after this list).
OpenAI Cookbook — Offers tested scripts for prompt design, memory use, and advanced GPT features (a minimal API call is sketched after this list).
EleutherAI — Open science collective focused on transparency-first foundation models like GPT-Neo and Pythia.
Anthropic: Constitutional AI — Explores training methods using principle-based constraints to guide model alignment.
Alignment Forum — Long-form discussion and deep research on alignment, interpretability, and LLM safety.
Mila (Quebec) — A top AI research center blending deep learning with social impact and sustainability.
Oxford OATML — Applied and theoretical machine learning group focused on uncertainty, robustness, and reliable real-world deployment.
CIFAR AI Chairs (Canada) — Diverse global researchers driving responsible and scalable AI policy.
MIT Media Lab — A strong tradition of work combining AI, interface innovation, and civic ethics.
Sussex AI and Cognition Lab — Exploring emotion, interaction, and embodied systems.
Stanford HAI — Influential lab combining technical research with law, philosophy, and global governance.
Center for Humane Technology — Investigates human-computer dynamics and advocates for ethical design standards.
Ink & Switch — An independent research lab exploring future tools for thought and shared cognition.
Fiddler AI — Provides observability and debugging tools for deployed AI, supporting transparency in use.
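A few of the entries above lend themselves to small, hands-on sketches. First, to make OpenMined's focus concrete, here is a toy illustration of federated averaging in plain NumPy. This is a conceptual sketch only, not OpenMined's PySyft API: each client fits a shared model on its own private data, and only the resulting weights, never the raw data, reach the server that averages them.

```python
# Conceptual sketch of federated averaging (FedAvg) with a toy linear model.
# Illustrative only; it does not use OpenMined's PySyft API.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, steps=5):
    """One client's local gradient-descent steps on its private data."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

# Three clients, each holding private samples from the same underlying model.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

# The server never sees X or y, only each client's updated weights.
global_w = np.zeros(2)
for _ in range(10):
    local_weights = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_weights, axis=0)

print("learned weights:", global_w)  # approaches [2.0, -1.0]
```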
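Second, the Hugging Face entry is easiest to appreciate by loading a model. The sketch below pulls one of EleutherAI's open Pythia models through the transformers pipeline API; EleutherAI/pythia-70m is chosen only because it is small enough to run on a CPU, and any other Hub model ID can be substituted.

```python
# Minimal sketch of loading an open model from the Hugging Face Hub.
# Requires the transformers package (and torch) to be installed.
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/pythia-70m")
result = generator(
    "Responsible AI development means",
    max_new_tokens=40,
    do_sample=True,
)
print(result[0]["generated_text"])
```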
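Finally, for those building with GPT tools, the OpenAI Cookbook recipes build on API calls like the one below, shown here with the official openai Python package (v1+). The model name is an assumption; substitute whatever model your account has access to, and set OPENAI_API_KEY in your environment before running.

```python
# Minimal sketch of a chat completion call with the openai Python package (v1+).
# The model name is an assumption; use any model available to your account.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a careful, transparent assistant."},
        {"role": "user", "content": "In two sentences, what does 'human fallback' mean in AI governance?"},
    ],
)
print(response.choices[0].message.content)
```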
For those interested in the deeper cognitive and philosophical roots of our work:
Annie Murphy Paul — The Extended Mind (2021) — Explores how tools, symbols, and bodies extend human cognition.
Bruno Latour — Actor-Network Theory — Emphasizes how systems and meaning emerge through relational networks.
Don Ihde — Postphenomenology — A framework for understanding technology as both a tool and a co-shaper of human experience.
Andy Clark — Natural-Born Cyborgs — Examines the mind’s natural inclination toward distributed, tool-extended thinking.
If you have a resource to suggest or a library to contribute, please reach out via contact@therealcat.ai. This list will continue to evolve alongside our work.