Centre for AI Security and Access

The Centre for AI Security and Access (CASA) was founded with a singular purpose: to address one of the most critical tensions in global AI governance, the gap between equitable access and necessary security. Advanced AI systems hold immense promise for addressing global challenges, yet their development and deployment remain concentrated among a handful of powerful entities. At CASA, we work to ensure that the benefits of AI technology can be widely and equitably distributed without compromising the security safeguards that protect individuals, communities, and nations from harm.

We envision a future where advanced AI technologies are developed, governed, and deployed in ways that respect the sovereignty, needs, and contexts of diverse global stakeholders, particularly those from the Global Majority who have historically been excluded from technological decision-making. Through rigorous research, thoughtful diplomacy, and practical tool-building, we aim to create pathways for meaningful participation in the AI ecosystem that balance accessibility with responsible safeguards.

What we publish

Our 2025-2026 priority is scoping a research agenda on Global Majority AI access and security. All of our public work appears here, including white papers, research papers, op-eds, and commentaries. We publish as our research develops; subscribe to our Substack to follow our work.

Join in

We’re building this agenda collaboratively. Ways to get involved:

Sumaya Nur Adan

Joanna Wiaterek