Prize Competition · 2026

Africa AI Safety
Prize Competition

Building Practical Safeguards for Beneficial AI Deployment

AI systems are being deployed across Africa at pace — but safety infrastructure has not kept up. This prize competition empowers researchers, practitioners, and innovators to build foundational, context-appropriate AI safety tools for African communities.

$10,000
Prize Pool
2
Challenge Tracks
3 May 2026
RFP Deadline
Why This Matters

The safety infrastructure for Africa's AI hasn't kept pace

AI systems are increasingly shaping decisions, services, and livelihoods across Africa. While global AI safety efforts have grown, their context-appropriateness for African languages, sectors, and social dynamics remains limited.

Without safety tools designed for African contexts, the gap between AI's potential and its actual benefit to communities may widen — not close.

Track I

Community-Level Monitoring of AI Harms in Africa

AI-related harms are often first noticed in everyday life — by workers who lose income, women targeted by deepfakes, or communities affected by misinformation. In many African contexts, these experiences are shared informally and never recorded, meaning early warning signs are potentially missed. We are looking for community-facing ways to notice and make sense of AI harms as they happen, especially where formal reporting does not exist.

The Challenge

How can early signs of AI-related social harm in African communities be noticed and recorded in ways that are accessible, culturally appropriate, and safe, so emerging risks become visible rather than ignored?

Strong submissions will:

  • Ensure privacy and security of those reporting the harms
  • Allow people with no technical or legal knowledge to share what is happening to them
  • Work through everyday communication (e.g. voice, storytelling, local languages, trusted intermediaries)
  • Identify pathways for analyzing and effectively reporting the findings, potentially even feeding them into the AI development cycle

Example approaches (feel free to explore beyond these):

  • A voice- or story-based channel for people to share experiences, paired with a monitoring platform
  • A simple method for noticing patterns across many similar stories
  • A small pilot showing how harms that were previously invisible can be seen

Read the full challenge brief and context in the Concept Note.

Download Concept Note ↓
Track II

Context-Appropriate AI Safety Evaluation for African Deployment

AI systems are often evaluated using benchmarks developed in high-resource settings. These benchmarks may not capture harms, misuse patterns, or safety concerns that are particularly relevant in African contexts. As AI systems are adapted and deployed locally, there is a need for targeted, lightweight evaluation tools that reflect real-world risks in African languages, sectors, and social environments.

The Challenge

How can AI safety benchmarks be designed or adapted to better capture risks and harms relevant to African deployment contexts?

Assignment — applicants should choose one:

  1. Identifying Benchmark Gaps — Which safety risks relevant to African contexts are insufficiently captured by widely used AI evaluation benchmarks? What is missing? Why does it matter? How could it be tested?
  2. Designing Context-Specific Test Cases — How can a specific African-context harm (e.g. election misinformation, AI-enabled fraud, deepfake abuse, local-language toxicity) be translated into a concrete benchmark task? What would failure look like? How could performance be measured?
  3. Local-Language Safety Evaluation — How do models perform on safety-related tasks in African languages compared to English, and what evaluation methods can capture those differences?
  4. Lightweight Safety Test Suites — What small, reusable prompt sets or evaluation methods can be used to test AI systems for context-relevant harms without requiring large datasets or compute?

A strong submission might deliver:

  • A small benchmark task or prompt set targeting a specific harm
  • A comparison of model performance across contexts or languages
  • A documented gap in an existing benchmark, with a proposed fix
  • A lightweight safety evaluation suite that others can reuse

Read the full challenge brief and context in the Concept Note.

Download Concept Note ↓
Key Dates

All stages are fully remote

3 May 2026
RFP Deadline
May 2026
Shortlist Notification
May / Jun 2026
AI Safety Webinar
30 Jun 2026
Final Submissions Due
End Jul 2026
Prize Awards
Prize

$10,000 for the best AI safety solutions for Africa

Total Prize Pool
$10K
Distributed among the top 3 submissions
🏆 Winner · $5,000
🥈 1st Runner-up · $3,000
🥉 2nd Runner-up · $2,000
For all participants
🎓
AI Safety Webinar
Expert-led session connecting all applicants with leading AI safety researchers and practitioners.
🌍
Applicants' Community Network
A dedicated community of AI safety practitioners and innovators working on African contexts — for ongoing collaboration and peer support.
For winners
💰
Cash Prize
Payments available to most countries. Contact info@casa-ai.org to confirm your country before submitting.
📣
Visibility & Spotlight
Top 3 winners featured across all partner channels. Two additional promising submissions receive spotlight recognition.
🚀
Implementation Support
Access to mentorship and assistance to operationalize the winning solutions.
Evaluation Criteria

How submissions are evaluated

Safety Contribution

Directly reduces a specific, named AI-related risk or harm in a defined African context. Clear theory of change — the connection between output and safety problem is concrete and justified.

African Context Depth

The solution is visibly shaped by the specific context it targets — language, community dynamics, infrastructure constraints. Evidence of local knowledge or community involvement is strong.

Quality & Rigour of Output

Deliverable is complete, coherent, and documented well enough for others to use independently. Methods are sound, limitations honestly acknowledged, and the approach introduces something meaningfully novel.

Pathway to Use & Impact

Clear pathway from output to real-world use — names intended users, institutions, or communities and explains concretely how they would adopt or build on it. Contributes to community empowerment.

Responsible Innovation

Unintended consequences are actively considered. Trade-offs are named. The submission demonstrates awareness of how the tool could be misused, and proposes concrete mitigations.

Eligibility Criteria
🌍
You do not need to be based in Africa — but your solution must be deployable, adaptable, and beneficial to African contexts.
👤
Open to individuals and teams. Financial prizes are transferred directly to the lead author submitting the proposal.
📋
Entrants may submit up to one proposal per track, for a maximum of two submissions in total.
💡
Previously developed ideas are welcome, as long as they have not already been operationalised by you or someone else.
🔓
All entrants agree to open-source their submission if selected.
🏢
You may submit as an individual, or on behalf of a for-profit or not-for-profit organisation. We are impartial as to your affiliation, or lack thereof.
⚖️
Entrants must be over 18 years of age.
🤖
Entrants must comply with applicable legal and ethical requirements. Generative AI tools may support your creativity, but not replace it. Disclosure of AI tool usage is required in the application form.
Partners

Supported by leaders across the African AI ecosystem

Masakhane African Languages Hub
DAIvolve Technologies
UCT AI Initiative
Action Lab Africa
Wits MIND Institute
FAQ

Frequently Asked Questions

Have a question not answered here? Reach out and we'll get back to you.

info@casa-ai.org →
Can we apply as a team?

Yes. If your team wins, the financial prize will be transferred to the lead author submitting the proposal.

Do I need official ethics approval?

Entrants should adhere to whatever ethics requirements apply to their specific project.

Can you disburse prizes to my country?

We can transfer payments to most countries. Contact info@casa-ai.org to confirm before submitting.

I'm not affiliated with any institution. Can I still apply?

Yes. In the Affiliation field, simply write "Independent."

Can I submit more than one pitch?

Yes — up to one pitch per track, so a maximum of two submissions total.

Can I turn my proposal into a publication afterwards?

Yes. You are welcome to repurpose your submission into a publication after the competition.

Can I submit a previously developed idea?

Yes, as long as the idea has not already been operationalised by you or someone else.

I've already received funding for my idea. Can I still apply?

Yes, but you must disclose the funding amount and source in the Conflict of Interest section of the application form.

How will winners be selected?

In two stages: CASA shortlists up to 15 RFP pitches, then shortlisted candidates submit full proposals, which are reviewed by an expert panel. Submissions are evaluated through a double-blind review process: judges will not have access to applicants' personal identifying information. The identities of the judges will be disclosed in due course, but the allocation of judges to specific submissions will remain confidential.

Ready to build safer AI for Africa?

Submit your proposal and join a growing community of AI safety innovators.

Apply Now →

RFP deadline: 3 May 2026  ·  All stages fully remote  ·  info@casa-ai.org