The University of Cape Town (UCT) has launched the African Hub on AI Safety, Peace & Security in partnership with the Global Centre on AI Governance (GCG), a first‑of‑its‑kind platform designed to put African priorities at the centre of global AI safety debates.
Announced on 28 September 2025 and reported by UCT News on 2 October 2025, the hub aims to advance research, build capacity, and influence policy on AI’s impacts across the continent and beyond.
Highlights
- First-of-its-kind African hub: Anchored at UCT with GCG, dedicated to AI safety, peace, and security in African contexts.
- Policy + research focus: Three pillars over the next three years—research, capacity strengthening, policy influence.
- High‑level backing: Remarks from VC Prof Mosa Moshabela (UCT), Emily Middleton (UK DSIT), and Maggie Gómez Vélez (IDRC) at the launch.
- Continental networks: Intent to collaborate with Masakhane, Deep Learning Indaba, CAIR and others to root governance in local realities.
- Global moment: Aligns Africa with international efforts such as the Bletchley Declaration and International Network of AI Safety Institutes.
- Why now: Rapid AI adoption raises risks from disinformation and surveillance to labour market disruption—areas where current global frameworks underrepresent African needs.

What UCT Launched and Why It Matters
According to UCT’s announcement, the hub will serve as a continental convening point for AI risk research and standards, ensuring that safety debates include African languages, data, and governance realities.
Associate Professor Jonathan Shock (interim director, UCT AI Initiative) framed the mission as societal: tools built for Africa can be more robust globally when they confront multilingual datasets, infrastructural constraints, and diverse institutions head‑on.
The keynote by Dr Chinasa Okolo urged reframing AI safety through African leadership, warning that global initiatives remain incomplete without it. This dovetails with the growing recognition (across the UN, OECD, and national AI safety institutes) that AI governance must be inclusive to be legitimate and effective.
The State of AI Safety, Peace & Security
1) From frontier risks to lived risks. Global debates often prioritise existential scenarios. Africa’s near‑term exposure runs through elections, information integrity, biometric systems, border/security tech, and resource governance. Ground‑truthing evaluations in African languages and dialects is table stakes for any credible safety regime.
2) Standards are global; adoption is local. Frameworks like the OECD AI Principles and the Bletchley track set direction, but impact depends on regulatory capacity, procurement rules, and testing infrastructure on the ground. The UCT hub’s capacity‑building mandate is therefore decisive.
3) Security isn’t just ‘cyber’. AI intersects with peacekeeping, surveillance, and humanitarian response—from mapping conflict risks to deepfake‑driven escalation. Safety for Africa must span defence, justice, civic tech, and media ecosystems.
4) The compute question. AI safety research and evaluation require access to models and compute. By anchoring in a research university tied into networks like AI4D (IDRC) and CAIR, the hub can broker access, benchmarking, and red‑teaming without outsourcing Africa’s voice.
5) Economics of trust. Markets reward vendors who can show compliance‑by‑design. Expect demand for auditable datasets, bias and safety evaluations, and model cards that actually reflect African use cases.
Who Was in the Room?
University leadership: Prof Mosa Moshabela underscored UCT’s responsibility to ensure AI tools are developed responsibly and inclusively.
International partners: Emily Middleton (UK DSIT) highlighted Africa’s under‑representation despite high exposure to AI risks; Maggie Gómez Vélez (IDRC) placed the hub within a network of 13 AI4D labs.
African research voices: Dr Chinasa Okolo called for multilingual evaluation and public participation, challenging Western‑centric safety frames.
UCT AI Initiative: Assoc Prof Jonathan Shock emphasised that African diversity is an asset—systems that work here are likely to be more resilient globally.

What Comes Next: Programme Priorities to Watch
- Research & evaluation: Methods and datasets for election integrity, harmful content detection, surveillance governance, and critical infrastructure risks in African contexts.
- Capacity strengthening: Training judges, regulators, civil society, and engineers in auditing, red‑teaming, and standards adoption; facilitating model access for research.
- Policy influence: Input into continental and national AI strategies, procurement standards, and participation in the International Network of AI Safety Institutes.
- Ecosystem partnerships: Collaboration with Masakhane, Deep Learning Indaba, and CAIR to keep the hub anchored in African R&D.
Business Angle: Why Executives Should Care
- Compliance moat: Firms operating in Africa will increasingly need evidence of safety testing tuned to local languages, contexts, and regulations.
- Procurement shift: Public agencies may embed AI safety clauses (evals, transparency, incident reporting) in tenders, advantage to vendors who prepare now.
- Media & platforms: Expect stronger scrutiny around synthetic media, identity systems, and risk of harm in content ranking.
- Finance & telecoms: Model risk, fraud, and security tooling will need Africa‑calibrated benchmarks; hubs like UCT’s can convene shared evaluation suites.
- Opportunity map: Build services around testing, audits, and governance tooling—the economics of trust will reward early movers.
Risks & Unknowns
- Funding continuity: Sustained financing is critical; safety institutes globally are still maturing governance and mandates.
- Access to models/compute: Without predictable access, evaluations risk lagging behind frontier systems.
- Regulatory fragmentation: Divergent national rules could raise compliance costs; coordination through AU and regional blocs will matter.
- Talent pipeline: Retaining researchers requires clear career paths and partnerships that keep intellectual property and skills on the continent.

Conclusion
UCT’s hub marks a pragmatic turn in global AI governance: African‑led standards, datasets, and evaluations that speak to real‑world risks in elections, security, and public services. If the work stays resourced and connected to industry/regulators, this initiative can shift the centre of gravity from rhetoric to testable, exportable practice, with Africa setting the pace.


 
