
The University of Cape Town’s new hub puts African priorities at the heart of AI safety, peace, and security. Here’s what was launched, why it matters, and what comes next.
In partnership with the Global Centre on AI Governance (GCG), the University of Cape Town (UCT) has launched the African Hub on AI Safety, Peace & Security, a first‑of‑its‑kind platform designed to put African priorities at the centre of global AI safety debates.
Announced on 28 September 2025 and reported by UCT News on 2 October 2025, the hub aims to advance research, build capacity, and influence policy on AI’s impacts across the continent and beyond.

According to UCT’s announcement, the hub will serve as a continental convening point for AI risk research and standards, ensuring that safety debates include African languages, data, and governance realities.
Associate Professor Jonathan Shock (interim director, UCT AI Initiative) framed the mission as societal: tools built for Africa can be more robust globally when they confront multilingual datasets, infrastructural constraints, and diverse institutions head‑on.
The keynote by Dr Chinasa Okolo urged reframing AI safety through African leadership, warning that global initiatives remain incomplete without it. This dovetails with the growing recognition (across the UN, OECD, and national AI safety institutes) that AI governance must be inclusive to be legitimate and effective.
The State of AI Safety, Peace & Security
1) From frontier risks to lived risks. Global debates often prioritise existential scenarios, but Africa’s near‑term exposure runs through elections, information integrity, biometric systems, border and security technology, and resource governance. Ground‑truthing evaluations in African languages and dialects is table stakes for any credible safety regime.
2) Standards are global; adoption is local. Frameworks like the OECD AI Principles and the Bletchley track set direction, but impact depends on regulatory capacity, procurement rules, and testing infrastructure on the ground. The UCT hub’s capacity‑building mandate is therefore decisive.
3) Security isn’t just ‘cyber’. AI intersects with peacekeeping, surveillance, and humanitarian response—from mapping conflict risks to deepfake‑driven escalation. Safety for Africa must span defence, justice, civic tech, and media ecosystems.
4) The compute question. AI safety research and evaluation require access to models and compute. By anchoring in a research university tied into networks like AI4D (IDRC) and CAIR, the hub can broker access, benchmarking, and red‑teaming without outsourcing Africa’s voice.
5) Economics of trust. Markets reward vendors who can show compliance‑by‑design. Expect demand for auditable datasets, bias and safety evaluations, and model cards that actually reflect African use cases.
University leadership: Prof Mosa Moshabela underscored UCT’s responsibility to ensure AI tools are developed responsibly and inclusively.
International partners: Emily Middleton (UK DSIT) highlighted Africa’s under‑representation despite high exposure to AI risks; Maggie Gómez Vélez (IDRC) placed the hub within a network of 13 AI4D labs.
African research voices: Dr Chinasa Okolo called for multilingual evaluation and public participation, challenging Western‑centric safety frames.
UCT AI Initiative: Assoc Prof Jonathan Shock emphasised that African diversity is an asset—systems that work here are likely to be more resilient globally.
UCT’s hub marks a pragmatic turn in global AI governance: African‑led standards, datasets, and evaluations that speak to real‑world risks in elections, security, and public services. If the work stays resourced and connected to industry and regulators, this initiative can shift the centre of gravity from rhetoric to testable, exportable practice, with Africa setting the pace.