"The question is not whether AI will influence international peace and security, but how we will shape that influence." – Secretary General Antonio Guterres
In a rapidly evolving AI landscape, no domain remains untouched, but particular sensitivities arise when addressing conflict prevention. From world leaders to grassroots organizations, the question of AI's impact on peace is more pressing than ever. Understanding the opportunities and risks of AI becomes essential for everyone committed to global peace. For practitioners in the peacebuilding space, many questions remain open: How will AI be governed? How is it already being used? How can we ensure its ethical use and minimize potential risks? With the stakes so high, the need for informed, ethical, and inclusive dialogue about AI's role in conflict prevention has never been greater. In response, UNSSC launched a three-part webinar series, bringing together experts from across the United Nations (UN) system and the peacebuilding community to explore these questions and offering a dedicated space for dialogue, learning, and exchange.
One key message resonated across all webinars: AI holds real promise for conflict prevention, but human engagement, expertise, and oversight remain crucial. AI can enhance human work, but never replace it.
Until recently, AI development has been shaped by a relatively small group of actors. The UN is playing a pivotal role in changing that by ensuring all voices are heard and that smaller states can meaningfully influence global AI governance frameworks through initiatives such as the Independent International Scientific Panel on AI and the Global Dialogue on AI.
But the UN is not only a convener of this conversation, it is also an active participant. AI is already being used across the peace and security pillar. As our speaker Avishan Bodjnoud noted, the guiding principle must be that AI serves the mandate, not the other way around. It should be introduced only where it measurably improves effectiveness by handling repetitive, low-value tasks so staff can focus on what matters most: judgment, diplomacy, analysis, and leadership.
Responsible use also means extending principles already central to peacebuilding, such as Do No Harm and trust, to AI. The DPPA Innovation Cell operationalizes this by approaching AI as a stack: ensuring these principles are respected at every layer, from the data the model relies on to the infrastructure, compute, model design, and final application.
The second webinar moved from frameworks to practice, exploring how AI is already being deployed across the peacebuilding landscape. In conflict zones with frequent shocks, early warning systems benefit from augmented intelligence: effectively combining machine learning's ability to process large-scale data and detect weak signals with human contextual expertise and ethical oversight that no algorithm can replicate. In post-conflict settings, AI tools are enabling more inclusive mediation: in Yemen, a WhatsApp chatbot broadened participation, especially among groups often sidelined, such as women and young people.
Different in scope and method, these initiatives share a common thread: AI is there to support and extend what peacebuilders do, not to replace them.
The final webinar turned from what AI can do to what we must do to govern it responsibly. A clear message emerged: ethical AI principles exist in abundance, but they cannot remain theoretical. In conflict settings especially, they must be translated into concrete, context-specific compliance practices.
Equally important is precision. Discussions about AI risk are often too broad to be actionable. The risks in early warning differ fundamentally from those in social media monitoring or mediation support. Addressing them requires the right actors at the table: diplomats, researchers, mediators, and the private sector each have a distinct role. And as AI models grow increasingly opaque, rigorous open-source documentation of ethical practice becomes essential, not just as good practice, but as accountability.
The response spoke for itself: each webinar drew between 300 and 500 registrants, a clear signal that practitioners are actively seeking guidance on these issues.
But three webinars can only go so far. The breadth of ground covered made one thing plain: each theme deserves to go deeper. Governance frameworks mean different things to a diplomat, a data scientist, and a field officer. Actionable learning requires that level of specificity, and that is where the next step lies.
UNSSC is uniquely positioned to fill this gap. As a learning organization serving the entire UN system, our role is not just to convene conversations but to translate them into knowledge practitioners can use. The interest generated by this series confirms there is both appetite and urgency for knowledge on AI, and we intend to be a key part of providing it.
If you missed any of the webinars or want to revisit them, the recordings are available here. We encourage anyone working at the intersection of AI and peacebuilding to watch, share, and stay tuned for what comes next.