Conference Agenda

Session 6: Technological Innovations in Crisis Prevention and Conflict Resolution
Time: Friday, 12 September 2025, 10:40am - 12:10pm

Session Chair: Dr. Neslihan Yanikömer, Forschungszentrum Jülich, Germany

Presentations

Evaluating European Missile Defense Against New Missile Threats

T. Kadyshev

IFSH, Germany

In view of the growing threat to European security due to the ongoing war and uncertainty about the future US role in NATO and European defense, European countries, and Germany in particular, are trying to build up their militaries and close the “gaps” in their defensive capabilities. One such step was the acquisition of the Arrow 3 system to provide defense “against ballistic missiles that travel at high altitudes.” Given the significant price tag of such systems, it is important to provide an independent technical assessment of their capabilities, particularly against the new threats they will face. One such threat is the new Russian medium-range “Oreshnik” system.

Given the lack of reliable (or even any) information on both the defensive and offensive systems, we combine publicly available information and technical analysis to understand the capabilities of both the Arrow 3 and Oreshnik systems. Using a recently developed computer program for the calculation and analysis of missile defense footprints, we assess Arrow 3 capabilities against Oreshnik and similar missiles. The analysis shows that Arrow 3 could in theory cover significant areas against Oreshnik if it is integrated into the NATO networked sensor system. At the same time, countermeasures employed by attacking missiles can significantly degrade its effectiveness. Further conclusions are drawn regarding Arrow 3’s potential utility for conventional and non-conventional defenses.
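The footprint program itself is not described in the abstract; purely to illustrate what a footprint calculation does, the toy sketch below marks a ground point as defended if, under a flat-earth, constant-speed model, an interceptor launched after a fixed detection delay can reach the incoming warhead inside an assumed engagement window before impact. All parameters (speeds, delays, altitudes) are invented placeholders, not Arrow 3 or Oreshnik data.

```python
# Illustrative toy model only -- NOT the authors' footprint program.
# A ground point is "covered" if an interceptor launched after a detection
# delay can reach the incoming warhead somewhere along its reentry path.
import numpy as np

V_THREAT = 3000.0              # warhead reentry speed, m/s (assumed)
V_INT = 2500.0                 # interceptor average fly-out speed, m/s (assumed)
T_DELAY = 60.0                 # detection + decision delay before launch, s (assumed)
H0 = 300e3                     # altitude at which the warhead is first tracked, m (assumed)
H_MIN, H_MAX = 40e3, 100e3     # exo-atmospheric engagement window, m (assumed)
ANGLE = np.radians(45)         # reentry angle below horizontal (assumed)

def covered(aim_point, battery, n_steps=200):
    """Can the battery engage a warhead descending onto aim_point?"""
    t_impact = H0 / (V_THREAT * np.sin(ANGLE))        # time from first track to impact
    for t in np.linspace(T_DELAY, t_impact, n_steps):
        h = (t_impact - t) * V_THREAT * np.sin(ANGLE)  # warhead altitude at time t
        if not (H_MIN <= h <= H_MAX):
            continue
        # Warhead ground projection: offset from the aim point along the approach axis.
        ground = aim_point + np.array([h / np.tan(ANGLE), 0.0])
        slant = np.sqrt(np.sum((ground - battery) ** 2) + h**2)
        if slant <= V_INT * (t - T_DELAY):             # interceptor reach by time t
            return True
    return False

# Footprint: scan a grid of candidate aim points (km) around the battery.
battery = np.zeros(2)
grid = [(x, y) for x in range(-600, 601, 50) for y in range(-600, 601, 50)]
footprint = [p for p in grid if covered(np.array(p, float) * 1e3, battery)]
print(f"{len(footprint)} of {len(grid)} grid points defended")
```

In such a model, shortening the detection delay (e.g. through integration into a networked sensor system) directly enlarges the defended footprint, which is the intuition behind the abstract's conclusion.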



Autonomous Weapon Systems and Military Decision Making Using Artificial Intelligence: Concepts for Preventing Unintended Escalation

J. Altmann

TU Dortmund University, Germany

For autonomous weapon systems (AWS), the scenario of a “flash war” arising from the interaction of enemy battle-management algorithms has been discussed as a major problem, possibly more important and urgent than the feared violations of international humanitarian law. Similar escalation could ensue from more general military decision making by algorithm, in particular using artificial intelligence (AI), with the greatest danger arising if such decisions concerned the use of nuclear weapons. To solve the problem for AWS, a prohibition has been proposed, but international agreement has not been reached so far. For more general uses of AI there have been global summits on “Responsible AI in the Military Domain” (REAIM); in concrete terms, these have recommended “to maintain human control and involvement … concerning nuclear weapons employment”. But for other uses the commitments remain at a relatively general level, e.g. “AI applications in the military domain should be developed, deployed and used in a way that maintains and does not undermine international peace, security and stability” and “Appropriate human involvement needs to be maintained in the development, deployment and use of AI in the military domain” (without specifying what this means). [1]

On the one hand, the presentation will discuss several possibilities for preventive limitation of the escalation risk by design, assessing qualitatively the expected damping effect of each, the chance of its being accepted by states, and the difficulties and options for verification. These measures might include: a prohibition of AWS; qualitative, quantitative, temporal and spatial limitations on AWS, including swarms; a stipulation that attacks be carried out under human control or at least human supervision; limitations on algorithms; limitations on machine-learning hardware; exchanges of algorithms; and exchanges of training data.

On the other hand, it will present options for how armed forces could act and react in a severe crisis, that is, how battle-management algorithms could be programmed. Possible reactions to an indication of being attacked (which might be erroneous) cover a wide spectrum: from doing nothing and simply accepting the damage from a potential first attack by the enemy, via waiting a certain time for clarification of whether the warning signals are correct, to reacting immediately “at machine speed”, or even pre-empting such an attack. There could also be a rule to avoid dangerous encounters at short range (along the lines of the Incidents at Sea Agreement), or the automatic real-time exchange of information between the enemy algorithms. In principle, such measures could be tested in advance in joint exercises. Here, too, the respective risk reduction and acceptability will be assessed qualitatively, together with a discussion of the verification issues.
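As a purely illustrative sketch of this spectrum (not any real battle-management code), the toy policy function below maps one and the same, possibly erroneous, warning to very different actions depending on the programmed reaction rule; the confidence model and thresholds are invented for illustration.

```python
# Toy sketch: the spectrum of programmable reaction policies as a decision rule.
# All thresholds and the confidence model are hypothetical.
from dataclasses import dataclass
from enum import Enum, auto

class Policy(Enum):
    ACCEPT_DAMAGE = auto()   # do nothing; absorb a potential first strike
    WAIT_CONFIRM = auto()    # hold fire during a clarification window
    MACHINE_SPEED = auto()   # respond immediately, "at machine speed"
    PREEMPT = auto()         # strike first on warning (most escalatory)

@dataclass
class Warning:
    confidence: float         # estimated probability the warning is genuine
    seconds_to_impact: float  # time remaining for clarification

def react(w: Warning, policy: Policy, confirm_window: float = 120.0) -> str:
    """Map a (possibly erroneous) warning to an action under a given policy."""
    if policy is Policy.ACCEPT_DAMAGE:
        return "hold"
    if policy is Policy.WAIT_CONFIRM:
        # Escalation damping: use available time to rule out false alarms.
        if w.seconds_to_impact > confirm_window:
            return "hold and seek confirmation"
        return "engage defensively" if w.confidence > 0.9 else "hold"
    if policy is Policy.MACHINE_SPEED:
        return "engage immediately"
    return "preemptive strike"    # PREEMPT: highest flash-war risk

# A marginal warning: the policies diverge sharply on the same input.
w = Warning(confidence=0.6, seconds_to_impact=90.0)
for p in Policy:
    print(p.name, "->", react(w, p))
```

The point of the sketch is that the escalation risk is a property of the programmed rule, not of the warning itself, which is why such rules could in principle be agreed upon, exchanged, and tested in joint exercises.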

Some considerations will be devoted to the cyber sphere, and possibilities and limitations of export control will be discussed.

[1] “Blueprint for Action”, REAIM Summit 2024.



Socio-Technical Responses to Disinformation and Information Operations

F. Schneider, K. Hartwig, T. Biselli, C. Reuter

Science and Technology for Peace and Security (PEASEC), TU Darmstadt, Germany

The proliferation of the internet and social networks has created unprecedented opportunities for state and non-state actors to influence public opinion and behaviour through information (warfare) operations. These operations aim to destabilise societies and create division by employing strategies of deception, distraction, division, and information overload. Their tactics manifest as disinformation, hate speech, and other digital threats, undermining trust and cohesion within communities.

Information operations conducted through social networks often intersect with the organic behaviours of online crowds or align with the short-term goals of diverse interest groups pursuing different long-term objectives, and they are shaped by technological affordances such as platform features and algorithmic structures. A socio-technical research perspective reveals the intertwined dynamics of technology, society, and human behaviour in information operations, and it can inform counterstrategies with regard to policymaking, intelligence services, platform development, education, and broader societal discourse.

Our research focuses in particular on combating disinformation as a core strategy of information operations. It employs a multifaceted approach that starts at various junctures. Adopting a 'bottom-up' approach that prioritises the individual social media user, we investigate the potential of platform design to enhance media literacy (e.g. through user-centred indicators) and develop intervention strategies that incentivise users to refrain from participating in information operations by spreading disinformation (e.g. through personalised nudges). From a 'top-down' perspective, we are developing strategies and technological support for organisations with security tasks in monitoring information operations such as disinformation attacks in the event of a crisis, thereby supporting more resilient, informed, and secure societies.
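As a toy illustration of what a user-centred indicator might look like, a simple heuristic can map content and account signals to a traffic-light label. The features, thresholds, and labels below are invented for illustration; they are not PEASEC's actual indicator design.

```python
# Toy credibility indicator: invented heuristics, for illustration only.
import re

def indicator(post: str, account_age_days: int, follower_following_ratio: float) -> str:
    """Map simple content/account heuristics to a traffic-light label."""
    flags = 0
    if len(re.findall(r"[A-Z]{4,}", post)) > 2:   # repeated all-caps "shouting"
        flags += 1
    if post.count("!") > 3:                        # excessive exclamation marks
        flags += 1
    if account_age_days < 30:                      # very new account
        flags += 1
    if follower_following_ratio < 0.1:             # mass-following behaviour
        flags += 1
    return ["green", "yellow", "red"][min(flags, 2)]

print(indicator("BREAKING!!! THEY LIED AGAIN!!!",
                account_age_days=5,
                follower_following_ratio=0.02))    # -> "red"
```

Real indicators would of course rest on validated features and user studies rather than hand-picked thresholds; the sketch only shows the interaction pattern of surfacing a lightweight cue to the individual user.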



Safeguards as a Knowledge Infrastructure: Exploring Technology Development and Diffusion into the IAEA Verification Regime

J. Schäfer

RWTH Aachen, Germany

Safeguards applied by the International Atomic Energy Agency (IAEA) are an important element of the global nuclear non-proliferation regime. To implement safeguards effectively and efficiently, new safeguards technologies constantly need to be developed, adjusted, and further improved. However, the implementation of safeguards is not a linear and exclusively technical process in which new technologies are straightforwardly utilized; it requires extensive cooperation and communication among various stakeholders, such as Member State Support Programs, the IAEA, and the regulators and operators of nuclear facilities. Interpreting safeguards as a complex knowledge infrastructure, this paper explores how the IAEA maintains and develops this infrastructure to remain effective and efficient in a constantly evolving socio-technical landscape. Focusing on non-destructive assay for spent fuel verification, the paper draws on the results of an ongoing exploratory interview study with IAEA staff, researchers, Member State Support Program coordinators, and non-traditional partners of the IAEA. It presents their perceptions of the technology diffusion process and identifies key factors in the development process as well as perceived barriers, emphasising the mechanisms and role of formal and informal communication within the process. By analysing the dynamics of technology development and implementation in IAEA safeguards, the study aims to provide new insights into technology diffusion within the IAEA's safeguards regime. It thus contributes to a deeper understanding of the socio-technical dynamics that influence the IAEA's ability to uphold global non-proliferation commitments. This paper is part of a PhD thesis within the interdisciplinary research project VeSpoTec, funded by the German Federal Ministry of Education and Research.