AI in Defense: How to Minimize Bias and Legal Risks
Artificial intelligence is becoming an increasingly important part of military capabilities, above all in the United States, Russia, and China, but also in European armies, including the Czech Armed Forces. Technologies that were considered futuristic just a few years ago are now finding practical applications in fire control systems, intelligence analysis, autonomous systems, and decision support for combat operations. This development raises a fundamental question: how reliable is the technology, and what are the risks of biased decision-making by artificial intelligence systems? According to a recent study by the SIPRI think tank, this is not only a technical problem, but also a legal, strategic, and ethical one. The challenge also affects the Czech Republic, whose defense industry and military are increasingly involved in developing and fielding systems that use artificial intelligence.

Bias in the context of military artificial intelligence can be defined as systematic unfairness in the decision-making of artificial systems: algorithms favor or disadvantage certain groups of people or objects based on characteristics that are not militarily relevant. SIPRI warns that this carries risks for compliance with international humanitarian law. If military artificial intelligence systematically misclassifies targets (for example, assessing civilians as combatants or overlooking the presence of protected persons), it can lead to serious violations of international principles: above all the distinction between civilian and military targets, proportionality (incidental civilian harm must not be excessive relative to the anticipated military advantage), and the obligation to take precautions to minimize harm. These principles form the core of the Geneva Conventions and are binding on all states, including the Czech Republic.
There are many sources of bias, and they arise at different stages of an artificial intelligence system's life cycle. The first is societal bias: the transfer of historical inequalities and stereotypes into training data. If systems learn from unbalanced data sets (for example, ones containing more images of a particular ethnic group depicted as enemies), they will replicate these patterns in live deployment. A typical case is the use of media sources that show a certain ethnic group in the context of armed activity more often than others; the algorithm will then assign a higher threat probability to people of that origin even in situations where no real threat exists.
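What an audit of training data for this kind of imbalance can look like in practice is illustrated by the following minimal sketch in Python. The field names, labels, and groups are illustrative assumptions, not a description of any real system; the point is only that a strongly skewed label distribution for one group can be detected with very simple means.

```python
from collections import Counter

# Hypothetical annotated training records: each carries the assigned label
# ("combatant" / "civilian") and a contextual attribute such as the region
# or population group the imagery was collected from.
samples = [
    {"label": "combatant", "group": "region_A"},
    {"label": "civilian",  "group": "region_A"},
    {"label": "combatant", "group": "region_B"},
    # ... thousands more records in a real dataset
]

def label_share_by_group(records):
    """Return the share of 'combatant' labels within each group."""
    totals, positives = Counter(), Counter()
    for r in records:
        totals[r["group"]] += 1
        if r["label"] == "combatant":
            positives[r["group"]] += 1
    return {g: positives[g] / totals[g] for g in totals}

# A strongly skewed share for one group is a warning sign that the model
# may learn group membership itself as a proxy for threat.
print(label_share_by_group(samples))
```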
A second source of bias arises during development, where the choice of indirect indicators of target behavior (proxy indicators) can have unintended consequences. If a system treats raised arms as the only valid signal of surrender (a convention typical of Western militaries), it may fail to recognize surrender gestures from other cultures, and an unlawful attack may follow. A third source arises during operational use, when the algorithm continues to rely on its original training data even as the conflict evolves, leading to errors in its estimates and decisions. Human interaction with the technology can then compound these errors when operators rely uncritically on machine recommendations and abandon independent oversight.
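The third source, a model that keeps relying on data from an environment that has since changed, can be monitored with standard distribution-drift metrics. Below is a minimal sketch using the population stability index; the feature, data, and threshold are illustrative assumptions, not part of any specific fielded system.

```python
import numpy as np

def population_stability_index(reference, current, bins=10):
    """Compare the distribution of one input feature between the data the
    model was trained on and the data it currently sees in operation."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_hist, _ = np.histogram(reference, bins=edges)
    cur_hist, _ = np.histogram(current, bins=edges)
    # Clip to avoid division by zero in sparsely populated bins.
    ref_p = np.clip(ref_hist / ref_hist.sum(), 1e-6, None)
    cur_p = np.clip(cur_hist / cur_hist.sum(), 1e-6, None)
    return float(np.sum((cur_p - ref_p) * np.log(cur_p / ref_p)))

# Illustrative data: a sensor feature as seen during training versus after
# the operational environment has shifted.
rng = np.random.default_rng(0)
training_feature = rng.normal(loc=5.0, scale=1.0, size=10_000)
operational_feature = rng.normal(loc=6.5, scale=1.5, size=10_000)

psi = population_stability_index(training_feature, operational_feature)
print(f"PSI = {psi:.2f}")  # values above ~0.25 are commonly read as serious drift
```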
The consequences of such failures are dramatic both legally and operationally. If a system mistakenly identifies a civilian object as military and an attack follows, the result is not only a legal problem but also a reputational disaster: it can undermine the legitimacy of the entire operation and hand the adversary a powerful propaganda weapon. SIPRI therefore emphasizes that while bias can never be eliminated completely, its impact can be mitigated by appropriate measures. These include careful curation and diversification of training and test data, transparent verification and testing methodologies, multi-level target verification processes, and, above all, preserving the decisive role of humans in the application of lethal force.
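One building block of such a testing methodology might be a simple acceptance check that compares error rates across groups on an independently curated evaluation set. The following sketch is a hypothetical illustration only: the record format and the "civilian"/"combatant" labels are assumptions made for the example, and a real procedure would cover many more metrics and scenarios.

```python
def false_positive_rate_by_group(records):
    """For each group, compute how often a civilian was wrongly classified
    as a combatant (records are dicts with 'group', 'truth', 'prediction')."""
    stats = {}
    for r in records:
        g = stats.setdefault(r["group"], {"fp": 0, "negatives": 0})
        if r["truth"] == "civilian":
            g["negatives"] += 1
            if r["prediction"] == "combatant":
                g["fp"] += 1
    return {grp: s["fp"] / s["negatives"] for grp, s in stats.items() if s["negatives"]}

evaluation = [
    {"group": "region_A", "truth": "civilian", "prediction": "civilian"},
    {"group": "region_A", "truth": "civilian", "prediction": "combatant"},
    {"group": "region_B", "truth": "civilian", "prediction": "civilian"},
    # ... a representative, independently curated evaluation set
]

print(false_positive_rate_by_group(evaluation))
# A large gap between groups would block acceptance and trigger review
# before the system is cleared for operational use.
```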
How does this relate to the Czech Republic? More than you might think. The Czech defense industry is increasingly involved in projects where artificial intelligence plays a key role. ERA, part of the Omnipol group, is a world leader in passive surveillance with its VERA-NG systems. Today these locate targets by multilateration of intercepted signals, but the trend is toward integrating artificial intelligence for target classification, behavior analysis, and prediction of future movement. Once artificial intelligence becomes an integral part of these systems, biases that could produce erroneous outputs endangering civil aviation or non-military facilities will have to be kept out of the automated decision-making. Similarly, Retia, which develops fire control systems and radars, will need to account for the risk of bias when designing algorithms for detecting objects in a complex electromagnetic environment.
This is not just an industry matter. The Ministry of Defense and the Military Technical Institute face the challenge of setting acquisition requirements for systems that contain elements of artificial intelligence. They will have to define methodologies for bias testing and implement processes that ensure the transparency and representativeness of data sources. This is in line with SIPRI's recommendation that states build national expertise on bias in artificial intelligence as part of procuring and evaluating military systems. The Czech Republic should develop this expertise in cooperation with academia (above all the Czech Technical University, Masaryk University, and CATRIN, the Czech Advanced Technology and Research Institute), which has experience in research on machine learning, ethics, and law. The shared goal must be a framework for so-called responsible artificial intelligence, including regular audits, testing in realistic scenarios, and international interoperability, especially within NATO.
Cybersecurity represents another dimension. The bias of automated systems can be not only an unintended consequence of faulty development, but also the result of a targeted attack in which an adversary manipulates data to reinforce existing biases in the system. In an environment of hybrid threats, the risk of "data poisoning" is very real. For Czech companies and state institutions, this means combining bias reduction with cyber protection measures and data source integrity verification. In this regard, the Czech Republic has the advantage of a strong background in cyber defense, represented, for example, by the National Cyber and Information Security Agency (NÚKIB), which may play an important role in setting standards for military artificial intelligence.
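A first, very basic line of defense against silent tampering with an already approved data set is a cryptographic fingerprint of its contents, re-checked before every training run. The sketch below assumes a dataset stored as files in a directory; it does not solve the harder problem of poisoned original sources, only post-approval integrity.

```python
import hashlib
from pathlib import Path

def fingerprint_dataset(directory: str) -> dict:
    """Compute a SHA-256 hash for every file in a dataset directory."""
    fingerprints = {}
    for path in sorted(Path(directory).rglob("*")):
        if path.is_file():
            fingerprints[str(path)] = hashlib.sha256(path.read_bytes()).hexdigest()
    return fingerprints

# At approval time, record the fingerprints and store them separately from
# the data itself (ideally signed); before every training or update cycle,
# recompute them and refuse to proceed if anything has changed:
#
#   baseline = fingerprint_dataset("approved_training_data")   # stored record
#   current = fingerprint_dataset("approved_training_data")    # fresh check
#   assert current == baseline, "dataset contents changed since approval"
```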
From a legal perspective, it is worth emphasizing that international humanitarian law does not use the term "bias", even though its principles bear directly on the issue. The principles of distinction, proportionality, and precaution apply equally to the use of autonomous or semi-autonomous systems. If the Czech Armed Forces acquire and deploy a system whose algorithmic decision-making leads to systematic discrimination against certain groups of civilians, the state bears international legal responsibility. This again demonstrates the need to build legal and ethical capacities in parallel with technological development.
Bias in artificial intelligence is no longer a purely academic topic; it is becoming a real factor shaping the future of armed conflicts and international law. For the Czech Republic, this means integrating the issue into strategic documents, acquisition processes, and training programs. An interdisciplinary framework linking industry, academia, and government is needed to minimize the risks of algorithmic bias and to ensure compliance with international commitments. Otherwise, technologies designed to increase precision and reduce civilian casualties may paradoxically increase them, weakening the very security they are meant to protect.