Introduction
Modern artificial intelligence in defense systems is not merely a tool, but an architecture of selection that filters reality before it reaches human consciousness. This article analyzes why military automation represents a systemic challenge rather than just a technical one. The reader will learn how to avoid the trap of automation bias, why ethics here functions as a rigorous discipline of uncertainty management, and how ideas borrowed from quantum physics can serve, rather than erode, the sovereignty of human decision-making in the face of technological determinism.
AI as a reality-filtering regime and the accountability trap
AI in defense must be treated as a filtering regime because it actively decides what becomes information and what remains an invisible digital trace. It is not a moral agent: it lacks conscience and agency. Treating it as a tool is therefore essential to keep the accountability of its creators and operators enforceable. The decisive question is whether a system enhances human agency or reduces the human to the role of a ceremonial witness. To avoid the "human alibi" trap, we must implement Meaningful Human Control: the real ability to reject a machine's decision, not merely to ratify it formally. Designing AI systems is a systemic challenge, because every reduction of complexity is a political act that requires oversight of the values embedded in the objective function.
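To make this concrete, here is a minimal sketch in Python of what Meaningful Human Control can look like structurally: a veto gate in which no recommendation becomes an action without an operator's explicit, logged decision. All names here (Recommendation, MeaningfulHumanControl, operator_review) are illustrative assumptions, not references to any real system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable, List, Optional, Tuple

@dataclass
class Recommendation:
    """A machine-generated recommendation: never an action by itself."""
    target_id: str
    proposed_action: str
    confidence: float   # model's self-reported confidence in [0, 1]
    rationale: str      # human-readable explanation supplied by the XAI layer

@dataclass
class Decision:
    recommendation: Recommendation
    operator_id: str
    accepted: bool
    justification: str  # the operator must articulate why, in both directions
    timestamp: str

class MeaningfulHumanControl:
    """Veto gate: the human can always say "no", and every outcome is logged.

    The operator_review callback returns (accepted, justification). Because
    the gate appends every decision to an audit log, accountability stays
    with identifiable people rather than dissolving into the model.
    """
    def __init__(self, operator_review: Callable[[Recommendation], Tuple[bool, str]]):
        self._review = operator_review
        self.audit_log: List[Decision] = []

    def decide(self, rec: Recommendation, operator_id: str) -> Optional[Recommendation]:
        accepted, justification = self._review(rec)   # the real intervention point
        self.audit_log.append(Decision(rec, operator_id, accepted, justification,
                                       datetime.now(timezone.utc).isoformat()))
        return rec if accepted else None              # rejection is a first-class outcome
```

The point of the sketch is structural: if rejecting is harder, slower, or less visible than accepting, the operator is a ceremonial witness no matter what the doctrine says.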
From static signatures to the political economy of perception
The transition from analyzing signal parameters to a hermeneutics of behavior changes the nature of electronic intelligence (ELINT). Instead of cataloging static features, systems reconstruct the hidden intentions and operational styles of an adversary. This is a paradigm shift: from a "librarian" recognizing signatures to an analyst interpreting the grammar of emissions. The ethical consequence is the necessity of epistemic humility: the machine must be able to admit "I don't know" instead of forcibly classifying unknown phenomena. Sensor resource management (quality of service, QoS) becomes a policy of priorities here, where every allocation of sensor power is a decision about what the institution deems worthy of attention. Mathematical optimization is not neutral; it requires political oversight, because systems, much like empires, reveal their values in their budget structures and alert priorities.
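Both ideas in this paragraph reduce to small, auditable mechanisms. Below is a minimal Python sketch (the thresholds, labels, and task names are all illustrative assumptions): a classifier with an explicit reject option that returns "unknown" below a calibrated confidence, and a dwell-time allocator whose priority weights make the institution's value judgments explicit and inspectable.

```python
import numpy as np

def classify_with_humility(logits: np.ndarray, labels: list,
                           min_confidence: float = 0.85) -> str:
    """Softmax classification with a reject option: epistemic humility in code.

    Rather than forcing every emission into the nearest known signature,
    return "unknown" when the top posterior falls below a calibrated
    threshold, surfacing uncertainty instead of hiding it.
    """
    z = logits - logits.max()            # shift for numerical stability
    probs = np.exp(z) / np.exp(z).sum()  # softmax posterior over known classes
    best = int(probs.argmax())
    return labels[best] if probs[best] >= min_confidence else "unknown"

def allocate_dwell(budget_ms: float, priorities: dict) -> dict:
    """Split a sensor's time budget in proportion to declared priorities.

    The weights are the politics: changing them changes what the system
    deems worthy of attention, which is why they belong under oversight.
    """
    total = sum(priorities.values())
    return {task: budget_ms * w / total for task, w in priorities.items()}

# A near-tie between known emitter types is flagged, not mislabeled.
print(classify_with_humility(np.array([2.1, 1.9, 1.7]),
                             ["radar_A", "radar_B", "radar_C"]))  # -> unknown

# The budget structure reveals the values: tracking outranks calibration.
print(allocate_dwell(100.0, {"track_maintenance": 5, "search": 3, "calibration": 2}))
```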
Quantum engineering and explainability as foundations of security
Borrowings from quantum physics, such as the Pauli exclusion principle, help solve the problem of track coalescence, preventing distinct objects from merging in the data representation. Quantum computing and Ising-type modeling allow combinatorial problems to be resolved efficiently by delegating optimization to the system's physical evolution. Explainability (XAI) is not a luxury but a condition of operational sobriety: it lets us lift the lid of the black box so that the operator is not a blind notary of recommendations. Managing uncertainty through XAI and Track-Before-Detect (TBD) allows for the accumulation of subtle premises rather than a cult of the single decisive piece of evidence. Real institutional accountability requires that systems be auditable and that humans possess a genuine capacity for intervention, which protects against "KPI theater" and superficial control.
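Two of these ideas translate directly into small sketches. First, the exclusion principle as an energy penalty: here is a minimal, illustrative Python formulation (not any fielded system) of track-to-measurement assignment as an Ising/QUBO problem. Configurations in which two tracks coalesce onto one measurement are energetically punished, and plain simulated annealing stands in for the physical evolution a quantum annealer would perform; the penalty weight and cooling schedule are assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def ising_assignment(cost: np.ndarray, penalty: float = 10.0,
                     steps: int = 20_000, t0: float = 2.0) -> np.ndarray:
    """Assign tracks to measurements by minimizing an Ising/QUBO energy.

    cost[i, j] is the mismatch between track i and measurement j (a square
    matrix is assumed). The penalty terms act like the Pauli principle:
    any configuration where two tracks occupy one measurement, or one
    track claims two, has higher energy, so coalescence is suppressed.
    """
    n, m = cost.shape
    x = np.zeros((n, m), dtype=int)          # x[i, j] = 1 means "i assigned to j"

    def energy(x):
        e = float((cost * x).sum())
        e += penalty * ((x.sum(axis=0) - 1) ** 2).sum()  # one track per measurement
        e += penalty * ((x.sum(axis=1) - 1) ** 2).sum()  # one measurement per track
        return e

    e = energy(x)
    for step in range(steps):                # annealing as "physical evolution"
        t = t0 * (1.0 - step / steps) + 1e-3 # linear cooling schedule
        i, j = rng.integers(n), rng.integers(m)
        x[i, j] ^= 1                         # propose a single spin/bit flip
        e_new = energy(x)
        if e_new <= e or rng.random() < np.exp((e - e_new) / t):
            e = e_new                        # accept the move
        else:
            x[i, j] ^= 1                     # revert
    return x

print(ising_assignment(rng.random((4, 4))))  # one 1 per row and column at convergence
```

Second, Track-Before-Detect as the accumulation of subtle premises: no single frame crosses the detection threshold, but the integrated evidence along a candidate trajectory can. The log-likelihood values and the threshold are again purely illustrative.

```python
import numpy as np

def track_before_detect(llr_per_frame: np.ndarray, threshold: float = 6.0) -> bool:
    """Integrate weak per-frame evidence instead of demanding one strong hit.

    llr_per_frame holds the log-likelihood ratio of "target present" versus
    "noise only" for each frame along a hypothesized trajectory; detection
    is declared only on the accumulated sum.
    """
    return float(llr_per_frame.sum()) >= threshold

frames = np.array([0.9, 1.1, 0.7, 1.3, 1.0, 1.2])  # each frame is individually weak
print(track_before_detect(frames))  # True: the sum (6.2) clears what no frame could
```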
Summary
Artificial intelligence does not remove uncertainty from the battlefield; it merely changes the price we pay for ignoring it. The true test for modern institutions is not how precisely machines can calculate, but whether humans will retain the courage to say "no" to the machine at the decisive moment. In a world of perpetual volatility, our subjectivity is the only resource that cannot be optimized or outsourced. Are we ready for systems that, in the name of security, require from us not only technical proficiency but, above all, political and moral sovereignty?