Building AI-Based Intelligence Capabilities at the Speed of Future Hypersonic Warfare (2)

Train as you fight: building trust in AI-based intelligence decision-making

What happens when the AI-driven tempo of operations exceeds the combatant commander's grasp of the situation? The speed of combat will inevitably strip some human interaction out of the sensor-to-shooter process, forcing people either to trust machine-derived decisions or to risk ceding battlefield initiative to an adversary. Yet operational decision-making must still reflect the moral requirement for human command, and few commanders are comfortable with decisions that receive no human review. So how can combatant commanders build trust in AI-based decisions?

U.S. Army Doctrine Reference Publication (ADRP) 6-0 describes trust as something that "is earned or lost through everyday actions, not through grand or occasional gestures." It comes from successful shared experience and training, often gained incidentally in operations but also deliberately cultivated by commanders. This description implies that an artificial intelligence application can earn trust by demonstrating that its performance is reliable and consistent over time.

"Train as you fight" is a military concept that also applies to artificial intelligence. To integrate AI into intelligence activities and the operations they support, AI must be rigorously tested in peacetime and then applied to progressively more complex missions, building trust through a hierarchy of satisfied requirements. That is, each level of automation must prove accurate and reliable for its mission before the next level is built on top of it, giving the warfighter confidence in the performance of each new capability. High-accuracy automation of today's low-level application scenarios helps build the confidence needed to move into increasingly complex AI processes.
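The tiered promotion the paragraph describes can be sketched in code. The following is a minimal illustration, not a fielded system: the level names, trial counts, and accuracy thresholds are all assumptions chosen for the example. The point it shows is the gating rule itself, i.e. a capability advances to a more complex mission only after a sustained track record at its current level, and trust must be re-earned after each promotion.

```python
from dataclasses import dataclass, field

# Illustrative automation levels, ordered from lower-risk to higher-risk tasks.
# These names are hypothetical examples, not doctrinal categories.
LEVELS = ["data triage", "target recognition", "course-of-action ranking"]

@dataclass
class AICapability:
    name: str
    level: int = 0                               # index into LEVELS
    results: list = field(default_factory=list)  # 1 = correct, 0 = error

    def record(self, correct: bool) -> None:
        """Log the outcome of one validation trial at the current level."""
        self.results.append(1 if correct else 0)

    def accuracy(self) -> float:
        return sum(self.results) / len(self.results) if self.results else 0.0

    def try_promote(self, min_trials: int = 100, min_accuracy: float = 0.99) -> bool:
        """Promote only after enough trials at a sustained accuracy bar."""
        if (len(self.results) >= min_trials
                and self.accuracy() >= min_accuracy
                and self.level < len(LEVELS) - 1):
            self.level += 1
            self.results.clear()  # trust must be re-earned at the new level
            return True
        return False

# Simulated peacetime validation: 100 successful trials at the lowest level.
cap = AICapability("imagery-classifier")
for _ in range(100):
    cap.record(correct=True)
promoted = cap.try_promote()
print(promoted, LEVELS[cap.level])
```

The design choice worth noting is that `results.clear()` resets the evidence base on promotion: performance at a simpler mission is treated as a prerequisite for, not proof of, reliability at the more complex one.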

It is also important to involve intelligence analysts, operational decision makers, and related personnel in the development of military AI applications. Direct interaction between algorithms and human analysts will improve AI performance and, once analysts are satisfied with the validation results, foster greater trust in the application.
