Possible pitfalls of artificial intelligence in intelligence analysis: analysts' distrust

In addition to the issue raised in the previous article (Possible pitfalls of artificial intelligence in intelligence analysis: time consuming), namely that intelligence analysts feel artificial intelligence wastes their limited time, there is a second pitfall: analysts' distrust.

Analysts' distrust

Analyst trust is critical to the successful mass adoption of artificial intelligence. Survey findings show that analysts are more skeptical of artificial intelligence than technologists, managers, or executives. As noted above, if employees do not see the value in a tool, they are less likely to use it.

To overcome this skepticism and reap the greatest benefits from artificial intelligence, management needs to focus on educating employees and redesigning business processes so that tools integrate seamlessly into workflows. Without these steps, artificial intelligence can become an expensive afterthought. For example, one federal agency ran an artificial intelligence pilot that generated leads for its investigators to follow up on. The investigators, however, were simultaneously generating leads of their own. With limited follow-up time, they naturally prioritized the leads they had developed themselves and rarely pursued the AI-generated ones.

Overcoming an analyst's initial skepticism about a given artificial intelligence tool comes down to building trust between the analyst and the tool. Because analysts must defend their assessments, even when powerful people disagree, they may be reluctant to rely on conclusions they cannot explain. For example, an interface that lets analysts easily inspect the data underpinning a model's results, or view a representation of how the model reached its conclusions, would go a long way toward helping them make the technology part of their workflow. This makes the output more trustworthy and yields more reliable analytical products for warfighters and decision makers.
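As a minimal sketch of what "viewing how the model reached its conclusions" could mean in practice, the snippet below breaks a linear lead-scoring model's output into per-feature contributions an analyst can scan and defend. All feature names, weights, and values here are illustrative assumptions, not any real system's data.

```python
# Hypothetical sketch: surface per-feature contributions of a linear
# scoring model so an analyst can see *why* a lead was ranked highly.
# Feature names, weights, and values are all illustrative assumptions.

def explain_score(features, weights):
    """Return the total score and each feature's contribution to it."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    return sum(contributions.values()), contributions

# Assumed model weights and one incoming lead's feature values.
weights = {"travel_frequency": 0.6, "financial_anomaly": 1.2, "known_contact": 0.9}
lead    = {"travel_frequency": 3,   "financial_anomaly": 1,   "known_contact": 0}

score, parts = explain_score(lead, weights)
print(f"lead score = {score:.1f}")
# List contributions largest-magnitude first, so the analyst sees at a
# glance which evidence is driving the ranking.
for name, c in sorted(parts.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name:>18}: {c:+.1f}")
```

For a linear model this decomposition is exact; for more complex models the same interface idea would need an approximation method, but the analyst-facing goal is identical: every score comes with the evidence behind it.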

While a workforce that lacks confidence in artificial intelligence output is a problem, the reverse can be a key challenge as well. For decades, intelligence community leaders have known that adding data to an analyst's judgment increases the analyst's confidence. Analysts can feel their judgment is sound when the added data does not actually improve the accuracy of their work. In other words, more data fuels analysts' confirmation bias: they use new evidence to support preconceived conclusions rather than to produce more accurate analyses.

The psychology experiments behind this observation gave subjects 2 to 5 times more data. Artificial intelligence will provide analysts with vastly more, which may exacerbate confirmation bias. In the financial services industry, for example, early experience suggests artificial intelligence can give analysts roughly 30 times the data available today. How human cognition responds to such volumes is unknown. Analysts' confidence in artificial intelligence judgments may decline due to information overload. Or, conversely, with so much data to sift, analysts may become overconfident and trust artificial intelligence implicitly. The latter is especially dangerous: as many aviation accidents have shown, a mismatch between a human's trust in automation and their understanding and oversight of it can end in tragedy.

Conversely, artificial intelligence can help analysts overcome confirmation bias and other human cognitive limitations. For example, artificial intelligence can be assigned the checks on an assessment's soundness that humans rarely find time for or find too cumbersome. Machines are well suited to continuous key assumptions checks, competing what-if analyses, and quality-of-information checks. Senior analysts can also have artificial intelligence alert them to incoming evidence that does not align with their team's assessment, giving them the opportunity to direct analytical line reviews and focus attention on problem areas.
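The alerting idea above can be sketched very simply: flag incoming reports whose stance contradicts the team's standing assessment, and raise an alert once the contradictions reach a threshold. The `stance` field (-1 contradicts, +1 supports) is an assumed output of some upstream classification step, and all report contents are invented for illustration.

```python
# Hypothetical sketch: flag incoming reports that conflict with the team's
# standing assessment, so a senior analyst can trigger a line review.
# The "stance" field (-1 contradicts, +1 supports) is an assumed upstream
# classifier output; report IDs and summaries are invented.

STANDING_ASSESSMENT = "facility is inactive"

reports = [
    {"id": "R-101", "stance": +1, "summary": "no activity observed"},
    {"id": "R-102", "stance": -1, "summary": "new vehicle traffic at site"},
    {"id": "R-103", "stance": -1, "summary": "thermal signature detected"},
]

def flag_disconfirming(reports, threshold=2):
    """Collect contradicting reports; alert once they reach the threshold."""
    hits = [r for r in reports if r["stance"] < 0]
    return hits, len(hits) >= threshold

hits, alert = flag_disconfirming(reports)
if alert:
    print(f"ALERT: {len(hits)} reports conflict with assessment: "
          f"{STANDING_ASSESSMENT!r}")
    for r in hits:
        print(f"  {r['id']}: {r['summary']}")
```

The machine's role here is deliberately narrow: it does not overturn the assessment, it only keeps continuous watch for disconfirming evidence and hands the judgment back to a human reviewer.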

Ultimately, the impact artificial intelligence may have on analysts' cognitive biases is simply not yet understood. Leaders need to pay careful attention to analysts' egos, evaluate business process designs, and continuously monitor artificial intelligence performance to help head off these potential pitfalls.
