Possible pitfalls of artificial intelligence in intelligence analysis: consuming analysts' time

Artificial intelligence (AI) may require new tasks to keep it functioning correctly, which points to a potential danger: AI may consume more of analysts' time than it saves. Organizations adopting AI at scale will experience friction simply because AI represents such a large change. You cannot change 20% of your workforce's tasks, or add weeks' worth of new ones, without straining your workforce, business processes, and existing tools. Smart organizations that want to reap the greatest benefits from AI need to identify these pitfalls and find ways to mitigate them.

New technology is a time eater.

Perhaps the most important pitfall is that instead of creating new value, AI ends up monopolizing analysts' time. This has happened before, notably with the adoption of electronic health records (EHRs) in the healthcare industry. While EHR systems promised to reduce the workload of healthcare professionals, research shows they have actually increased the time physicians spend documenting patient visits. Physicians using EHRs spend more time typing during visits, which cuts into face-to-face time with patients, and this reduction in communication creates negative perceptions among both patients and physicians.

Interestingly, the EHR example can help intelligence agencies avoid this trap. While physicians spend more time on EHR documentation than they did on paper notes, nurses and clerical staff save considerable time on their tasks. An EHR system that costs physicians time is therefore not necessarily a failure of the technology; it reflects an organizational choice that shifts some of the billing and clerical workload from support staff to physicians. What it does signal is the need to reassess the business and technology strategies that produced that trade-off.

If the intelligence community is to avoid similar problems as it adopts AI at scale, it must clarify its priorities and how AI fits into its overall strategy. AI tools designed to improve productivity will look very different from AI tools designed to improve the accuracy of analytic judgments. AI is not a solution to every problem, and a clear understanding of where it adds value helps ensure it is applied to the right problems. Being clear about a tool's goals also helps leaders communicate their vision for AI to employees and reduce misunderstanding or uncertainty about how the tool will be used.

Second, the intelligence community should avoid investing in "null technologies": AI applications deployed without access to the data they need to succeed. AI is a bit like a flour mill: without grain to feed it, it generates little value.

Even the most advanced AI tools will have limited utility if they lack valid training data or sufficient input data. Without the right data, AI tools can still eat up analysts' time as they try to use them, while producing output of little value. The result would be frustrated analysts who come to regard AI as a waste of their limited time.