Integration of Artificial Intelligence into the US Intelligence System: An Analysis of Ethical Dilemmas
Artificial intelligence has been a source of controversy since its inception: its enormous technological promise stands in stark contrast to the social and ethical problems that accompany it. For the intelligence community, the "human-machine relationship" between intelligence personnel and artificial intelligence is an unavoidable problem, and data security and data traps demand constant attention. Understanding the current dilemmas in the development of artificial intelligence is the precondition for dealing with them effectively in the future.
1. Humanistic Ethical Issues
The U.S. intelligence community holds two distinct views on the use of artificial intelligence. One side argues that artificial intelligence systems lack abstract thinking, so handing the power of analysis and judgment to algorithms that rely entirely on patterned decision-making carries considerable risk. The other side argues that the computing power of artificial intelligence far exceeds that of the human brain, so as long as the model is properly built, it can undertake most analytical work. This is, in essence, the long-standing dispute between a "rationalism" that emphasizes the symbolization of thought and a "people-oriented" view that technology should serve human beings.
On the one hand, whether the use of artificial intelligence harms the dignity of intelligence personnel is a question worth exploring. Introducing artificial intelligence into intelligence work not only touches the interests of human intelligence employees but may also erode the confidence of intelligence personnel in the long run. The two views within the intelligence community surface in practice as the dispute between "artificial intelligence" (AI) and "intelligence augmentation" (IA); at its core, it remains a disagreement over whether technology should replace humans or enhance them as a tool. Admittedly, AI will never be at its best with the nuances of language, complex problem solving, and certain tasks requiring emotional and social intelligence, and machines and algorithms lack an adequate grasp of human values. When faced with intelligence problems that are difficult to quantify and describe, analysts can often make quick judgments based on work experience and accumulated knowledge, and this remains a genuine test for machines.
On the other hand, the intelligence community uses artificial intelligence systems to collect and screen as much intelligence information as possible. For open source intelligence and signals intelligence in particular, artificial intelligence inevitably threatens citizens' right to privacy in the process of data collection. AI-based systems such as big data analytics, persistent ISR, facial recognition, and cyber capabilities could enable dictators to spy on their populations, target dissidents, censor content, or otherwise violate basic human rights. The rights and responsibilities attached to personal privacy data carry legal implications, and once such data is leaked, the social consequences are serious: intelligence agencies must not only bear the corresponding responsibility but also suffer great damage to their own image. The case of former CIA employee Edward Snowden exposing the secret surveillance programs of the U.S. intelligence community is a typical example.
2. Crisis of Confidence
Whether to support or oppose the development of lethal autonomous weapons systems (LAWS) is an important ethical question facing the fields of artificial intelligence and robotics, and the intelligence community faces the same problem. Autonomous systems can collect, analyze, and output intelligence with almost no human intervention, but their complex algorithms also prevent people from readily interpreting and understanding the analytical process and conclusions of artificial intelligence. This makes it considerably difficult to define the accuracy, degree of autonomy, and authority of autonomous systems.
On the one hand, artificial intelligence can quickly analyze large amounts of data and provide short-term judgments to support decisions, but intelligence analysis is a continuous and complex mental contest between adversaries. Although the analysis process requires objectivity at the data level, accurate conclusions are often the result of the analyst's perception of the current situation and logic of thinking. This essentially subjective judgment remains beyond the reach of pattern-based artificial intelligence algorithms.
On the other hand, since artificial intelligence and robots were introduced into the U.S. intelligence system, they have taken over a great deal of routine work from intelligence personnel but have also inevitably replaced part of their mental work. Assuming the additional responsibility that comes with machine decision-making is undoubtedly something any intelligence officer would strive to avoid. In fact, artificial intelligence lacks the traditional elements of "cognition" and "consciousness" on which the attribution of responsibility depends. In other words, given that an algorithm cannot guarantee that its choices are entirely correct, even if artificial intelligence can analyze and judge a situation through complex calculations and take corresponding measures or make decisions, it still lacks the capacity for ethical decision-making. Responsibility for actions or decisions that humans entrust to it therefore remains with the human agents who develop and use the technology. At the current level of technology, however, neither the developers nor the users of software systems can place absolute trust in artificial intelligence, and they are naturally unwilling to take responsibility on its behalf. This instinctive lack of trust cannot be eliminated by any technology.