The CIA is studying how generative AI like ChatGPT can help intelligence agencies
The CIA is pursuing collaborations with federally funded research and development centers and other partners in academia and industry to deploy generative artificial intelligence, according to the agency's AI chief. The effort explores whether chatbots and generative AI can assist its officers in their day-to-day work and broader espionage tasks.
Lakshmi Raman, the CIA's
director of artificial intelligence, confirmed the plan at the Potomac Officers Club's annual AI summit in Virginia. Raman, who joined the CIA as a software engineer in 2002, emphasized the need for a careful understanding of "the guardrails required" when applying this emerging technology to intelligence community operations. She said the agency has seen the excitement surrounding ChatGPT, calling it an inflection point for generative AI technology, and said the CIA needs to "explore" both new technologies it can put to use and the technical methods still emerging. She added that this exploration must be handled in a constrained way.
ChatGPT is a chatbot
developed by OpenAI. Since its launch at the end of 2022, it has gained widespread popularity online. The tool belongs to the field of generative artificial intelligence, which encompasses large language models and related systems that can generate audio, code, images, text, video and other media in response to human prompts.
Over the past few months,
multiple federal agencies have announced research initiatives into this nascent
technology, and now the CIA has joined them.
In her keynote speech,
Raman said the CIA is doing significant work both to make better use of this AI-driven technology in its daily intelligence work and to deepen its understanding of how adversaries use artificial intelligence and machine learning. For example, she said the agency will create a common platform to enable shared services and ultimately expand AI applications, and will establish new resources and opportunities to increase its workforce's familiarity with AI and automation technologies.
Recently,
the CIA's artificial intelligence team has also begun to focus on how ChatGPT and related generative AI technology can better assist agency personnel in critical intelligence processes. Raman said assessing which tools are right for the U.S. government's needs is a daunting task. She noted that the CIA is seeking help deploying generative AI from federally funded research and development centers and other partners in academia and industry, and is evaluating existing tools and commercial products.
In her view, the CIA is currently grappling with how to
integrate outside tools into its "high-side," or classified, environments, which are tied to national security computer systems. The approval and certification process for new contractors is often long and arduous, and many, including Raman, believe it should be simplified. She said that although introducing artificial intelligence will bring considerable uncertainty to the CIA, the agency must be clear about how to adopt it in innovative ways.