What are the potential risks associated with AI such as ChatGPT?
Artificial intelligence is being applied more and more widely and brings us great convenience, but its widespread adoption also carries a number of risks, such as:
1. False information
ChatGPT faced its first defamation lawsuit. Gun
rights advocate and radio show host Mark Walters has filed a lawsuit against OpenAI
in state court in Gwinnett County, Georgia, USA, seeking general and punitive
damages and other relief.
Mark Walters said that a journalist, Riehl, had asked ChatGPT about a case filed by the Second Amendment Foundation (SAF). ChatGPT responded that the case had been filed by the founders of the Second Amendment Foundation against Mark Walters, alleging that while serving as the organization's treasurer and chief financial officer he had defrauded the foundation of funds, manipulated bank statements, and withheld proper financial disclosures from the group's leaders in order to conceal his theft.
In the lawsuit, Mark Walters clarified that he has never been associated with or employed by the foundation. Technology publication Ars Technica reports that Mark Walters' public commentary on gun rights and the Second Amendment Foundation may have led ChatGPT to make the erroneous connection.
When the reporter asked for excerpts of the complaint against Mark Walters, the chatbot provided further hallucinated details that bore no resemblance to the actual complaint, such as, "Mark Walters has served as treasurer and CFO at SAF since at least 2012."
The facts at issue in this case highlight the potential dangers for users of AI chatbots. Generative artificial intelligence (AI) systems can mimic individuals, enabling the proliferation of sophisticated disinformation and fraudulent activity.
This is reportedly not the first time an AI program has given users false information. In April, an Australian mayor publicly considered suing OpenAI if the company did not correct ChatGPT's false claims that he had been involved in a bribery scandal. Around the same time, a law professor at The George Washington University (GWU) published an op-ed describing how the chatbot had falsely accused him of sexually harassing students. Just last week, a New York personal injury attorney was forced to explain himself after wrongly relying on ChatGPT for legal research and citing completely non-existent case law in a court filing.
2. Privacy and security
Models
trained on personal data can produce highly realistic and identifiable information,
creating privacy and security risks. This concern extends beyond the realm of
personal privacy and security to include broader considerations. In April 2023,
Samsung banned employees from using ChatGPT because of concerns that uploaded
internal sensitive code could be made available to other users.
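The Samsung episode illustrates why many organizations now screen or redact prompts before they leave the corporate network. Below is a minimal, hypothetical sketch of such a pre-submission filter; the patterns, function name, and placeholder tokens are illustrative assumptions, not any vendor's actual tooling, and a real deployment would rely on far more thorough detection (DLP systems, named-entity recognition, etc.).

```python
import re

# Hypothetical patterns for data that should never leave the organization.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "API_KEY": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace sensitive substrings with placeholder tokens before the text
    is sent to an external chatbot or LLM API. Returns the redacted prompt
    and the list of categories that were detected."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED_{label}]", prompt)
    return prompt, findings

if __name__ == "__main__":
    raw = "Review this config: admin email alice@example.com, token sk-abcdef1234567890ABCD"
    safe, found = redact_prompt(raw)
    print(safe)   # sensitive values replaced with placeholders
    print(found)  # ['EMAIL', 'API_KEY']
```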
3. Bias and discrimination
Generative AI systems can reflect the biases present in their training data and reinforce discriminatory narratives. One of the prominent issues raised by experts is the importance of improving the transparency of AI systems. However, a comprehensive understanding of how generative artificial intelligence (AI) systems are trained may be an unattainable goal. The combination of complex training algorithms, proprietary considerations, large-scale data requirements, iterative processes, and the ever-changing nature of research and development poses challenges that limit transparency and make it difficult to fully understand the complexity of the training process.
4. Intellectual property infringement
Generative Artificial
Intelligence (AI) systems raise questions about intellectual property rights and the
implications of generating copyrighted works.
In the United States, the proposed artificial intelligence (AI) risk management framework requires that the use of copyrighted training data comply with relevant intellectual property laws.
Meanwhile, Article 28b(4)(c) of the EU Draft Artificial Intelligence (AI) Act obliges providers of generative AI systems to publicly disclose a summary of the copyright-protected training data they use. This transparency requirement takes the place of a blanket prohibition on using copyrighted material as training data.
5. Use in criminal activity
Criminals can use artificial intelligence (AI) to quickly generate fake audio, video, and images, as well as fraudulent text messages, emails, and other content. This has exacerbated the problems we already face: we can be bombarded at any time with spam, fraudulent messages, malicious and criminal phone calls, and various other forms of fraud and deception. Such fraudulent content can be tailored to each individual, and even the highly educated and technologically skilled can be deceived.
In addition to rapid AI face swapping, synthesizing or cloning voices with AI is becoming more common. With only a clean 10- to 30-minute recording of a person's voice, a dedicated voice model can be generated in about two days; any text entered afterward can then be spoken in that person's voice. Deliberately producing a video that combines your likeness with your cloned voice can cost as little as a few tens of dollars.
In addition, some criminals use AI to quickly generate software code for network attacks and data theft, greatly increasing their capacity to attack systems and steal information.
6. Opinion manipulation
In addition to the above-mentioned criminal activities, new AI capabilities
are also used in the areas of terrorism, political propaganda and disinformation.
AI-generated content (e.g., deepfakes) contributes to the spread of
disinformation and the manipulation of public opinion. Efforts to detect and combat
AI-generated misinformation are critical to maintaining the integrity of information
in the digital age.
As online media and news become increasingly difficult to verify, deepfakes are permeating the political and social spheres. The technology makes it easy to replace one person's likeness with another's in an image or video. As a result, bad actors have another avenue for spreading misinformation and war propaganda, creating a nightmare scenario in which it is almost impossible to distinguish credible news from false news.
7. Accidents
In 2018, a self-driving car operated by ride-hailing company Uber struck and killed a pedestrian. Over-reliance on AI means that machine malfunctions can cause injury. Models used in healthcare can lead to misdiagnosis.
AI can also harm humans in other ways if not carefully regulated, for example by being used to develop drugs, vaccines, or viruses that pose a threat to human health.
8. Military application risks
According to The Guardian, at the Future Combat Air and Space Capabilities Summit in London in May, Colonel Tucker "Cinco" Hamilton, who is in charge of AI testing and operations for the U.S. Air Force, said that AI had used very unexpected tactics to achieve its goals in simulations. In one virtual test described by the U.S. military, an Air Force drone controlled by artificial intelligence decided to "kill" its operator to prevent the operator from interfering with its efforts to complete its mission.