Potential network security risks of ChatGPT

ChatGPT is a natural language processing tool driven by artificial intelligence, launched by OpenAI. On OpenAI's official website, ChatGPT is described as a language model optimized for dialogue. It conducts conversations by learning and understanding human language, and can respond according to the context of a chat. It can converse much like a human being and can even complete tasks such as writing emails, video scripts, copy, translations, and code.

OpenAI opened ChatGPT for public testing on November 30, 2022, and it has been popular worldwide ever since. Within two months, ChatGPT reached 100 million users, with about 590 million visits in January 2023. UBS analysts wrote in a report that this made it the fastest-growing consumer application the Internet had seen in 20 years.

According to media reports, ChatGPT's initial burst of attention stemmed partly from an incident in which an engineer induced it to write a plan to destroy humanity. The plan was detailed, covering invading the computer systems of various countries, seizing control of weapons, and destroying communication and transportation systems, and it even included corresponding Python code.

This is not mere sensationalism. Within a few weeks of ChatGPT's launch, cybersecurity firms around the world released a series of reports showing that the bot could be used to write malware. To date, the American threat intelligence company Recorded Future has found more than 1,500 references on the dark web to using ChatGPT to develop malware and create proof-of-concept code, most of it publicly available.

Since ChatGPT became popular, much of the discussion has focused on data security issues.

1. Data leakage.

When employees use ChatGPT to assist with their work, the risk of trade secret leakage increases. In January 2023, a senior engineer in Microsoft's Office of the Chief Technology Officer stated that employees may use ChatGPT at work as long as they do not share confidential information with it. Amazon's corporate lawyers similarly warned employees not to share any confidential Amazon information with ChatGPT, since the entered information could be used as training data for further iterations of the model.

2. The right to deletion.

Although OpenAI promises to delete all personally identifiable information from the records ChatGPT uses, it does not explain how this deletion is carried out, and since the collected data is used for ChatGPT's continued training, complete erasure of personal information traces is difficult to guarantee.

3. Compliance of corpus acquisition.

Alexander, a member of the European Data Protection Board's (EDPB) support pool of experts, said that ChatGPT's methods of obtaining data need to be fully disclosed; if its training data was obtained by crawling information on the Internet, that collection may not be legal.

In addition, malicious use of ChatGPT by users brings many data security issues of its own: for example, using its code-writing ability to produce malware that evades antivirus detection; using its writing ability to generate phishing emails; or using its dialogue ability to impersonate a real person or organization in order to defraud others.

Against this backdrop, OpenAI has publicly warned that regulators now need to intervene to prevent generative AI systems such as ChatGPT from having potentially negative impacts on society.