What cybersecurity risks does ChatGPT pose?
Only two months after launch, ChatGPT surpassed 100 million monthly active users, making it the fastest-growing consumer application in history.
ChatGPT is a consumer-facing AI application developed by OpenAI: a chatbot that can draft emails, write code, produce marketing copy, do homework, translate text, and more. It showcases the striking power of artificial intelligence, bringing remarkable new capabilities to the world.
At the same time, artificial intelligence has long been regarded as a double-edged sword. Defenders can use AI-powered security tools and products to handle large volumes of cybersecurity incidents with little human intervention, but amateur hackers can use the same technology to develop intelligent malware and launch stealthy attacks.
Cybersecurity risks posed by ChatGPT
1. Writing malware
Security researchers have already observed attackers using ChatGPT to develop malware. Although ChatGPT's guardrails are meant to block overtly harmful requests, such as instructions for building weapons or writing malicious code, multiple researchers have found ways to bypass the rules intended to prevent abuse.
On one hacking forum, attackers demonstrated how ChatGPT can be used to create new Trojans: simply describe the desired functionality (“save all passwords in file X, and send to server Y via HTTP POST”), and the model produces a simple infostealer, no programming skills required.
Given that criminal groups already sell malware-as-a-service, AI programs such as ChatGPT may make launching cyberattacks with AI-generated code faster and easier still. ChatGPT lets even inexperienced attackers produce passable malware code, work that previously required experts. The quality of the code it writes is uneven, but it undeniably speeds up malware development.
2. Social engineering
ChatGPT, a large language model trained by OpenAI, generates human-like text that can serve many purposes, one of which is social engineering. A social engineering attack relies on psychological manipulation to trick people into revealing sensitive information or performing certain actions, whether through online scams, pretexting, or other forms of deception.
Researchers have found that GPT-3-class tools such as ChatGPT let criminals realistically simulate a wide range of social contexts, making any targeted communication attack more effective. By making it easier to trick victims into handing over sensitive information or downloading malware, such language models accelerate social engineering attacks.
3. Phishing
ChatGPT's skill at mimicking human writing makes it a potentially powerful phishing tool, especially for attackers who are not fluent in English.
Writing a convincing phishing email is both an art and a science. With ChatGPT, it becomes easy to produce phishing emails free of the typos and odd formatting that often give phishing away.
Attackers can prompt ChatGPT for many varieties of phishing email: "make the email sound urgent", "write an email the recipient is likely to click through", "a social engineering email requesting a money transfer", and so on.
Using AI tools responsibly
From technical complexity to human factors, individuals and enterprises face many challenges in securing AI systems; the balance among machine, human, and ethical considerations deserves particular attention.
Sciences Po has announced a ban on ChatGPT and all similar AI-based tools to prevent academic fraud and plagiarism; some experts warn that ChatGPT will increase the risk of misinformation; and a Stanford team has released DetectGPT to deter students from having AI write their homework. Barely two months after its debut, ChatGPT has already left a mark on society: even as people imagine a high-tech future, they worry about its impact on cybersecurity.
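DetectGPT is mentioned above only in passing, so a brief illustration of the underlying mechanism may help. Below is a minimal sketch of the idea from the DetectGPT paper (Mitchell et al., 2023), not the released tool itself: machine-generated text tends to sit near a local maximum of a language model's log-probability, so lightly rewritten variants of it lose noticeably more likelihood than variants of human-written text do. The sketch assumes the HuggingFace transformers library, uses GPT-2 as the scoring model, and substitutes random word dropout as a deliberately crude stand-in for the paper's T5 mask-and-fill perturbations.

# Sketch of a DetectGPT-style "probability curvature" score.
# Assumptions: HuggingFace `transformers`, GPT-2 as the scoring model,
# random word dropout instead of the paper's T5 mask-filling.
import random

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def avg_log_likelihood(text: str) -> float:
    """Mean per-token log-probability of `text` under the scoring model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    return -out.loss.item()  # `loss` is the mean negative log-likelihood

def perturb(text: str, drop_prob: float = 0.15) -> str:
    """Crude perturbation: randomly drop a small fraction of the words."""
    words = text.split()
    kept = [w for w in words if random.random() > drop_prob]
    return " ".join(kept) if kept else text

def detectgpt_style_score(text: str, n_perturbations: int = 20) -> float:
    """Gap between the text's log-likelihood and the average over its
    perturbed variants; larger positive values hint at machine-generated text."""
    base = avg_log_likelihood(text)
    perturbed = [avg_log_likelihood(perturb(text)) for _ in range(n_perturbations)]
    return base - sum(perturbed) / len(perturbed)

if __name__ == "__main__":
    sample = "ChatGPT is a chatbot developed by OpenAI that can draft emails and code."
    print(f"curvature score: {detectgpt_style_score(sample):.3f}")

In practice the perturbation quality matters a great deal: the paper's mask-filling keeps the rewritten text fluent, which random dropout does not, so this sketch only gestures at the shape of the method rather than reproducing its accuracy.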
Clearly, formulating a response strategy is crucial to balancing business ethics and cybersecurity. Effective governance and legal frameworks can build trust in AI technologies and promote social equity and sustainable development. Ultimately, a careful balance between AI and humans, grounded in trust, transparency, and accountability, will be a key factor in achieving cybersecurity.