ChatGPT may fuel a new wave of cyber attacks, and some cybercriminal groups have already exploited it

Recently, ChatGPT, an AI chatbot, has captured the public's attention. Within just two months of its launch, its daily active users exceeded 100 million, and it has been widely praised by users.

ChatGPT may cause a new wave of cyber attacks

Since ChatGPT's debut, most Internet and technology practitioners have regarded it as a new technological revolution that will benefit human and social development. Many security practitioners, however, are more concerned about the serious security risks it poses.

Despite ChatGPT's continued popularity, cybersecurity researchers are far less optimistic, worrying about the potential security threats it brings. According to a Help Net Security survey of 1,500 IT leaders in North America, the UK, and Australia, 51% of IT professionals predict that within a year society will experience a cyber attack carried out with ChatGPT's help, and 71% of respondents believe some actors may already be using the technology in malicious cyber attacks against other countries.

In other words, while respondents in different regions differ on exactly what kind of cyber threats ChatGPT will bring, most agree that it can help hackers carry out attacks more easily and substantially expand their technical knowledge of attack techniques.

BlackBerry also surveyed 500 IT industry decision-makers in the UK about ChatGPT. According to the report, more than three-quarters believe foreign states have already used ChatGPT in cyber warfare against other countries, and nearly half believe that in 2023 someone will maliciously use ChatGPT to carry out a "successful" cyber attack. Their biggest concern is that cybercriminals are using the AI chatbot to craft believable phishing emails, increase the sophistication of attacks, and accelerate new social engineering attacks. Some also believe that ChatGPT can be used to spread misinformation, or even serve as a "handy tool" for hackers to sharpen existing skills and acquire new ones.

Some cybercriminal groups have successfully exploited ChatGPT

In January, researchers at cybersecurity firm CyberArk published a blog post detailing how they used ChatGPT to create polymorphic malware. When asked directly to write malicious Python code, the AI bot politely declines. But when the researchers persisted, ChatGPT was likely to comply, which exposes a more serious problem: ChatGPT can mutate the code in increasingly uncontrollable directions, generating new iterative versions at extremely high speed and thereby evading the network security detection and protection products that enterprises currently rely on.

It is reported that some cybercriminal groups have already begun to test the waters with ChatGPT and have posted their methods on the dark web. They used ChatGPT to create a "convincing spear-phishing email." Among other things, they shared a very basic Python-based piece of information-stealing malware and a malicious Java script, both created with ChatGPT.

Mounting evidence shows that the emergence of ChatGPT is opening up new attack vectors and risks. If such a smart, efficient tool is turned to cybercrime, it will greatly lower the barrier to entry for attackers on the one hand and amplify the threat of cyber attacks on the other. ChatGPT's developers should take note and work to prevent the tool from causing serious harm to society.