- The cybersecurity industry is already seeing evidence of ChatGPT being used by criminals.
- ChatGPT can quickly generate targeted phishing emails or malicious code for malware attacks.
- AI companies could be held liable if chatbots advise criminals, as Section 230 may not apply.
Whether it's writing essays or analyzing data, ChatGPT can ease a person's workload. The same is true for cybercriminals.
Sergey Shykevich, threat intelligence group manager at cybersecurity firm Check Point, has seen cybercriminals harness the power of AI to create code that can be used in a ransomware attack.
Shykevich's team began investigating the potential for AI to lend itself to cybercrime in December 2021. Using the AI's large language model, they created phishing emails and malicious code. As it became clear that ChatGPT could be used for illegal purposes, Shykevich told Insider, the team wanted to see whether their findings were "theoretical" or whether they could find "the bad guys using it in the wild."
Because it's hard to tell whether a harmful email that lands in someone's inbox was written with ChatGPT, his team turned to the dark web to see how the chatbot was being used.
On December 21, they found their first piece of evidence: cybercriminals were using the chatbot to create a Python script that could be used in a malware attack. The code had some errors, Shykevich said, but much of it was correct.
“What’s interesting is that these guys who posted it had never developed anything before,” he said.
Shykevich said ChatGPT and Codex, an OpenAI service that can write code for developers, could allow less experienced people to pass themselves off as capable developers.
The misuse of ChatGPT – which now powers Bing’s already troubling new chatbot – is worrying cybersecurity experts, who see the potential for chatbots to aid phishing, malware and hacking attacks.
Justin Fier, director of Cyber Intelligence & Analytics at Darktrace, a cybersecurity company, told Insider that when it comes to phishing attacks, the barrier to entry is already low, but ChatGPT could make it even easier for people to efficiently create dozens of targeted scam emails – as long as they craft good prompts.
"For phishing, it's all about volume – imagine 10,000 emails, very targeted. And now, instead of 100 positive clicks, I have three or 4,000," Fier said, referring to a hypothetical number of people who might click on a phishing email, which is used to trick users into giving up personal information, such as banking passwords. "It's huge, and it's all about that target."
“A science fiction movie”
In early February, cybersecurity firm BlackBerry released a survey of 1,500 information technology experts, 74% of whom said they fear ChatGPT could contribute to cybercrime.
The survey also found that 71% believed ChatGPT may already be used by nation states to attack other countries through hacking and phishing attempts.
“It has been well documented that people with malicious intent are testing the waters, but over the course of this year we expect to see hackers better understand how to successfully use ChatGPT for nefarious purposes,” Shishir Singh, chief technology officer of cybersecurity at BlackBerry, wrote in a press release.
Singh told Insider that those fears stem from the rapid advances in AI over the past year. Experts said advances in large language models – which are now better able to mimic human speech – have moved faster than expected.
Singh described the rapid innovation as something out of a “sci-fi movie.”

“Everything we’ve seen in the last 9 to 10 months, we’ve only seen in Hollywood,” Singh said.
Cybercrime uses could be a liability for OpenAI
As cybercriminals begin to add things like ChatGPT to their toolkit, experts like former federal prosecutor Edward McAndrew are wondering if companies would bear some responsibility for these crimes.
For example, McAndrew, who has worked with the Department of Justice investigating cybercrime, pointed out that if ChatGPT, or a similar chatbot, advises someone to commit a cybercrime, that could create liability for the companies behind such chatbots.
In dealing with illegal or criminal content on their sites from third-party users, most technology companies cite Section 230 of the Communications Decency Act of 1996. The law states that providers of sites that allow people to post content – like Facebook or Twitter – are not responsible for the speech on their platforms.
However, since the speech comes from the chatbot itself, McAndrew said the law may not protect OpenAI from civil suits or prosecution — although open-source versions of the technology could make it harder to link cybercrimes to OpenAI.
The scope of legal protections for tech companies under Section 230 is also being challenged this week in the Supreme Court by the family of a woman killed by ISIS terrorists in 2015. The family argues that Google should be held accountable for its algorithm promoting extremist videos.
McAndrew also said ChatGPT could provide a “treasure trove of information” for those tasked with gathering evidence of such crimes, if they were able to subpoena companies like OpenAI.
“These are really interesting questions that are years away,” McAndrew said, “but as we’ve seen, it’s been true since the dawn of the internet: criminals are some of the earliest adopters. And we’re seeing that again with a lot of the AI tools.”
Faced with these questions, McAndrew said he envisions a policy debate about how the United States — and the world at large — will set parameters for AI and tech companies.
In the BlackBerry survey, 95% of IT professionals surveyed said governments should be responsible for creating and enforcing regulations.
McAndrew said the task of regulating AI could be difficult, because no single agency or level of government is exclusively responsible for creating mandates for the AI industry, and the technology extends beyond American borders.
“We’re going to have to have international coalitions and international norms around cyber behavior, and I expect that will take decades to develop if we’re ever able to develop it.”
The technology still isn’t perfect for cybercriminals
One thing about ChatGPT that could make cybercrime more difficult is that it’s known to be confidently wrong — which could pose a problem for a cybercriminal trying to craft an email meant to impersonate someone else, experts told Insider. The code that Shykevich and his colleagues discovered on the dark web contained errors that had to be fixed before it could contribute to a scam.
Additionally, OpenAI continues to add guardrails to ChatGPT to deter illegal activity, though these guardrails can often be bypassed with the right prompts. Shykevich pointed out that some cybercriminals are now looking at ChatGPT’s API models — developer-facing versions of the service that don’t carry the same content restrictions as the web interface.
Shykevich also said that, at this stage, ChatGPT cannot help create sophisticated malware or build fake websites that convincingly pose as, for example, a leading bank’s site.
However, this may one day become a reality, as the AI arms race among tech giants could accelerate the development of better chatbots, Shykevich told Insider.
“I’m more concerned about the future and now it looks like the future is not 4-5 years away but more like a year or two away,” Shykevich said.
OpenAI did not immediately respond to Insider’s request for comment.