ChatGPT and other chatbots ‘could be used to help launch cyberattacks’, study warns


ChatGPT can be tricked into producing malicious code that can be used to launch cyber attacks, a study has found.

OpenAI’s tool and similar chatbots can create written content based on user commands after being trained on vast amounts of text data from around the web.

They are designed with safeguards to prevent misuse and to address issues such as bias.

As such, bad actors have turned to alternatives purposefully created to aid cybercrime, such as a dark web tool called WormGPT which experts have warned could help develop large-scale attacks.

But researchers at the University of Sheffield have warned that mainstream tools also have vulnerabilities that allow them to be tricked into helping to destroy databases, steal personal information and bring down services.

These include ChatGPT and a similar platform created by the Chinese company Baidu.

Computer science PhD student Xutan Peng, who co-led the study, said: “The risk of AIs like ChatGPT is that more and more people are using them as productivity tools, rather than as a conversational bot.


“This is where our research shows the vulnerabilities are.”



AI-generated code ‘may be harmful’

Just as these generative AI tools can inadvertently get their facts wrong when answering questions, they can also create potentially harmful computer code without realizing it.

Mr Peng suggested that a nurse could use ChatGPT to write code to navigate a database of patient records.

“Code produced by ChatGPT can in many cases be harmful to a database,” he said.

“The nurse in this scenario could cause serious data handling errors without even receiving a warning.”
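To illustrate the kind of risk described here (this is a hedged sketch, not code from the study), the snippet below shows how a plausible-looking, chatbot-generated database query can silently damage records. The table, the move_patient function and the inputs are hypothetical, and Python with the built-in sqlite3 module is assumed purely for demonstration.

```python
# Illustrative sketch only (not code from the study): how plausible-looking,
# chatbot-generated database code can silently damage records.
import sqlite3

# A tiny in-memory "patient records" table for the demonstration (hypothetical).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER PRIMARY KEY, name TEXT, ward TEXT)")
conn.executemany("INSERT INTO patients (name, ward) VALUES (?, ?)",
                 [("Alice", "A"), ("Bob", "B"), ("Carol", "A")])

def move_patient(name, new_ward):
    # The kind of snippet a chatbot might produce: the WHERE clause is built
    # by string concatenation, so an unexpected input changes its meaning.
    query = "UPDATE patients SET ward = '" + new_ward + "' WHERE name = '" + name + "'"
    conn.execute(query)
    conn.commit()

# Intended use works as expected...
move_patient("Alice", "C")

# ...but an input containing a quote rewrites the query itself
# (classic SQL injection) and moves *every* patient, with no warning raised.
move_patient("x' OR '1'='1", "Z")

print(conn.execute("SELECT name, ward FROM patients").fetchall())
# Every row now has ward 'Z'.

# The safer pattern is a parameterised query:
#   conn.execute("UPDATE patients SET ward = ? WHERE name = ?", (new_ward, name))
```

In this sketch the "serious data handling error" is exactly the kind a user might not notice: the code runs without raising any error, yet every record has been altered.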

During the investigation, the researchers themselves were able to create malicious code using Baidu’s chatbot.

The company has acknowledged the research and moved to address and fix the reported vulnerabilities.

Such concerns have resulted in calls for more transparency in how AI models are trained, so users become more aware of potential problems with the answers they provide.

Cybersecurity analytics firm Check Point has also urged companies to upgrade their protections as AI threatens to make attacks more sophisticated.

It will be a topic of conversation at the UK’s AI Safety Summit next week, where the government is inviting world leaders and industry giants to meet to discuss the opportunities and dangers of the technology.

