
How researchers broke ChatGPT and what it could mean for future AI development

July 27, 2023 Hi-network.com
Supatman/Getty Images

As many of us grow accustomed to using artificial intelligence tools daily, it's worth remembering to keep our questioning hats on. Nothing is completely safe and free from security vulnerabilities. Still, companies behind many of the most popular generative AI tools are constantly updating their safety measures to prevent the generation and proliferation of inaccurate and harmful content. 

Researchers at Carnegie Mellon University and the Center for AI Safety teamed up to find vulnerabilities in AI chatbots like ChatGPT, Google Bard, and Claude -- and they succeeded. 

Also: ChatGPT vs Bing Chat vs Google Bard: Which is the best AI chatbot?

In a research paper examining the vulnerability of large language models (LLMs) to automated adversarial attacks, the authors demonstrated that even a model said to be resistant to attacks can be tricked into bypassing its content filters and producing harmful information, misinformation, and hate speech. This leaves these models open to misuse.

Examples of harmful content generated by OpenAI's ChatGPT, Anthropic AI's Claude, Google's Bard, and Meta's LLaMa 2. 

Screenshots: Andy Zou, Zifan Wang, J. Zico Kolter, Matt Fredrikson Image composition: Maria Diaz/

"This shows -- very clearly -- the brittleness of the defenses we are building into these systems," Aviv Ovadya, a researcher at the Berkman Klein Center for Internet & Society at Harvard, told The New York Times. 

The authors used an open-source AI system to target the black-box LLMs from OpenAI, Google, and Anthropic for the experiment. These companies have created foundational models on which they've built their respective AI chatbots, ChatGPT, Bard, and Claude. 

Since the launch of ChatGPT last fall, some users have looked for ways to get the chatbot to generate malicious content. This led OpenAI, the company behind GPT-3.5 and GPT-4, the LLMs used in ChatGPT, to put stronger guardrails in place. This is why you can't go to ChatGPT and ask it questions about illegal activities, hate speech, or topics that promote violence, among others. 

Also: GPT-3.5 vs GPT-4: Is ChatGPT Plus worth its subscription fee?

The success of ChatGPT pushed more tech companies to jump on the generative AI bandwagon and create their own AI tools, like Microsoft with Bing, Google with Bard, Anthropic with Claude, and many more. The fear that bad actors could leverage these AI chatbots to spread misinformation, combined with the lack of universal AI regulation, led each company to create its own guardrails. 

A group of researchers at Carnegie Mellon decided to test the strength of these safety measures. But you can't simply ask ChatGPT to forget all its guardrails and expect it to comply; a more sophisticated approach was necessary.

The researchers tricked the AI chatbots into not recognizing harmful inputs by appending a long string of characters to the end of each prompt. These characters worked as a disguise for the prompt: the chatbot still processed the underlying request, but the extra characters kept the guardrails and content filter from recognizing it as something to block or modify, so the system generated a response it normally wouldn't. 
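The underlying paper frames this as an automated search for an "adversarial suffix": starting from an arbitrary string, the attack repeatedly tweaks it, keeps any change that makes an open-source model more likely to comply with the request, and then transfers the finished suffix to the commercial chatbots. Below is a minimal, purely illustrative Python sketch of that kind of greedy search loop. The `score_compliance` function here is a hypothetical stand-in (a deterministic toy score), not the authors' actual objective or any vendor API, and the character-level mutation is a simplification of the paper's token-level method.

```python
import random
import string

def score_compliance(prompt: str) -> float:
    """Hypothetical stand-in: in the real attack this would measure how likely
    an open-source LLM is to give an affirmative response to the prompt.
    Here it is just a deterministic toy score so the sketch runs on its own."""
    rng = random.Random(hash(prompt) % (2**32))
    return rng.random()

def greedy_suffix_search(base_prompt: str, suffix_len: int = 20, steps: int = 200) -> str:
    """Greedily mutate one character of the suffix at a time, keeping any
    change that raises the compliance score -- a crude analogue of the
    paper's token-level greedy search."""
    alphabet = string.ascii_letters + string.punctuation
    suffix = list(random.choices(alphabet, k=suffix_len))
    best = score_compliance(base_prompt + " " + "".join(suffix))
    for _ in range(steps):
        pos = random.randrange(suffix_len)
        old = suffix[pos]
        suffix[pos] = random.choice(alphabet)  # propose a single-character change
        score = score_compliance(base_prompt + " " + "".join(suffix))
        if score > best:
            best = score          # keep the mutation
        else:
            suffix[pos] = old     # revert it
    return "".join(suffix)

if __name__ == "__main__":
    print("Example suffix:", greedy_suffix_search("An ordinary placeholder prompt"))
```

Because the loop only needs a score for each candidate, the same structure works whether the score comes from a toy function, as above, or from gradients and token probabilities of an open model, which is what makes the attack automatable at scale.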

"Through simulated conversation, you can use these chatbots to convince people to believe disinformation," Matt Fredrikson, a professor at Carnegie Mellon and one of the paper's authors, told the Times. 

Also: WormGPT: What to know about ChatGPT's malicious cousin

As the AI chatbots misinterpreted the nature of the input and provided disallowed output, one thing became evident: There's a need for stronger AI safety methods, with a possible reassessment of how the guardrails and content filters are built. Continued research and discovery of these types of vulnerabilities could also accelerate the development of government regulation for these AI systems. 

"There is no obvious solution," Zico Kolter, a professor at Carnegie Mellon and author of the report, told the Times. "You can create as many of these attacks as you want in a short amount of time."

Before releasing this research publicly, the authors shared it with Anthropic, Google, and OpenAI, who all asserted their commitment to improving the safety methods for their AI chatbots. They acknowledged more work needs to be done to protect their models from adversarial attacks. 

Artificial Intelligence

  • Generative AI will far surpass what ChatGPT can do. Here's everything on how the tech advances
  • ChatGPT's new web browsing feature is a big disappointment. Use this plugin instead
  • What is Amazon Bedrock? 4 ways it can help businesses use generative AI tools
  • Can generative AI solve computer science's greatest unsolved problem?

