
The 5 biggest risks of generative AI, according to an expert

April 25, 2023 Hi-network.com
Image: Getty Images/imaginima

Generative AI tools, such as ChatGPT, have revolutionized how we interact with and view AI. Tasks like writing, coding, and applying for jobs have become much easier and quicker. With all the positives, however, there are some serious risks.

A major concern with AI is trust and security, which has even led some countries to ban ChatGPT outright or to reconsider their AI policies to protect users from harm.

Also: This new technology could blow away GPT-4 and everything like it

According to Gartner analyst Avivah Litan, some of the biggest risks of generative AI concern trust and security and include hallucinations, deepfakes, data privacy, copyright issues, and cybersecurity problems.

1. Hallucinations

Hallucinations refer to the errors that AI models are prone to make because, however advanced they are, they are not human and rely on their training data to produce answers.

If you've used an AI chatbot, then you have probably experienced these hallucinations through a misunderstanding of your prompt or a blatantly wrong answer to your question.

Also: ChatGPT's intelligence is zero, but it's a revolution in usefulness, says AI expert

Litan says the training data can lead to biased or factually incorrect responses, which can be a serious problem when people are relying on these bots for information. 

"Training data can lead to biased, off-base or wrong responses, but these can be difficult to spot, particularly as solutions are increasingly believable and relied upon," says Litan. 

2. Deepfakes

A deepfake uses generative AI to create fake videos, photos, and voice recordings that mimic the image and likeness of a real individual.

Perfect examples are the AI-generated viral photo of Pope Francis in a puffer jacket or the AI-generated Drake and the Weeknd song, which garnered hundreds of thousands of streams. 

"These fake images, videos and voice recordings have been used to attack celebrities and politicians, to create and spread misleading information, and even to create fake accounts or take over and break into existing legitimate accounts," says Litan. 

Also: How to spot a deepfake? One simple trick is all you need

Like hallucinations, deepfakes can contribute to the massive spread of fake content and misinformation, which is a serious societal problem.

3. Data privacy

Privacy is also a major concern with generative AI since user data is often stored for model training. This concern was the overarching factor that pushed Italy to ban ChatGPT, with regulators claiming OpenAI was not legally authorized to gather user data.

"Employees can easily expose sensitive and proprietary enterprise data when interacting with generative AI chatbot solutions," says Litan. "These applications may indefinitely store information captured through user inputs, and even use information to train other models -- further compromising confidentiality."

Also: AI may compromise our personal information

Litan highlights that, in addition to compromising user confidentiality, the stored information also runs the risk of "falling into the wrong hands" in the event of a security breach.

4. Cybersecurity

The advanced capabilities of generative AI models, such as coding, can also fall into the wrong hands, causing cybersecurity concerns.

"In addition to more advanced social engineering and phishing threats, attackers could use these tools for easier malicious code generation," says Litan. 

Also: The next big threat to AI might already be lurking on the web

Litan says that even though vendors of generative AI solutions typically assure customers their models are trained to reject malicious requests, these suppliers don't give end users a way to verify all the security measures that have been implemented.

5. Copyright issues

Copyright is a big concern because generative AI models are trained on massive amounts of internet data, which they then use to generate outputs.

This training process means that works never explicitly shared by their original creators can end up being used to generate new content.

Copyright is a particularly thorny issue for AI-generated art of any form, including photos and music. 

Also: How to use Midjourney to generate amazing images

To create an image from a prompt, image-generating tools such as DALL-E draw on the large database of images they were trained on. As a result, the final product might include aspects of an artist's work or style without attribution.
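
The prompt-to-image flow the paragraph describes is visible in a few lines of code. The sketch below assumes the official OpenAI Python SDK (openai >= 1.0) and an OPENAI_API_KEY set in the environment; it is a minimal example, not a statement about how any particular model handles training data internally.

```python
from openai import OpenAI  # assumes the official openai package (>= 1.0) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A single text prompt is all the model receives. The generated image is
# shaped by whatever images were in the training data, but the response
# gives no indication of which works or artists influenced the result.
result = client.images.generate(
    model="dall-e-3",
    prompt="a watercolor painting of a lighthouse at dusk",
    n=1,
    size="1024x1024",
)

print(result.data[0].url)  # URL of the generated image; no provenance metadata
```

That lack of provenance in the output is exactly what makes attribution, and therefore copyright, so hard to untangle.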

Since the exact works that generative AI models are trained on are not explicitly disclosed, it is hard to mitigate these copyright issues. 

What's next?

Despite the many risks associated with generative AI, Litan doesn't think organizations should stop exploring the technology. Instead, they should create an enterprise-wide strategy that targets AI trust, risk, and security management.

"AI developers must urgently work with policymakers, including new regulatory authorities that may emerge, to establish policies and practices for generative AI oversight and risk management," says Litan. 

Artificial Intelligence

  • The impact of artificial intelligence on software development? Still unclear
  • Android 14's AI-generated wallpapers are super fun. Here's how to create them
  • AI aims to predict and fix developer coding errors before disaster strikes
  • Generative AI is everything, everywhere, all at once

Tags: Artificial Intelligence, Innovation
