
This is why AI-powered misinformation is the top global risk

Jan 11, 2024 | Hi-network.com

With many nations expected to hold elections during the next two years, the use of misinformation and disinformation -- powered by artificial intelligence (AI) -- will be the most severe global risk. 

As citizens continue to worry about the rising cost of living, the risks that AI-powered misinformation poses to societal cohesion will dominate the landscape this year, according to the Global Risks Report 2024 released by the World Economic Forum (WEF). "The nexus between falsified information and societal unrest will take center stage amid elections in several major economies that are set to take place in the next two years," the report stated.

Also: 4 ways to overcome your biggest worries about generative AI

During this period, misinformation and disinformation will emerge as the leading global risk, followed by extreme weather events and societal polarization, with cyber insecurity and interstate armed conflict rounding out the top five.

Misinformation and disinformation ranks as the top risk in India, the sixth-highest risk in the US, and the eighth-highest in the European Union.

Also: AI safety and bias: Untangling the complex chain of AI training

WEF notes that the disruptive capabilities of manipulated information are accelerating rapidly, fueled by open access to increasingly sophisticated technologies and by deteriorating trust in information and institutions.

Over the next couple of years, a wide set of actors will capitalize on the explosion of synthetic content, amplifying societal divisions, ideological violence, and political repression, WEF said. 

With almost three billion citizens heading to the polls, including in India, Indonesia, the US, and the UK, the widespread use of misinformation and disinformation, as well as the tools to disseminate them, could undermine the legitimacy of newly elected governments.

Niche skill sets are no longer required to access tools with user-friendly interfaces and large-scale AI models, WEF noted. This access has already led to an explosion in falsified information and "synthetic" content, such as sophisticated voice cloning and counterfeit websites.

"Synthetic content will manipulate individuals, damage economies, and fracture societies in numerous ways over the next two years," it said. "Falsified information could be deployed in pursuit of diverse goals, from climate activism to conflict escalation."

Also: AI in 2023: A year of breakthroughs that left no human thing unchanged

New classes of crime will also proliferate, such as non-consensual deepfake pornography and stock market manipulation, WEF added.

All of these problems might lead to violent protests, hate crimes, civil conflicts, and terrorism, the non-governmental organization cautioned. 

To combat the risks of AI-generated information, some countries have already begun deploying new and evolving regulations that target both hosts and creators of online information and illegal content. 

Nascent regulation of generative AI is also likely to complement such efforts, it added, pointing to requirements in China to watermark AI-generated content as an example. Such rules might help identify false information, including misinformation spread unintentionally through AI-hallucinated content.
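To illustrate how such watermarking can work, here is a minimal sketch of a "green list" scheme of the kind proposed in academic LLM-watermarking research; it is not China's actual specification, and the key, constants, and toy logits are all hypothetical. A keyed hash secretly partitions the vocabulary at each step, and generation is quietly nudged toward the "green" half.

```python
import hashlib
import math
import random

GAMMA = 0.5   # fraction of the vocabulary on the secret "green list"
DELTA = 2.0   # logit bonus given to green tokens during generation
SECRET_KEY = b"demo-watermark-key"  # hypothetical key shared with the detector

def is_green(token_id: int, prev_token_id: int) -> bool:
    """Keyed pseudorandom partition of the vocabulary. Seeding on the
    previous token makes the green list change at every position."""
    digest = hashlib.sha256(
        SECRET_KEY + prev_token_id.to_bytes(4, "big") + token_id.to_bytes(4, "big")
    ).digest()
    return int.from_bytes(digest[:8], "big") / 2**64 < GAMMA

def biased_sample(logits: list[float], prev_token_id: int) -> int:
    """Sample the next token after adding DELTA to every green token's logit."""
    adjusted = [
        logit + (DELTA if is_green(i, prev_token_id) else 0.0)
        for i, logit in enumerate(logits)
    ]
    peak = max(adjusted)
    weights = [math.exp(x - peak) for x in adjusted]  # softmax, numerically safe
    return random.choices(range(len(adjusted)), weights=weights, k=1)[0]

# Toy demo: even a 10-token vocabulary with uniform logits ends up
# preferring green tokens -- the statistical trace detectors look for.
next_token = biased_sample([0.0] * 10, prev_token_id=3)
```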

Generally, however, the speed and effectiveness of regulation are unlikely to match the pace of development, WEF said.

It notes that recent technological advances have enhanced the volume, reach, and efficacy of falsified information, with flows that are more difficult to track, attribute, and control. Social media companies that are working to ensure platform integrity will also likely be overwhelmed due to multiple overlapping campaigns. 

Also: We're not ready for the impact of generative AI on elections

In addition, disinformation will be increasingly personalized to its recipients and targeted to specific groups, such as minority communities, and disseminated through more opaque messaging platforms, such as WhatsApp or WeChat.

WEF also notes that it is increasingly difficult to discern AI-generated from human-generated content, even for detection mechanisms and tech-savvy individuals. However, some countries are attempting to address this challenge.
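One partial exception is watermarked output. Continuing the hypothetical green-list sketch above (and reusing its is_green helper, GAMMA, and key), anyone holding the key can test a text for an improbable excess of green tokens, with no AI-versus-human classifier needed:

```python
import math

def watermark_z_score(token_ids: list[int]) -> float:
    """z-score against the null hypothesis 'this text carries no watermark'.
    Reuses is_green() and GAMMA from the generation sketch above."""
    n = len(token_ids) - 1  # scored positions (each token needs a predecessor)
    greens = sum(is_green(tok, prev) for prev, tok in zip(token_ids, token_ids[1:]))
    # For unwatermarked text, greens ~ Binomial(n, GAMMA), so z stays near 0.
    return (greens - n * GAMMA) / math.sqrt(n * GAMMA * (1 - GAMMA))

# A z-score above roughly 4 is strong evidence the text was watermarked;
# ordinary human-written text should hover near zero.
```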

Singapore creating facility for deepfake detection 

Singapore this week announced plans to invest SG$20 million (US$15.04 million) in an online trust and safety research program, which will include a center tasked with building tools to curb harmful online content. Led by its Ministry of Communications and Information (MCI), the initiative is slated to run through 2028.

The new Centre for Advanced Technologies in Online Safety will aim to gather researchers and organizations from the country's online trust and safety sector to build "a vibrant ecosystem for a safer internet," MCI said. Scheduled for launch during the first half of 2024, the facility will focus on building and customizing tools to detect harmful content, such as deepfakes and non-factual claims.

The center will seek to identify societal vulnerabilities and develop possible interventions, such as flagging or correcting misinformation, that could reduce online users' susceptibility to content deemed harmful. The facility will also test digital trust technologies, such as watermarking and content authentication. 
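Content authentication of the kind the center plans to test usually binds a cryptographic signature to a file's exact bytes, in the spirit of provenance standards such as C2PA. Here is a minimal sketch using the Python cryptography package; the in-memory key handling is illustrative only, not a production design:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A publisher signs the media bytes once, at creation or editing time.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

media_bytes = b"...raw image or video bytes..."
signature = private_key.sign(media_bytes)

def is_authentic(content: bytes, sig: bytes) -> bool:
    """Verify that content is byte-identical to what the publisher signed."""
    try:
        public_key.verify(sig, content)
        return True
    except InvalidSignature:
        return False

print(is_authentic(media_bytes, signature))         # True
print(is_authentic(media_bytes + b"!", signature))  # False: content was altered
```

Any single-bit change to the file invalidates the signature, which is what makes this approach attractive for flagging tampered or synthetic media that masquerades as an original.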

Tools developed at the center will be put forward for trial and adoption. 

Also: Singapore must take caution with AI use, review approach to public trust

MCI said discussions with local researchers and technology developers are underway, with more than 100 professionals from academia and the public and private sectors already part of its community network.

This network includes 30 participants, such as scientists, engineers, and operations staff, who are involved in work that will be carried out at the new center. 

Global populations should hope such development efforts yield effective detection tools, because the WEF report outlines two troubling ways unaddressed misinformation could play out.

"Some governments and platforms, aiming to protect free speech and civil liberties, may fail to act to effectively curb falsified information and harmful content, making the definition of 'truth' increasingly contentious across societies," WEF said in its report. "State and non-state actors alike may leverage false information to widen fractures in societal views, erode public confidence in political institutions, and threaten national cohesion and coherence."

On the flip side, some nations may choose to address the problem through tighter control of information.

"As truth is undermined, the risk of domestic propaganda and censorship will also rise in turn," WEF said. "In response to mis- and disinformation, governments could be increasingly empowered to control information based on what they determine to be 'true'. Freedoms relating to the internet, press, and access to wider sources of information that are already in decline, risk descending into broader repression of information flows across a wider set of countries." 

