Inscrivez-vous maintenant pour un meilleur devis personnalisé!

Why Biden's AI order is hamstrung by unavoidable vagueness

31 oct. 2023 Hi-network.com
Chip Somodevilla/Getty Images

US President Joseph Biden on Monday issued an executive order on artificial intelligence that -- the order states -- "establishes new standards for AI safety and security" and a variety of other laudable goals. 

The order's pitfalls, unfortunately, are many, mostly having to do with an unavoidable vagueness. 

Also:The ethics of generative AI: How we can harness this powerful technology

Granted, the Biden Administration's order is somewhat more specific and concrete than some other government position statements, such as one issued in March by the United Kingdom's Secretary of State for Science, Innovation, and Technology,  which is so general as to be potentially meaningless. 

But the Biden plan also leaves a lot of loopholes that will be hard to close. One proposal is to require companies to report to the government on "red-teaming" efforts, the process of assessing dangers in AI programs. But that directive doesn't explicitlyrequirered-teaming. It seems to leave it up to the companies whether or not they will red-team at all. The companies "must share the results of all red-team safety tests," it states. But how often and how extensively companies must test is not clear. 

There is a reference to the US Department of Commerce's National Institute of Standards and Technology, NIST, setting "rigorous standards for extensive red-team testing to ensure safety before public release." But does that mean red-testing will be required? Again, it's not clear. 

There is language about countering deep fakes by watermarking generative AI content. "The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content," the directive states. This is not a bad idea, except that malicious actors will obviously seek to avoid such watermarking, and it's not clear what will compel them otherwise, or how the watermarking of legitimately generated content might curtail illegitimately generated content. 

Also: With GPT-4, OpenAI opts for secrecy versus disclosure

There is an extensive discussion of protecting privacy, but it's entirely open-ended and vague. "The President calls on Congress to pass bipartisan data privacy legislation to protect all Americans, especially kids," the text reads. But the willingness to protect kids, while laudable, is so general it's not clear what policy choices should emerge from it. Clearly, both sides of the aisle in US politics can get behind child protection, but that hasn't helped produce much legislative substance in the past. It's not clear how putting AI in the mix will change that. 

The language about equal rights and protecting against discrimination is similarly open-ended. The executive order refers to "developing best practices on the use of AI in sentencing, parole and probation, pretrial release and detention." But best practices are simply a placeholder in this case. Probably, such practices emerge precisely from the many cases that will go on the docket and reveal how AI can help or harm outcomes that are just. 

The positive assertions of the initiative, such as driving research breakthroughs in AI -- "Catalyze AI research across the United States" -- are similarly vague. 

Also:Organizations are fighting for the ethical adoption of AI. Here's how you can help

The over-arching problem the Biden administration is up against -- the same problem that affects all regulators -- is that AI is a blanket term so broad that it covers just about anything, which makes it hard to be specific about AI. 

The term artificial intelligence was invented by a young computer scientist, John McCarthy, in 1956. At the time, it was simply a branding exercise on his part, a way to get grant funding. It didn't refer to anything specific. 

Years later, McCarthy's collaborator, Marvin Minsky of MIT, told an interviewer,  "I never used the word AI." The term, he said, had no real meaning, it was simply applied to anyone trying to "get machines to do more." 

"AI is just the most forward-looking part of computer science," said Minsky. "That's the definition that makes sense in terms of the history." 

Also:OpenAI assembles team of experts to fight 'catastrophic' AI risks - including nuclear war

In other words, trying to regulate AI is trying to regulate something that is so broad -- all of the latest computer science -- that it risks being meaningless. 

The Biden executive order comes at a time when numerous parties who are actually involved in the work of building computer systems are sounding an alarm. They include companies trying to be on the right side of public opinion and also get out in front of any potential regulation, such as OpenAI, with its team of experts approach.

The efforts also include concerned scientists working to avert an apparent Oppenheimer moment, a catastrophic release of deadly technology. That is the stated intention of the authors of the effort Managing AI Risks, including Turing Award-winning AI pioneers Geoffrey Hinton and Yoshua Bengio.

Also: There's a big risk in not knowing what OpenAI is building in the cloud, warn Oxford scholars

Those parties, both corporations and concerned scientists, know very well what they're working on. Unlike the government, they are not vague in their understanding. But there's a huge gulf between their very specific concerns and the government discussion that is broad and very shallow. 

Some of the responsibility for the gap lies with the scientists themselves, who have done far too little to educate the public about what this "AI" stuff is. Responsibility also rests with corporations such as OpenAI that have increasingly shrouded what they do in secrecy. 

Crossing that comprehension gap will be essential to producing regulation that has any teeth to it. It's not yet clear what will bridge that gap.

Artificial Intelligence

AI at the edge: Fast times ahead for 5G and the Internet of ThingsAI pioneer Daphne Koller sees generative AI leading to cancer breakthroughsWorried about AI gobbling up your job? Start doing these 3 things nowWith AI, organizations are now seeing software developers as great collaborators
  • AI at the edge: Fast times ahead for 5G and the Internet of Things
  • AI pioneer Daphne Koller sees generative AI leading to cancer breakthroughs
  • Worried about AI gobbling up your job? Start doing these 3 things now
  • With AI, organizations are now seeing software developers as great collaborators

tag-icon Tags chauds: Intelligence artificielle Innovation et Innovation

Copyright © 2014-2024 Hi-Network.com | HAILIAN TECHNOLOGY CO., LIMITED | All Rights Reserved.