
Microsoft Vice Chair and President Brad Smith's testimony before the Senate on AI

13 Sept. 2023 | Hi-network.com

Microsoft Vice Chair and President Brad Smith testified before a Senate Judiciary subcommittee in a hearing titled 'Oversight of A.I.: Legislating on Artificial Intelligence.' In his written statement, Smith outlines Microsoft's proposed principles for shaping legislation to promote the safe, secure, and reliable development of AI.

Read Brad Smith's full testimony.

Brad Smith welcomed the framework released by Senators Blumenthal and Hawley, which he described as a strong and positive step towards effectively regulating AI. He argued that further regulatory work should build on this framework, a point he elaborated in his recommendations.

The Blumenthal and Hawley framework builds on other federal efforts, like the White House AI commitments unveiled in July and the bipartisan AI Insight Forums, providing the constructive interplay needed between the executive and legislative branches. The centrepiece of the Blumenthal-Hawley framework is the creation of a federal oversight body that would license new AI models that companies seek to put on the market for either general consumption or for more specific purposes such as facial-recognition software.

Read about the key elements of the Blumenthal-Hawley framework:
  • Establish a Licensing Regime Administered by an Independent Oversight Body: Companies developing sophisticated general-purpose AI models or models used in high-risk situations should be required to register with an independent oversight body. This body would have the authority to audit companies seeking licenses and cooperate with other enforcers such as state Attorneys General.
  • Ensure Legal Accountability for Harms: The framework proposes that Congress should require AI companies to be held liable when their models and systems breach privacy, violate civil rights, or cause other harms.
  • Promote Transparency and Protect Personal Data: The framework emphasizes the importance of transparency in AI systems and the protection of consumers' personal data.

'As the legislative process moves forward, I hope Congress will include three goals in the list of priorities that deserve the most attention.' Smith outlined those three goals as follows.

Congress should prioritise AI safety and security.

The Blumenthal-Hawley framework addresses these needs in a strong manner, including by proposing a licensing regime under an independent oversight body with a risk-based approach for AI models and uses. Microsoft supports this approach.

Congress should ensure that AI is used in a manner that complies with longstanding legal protections for consumers and citizens. 

This should include the protection of privacy, civil rights, and the needs of children, as well as safeguards against dangerous deepfakes and election interference. The Blumenthal-Hawley framework addresses these issues while considering the roles of AI developers and deployers, aiming for a practical balance between technology advancement and citizen protection.

Congress should ensure that AI is put to good use to build a government that can better serve our citizens.

  • AI for government improvement: AI should be leveraged to build a government that better serves citizens. Smith highlights the opportunity to use AI to improve healthcare, education, public services, and government efficiency.
  • Expanding the framework: The hope is that the Blumenthal-Hawley framework will expand to address both risks and opportunities associated with AI, particularly in building a more efficient government.

Further, his testimony explored the fundamental principles that Microsoft believes should shape AI legislation. In his view, these principles are vital for creating a regulatory framework that ensures AI's responsible and effective use.

  1. Promote accountability in AI development and deployment: Microsoft believes that accountability is crucial in AI systems. This means effective human oversight of AI systems and accountability for developers and deployers to ensure the rule of law is upheld.
  2. Build on existing regulatory efforts: Initiatives like the White House voluntary commitments and the NIST AI Risk Management Framework provide a foundation for AI safety and should be considered in regulatory efforts.
  3. Require Safety Brakes for Critical Infrastructure AI: Highly capable AI models used in critical infrastructure should have 'safety brakes' to ensure human control in case of failures.
  4. KY3C: Know Your Customer, Cloud, and Content: Microsoft proposes a regulatory framework that requires actors in the AI value chain to 'know' their customers, the cloud infrastructure they use, and the content they produce. This helps ensure responsible AI use and guards against misuse.
  5. Ensure Regulatory Framework Matches AI Technology Architecture: Regulations should align with the different layers of AI technology, including applications, AI models, and cloud infrastructure. Highly capable AI models may require specific licensing and oversight to ensure safety and security.

Tags: Artificial intelligence, Consumer protection, Critical internet resources
