
US forms task force to explore guardrails for AI

Feb 26, 2024 | Hi-network.com

In a new effort to push for AI regulation, members of the US House of Representatives have formed a bipartisan task force to explore legislation addressing the growing concerns around AI adoption.

"The task force will seek to produce a comprehensive report that will include guiding principles, forward-looking recommendations, and bipartisan policy proposals developed in consultation with committees of jurisdiction," said the press release announcing the task force.

The task force will explore "guardrails that may be appropriate to safeguard the nation against current and emerging threats," the press release said.

A responsible and ethical tech strategy is critical to realizing the long-term benefits of AI, pointed out Charlie Dai, vice president and principal analyst at Forrester. "While the potential legislation efforts will urge enterprises and tech vendors to rebalance their AI investment priorities, which might slow down the pace of innovation from a business perspective in the short term, it will substantially foster AI advancement in terms of security, privacy, ethics, and sustainability, which will be critical for public trust in AI in the long run."

According to Counterpoint Research senior analyst Akshara Bassi, "AI regulation would come into play when it becomes part of active decision-making. So far, we are still using rule-based intelligence to complement decision-making. As AI becomes more evolved and sophisticated, regulations will help give AI models a structure and help in clear demarcation of boundaries, especially related to data sharing, privacy, and copyrights."

Lack of clear regulations may be counterproductive

The US has taken several steps toward regulations that harness AI for economic growth while addressing the concerns related to its adoption. For instance, the Federal Communications Commission declared the use of AI-generated voices in robocalls illegal earlier this month.

Recently, the US government announced the establishment of the US AI Safety Institute (AISI) under the National Institute of Standards and Technology (NIST) to harness the potential of AI while mitigating its risks. Several major technology firms, including OpenAI, Meta, Google, Microsoft, Amazon, Intel, and Nvidia, joined the consortium to ensure the safe development of AI.

Even so, a lack of clear, well-defined regulations could hamper the country's growth in AI.

Delays in drafting comprehensive legislation may deter enterprises from deploying the technology to grow their businesses.

"In 2023 alone, 190 bills were introduced at the state level to regulate AI, and 14 became law. At the federal level, the Federal Trade Commission (FTC) has begun to enforce existing laws with new powers from executive orders as well as more attention from FTC leadership. This could cause a dampening effect on enterprise AI innovation and strategy," said a recent blog post by Michele Goetz, principal analyst, and Alla Valente, senior analyst at Forrester. 

Recently, the EU became the first major power to introduce laws governing the use of AI. Several other countries, including the UK and Australia, are working to develop regulations and policies so they can confidently use AI to grow their economies while protecting themselves from potential risks.

The launch of OpenAI's ChatGPT in November 2022 was disruptive and led to a significant increase in the adoption of the technology. At the same time, it raised several cybersecurity and data privacy concerns, prompting countries to accelerate AI regulation.

