Anthropic, Google, Microsoft, and OpenAI, four of the most influential AI companies, have joined forces to establish the Frontier Model Forum, an initiative to advance cutting-edge AI safety research and set guidelines for responsible AI development across the industry. The forum's mission is to ensure that AI technology advances safely and under human control, with membership reserved for companies actively developing large-scale machine-learning models that surpass existing capabilities. Its focus will therefore be on addressing the risks posed by highly capable AI, rather than on current regulatory concerns such as copyright and privacy. The primary goals of the group include:

- Advancing AI safety research to promote the responsible development of frontier models and minimise potential risks.
- Identifying best practices for developing and deploying frontier models.
- Sharing knowledge with policymakers, academics, and civil society.
- Supporting efforts to build AI applications that address society's biggest challenges.
Why does it matter?
By formulating their own guidelines, big tech aims to shape the development and deployment of AI through industry decisions rather than strict regulatory measures. This approach follows historical practice in the US, which has favoured industry self-regulation over stringent government rules when dealing with new technologies and big tech.
Nevertheless, sceptics contend that these companies might be leveraging the initiative to evade more rigorous oversight. They instead advocate stricter regulation akin to that proposed by the EU, arguing that such measures are necessary to ensure close scrutiny of AI development.