The UK's Competition and Markets Authority (CMA) has warned about the potential risks of artificial intelligence in its newly published review into AI foundation models.
Foundation models are AI systems trained on massive, unlabeled data sets. They underpin large language models such as OpenAI's GPT-4 and Google's PaLM, which power generative AI applications like ChatGPT, and can be applied to a wide range of tasks, from translating text to analyzing medical images.
The new report proposes a number of principles to guide the ongoing development and use of foundation models, drawing on input from 70 stakeholders, including developers, businesses, consumer and industry organizations, and academics, as well as publicly available information.
The proposed principles are:
- Accountability: developers and deployers of foundation models are accountable for the outputs provided to consumers.
- Access: ongoing, ready access to key inputs, without unnecessary restrictions.
- Diversity: sustained diversity of business models, including both open and closed.
- Choice: sufficient choice for businesses so they can decide how to use foundation models.
- Flexibility: the flexibility to switch between, or use multiple, foundation models according to need.
- Fair dealing: no anticompetitive conduct, including anticompetitive self-preferencing, tying, or bundling.
- Transparency: consumers and businesses are given information about the risks and limitations of content generated by foundation models so they can make informed choices.
While the CMA report highlights how people and businesses stand to benefit from well-developed, correctly implemented foundation models, it cautions that weak competition, or AI developers failing to comply with consumer protection law, could lead to societal harm. Examples given include citizens being exposed to "significant levels" of false and misleading information and to AI-enabled fraud.
The CMA also warned that, in the longer term, market dominance by a small number of firms could raise competition concerns, with established players using foundation models to entrench their position and deliver overpriced or poor-quality products and services.
"The speed at which AI is becoming part of everyday life for people and businesses is dramatic. There is real potential for this technology to turbo charge productivity and make millions of everyday tasks easier -but we can't take a positive future for granted," said Sarah Cardell, CEO of the CMA, in comments posted alongside the report.
"There remains a real risk that the use of AI develops in a way that undermines consumer trust or is dominated by a few players who exert market power that prevents the full benefits being felt across the economy."
The CMA said that as part of its program of engagement, it would continue to speak to a wide range of interested parties, including consumer groups, governments, other regulators, and leading AI foundation model developers such as Anthropic, Google, Meta, Microsoft, NVIDIA, and OpenAI.
The regulator will provide an update on its thinking, including how the principles have been received and adopted, in early 2024.
The CMA is just one regulator that the UK government has tasked with weighing in on the country's AI policy. In March, the government published a white paper setting out its guidelines for the "responsible use" of the technology.
However, to "avoid heavy-handed legislation which could stifle innovation," the government has opted to give responsibility for AI governance to sectoral regulators, who will have to rely on their existing powers in the absence of any new laws.
"The CMA has shown a laudable willingness to engage proactively with the rapidly growing AI sector, to ensure that its competition and consumer protection agendas are engaged as early a juncture as possible," said Gareth Mills, partner at law firm Charles Russell Speechlys.
He added that while the principles in the report are "necessarily broad," they have clearly been designed to keep barriers to entry low, allowing smaller players to compete effectively with more established names, while mitigating the potential for AI technologies to negatively impact consumers.
"It will be intriguing to see how the CMA seeks to regulate the market to ensure that competition concerns are addressed," Russell said.