
What is artificial general intelligence?

July 1, 2024 | Hi-network.com

What is artificial general intelligence?

The term artificial general intelligence has been used for almost three decades in computer science. It's generally used to refer to a computer system that can solve problems as well as, or better than, a human being. 

The term is broad and vague, and so it has acquired different meanings. With the recent success of generative AI (Gen AI) programs and large language models such as GPT-4, some experts have sought to define AGI by its ability to surpass the narrow problem-solving capabilities of individual generative AI models.

However, observers are divided over just what things AGI should be capable of, as is made clear in a survey of the concept published by researchers at Google's DeepMind unit. 

As lead author Meredith Ringel Morris and team relate, some thinkers limit AGI to "cognitive" tasks, meaning non-physical goals, so that an AGI program would not have to, for example, be able to move like a person through the physical world. That definition would leave robotics out of the equation.

Others argue for an "embodied" AGI that can handle real physical tasks. For example, Apple co-founder Steve Wozniak has proposed making a cup of coffee as one of the key tests of computer intelligence. "When is a computer ever going to get to that level?" he asked in 2010. That challenge is not yet achievable with today's AI-driven robotics systems.

Another division is how much AGI should mimic the processes of the human brain. Before the term AGI emerged, philosopher John Searle famously argued for something similar to AGI -- what he called "strong AI". In Searle's view, a strong AI program should not just perform tasks like a person does, it should replicate the human thought process. 

"The appropriately programmed computer really is a mind," wrote Searle, "in the sense that computers given the right programs can be literally said to understand and have other cognitive states." 

Others argue that states of mind, consciousness, sentience, and thought processes are irrelevant if a computer can produce human-like behavior, as measured in tests such as Alan Turing's famous "imitation game".

For example, the founding charter of OpenAI defines AGI as "highly autonomous systems that outperform humans at most economically valuable work," with no reference to a machine having thought processes that mimic those of humans.

When will artificial general intelligence be invented?

Leaving aside precise definitions, most people in AI agree that AGI does not yet exist. Expectations for when it may exist vary widely, from quite soon to never. 

The DeepMind authors emphasize that there are levels of AGI, just as there are levels of autonomous driving, where automation rises gradually from cruise control to no one at the wheel.

"Much as the adoption of a standard set of Levels of Driving Automation allowed for clear discussions of policy and progress relating to autonomous vehicles, we posit there is value in defining 'Levels of AGI'," they write.

By that measure, the authors argue that today's Gen AIs such as ChatGPT represent "emerging AGI", with capabilities "equal to an unskilled human". 

From emerging AGI, the levels identified by the DeepMind team rise through "competent AGI", "expert AGI", "virtuoso AGI", and "artificial superintelligence", or "ASI", passing from capabilities that equal a modestly skilled adult to those that outperform 100% of humans.
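
As a rough illustration, that graded taxonomy can be written down as a simple data structure. The sketch below encodes the levels as a Python enum; the performance descriptions are paraphrased from the article above and from the DeepMind paper's approximate performance bands, not the paper's exact wording.

```python
from enum import Enum

class AGILevel(Enum):
    """Rough paraphrase of the DeepMind 'Levels of AGI' bands (approximate wording)."""
    EMERGING = "equal to, or somewhat better than, an unskilled human"
    COMPETENT = "at least the 50th percentile of skilled adults"
    EXPERT = "at least the 90th percentile of skilled adults"
    VIRTUOSO = "at least the 99th percentile of skilled adults"
    SUPERHUMAN_ASI = "outperforms 100% of humans"

# By this rubric, the authors place today's chatbots at the emerging level.
print(AGILevel.EMERGING.value)
```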

However, many observers have not relied on graded definitions but instead hypothesize a tipping point, or threshold, where computer intelligence becomes qualitatively equal or even superior to human capabilities. 

One such advocate of a hard break is Ray Kurzweil, Google's director of engineering. In his recent book The Singularity is Nearer, Kurzweil reiterated his view that AGI will arrive in 2029. Kurzweil defined AGI as "AI that can perform any cognitive task an educated human can." By 2029, he argued, a "robust" version of the Turing Test will be passed by AGI. 

Kurzweil cautioned that some abilities of human-level intelligence will take longer. "It remains an open question which skills will ultimately prove hardest for AI to master," he wrote. 

"It might turn out, for example, that in 2034 AI can compose Grammy-winning songs but not write Oscar-winning screenplays, and can solve Millennium Prize Problems in math but not generate deep new philosophical insights."

Some have suggested AGI will arrive even sooner. A former OpenAI employee, Leopold Aschenbrenner, has written that AGI is "strikingly possible" by 2027, based on extrapolating the advances made by OpenAI's GPT models. Those models, he claimed, are now equal in problem-solving ability to a "smart" human high school student.

"By 2025/26, these machines will outpace college graduates," predicted Aschenbrenner. "By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word."

AI critic Gary Marcus has argued that none of the current paths in AI, including large language models, will lead to AGI. "LLMs are not AGI, and (on their own) never will be," Marcus wrote in June 2024, adding that "scaling alone was never going to be enough." 

Elsewhere, Marcus has declared current "foundation models," such as GPT-4, to be "more like parlor tricks than genuine intelligence. They work impressively well some of the time but also frequently fail, in ways that are erratic, unsystematic, and even downright foolish."

In a recent Substack post, Marcus pointed to what he sees as the misguided approach of today's Gen AI by calling attention to remarks by OpenAI's CTO, Mira Murati, who was quoted at a conference as saying that AI models under development internally at OpenAI are "not that far ahead" of what currently exists.

Another observer similarly skeptical of the advent of AGI is Meta's chief AI scientist, Yann LeCun. In a 2022 interview, LeCun declared that most of today's AI approaches will never lead to true intelligence as he sees it.

"We see a lot of claims as to what should we do to push forward towards human-level AI," LeCun said. "And there are ideas which I think are misdirected."

"We're not to the point where our intelligent machines have as much common sense as a cat," observed LeCun. "So, why don't we start there?"

What could an artificial general intelligence do?

Given the many definitions of, and divisions over, AGI, there are also many predictions about what AGI will be like if and when it arrives or is created.

In their survey, the Google DeepMind scholars agreed with definitions of AGI that don't involve mimicking human thought processes. In contrast to Searle's strong AI, which holds that an AGI would have thoughts like a human brain, the DeepMind scholars argued for defining AGI in terms of capabilities: the practical achievements or outputs of a system.

"This focus on capabilities implies that AGI systems need not necessarily think or understand in a human-like way (since this focuses on processes)," wrote Ringel Morris and team. 

"Similarly, it is not a necessary precursor for AGI that systems possess qualities such as consciousness (subjective awareness)."

The focus on capabilities is echoed by Kurzweil in his book. Kurzweil emphasized that AGI should replicate the capabilities of the human brain, though it need not follow the same processes as the brain.

For example, AGI will demonstrate a creative ability similar to that of the human neocortex, the evolutionarily youngest part of the human brain, argued Kurzweil. The neocortex is responsible for higher-level cognitive functions, such as analogical reasoning, and for the ability to operate in multiple domains, such as language and imagery.

In particular, Kurzweil sees the apex of AI as replicating the varying levels of abstraction that exist in the neocortex. 

"Much like artificial neural networks running on silicon hardware, neural networks in the brain use hierarchical layers that separate raw data inputs (sensory signals, in the human case) and outputs (for humans, behavior)," wrote Kurzweil. "This structure allows progressive levels of abstraction, culminating in the subtle forms of cognition that we recognize as human."

Kurzweil also describes multiple "deficiencies" in today's AI that could presumably be resolved by AGI: "contextual memory, common sense, and social interaction." 

For example, a program that can understand all of the topics that come up in conversation -- a very long context, in other words -- would also have the ability, claims Kurzweil, to "write a novel with a consistent and logical plot."

To close the gaps in current Gen AI, Kurzweil believes the impressive language ability of today's large language models will need to improve significantly. "Today, AI's still-limited ability to efficiently understand language acts as a bottleneck on its overall knowledge," he declared. 

In contrast to Kurzweil's positive assertions, critic Marcus has at times described AGI in terms of what he believes the current approaches will never achieve. 

"In 2029, AI will not be able to watch a movie and tell you accurately what is going on [...] Who are the characters? What are their conflicts and motivations? etc."

However, Marcus has also offered positive definitions of AGI. In 2022, for example, he wrote on X, "Personally, I use it as a shorthand for any intelligence (there might be many) that is flexible and general, with resourcefulness and reliability comparable to (or beyond) human intelligence."

Marcus's emphasis on "flexibility" implies that AGI should not only achieve many of the tasks of the average person, but should also be able to acquire new task-solving capabilities, a kind of "meta" capability. Many observers, including Kurzweil, would agree that the learning aspect of a general intelligence is a key element of an AGI.

Still, other scholars frame the AGI discussion in terms of traditional human processes, much as Searle did. NYU philosophy professor David Chalmers, for example, has raised consciousness as a fundamental question in AI development.

Chalmers maintains that there may be "greater than 20% probability that we may have consciousness in some of these [large language model] systems in a decade or two."

How would you create an artificial general intelligence?

Creating AGI roughly falls into two camps: sticking with current approaches to AI and extending them to greater scale, or striking out in new directions that have not been as extensively explored. 

The dominant form of AI is the "deep learning" field within machine learning, where neural networks are trained on large data sets. Given the progress seen in that approach, such as the progression of OpenAI's language models from GPT-1 to GPT-2 to GPT-3 and GPT-4, many advocate for staying the course.
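
As a minimal sketch of that recipe, the snippet below fits a tiny neural network to synthetic data with PyTorch. The dataset, architecture, and hyperparameters are purely illustrative assumptions; production systems run essentially the same loop at vastly larger scale.

```python
# Minimal sketch of the deep-learning recipe: a neural network trained on data
# by gradient descent. Toy data and model; assumes PyTorch is installed.
import torch
from torch import nn

X = torch.randn(1000, 16)                         # synthetic "dataset"
y = (X.sum(dim=1, keepdim=True) > 0).float()      # synthetic labels

model = nn.Sequential(                            # small feed-forward network
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):                           # predict, measure error, adjust weights
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print(f"final training loss: {loss.item():.3f}")
```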

Kurzweil, for example, sees AGI as an extension of recent progress on large language models, such as Google's Gemini. "Scaling up such models closer and closer to the complexity of the human brain is the key driver of these trends," he writes. 

To Kurzweil, scaling current AI is like the famous Moore's Law rule of semiconductors, by which chips have gotten progressively more powerful. Moore's Law progress, he writes, is an instance of a broad concept coined by Kurzweil, "accelerating returns." The progress in Gen AI, asserts Kurzweil, has shown even faster growth than Moore's Law because of smart algorithms.  
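
As a toy illustration of the exponential doubling Moore's Law describes, the snippet below compounds a transistor count over repeated two-year doubling periods. The starting figure (roughly the 1971 Intel 4004) and the fixed doubling interval are simplifying assumptions, not a precise model of chip history.

```python
# Toy illustration of Moore's Law-style exponential growth: repeated doubling
# over fixed periods. Starting value and doubling interval are simplifications.
transistors = 2_300                      # roughly the Intel 4004 (1971)
for year in range(1971, 2025, 2):        # one doubling per two-year period
    transistors *= 2
print(f"~{transistors:,} transistors after half a century of doubling")
```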

Programs such as OpenAI's DALL-E, which can create an image from scratch, are the beginning of human-like creativity, in Kurzweil's view. Describing in text an image that has never been seen before, such as "a cocktail glass making love to a napkin," will prompt an original picture from the program.

Kurzweil views such image generation as an example of "zero-shot learning", when a trained AI model can produce output that is not in its training data. "Zero-shot learning is the very essence of analogical thinking and intelligence itself," writes Kurzweil. 

"This creativity will transform creative fields that recently seemed strictly in the human realm," he writes.

But neural nets must progress from narrow tasks, such as outputting sentences, to much greater flexibility and a capacity to handle multiple tasks. Google's DeepMind unit offered a rough draft of such a flexible AI model in 2022 with Gato; the same year, Google released PaLM, another large, broadly capable model.

Larger and larger models, argues Kurzweil, will even achieve some of the areas he considers deficient in Gen AI at the moment, such as "world modeling", where the AI model has a "robust model of how the real world works." That ability would allow AGI to demonstrate common sense, he maintains.

Kurzweil insists that it doesn't matter so much how a machine arrives at human-like behavior, as long as the output is correct. 

"If different computational processes lead a future AI to make groundbreaking scientific discoveries or write heartrending novels, why should we care how they were generated?" he writes.

Again, the authors of the DeepMind survey emphasize AGI development as an ongoing process that will reach different levels, rather than a single tipping point as Kurzweil implies.

Others are skeptical of the current path given that today's Gen AI has been focused mostly on potentially useful applications regardless of their "human-like" quality.  

Gary Marcus has argued that a combination is necessary between today's neural network-based deep learning and the other longstanding tradition in AI, symbolic reasoning. Such a hybrid would be "neuro-symbolic" reasoning. 
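
A toy sketch of what a neuro-symbolic hybrid can look like: a (stubbed) neural perception step emits symbolic facts, and hand-written rules reason over them by forward chaining. Every name and rule here is a hypothetical illustration, not Marcus's proposal or any vendor's API.

```python
# Toy neuro-symbolic hybrid: a stubbed "neural" perception step emits symbolic
# facts, and explicit rules derive new facts from them (forward chaining).
def neural_perception(image) -> set[str]:
    """Stand-in for a trained neural network that labels what it sees."""
    return {"cup", "liquid", "steam"}               # pretend detections

RULES = [
    ({"cup", "liquid", "steam"}, "hot_drink"),      # if all premises hold...
    ({"hot_drink"}, "handle_with_care"),            # ...derive the conclusion
]

def symbolic_infer(facts: set[str]) -> set[str]:
    """Apply rules repeatedly until no new facts can be derived."""
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(symbolic_infer(neural_perception(image=None)))
```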

Marcus is not alone. A venture-backed startup named Symbolica has recently emerged from stealth mode championing a form of neuro-symbolic hybrid. The company's mission statement implies it will surpass what it sees as the limitations of large language models.

"All current state of the art large language models such as ChatGPT, Claude, and Gemini, are based on the same core architecture," the company says. "As a result, they all suffer from the same limitations."

The neuro-symbolic approach of Symbolica goes to the heart of the debate between "capabilities" and "processes" cited above. It's wrong to do away with processes, Symbolica's founders maintain, just as philosopher Searle argued.

"Symbolica's cognitive architecture models the multi-scale generative processes used by human experts," the company claims.

Meta's LeCun is likewise skeptical of the status quo, and he reiterated his doubts about conventional Gen AI approaches in recent remarks. In a post on X, LeCun drew attention to the failure of Anthropic's Claude to solve a basic reasoning problem.

LeCun has argued for doing away with AI models that rely on measuring probability distributions, a category that includes basically all large language models and related multimodal models.

Instead, LeCun pushes for what are called energy-based models, which borrow concepts from statistical physics. Those models, he has argued, may lead the way to "abstract prediction," allowing for a "unified world model" for an AI capable of planning multi-stage tasks.
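
A minimal sketch of the energy-based idea in toy PyTorch form: the model assigns a scalar energy to an (observation, candidate) pair, and inference is gradient descent on that energy with respect to the candidate, rather than sampling from a probability distribution. Training the energy function is omitted, and nothing here reflects LeCun's actual architectures.

```python
# Toy energy-based model: a network scores (observation, candidate) pairs with a
# scalar energy; inference minimizes that energy over the candidate by gradient
# descent instead of drawing from a probability distribution. Illustrative only.
import torch
from torch import nn

class EnergyModel(nn.Module):
    def __init__(self, dim: int = 8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x, y], dim=-1))   # lower energy = more compatible

model = EnergyModel()
x = torch.randn(1, 8)                                # an observation
y = torch.zeros(1, 8, requires_grad=True)            # candidate prediction to optimize

optimizer = torch.optim.SGD([y], lr=0.1)
for _ in range(50):                                  # inference = descend the energy w.r.t. y
    optimizer.zero_grad()
    energy = model(x, y).sum()
    energy.backward()
    optimizer.step()
print(f"final energy: {model(x, y).item():.3f}")
```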

Could an artificial general intelligence outsmart humans?

Most theories about AGI imply that, at some point, the programs will not only equal but will outstrip human cognitive and problem-solving abilities.

As mentioned earlier, the DeepMind survey authors suggest that the top of the pyramid of AGI levels, Level 5, implies a machine that beats all humans on any general intelligence challenge. That is a natural consequence of increasing computational power and more and more data "unlocking" capabilities, as the authors put it.

Kurzweil sees the prospect of super-human performance being achieved. "Any kind of skill that generates clear enough performance feedback data can be turned into a deep-learning model that propels AI beyond all humans' abilities," he writes.

However, it's not necessarily the case, argues Kurzweil, that a super-human AGI will have mastered everything; it may be good only at certain things. "It is even possible that AI could achieve a superhuman level of skill at programming itself before it masters the commonsense social subtleties of the Turing test."

As a result, "We'll also need to develop more sophisticated means of assessing the complex and varied ways that human and machine intelligence will be similar and different."

Indeed, modern human activities such as "doomscrolling" social media call into question fundamental notions of what is human-like, which means that before AGI arrives, it's also possible human cognition and intelligence will be fundamentally reassessed.

What is superintelligence?

Along with different definitions of AGI, there are also different notions of how to view human intelligence in the context of AGI. 

The DeepMind authors, as mentioned, see the culmination of levels of AGI in artificial superintelligence, where the machine "outperforms 100% of humans" across tasks. 

And yet, the authors also hold out the prospect that an ASI can be designed as a complement to human ability, provided the right human-computer interface design is pursued. 

"The role of human-AI interaction research can be viewed as ensuring new AI systems are usable by and useful to people such that AI systems successfully extend people's capabilities (i.e., intelligence augmentation')," they write.

Others take the term superintelligence at face value, as mentioned by former OpenAI employee Aschenbrenner, who equates the term with machines surpassing humans. 

Rather than having machines outsmart humans, Kurzweil focuses on augmenting human intelligence -- what he calls "the singularity". 

The singularity is an extension of the neocortex "into the cloud", in Kurzweil's view.

"When humans are able to connect our neocortices directly to cloud-based computation, we'll unlock the potential for even more abstract thought than our organic brains can currently support on their own," he writes.

A cloud-connected electronic device would form connections with only part of the human neocortex, its "upper regions," while most human cognitive activity would still be in the human neocortex.

That merger could happen via a variety of approaches, including the "Neuralink" implant that Elon Musk is backing. 

However, Kurzweil anticipates a far gentler merger: "Ultimately, brain-computer interfaces will be essentially noninvasive -- which will likely entail harmless nanoscale electrodes inserted into the brain through the bloodstream."

Some skeptics have argued that AGI distracts from the benefits that AI can bring to human functioning. For example, Stability AI founder Emad Mostaque has said: "We are in the right place, ethically in terms of bringing this technology to everyone by focusing not on AGI to replace humans, but how do we augment humans with small, nimble models." 

How do we stop a general AI from breaking its constraints?

Among the many ethical considerations inherent in AGI is the issue of how to prevent such programs from causing harm. The debate among scholars is not just over how to prevent harm, but also how to define it.

The DeepMind survey authors emphasize that even a Level 5, artificial superintelligence might not actually be "autonomous." It might have cognitive capabilities but be constrained in its task execution for safety reasons.

The authors make the analogy to user interface design and human-computer interface design -- it's up to creators of AGI, and society, to decide how much autonomy is given to such systems. 

"Consider, for instance, the affordances of user interfaces for AGI systems," write Ringel Morris and team. "Increasing capabilities unlock new interaction paradigms, but do not determine them [emphasis the authors']. Rather, system designers and end-users will settle on a mode of human-AI interaction."

The DeepMind team emphasizes that whether harm occurs depends not only on the capabilities unlocked in AGI but also on what kinds of interaction are designed into that AGI. A positive outcome, such as human superintelligence, can come with the right design of human-computer interaction.
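
As a toy sketch of that design point, the snippet below wraps a system's actions so that higher-risk ones require explicit human approval before they execute. The action categories and policy are hypothetical illustrations, not any particular framework's API.

```python
# Toy illustration of designed-in limits on autonomy: higher-risk actions are
# gated behind explicit human approval. Action names and policy are hypothetical.
REQUIRES_APPROVAL = {"send_email", "execute_trade", "deploy_code"}

def run_action(action: str, perform, ask_human) -> str:
    """Run `perform()` only if the action is low-risk or a human approves it."""
    if action in REQUIRES_APPROVAL and not ask_human(f"Allow '{action}'?"):
        return f"'{action}' blocked: human approval withheld"
    return perform()

# Example with stand-in callables:
print(run_action(
    "execute_trade",
    perform=lambda: "trade executed",
    ask_human=lambda prompt: input(prompt + " [y/N] ").strip().lower() == "y",
))
```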

Others see a broad and deep potential for harm for which there are no immediate answers. 

Kurzweil writes: "Superintelligent AI entails a fundamentally different kind of peril -- in fact, the primary peril. If AI is smarter than its human creators, it could potentially find a way around any precautionary measures that have been put in place. There is no general strategy that can definitively overcome that."

Similarly, Gary Marcus has argued that there is no plan to deal with any AGI that might arise. "If the fiasco that has been Gen AI has been any sign, self-regulation is a farce," wrote Marcus in a recent Substack post, "and the US legislature has made almost no progress thus far in reining in Silicon Valley. It's imperative that we do better."