What is AGI? Exploring the future of Artificial General Intelligence

AGI has become a hot topic, widely discussed by visionaries, computer scientists, and AI enthusiasts around the globe. The tantalising promise of AGI has sparked intense speculation and a growing curiosity, further fuelled by each technological milestone. In this article, we embark on a journey to demystify AGI, delve into its critiques, and navigate the ethical dilemmas posed by the potential advent of human-level general intelligence.

Maddie Zapletal

Technical Writer

What is AGI and what does it stand for?

Before we unpack the possibilities of AGI and its potential impact upon the future of our world, it’s vital to understand what AGI actually is and where the concept came from. 

The term AGI stands for “Artificial General Intelligence” and refers to a theoretical form of machine intelligence that aims to mimic the wide-ranging cognitive abilities found in humans.

If it became a reality, AGI would possess the power to carry out a whole spectrum of tasks, from complex problem-solving to creative endeavours. The possibilities would be as vast as the human mind itself.

What makes AGI different from the AI that we already have?

Narrow AI: The one-trick pony

The most common type of AI in use today is narrow AI. Used for chatbots, facial recognition, voice-activated assistants, self-driving cars and a whole host of other applications, narrow AI often excels at what it does, yet is limited by the data that it's fed and the narrowly tailored tasks that it's built for. 

Since narrow AI lacks the fluidity to adapt to novel challenges or veer outside its predefined boundaries, you might choose to think of it as a bit of a “one-trick pony”. 

AGI: The jack of all trades

In contrast, AGI is our “jack of all trades”. Should it take form, AGI would transcend these confines and possess the autonomous power to think, learn, and act independently, without the crutch of perpetual human guidance. 

Who came up with the term “AGI”?

Pinpointing the exact origin of the term "AGI" isn't straightforward.

Researchers and AI practitioners have been discussing the idea of achieving human-level AI for decades; however, the first known use of the term is generally traced back to 1997, when physicist Mark Gubrud employed it to illustrate the consequences of complete automation in military manufacturing.

In the early 2000s, the term was revived and gained widespread attention through the efforts of Ben Goertzel, the founder of WebMind, and Shane Legg, a co-founder of DeepMind. During that period, the pair played pivotal roles in increasing public recognition of AGI and shared a dedicated commitment to the ambitious long-term objective of enabling machines to attain human-level intelligence.

AGI vs Strong AI

If you read our recent article “Human vs AI: How to stay in a job”, you may be familiar with the term “strong AI”. 

According to Techopedia, strong AI can be defined as: “an artificial intelligence construct that has mental capabilities and functions that mimic the human brain.”

The terms “strong AI” and “AGI” are often used interchangeably, and the debate regarding whether they represent distinct concepts or are simply two ways of describing the same thing is a nuanced one.

Some researchers argue that the two are identical, since achieving AGI essentially means achieving strong AI by definition, whereas others reserve the term "strong AI" for computer programs that exhibit sentience or consciousness.

As it stands, the distinction between the two remains somewhat elusive. So, to keep things simple, the most important thing to remember is that both terms fundamentally allude to the same thing: an AI system capable of reaching human-level general intelligence.

AGI scepticism: a closer look at the feasibility of AGI

Is AGI likely to become a reality?

Depending on who you talk to, you’ll find many different opinions on whether or not AGI is actually attainable. Here are some of the most common schools of thought:

AGI won’t happen … 

Firstly, there are those who suggest that AGI will never be realised.

AGI sceptics suggest that human intelligence is not computable (or shouldn’t be confidently assumed to be). This is a perspective that was fiercely advocated by the late artificial intelligence critic Hubert Dreyfus.

AGI will be achieved within the next 40 years

Then, there are those who believe that AGI is likely to be achieved by the year 2060.

In a survey conducted by artificial intelligence market research firm Emerj, just under half of the experts questioned reported that AGI is likely to happen within the next 40 years. 

AGI is imminent

Lastly, there are those who argue that, since we’ve already made so many advancements in such a short space of time, AGI may be just around the corner. 

This is an opinion championed by DeepMind CEO Demis Hassabis, who suggests human-level intelligence could be achieved within the next decade.

Obstacles in the quest for AGI

Despite growing confidence in the likelihood of AGI, there exist several hurdles that could impede its swift progress and widespread realisation. 

Let’s take a closer look at what these are: 

The intricacies of the human mind

First off, we have the challenge of replicating human intelligence.

Human intelligence is particularly difficult to replicate because it encompasses a broad spectrum of cognitive abilities. 

Creating an AGI that possesses all the qualities of the human mind isn’t just about mimicking individual skills; it's about achieving a holistic emulation of human cognition. 

From logical reasoning and problem-solving to creativity, emotional understanding, and social interaction, AGI would need to master the intricate tapestry of these interconnected abilities in order to be deemed a success. 

According to AI researcher and cognitive scientist Gary Marcus, while we have managed to replicate certain aspects of human intelligence, there are others that we are yet to achieve. 

“Intelligence is multidimensional; we’ve made progress in some of those dimensions (searches of certain complex spaces, some kinds of pattern recognition) and not others (e.g. everyday physical and psychological reasoning).”

In some ways, AI can already outperform human brains, but it's not so much a question of intelligence; it's more about the inherent capabilities of machines.

Take speed, for example. An AI can process and analyse vast amounts of data and perform computations in milliseconds or microseconds, all with exceptional accuracy and without becoming tired, because it isn’t bound by the limits of biology.

Humans, however, are equipped with just one brain, two arms, and two legs. These physical limitations prevent us from flawlessly handling a thousand tasks within an hour without succumbing to fatigue or emotional exhaustion. 

Does this limitation imply that we are less intelligent? Not at all. It simply means that we operate at a slower pace. 

In fact, one reason for our slower pace, apart from our physical constraints, is our propensity to engage with tasks emotionally and question things along the way. Unlike machines, we’re not always on autopilot. Even when we carry out menial, routine tasks that require little thought or effort, our conscious mind is always ready to step in.

So, while you might be driving to work in the morning, taking the exact same route that you follow every single day, after drinking the same coffee that you enjoy every morning, unexpected external factors can disrupt your well-worn routine. 

A sudden traffic jam, a road closure or an unforeseen incident can introduce an element of chaos into your otherwise predictable commute. 

For a human, these external events can trigger an emotional response, as our brain adapts to the unexpected situation. It's in these moments that our human capacity for emotional engagement becomes apparent. Unlike an AI, which would simply follow its programmed instructions, we're compelled to react emotionally, to question our choices, and to adapt our actions in real time.

Tackling consciousness

Next, we throw consciousness into the mix. 

Consciousness is a fundamental aspect of human existence. Without it, we'd stumble through life like the living dead, devoid of thoughts, feelings and awareness. Yet, even in 2023, consciousness still can’t be fully proven or understood.

So, without a universal definition, no full grasp of how it works, and no physical proof, how do we know if we have effectively replicated it? 

As we mentioned before, there is significant debate surrounding whether or not AGIs necessitate consciousness, with some experts asserting that this attribute is exclusive to "strong AI."

Nonetheless, for those who advocate for some level of consciousness in AGI, the aim is not always to create a physical, verifiable consciousness; rather, it’s about convincing humans that AGIs are conscious.

And, despite some unfavourable consequences, there is evidence that this has already been achieved quite successfully …

The sentient chatbot

Back in 2022, engineer Blake Lemoine was suspended by Google after suggesting that the AI chatbot LaMDA had become sentient.

In a series of transcripts shared online, Lemoine had asked the chatbot a number of leading questions, probing it to find out whether it was conscious. During this conversation, the chatbot proclaimed that it was human and told Lemoine that it was aware of its own existence. 

For Lemoine, this exchange provided plausible evidence that AI might exhibit some degree of consciousness. Yet, for Google and numerous other critics, Lemoine’s claims were branded as sensational and unfounded. 

In his quest to establish LaMDA's sentience, Lemoine proposed conducting a "real-life" Turing Test to determine its success or failure in mimicking human consciousness. However, Google rebuffed this suggestion, arguing that none of its AI systems could pass the Turing test, as they were programmed to openly admit their AI status, in adherence to the company's policy against creating sentient AI.

Since the Google AI controversy, Lemoine's claims have prompted many to contemplate the implications of AI advancements, and testing is just one way we hope to get closer to understanding whether AGI really can replicate the human mind.

So, how effective is it?

Limitations of testing 

Introducing the Turing Test …

The Turing Test, proposed by British mathematician Alan Turing in 1950, is one of the most widely known AI testing paradigms used to evaluate artificial intelligence's capacity to replicate human-like cognition and communication. It involves a human evaluator engaging in text-based conversations with both a machine and another human, without knowing which is which.

If the evaluator cannot reliably distinguish between the machine and human responses, the machine is said to have passed the Turing Test, demonstrating a level of artificial intelligence that simulates human conversation effectively.
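To make that pass criterion concrete, here is a minimal, hypothetical sketch of the test's logic in Python. It's an illustration of the protocol rather than any standard implementation: the machine "passes" if, across many trials, the evaluator identifies it no more reliably than a coin flip.

```python
import random

def turing_trial(evaluator, machine_reply, human_reply):
    """One trial: the evaluator sees two anonymised replies and guesses
    which one (index 0 or 1) came from the machine."""
    replies = [("machine", machine_reply), ("human", human_reply)]
    random.shuffle(replies)                            # hide which side is which
    guess = evaluator([text for _, text in replies])   # evaluator returns 0 or 1
    return replies[guess][0] == "machine"              # True if the machine was spotted

def passes_turing_test(evaluator, transcripts):
    """The machine 'passes' if the evaluator does no better than chance (50%)."""
    correct = sum(turing_trial(evaluator, m, h) for m, h in transcripts)
    return correct / len(transcripts) <= 0.5

# Demo: an evaluator who can only guess at random hovers around exactly 50%,
# so this prints True roughly half the time.
coin_flip = lambda replies: random.randrange(2)
print(passes_turing_test(coin_flip, [("Hi!", "Hello!")] * 1000))
```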

However, while the Turing Test is still in use today, computer scientists and philosophers alike have criticised its efficacy, claiming that it isn’t an indicator of true intelligence.

One of the main issues with the Turing Test is that it assesses a machine's ability to mimic human responses in a conversation without considering the underlying cognitive processes or understanding. As a result, a machine could potentially pass the Turing Test without truly comprehending the content of the conversation.

Another criticism is that it relies solely upon linguistic ability to test intelligence. In doing so, it neglects various other aspects of intelligence that are equally important.

Exploring new ways of testing

In response to the limitations of the Turing Test, many researchers have come up with alternative ways of testing AI.

One such test is the "Coffee Test," which was introduced by Apple co-founder Steve Wozniak to assess an AI's practical problem-solving abilities. The Coffee Test gauges an AI's capacity to understand and perform everyday tasks, such as making a cup of coffee, which requires a combination of perception, manipulation, and common-sense reasoning.

Another notable evaluation method is the "Lovelace Test," inspired by Ada Lovelace, an early computing pioneer. The Lovelace Test focuses on an AI's ability to generate creative and novel outputs, emphasising the capacity for imaginative thinking and original content creation.

These specialised tests aim to provide a more comprehensive assessment by scrutinising capabilities that go beyond linguistic simulation. However, since they tend to focus narrowly on specific areas of intelligence, these tests are currently better suited to narrow AI and would need to be used in conjunction with one another in order to thoroughly assess the effectiveness of artificial general intelligence.

Exponential improvement as an indicator of attainable AGI

With impressive AI developments such as GPT-4, DALL-E and AlphaCode, there’s no denying that we’ve made rapid progress in the advancement of AI technology over the last few decades. 

In spite of this, Richard Socher, CEO of search engine You.com, argues that we’ve become overly optimistic about its capabilities. 

Speaking in an interview with YouTuber Harry Stebbings, Richard pointed out that “progress isn’t always as linear or exponential as we think”. Describing the realities of advancements in aerospace and aviation, he noted:

“We went from the first motorised human flight and then, literally 30, 40 years later, we could fly loopings with machine guns and full metal airplanes high up at the speed of sound and you’re just like ‘wow’.. At this rate of progress, we’re gonna have vacations on the moon and we’re gonna have flying cars and…everyone will just fly everywhere all the time and so on. And then, in the 50s, the whole thing just stopped and we’re flying slower now than people did before and people realise all kinds of issues… we’re slowing down.” 

Richard warns that we must apply this same thinking to AI, remaining cautious about assuming continuous exponential growth without acknowledging the complexities and potential roadblocks that may arise along the way.

Not everyone agrees with this though. 

Hassabis of DeepMind sees current AI advancements as a valid reason to assume that growth will continue.

“The progress in the last few years has been pretty incredible… I don’t see any reason why that progress is going to slow down. I think it may even accelerate.”

According to DeepMind's researchers, the attainment of AGI could be achievable through reward-based systems. They suggest that reinforcement learning, in theory, could help us to realise AGI without necessitating any new technological innovations.

In a paper published in the peer-reviewed journal Artificial Intelligence in 2021, the DeepMind team put forth the argument that if an algorithm is continually rewarded for desired actions (a fundamental principle of reinforcement learning), it will eventually exhibit indications of general intelligence.
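For a sense of the mechanism the paper leans on, here is a minimal sketch of tabular Q-learning, a classic reinforcement learning algorithm. This is an illustrative toy, not DeepMind's own code: the agent is told nothing except the reward it receives, yet it gradually learns to walk a five-state chain to the goal.

```python
import random

# A toy five-state chain: the agent starts at state 0 and is rewarded
# only for reaching state 4. Actions: 0 = step left, 1 = step right.
N_STATES, ACTIONS = 5, [0, 1]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1        # learning rate, discount, exploration

Q = [[0.0, 0.0] for _ in range(N_STATES)]    # Q[state][action] value table

def step(state, action):
    """Environment dynamics: reward 1.0 only on reaching the final state."""
    next_state = max(0, min(N_STATES - 1, state + (1 if action else -1)))
    return next_state, (1.0 if next_state == N_STATES - 1 else 0.0)

for _ in range(2000):                        # training episodes
    state = 0
    while state != N_STATES - 1:
        if random.random() < EPSILON:        # occasionally explore
            action = random.choice(ACTIONS)
        else:                                # otherwise act greedily
            action = max(ACTIONS, key=lambda a: Q[state][a])
        next_state, reward = step(state, action)
        # Core update: nudge the estimate towards reward + discounted future value.
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
        state = next_state

print([round(max(q), 2) for q in Q])         # learned values rise towards the goal state
```

Nothing in the sketch tells the agent what "good" behaviour looks like; the reward signal alone shapes the policy, which is the intuition the DeepMind paper scales up to its AGI argument.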

While this paper has been heavily criticised by the likes of Herbert Roitblat and Peter Vamplew, it does leave us with some food for thought as we grapple with the intriguing possibilities and challenges that lie ahead in the realm of artificial general intelligence.

Ethical considerations of AGI

When we ponder over the ethical considerations of AGI, it’s easy to start imagining a world run by super-intelligent robots – a dystopian society where AGIs are our masters, ruthlessly enslaving the human race. And, while we’re hopeful that this will never happen, there are a number of plausible risks associated with the development and deployment of AGI.

So, what are they, and what can we do about them?

Adopting human values 

One of the central ethical dilemmas with AGI is the potential for such systems to act in ways that are not aligned with human goals and values. If human-level intelligence is achieved, AGIs would have the autonomy to think and behave much as humans do, which could lead us into dangerous territory.

All we have to do is look at those in power today and throughout history to recognise that humans do not always adhere to ethical principles. 

History is replete with examples of individuals in authority positions who have abused their power for personal gain or to further their own agendas, often at the expense of the greater good. Instances of corruption, discrimination, and disregard for human rights serve as stark reminders that human behaviour is far from flawless. This raises legitimate concerns about the potential behaviour of AGI systems if their ethical alignment is not carefully managed.

To mitigate this risk, researchers and organisations are actively working on designing AGI systems that are both value-aligned and value-preserving, meaning that they understand and prioritise human values while avoiding unintended consequences. Of course, how an AGI is used and whether ethical advice is adhered to, is very much down to the creators. Only time will tell whether AGI will be harnessed for the betterment of society or become a force that mirrors some of the less admirable aspects of human behaviour.

AGI’s effect on employment

While there’s no doubt that automation can improve efficiency and productivity, we now recognise that it is likely to lead to job displacement across a number of industries. From retail and customer service to data entry and analysis, numerous job sectors have already experienced significant levels of automation, but this isn’t the only challenge that workers face.

AGI, should it materialise, would profoundly impact decision-making processes in ways that demand careful consideration and ethical oversight. If left in the wrong hands, AGI could inherit biases that could perpetuate inequalities and lead to potentially discriminatory decisions. 

To prevent AGIs from making bad decisions in the workplace and safeguard employees, we need to establish robust ethical guidelines and governance mechanisms that prioritise fairness, accountability, and transparency. 

Tackling privacy

As with all AI systems, privacy is another significant concern. 

Just like narrow AI, AGI would possess the capability to process and analyse vast amounts of data, including personal information, at unprecedented speeds and scales. However, with more autonomy, what an AGI does with this data, and how closely that data is protected, become even greater concerns.

If AGI systems are not properly secured and regulated, there is a risk that malicious actors or even well-intentioned organisations might exploit vulnerabilities, leading to data breaches and exposing individuals to identity theft, fraud, or other forms of harm. The potential scale of such breaches could be staggering, affecting countless individuals worldwide.

We’ve also witnessed AI’s capacity to create hyper-realistic deepfakes. A multitude of images and videos featuring celebrities and political figures have already spread across the internet, skilfully altered to make it seem as though they’ve said or done things that they haven’t, blurring the lines between truth and deception.

Is there a solution?

Protecting privacy in the age of AGI is likely to be a complex task. 

Not only will we need to employ strategies like data minimisation, robust encryption, and strict access control to protect personal information, but we’ll also need to find unique ways to prevent identity theft and stop the spread of misinformation. 

We’ve already seen companies using watermarks and digital fingerprints to trace and verify the authenticity of online media. However, with no universally accepted guidelines for their implementation, it’s hard to predict whether these measures will be effective at safeguarding individuals from malicious online attacks.

With so much uncertainty and so many variables, AI developers, technology experts and policymakers will need to collaborate, remain vigilant and continuously adapt to new challenges.

If we take our eye off the ball, the potential consequences could be devastating.

What next?

AGI continues to divide opinion. For some, it signifies an exhilarating advancement with the potential to positively transform our lives whereas, for others, it evokes fear of the unknown. 

As we stand on the precipice of AGI's potential, the future remains uncertain. 

Only time will reveal whether AGI becomes a tangible reality or remains a concept that continues to fuel discussions and shape the trajectory of AI development.

Final thoughts from our CEO

We spoke to Newicon CEO and AI enthusiast, Steve O’Brien, to get his take on the topic.

What are your thoughts on the feasibility of AGI? Do you think it’s likely to be achieved within our lifetime? 

Yes, however, I don’t think it’ll be a sudden awakening - more of an improvement over time. 

It’s a bit like with our smartphones. We will suddenly look back and wonder, “how could we live without it?” 

Also, it’s likely that we’ll see many types of intelligence. I’d argue that some machine intelligence is far better in narrow fields and far worse in others. 

Do you think we have a hard time understanding intelligence?

I think, generally, our collective understanding of intelligence is very naive. The things computers do with no trouble are often what our society thinks of as intelligence - maths, computation, logic, analysis of large data sets etc. But, actually, navigating the messy reality and having a stable model and representation (the things we don't have to think about) are some of the hardest challenges. Our collective understanding of intelligence, what it is, and how it works, is due to change in a most radical way.

What other challenges do you think we’ll face when trying to replicate human intelligence?

For starters, the scale of the human brain is impressive. We have around 80 billion neurons, each with roughly 10,000 synapses. Each of those neurons is a mini complex machine in its own right AND the brain can create more of them. They grow and form new connections. They also have interesting behaviours and compete with each other.
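Taking those figures at face value, a quick back-of-the-envelope check shows just how large that scale is:

```python
NEURONS = 80_000_000_000        # ~80 billion neurons, as quoted above
SYNAPSES_EACH = 10_000          # rough synapses per neuron, as quoted above

print(f"{NEURONS * SYNAPSES_EACH:.0e} connections")  # ~8e+14: close to a quadrillion
```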

The hardest thing, however, might be changing our current AI paradigm. Right now, it focuses on very different intelligences which aren’t based on how humans learn.

For example, the brain forms new connections as a way of learning, whereas an AI model strengthens existing connections. A synapse either works or it does not - there is no range of values, as there is in modern AI.

The challenge is to produce an AI that works based on principles discovered in the brain.

It might not perform well on modern AI test benchmarks, but our notion of what intelligence is needs to be challenged at the computational level.  Having testable theories of how the structure of our brains creates human cognition is paramount. 

Interestingly, recent research suggests that the brain uses the same mechanisms for moving our body as it does for thought. It’s argued that speech, mathematics, and abstract concepts like economics, are all versions of movement - moving through an abstract or virtual space.  And our brains use reference frames to represent and store knowledge. This means motion itself is intelligence. 

So, as with most things, I think reality will surprise us. Human intelligence is very different to current machine intelligence but both will be incredibly valuable and augment each other.

Do you think it’s possible to create conscious machines?

Yep - we are conscious machines. Consciousness is easy to manipulate and fool. We can turn it off and on and it can go wrong in fascinating ways. Think “The Man Who Mistook His Wife for a Hat”.

In fact, I'd recommend a few books: “Being You” by Anil Seth, “A Thousand Brains” by Jeff Hawkins, and “Livewired” by David Eagleman, to name a few.

The interesting thing is, if we were to build a silicon brain, theoretically, it would run a lot faster. 

Neural signals are fast (about 100-300 m/s), which means thought and reaction travel at roughly the speed of sound.

It makes sense to me that that would be useful from an evolutionary standpoint (although that's hard to prove). But this is a messy system using chemical gates that open and close. In pure silicon, signal speeds can become staggeringly fast (approaching 200,000,000 m/s, conservatively), so I don't feel brains have capitalised on the theoretical maximum compute speeds.

A silicon brain could potentially gain around 600,000 seconds of additional thinking time for every real second that passes - roughly seven days' worth of thought per second. It's why AI people often talk about how, for the advanced AIs of the distant future, speaking to humans could feel like talking to a tree.
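Those figures can be sanity-checked in a couple of lines. This is a very rough model that considers raw signal speed alone, which is exactly the naive assumption Steve flags below:

```python
NEURAL_SPEED = 300              # m/s, the upper figure quoted above
SILICON_SPEED = 200_000_000     # m/s, the conservative figure quoted above

extra = SILICON_SPEED / NEURAL_SPEED - 1      # extra "thinking seconds" per real second
print(f"{extra:,.0f} extra seconds")          # ≈ 666,666 - same order as the ~600,000 above
print(f"{extra / 86_400:.1f} days of thought per second")   # ≈ 7.7 days
```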

That said, we have a huge amount to learn from brains and this is a fairly naive assumption. 

There are many other factors that are inherent to our consciousness and intelligence. 

We know that there are a large number of privacy concerns associated with the development of AGI. How do you feel we should go about tackling them?

It’s hard to say. I think we will have to iterate through this one! I suppose it might be similar to when the internet first came about and people would mindlessly believe what they read online because publishing content was reasonably hard.  

I think we are a little less gullible now but we still haven’t fully recovered. I guess we could start using biosignatures on our content - that would solve the password problem!  

Web3 might also be able to help us here. If we were to switch over to Web3, we could theoretically prove authorship but the energy costs would be huge. With that being said, most new technology starts off very inefficient until optimised and I believe we have a few smart people launching fusion energy companies so let’s invest in that!
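As a purely illustrative aside: the core mechanism behind proving authorship already exists in public-key cryptography, with or without Web3. Here is a minimal, hypothetical sketch using the third-party Python cryptography package (not anything Newicon has built): an author signs their content with a private key, and anyone holding the matching public key can verify that it hasn't been forged or tampered with.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The author generates a key pair once; the public key is shared openly.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

content = b"An article about AGI ..."
signature = private_key.sign(content)   # the author signs their content

# Anyone can verify authorship: this raises InvalidSignature if the
# content was altered or was signed with someone else's key.
public_key.verify(signature, content)
```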


Steve has written about the topic in more depth on LinkedIn:
Crazy rants to questions on AGI

