Navigating the Labyrinth of Artificial General Intelligence (AGI): How Frankenstein’s Tale Speaks to Modern Science
In the ever-evolving landscape of technology, we stand at a crossroads, gazing into the vast potential and uncertainty of artificial intelligence (AI). As we ponder the future, the age-old tales of human ambition and its consequences echo in our collective consciousness, reminding us of the lessons from the past.
Mary Shelley’s “Frankenstein”, for example, serves as a poignant allegory for our times. The novel paints a haunting portrait of a creator grappling with the unintended ramifications of his creation. Today, as we delve deeper into the realms of AI, the classic tale resonates anew, confronting us with the possibility of machines that could eclipse human intelligence.
The Emergence of “God-like” AI
In the AI sphere, there’s growing anticipation around the advent of artificial general intelligence (AGI), often dubbed “God-like” AI. Ian Hogarth, the chair of the UK taskforce on AI safety, expressed concerns about AGI, describing it as an AI system that could perform tasks at or above human level and could evade our control. Max Tegmark, the MIT physicist and president of the Future of Life Institute, further highlights the urgency, stating:
“A lot of people here think that we’re going to get to God-like artificial general intelligence in maybe three years. Some think maybe two years.”
Skeptics and Their Reservations
However, not everyone is convinced of this impending reality. Some voices in the industry argue that the clamour over AGI is a cynical ploy to regulate the AI market, benefiting giants like OpenAI, Google, and Microsoft. The Distributed AI Research Institute points out that the focus on existential risks overshadows immediate AI concerns, such as copyright infringements and the exploitation of low-paid workers. William Dally, chief scientist at AI chipmaker Nvidia, dismisses the idea of uncontrollable AGI, asserting:
“Uncontrollable artificial general intelligence is science fiction and not reality.”
The True Peril: Unbridled Competence
For AGI proponents, the concerns are tangible and multifaceted. Connor Leahy, CEO of AI safety research company Conjecture, distills the issue:
“The deep issue with AGI is not that it’s evil or has a specifically dangerous aspect that you need to take out. It’s the fact that it is competent. If you cannot control a competent, human-level AI then it is by definition dangerous.”
Immediate AI Concerns on the Horizon
While AGI discussions take center stage, other imminent AI threats loom. The UK government, for instance, is apprehensive about AI models being weaponized by bad actors to develop threats such as bioweapons. The proliferation of open-source AI, with its freely modifiable models, is another pressing concern.
A Global Spotlight on AI’s Potential Ramifications
As global leaders gear up for the AI safety summit at Bletchley Park, the message is unequivocal: the potential perils of AI, both immediate and in the distant future, demand our undivided attention. Much like Victor Frankenstein’s belated realization, we too must recognize and address the implications of our creations.