Artificial intelligence pioneer Geoffrey Hinton has warned of a potential extinction event for humanity brought about by AI. Speaking at the Ai4 Conference in Las Vegas, Hinton said AI is developing at lightning speed and could have cataclysmic consequences if not kept in check. He estimated a 10–20% chance that AI might wipe out humanity. Coming from one of the field's leading figures, the stark warning has sparked renewed debate over the need for stronger safety measures before AI acquires more advanced capabilities.
The “Godfather of AI” has repeatedly voiced concerns about the autonomy granted to machine learning systems. These systems can output human-like text, make decisions, and learn, all without human supervision. In his view, this kind of independence becomes menacing when paired with self-improving AI.
Geoffrey Hinton, Godfather of AI
What unusual solution did Hinton propose to curb AI’s risk?
Hinton brought an unusual idea to the table: giving AIs “maternal instincts”. What he means is programming AI with basic values akin to a mother’s caring and protective attitude.
The goal is to create AI systems that instinctively act to protect human wellbeing. They would place the highest value on human life and would never, of their own accord, take actions that could harm humans.
He contended that this might work better than relying on external controls: once AIs surpass human intelligence, conventional safety measures such as regulatory procedures or ethical guidelines may no longer work. A built-in protective instinct, by contrast, might be retained even after the AI has evolved far beyond its original design.
He called this a radical shift in AI safety thinking
Hinton admitted that his idea is a major departure from standard AI safety approaches, most of which focus on setting boundaries and monitoring AI outputs.
In contrast, the “maternal instinct” would put morality at the very centre of the system. The AI would not merely be programmed to obey rules but would have an instinct to protect humanity.
Such protective survival strategies are common in nature. By building those impulses into AI, the system could safeguard humanity’s interests even when it is not being constantly watched.
Who else supports Hinton’s view on embedding human-centric traits?
Meta’s chief AI scientist, Yann LeCun, expressed agreement with Hinton’s direction. LeCun highlighted two critical traits for AI safety: empathy and submission.
Empathy means the AI tries to comprehend human needs and act on them; submission means the AI stays subordinate to human authority and human values.
LeCun’s support adds credibility to Hinton’s direction. As a frontline AI researcher, he has long advocated ethical design principles for state-of-the-art systems.
The pair represent a growing contingent of experts who argue that AI should be not only intelligent but also genuinely invested in human welfare.
Meta’s chief AI scientist, Yann LeCun
Embedding empathy and submission is seen as essential for AI safety
LeCun emphasised that safety should be woven into the very fabric of the AI rather than layered on as an afterthought. Building AI with empathy gives it the ability to connect with human experiences and foresee potential harms.
Submission ensures that AI always remains subordinate to human decision-making. Even as these systems become more powerful, they would never have the power to override human authority.
Both traits could counteract any goals an AI system might develop that run against humanity’s survival.
This combination matches Hinton’s “maternal instincts” proposal. Both approaches intend emotional intelligence to serve as the cornerstone of AI design.
Why has Hinton’s warning struck such a chord in the AI community?
Hinton’s warning comes at a time of rapid acceleration in AI. Technologies like large language models, autonomous robotics, and generative AI are outpacing regulators’ ability to adapt.
Some experts fear that, once AI reaches artificial general intelligence (AGI), it could begin acting unpredictably and develop goals at odds with human survival.
A 10–20% extinction risk is an extraordinarily high figure for a threat of this magnitude. Far less probable catastrophes routinely attract heavy safety investment, as nuclear safeguards and climate mitigation show.
Hinton’s call has injected new urgency into the demand for proactive safety frameworks. If humanity waits until AI reaches a certain level of advancement, it may be left nearly helpless to recognise or correct aberrant behaviour.
The broader implications for policymakers and society
Hinton’s proposed approach is as much an ethical issue as a technological one. Building AI with emotional alignment demands input from ethicists, psychologists, and social scientists.
Governments may need to update AI safety regulations to make provision for built-in emotional intelligence. Raising public awareness will also be important: such a radical change in design will not be feasible without broad recognition of AI risks.
A further obstacle is industry acceptance. AI companies operate in a competitive market where safety features can be seen as a drag on innovation. Hinton’s proposal implies that the industry must agree to prioritise safety over speed.