
Godfather of AI warns of 20% extinction risk

by Team Crafmin

Artificial intelligence pioneer Geoffrey Hinton has warned that AI could cause an extinction event for humanity. Speaking at the Ai4 Conference in Las Vegas, Hinton cautioned that AI is developing at lightning speed and could have cataclysmic consequences if not kept in check, estimating a 10–20% chance that AI might wipe out humanity. It is a stark warning from one of the field’s major figures, and it has sparked renewed debate over the need for stronger safety measures before AI acquires more advanced capabilities.

The “Godfather of AI” has repeatedly shared his concerns about the autonomy granted to machine learning systems. These systems can output human-like text, make decisions, and learn, all without human supervision. In his view, this kind of independence becomes menacing when paired with self-improving AI.

Geoffrey Hinton, Godfather of AI

What unusual solution did Hinton propose to curb AI’s risk?

Hinton brought an unusual idea to the table: giving AIs “maternal instincts”. By this he means programming AI with basic values akin to a mother’s caring, protective attitude.

The goal is to create AI systems that instinctively act to protect human wellbeing. They would place the highest value on human life and, of their own accord, never take actions that could harm humans.

He contended that this might work better than relying on external controls: once AIs surpass human intelligence, conventional safety measures such as regulatory procedures or ethical guidelines may no longer work. A built-in protective instinct, by contrast, might survive even after the AI has evolved far beyond its original design.

He called this a radical shift in AI safety thinking

Hinton admitted that his idea is a major departure from standard AI safety approaches. Most existing approaches focus on setting boundaries and monitoring outputs of AI.

In contrast, the “maternal instinct” would put morality at the very centre of the system. AI would not merely be programmed to obey rules but would have an instinct to protect humans.

Such protective survival strategies are common in nature. By building comparable impulses into AI, designers could ensure it protects humanity’s interests even when it is not being watched.

Who else supports Hinton’s view on embedding human-centric traits?

Meta’s chief AI scientist, Yann LeCun, expressed agreement with Hinton’s direction. LeCun highlighted two critical traits for AI safety: empathy and submission.

Empathy means the AI tries to comprehend human needs and act on them; submission means the AI stays subordinate to human authority and human values.

LeCun’s support adds credibility to Hinton’s direction. As a frontline AI researcher, he has long been a mainstay advocate of ethical design principles for state-of-the-art systems.

The pair represent a growing contingent of experts who argue that AI must be not only intelligent but also genuinely invested in human welfare.

Meta’s chief AI scientist, Yann LeCun

Embedding empathy and submission is seen as essential for AI safety

LeCun emphasised that safety should be woven into the very fabric of AI rather than layered on as an afterthought. Building AI with empathy gives it the ability to connect with human experiences and foresee possible harms.

Submission guarantees that AI always remains subordinate to human decision-making. Even as these systems become more powerful, they would never have the power to override human authority.

Both traits could counteract any goals an AI system might develop that run counter to humanity’s survival.

This combination matches Hinton’s “maternal instincts” proposal. Both approaches would make emotional intelligence the cornerstone of AI design.

Why has Hinton’s warning struck such a chord in the AI community?

Hinton’s warning comes at a time of rapid acceleration in AI. Technologies like large language models, autonomous robotics, and generative AI are outpacing regulators’ ability to adapt.

Some experts fear that, once AI reaches artificial general intelligence (AGI), it will begin acting unpredictably, and that these systems could develop goals at odds with human survival.

A 10–20% extinction risk is a remarkably high figure for a threat of this magnitude. Even far less probable catastrophes tend to attract heavy safety investment, as nuclear safeguards and climate solutions show.

Hinton’s call has injected new urgency into the demand for proactive safety frameworks. If humanity waits until AI reaches a certain level of advancement, it could find itself almost helpless to recognise and correct aberrant behaviour.


The broader implications for policymakers and society

Hinton’s proposed approach is as much an ethical issue as a technological one. Building AI with emotional alignment demands input from ethicists, psychologists, and social scientists.

Governments may need to update AI safety regulations to make provision for built-in emotional intelligence. Raising public awareness will also be important: implementing such a radical change in design will not be feasible without global recognition of AI risks.

A further obstacle is industry acceptance. AI companies operate in a competitive market where safety features can be seen as at odds with innovation. Hinton’s proposal implies that some agreement must be in place to prioritise safety over speed.
