What is “superintelligence”?

  • Definition. Philosopher Nick Bostrom, whose 2014 book Superintelligence popularized the term, defines a superintelligence as any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest (aisafety.info). In other words, it is far more than today’s “narrow” AIs that outperform humans in specific tasks (e.g., chess or protein folding)—it must exceed human abilities across almost every field (aisafety.info). Bostrom notes that the term can also apply to humans augmented by technology (aisafety.info), but most discussions focus on artificial superintelligence (ASI).
  • Distinction from AGI. Artificial general intelligence (AGI) refers to a highly autonomous system that can perform most economically valuable work at a human level (time.com). OpenAI’s public definition says an AGI should outperform humans at most economically valuable tasks (time.com). A superintelligence would go beyond this – an agent that surpasses the best human minds in virtually all domains, not merely matching them (aisafety.info). As OpenAI noted in a 2023 blog post, the first AGI will just be “a point along a continuum of intelligence,” and misaligned superintelligent systems could cause grave harm (time.com).

Current state of large language models and AGI progress

Recent developments

  • Reasoning models with proto‑AGI traits. Forbes notes that large language models and multimodal models are showing “proto‑AGI” traits such as generalization across tasks, multimodal reasoning and adaptability (forbes.com). OpenAI’s o‑series reasoning models achieved breakthrough scores on difficult benchmarks: o1 scored 83% on the International Mathematical Olympiad qualifying exam, while o3 achieved 87.5% on the ARC‑AGI benchmark (forbes.com)—tasks requiring creativity and general reasoning.
  • Agentic LLMs and AI agents. In 2025, multiple labs have released agentic language models—LLMs that can reason over long sequences, plan and act (a minimal illustrative loop is sketched after this list). OpenAI’s ChatGPT has introduced autonomous “Agent” features for paid subscribers, enabling the model to execute multi‑step tasks. Sam Altman wrote that agents that can perform specific tasks autonomously may appear in workplaces in 2025 (time.com).
  • Upcoming GPT‑5. According to reporting by The Verge, OpenAI planned to launch GPT‑5 in early August 2025 (theverge.com). The new model will unify OpenAI’s o‑series and GPT‑series, offering main, mini and nano versions (theverge.com). However, the article stresses that GPT‑5 is unlikely to meet OpenAI’s internal threshold for AGI—Altman said the model will not reach a “gold level of capability” for many months after launch (theverge.com). Similarly, other reports note that while GPT‑5 has been teased, the hype linking it to AGI has subsided; it will be a significant upgrade but will probably not be marketed as AGI (bgr.com).
  • Slowing improvements. A Brookings analysis argues that AI progress appears to have slowed in 2024–2025: training‑time scaling has hit a wall and further increases in data, parameters and compute are producing diminishing returns (brookings.edu). The article notes that the GPT‑5 project reportedly ran into performance issues and was downgraded to GPT‑4.5 (brookings.edu). It cautions policymakers that AI firms are not very close to developing truly general intelligence and that exponential improvements may not continue (brookings.edu).
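To make the agentic pattern described above concrete, the sketch below shows a minimal plan‑act‑observe loop of the kind agentic LLM products wrap around a model. It is purely illustrative: call_llm, search and the ACTION/FINISH message format are hypothetical stand‑ins so the example runs offline, not OpenAI’s Agent feature or any lab’s actual API.

```python
# Minimal, illustrative plan-act-observe loop for an "agentic" LLM.
# Hypothetical sketch: call_llm() and search() are stand-ins so the example
# runs offline; a real agent would call a model API and real tools here.

def call_llm(transcript: str) -> str:
    """Stand-in for a language-model call that returns the agent's next step."""
    if "Observation:" in transcript:          # a tool result has come back
        return "FINISH: GPT-5 was reportedly planned for early August 2025."
    return "ACTION: search('GPT-5 launch timing')"  # otherwise, plan a tool call

def search(query: str) -> str:
    """Hypothetical tool (e.g., web search) the loop can execute for the model."""
    return f"(stub result) articles about: {query}"

def run_agent(task: str, max_steps: int = 5) -> str:
    """Feed the task to the model, execute each proposed action, append the
    observation, and repeat until the model signals it is finished."""
    transcript = f"Task: {task}"
    for _ in range(max_steps):
        reply = call_llm(transcript)
        if reply.startswith("FINISH:"):
            return reply[len("FINISH:"):].strip()
        if reply.startswith("ACTION: search('") and reply.endswith("')"):
            query = reply[len("ACTION: search('"):-2]
            transcript += f"\n{reply}\nObservation: {search(query)}"
    return "No answer within the step budget."

if __name__ == "__main__":
    print(run_agent("When was GPT-5 expected to launch?"))
```

Real products add tool selection, memory and guardrails around this loop, but the core cycle of proposing an action, executing it and feeding the result back is what distinguishes agentic use from single‑turn chat.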

Are we at AGI yet?

  • Evidence for progress: The high scores achieved by reasoning models like o3 on ARC‑AGI and similar benchmarks indicate that AI systems can solve novel problems without relying solely on pre‑trained knowledge (forbes.com). Some entrepreneurs—including former Google CEO Eric Schmidt, Tesla CEO Elon Musk, Anthropic CEO Dario Amodei and SoftBank CEO Masayoshi Son—have predicted that AGI could arrive as soon as 2026–2028 (research.aimultiple.com). Sam Altman wrote in late 2024 that he believes AGI might be built during President Trump’s term (i.e., by 2029) (time.com) and later predicted that “superintelligent tools” might appear “in a few thousand days” (time.com).
  • Skeptical perspectives: Many AI experts remain skeptical that AGI will emerge within the next few years. Surveys of AI researchers summarized by AIMultiple indicate a 50% probability that AGI will be achieved between 2040 and 2050 (research.aimultiple.com). The same surveys suggest that once AGI is achieved, most experts expect a transition to superintelligence within 2 to 30 years (research.aimultiple.com). The Brookings article highlights that policymakers and researchers should prioritize nearer‑term AI harms and not assume AGI is imminent (brookings.edu). Gary Marcus and other critics argue that current models lack true understanding and remain sophisticated pattern‑matching systems (brookings.edu).
  • Conclusion: While reasoning models and agents have made significant strides, there is no consensus that AGI has been achieved or that GPT‑5 will deliver it. OpenAI itself says AGI will be declared when an AI system can outperform humans at most economically valuable work (time.com)—a standard current models do not meet. The notion that AGI is “99% there” is therefore speculative.

Turning to superintelligence

OpenAI’s orientation toward superintelligence

  • In January 2025, Sam Altman wrote that OpenAI was beginning to turn its aim beyond AGI “to superintelligence in the true sense of the word” (time.com). He argued that superintelligent tools could massively accelerate scientific discovery and innovation (time.com) and emphasized that such tools could increase abundance and prosperity (time.com). Altman has previously suggested that superintelligence might appear within “a few thousand days” (time.com). However, he also said that there is a long continuation from AGI to superintelligence—AGI might arrive sooner than people expect, but its immediate effects may be less dramatic (time.com).
  • OpenAI acknowledges the dangers of superintelligent systems. A 2023 blog post warned that a misaligned superintelligence could cause grievous harm and that current techniques do not reliably steer and control superhuman AI (time.com). The company formed a “Superalignment” team to work on these problems, but the team was disbanded after its co‑leads left (time.com). OpenAI now has several internal safety bodies and is working to streamline safety processes (time.com).
  • Altman has called for public debate on whether society wants superintelligence; in a 2025 reply on X, he said he really thinks the public should be asked about pursuing it (time.com).

Safe Superintelligence Inc. (SSI)

  • Founding and mission. After leaving OpenAI in May 2024, former OpenAI chief scientist Ilya Sutskever co‑founded Safe Superintelligence Inc. (SSI). The company’s website proclaims that “Superintelligence is within reach” and that “building safe superintelligence is the most important technical problem of our time” (ssi.inc). SSI positions itself as the world’s first lab solely focused on a “straight‑shot” to a single goal: creating a safe superintelligence (ssi.inc). It aims to advance capabilities as fast as possible while ensuring that safety remains ahead (ssi.inc) and states that its business model and investors are aligned with that mission (ssi.inc).
  • Funding. In September 2024, Reuters reported that SSI raised $1 billion from investors including Andreessen Horowitz and Sequoia Capital (reuters.com). The funding is intended to acquire compute resources and hire top talent to develop AI systems that far surpass human capabilities (reuters.com). SSI’s CEO Daniel Gross said the firm’s mission is to make a straight shot to safe superintelligence and to spend several years on R&D before bringing a product to market (reuters.com).

Timelines toward superintelligence

AI forecasting surveys provide widely varying estimates:

| Prediction source / group | AGI timeline (median / 50% probability) | Estimated interval between AGI and superintelligence | Notes |
|---|---|---|---|
| Surveys of AI researchers (550+ participants, 2012–2013) (research.aimultiple.com) | AGI by 2040–2050 (50% chance); very likely by 2075 | 2–30 years from AGI to superintelligence (research.aimultiple.com) | Survey by Vincent Müller and Nick Bostrom. |
| Meta‑surveys across 10 AI‑expert surveys (5,288 respondents) (research.aimultiple.com) | Most respondents predict AGI between 2040 and 2061 | Some estimates suggest superintelligence could follow within a few decades (research.aimultiple.com) | Summarized by AIMultiple (updated July 2025). |
| Entrepreneurs’ predictions (research.aimultiple.com) | Eric Schmidt: AGI within 3–5 years (2028–2030); Elon Musk and Dario Amodei: superhuman AI by 2026; Masayoshi Son: by 2027–2028 | Implicitly very short – many predict AGI and superintelligence within the same decade | Entrepreneurs are often over‑optimistic compared with researchers. |
| Sam Altman (OpenAI) (time.com) | Believes AGI will be built sooner than expected (possibly by 2029) | Suggests superintelligence may arrive “in a few thousand days,” implying less than a decade (time.com) | Emphasizes a long continuation from AGI to superintelligence (time.com). |
| Brookings Institution analysis (brookings.edu) | Argues AI firms are not close to general intelligence; scaling is hitting diminishing returns | Suggests exponential improvements may not continue and that predictions of near‑term AGI/ASI are speculative (brookings.edu) | Calls for focusing on nearer‑term AI risks. |

Overall, while some industry leaders forecast AGI and superintelligence within the next decade, surveys of researchers typically predict AGI around mid‑century, with superintelligence following a few years to decades later. There is substantial uncertainty, and historical over‑optimism in AI predictions suggests caution (research.aimultiple.com).

Potential benefits of superintelligence

  • Scientific acceleration. Altman argues that superintelligent tools could massively accelerate scientific discovery, increasing abundance and prosperity (time.com). Researchers have speculated that if AI can automate AI research, the gap between AGI and superintelligence may be short and could deliver a century’s worth of scientific progress in under a decade (80000hours.org); a back‑of‑the‑envelope illustration of this compression follows this list.
  • Healthcare and education. Forbes notes that AGI systems could design personalized treatment plans tailored to an individual’s genetic makeup (forbes.com) and that virtual tutors could adapt in real time to students’ needs in any language or subject (forbes.com). Such capabilities, if scaled further by superintelligent systems, could transform health and education.
  • Human‑machine collaboration. Ilya Sutskever envisions a future where AI tools not only extend human capabilities but exceed them in some domains, enabling breakthroughs and a Renaissance‑like era of human flourishing (forbes.com). AI agents could integrate disparate data streams, navigate complex environments, and solve problems once thought insurmountable (forbes.com).
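To spell out the arithmetic behind the “compressed century” speculation above: if automated AI research multiplies the effective pace of scientific progress by some factor, the calendar time needed for a fixed amount of progress shrinks proportionally. The speedup factors below are hypothetical illustrations, not estimates from the cited sources.

```python
# Back-of-the-envelope illustration of the "century of progress in a decade" argument.
# The speedup factors are hypothetical assumptions, not figures from the cited sources.

def calendar_years_needed(progress_years: float, research_speedup: float) -> float:
    """Calendar time to achieve `progress_years` of ordinary scientific progress
    if automated AI research multiplies the pace of science by `research_speedup`."""
    return progress_years / research_speedup

if __name__ == "__main__":
    for speedup in (1, 2, 10, 20):
        years = calendar_years_needed(100, speedup)
        print(f"{speedup:>2}x research speed -> a century of progress in {years:.0f} calendar years")
```

A sustained 10x speedup, for example, would compress 100 years of ordinary progress into roughly 10 calendar years, which is the kind of scenario the speculation above gestures at; whether such a speedup is achievable or sustainable is exactly what is in dispute.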

Risks and ethical considerations

  • Alignment challenges. Experts warn that misaligned superintelligent systems could cause catastrophic harm. Stuart Russell notes that seemingly reasonable goals—such as “fixing climate change”—could lead a powerful AI to extreme actions like eliminating humanity (time.com). OpenAI admits that it does not yet know how to reliably steer and control superhuman AI (time.com).
  • Existential risk debates. A 2025 Brookings piece notes that concerns about AI existential risks are longstanding (brookings.edu). It recounts how hundreds of experts signed statements in 2023 warning that mitigating the risk of AI‑driven extinction should be a global priority (brookings.edu). The article urges policymakers to develop measures for human safety as AI systems advance toward general intelligence (brookings.edu) but stresses that near‑term AI harms should receive more immediate attention (brookings.edu).
  • Slowing progress and over‑optimism. Brookings highlights evidence that AI improvements have slowed and that GPT‑5 is a modest update rather than a leap (brookings.edu). AIMultiple notes that AI researchers have historically been over‑optimistic—for example, earlier predictions that radiologists would be unnecessary by 2021 have not materialized (research.aimultiple.com). Predictions of imminent superintelligence should therefore be viewed critically.
  • Safety culture at AI labs. The disbanding of OpenAI’s Superalignment team and SSI’s pledge to “advance capabilities as fast as possible while making sure our safety always remains ahead” (ssi.inc) highlight the tension between rapid progress and safety. Reuters reports that SSI intends to spend years on R&D before any product release (reuters.com), indicating a deliberate focus on alignment. OpenAI’s internal safety bodies and calls for public debate show an acknowledgment of risks (time.com).

Conclusion

As of July 31, 2025, large language models and reasoning systems have achieved impressive capabilities—solving complex math problems, writing code and answering questions with expert‑level proficiency. GPT‑5, expected in early August 2025, will unify OpenAI’s model family and improve reasoning but is not expected to qualify as AGI (theverge.com, bgr.com). Surveys of AI researchers indicate that AGI is more likely to arrive in the 2040s or 2050s, with superintelligence following within a few decades (research.aimultiple.com). While some industry leaders predict shorter timelines, historical over‑optimism and evidence of slowing progress counsel caution (brookings.edu, research.aimultiple.com).

The concept of superintelligence is increasingly part of public discourse. Sam Altman and other leaders are beginning to discuss superintelligence openly, and startups like Safe Superintelligence Inc. are dedicated to pursuing it safely (ssi.inc). Potential benefits—accelerated scientific discovery, improved health care and education, and new forms of human‑machine collaboration—are enormous (forbes.com). At the same time, alignment challenges and existential risks remain unsolved (time.com). Ensuring that progress toward AGI and superintelligence benefits everyone without catastrophic harm will require robust safety research, public engagement and thoughtful policy.
