AI Pioneer Pushes Back Predicted Date for Potential Human Extinction


Key Takeaways:

  • A leading artificial intelligence expert, Daniel Kokotajlo, has revised his timeline for AI doom, predicting it will take longer for AI systems to code autonomously and develop into superintelligence.
  • The initial scenario, AI 2027, envisioned AI development leading to the creation of a superintelligence that destroys humanity, but Kokotajlo now believes this is unlikely to happen by 2027.
  • Experts are pushing their timelines for transformative artificial intelligence further out, citing the complexity of real-world problems and the limitations of current AI systems.
  • The term "artificial general intelligence" (AGI) is being reevaluated, with some experts arguing it is no longer a meaningful concept.
  • The development of AI systems that can do AI research is still a goal of leading AI companies, but the timeline for achieving this is uncertain.

Introduction to AI 2027
Whether artificial intelligence (AI) could surpass human intelligence and become a superintelligence has been a subject of intense debate in recent years. One prominent voice in that debate, Daniel Kokotajlo, a former OpenAI employee, released a scenario in April called AI 2027, which envisioned AI development culminating in a superintelligence that destroys humanity. The scenario proved divisive: some experts dismissed it as a "work of fiction", while others cited it in discussions of the US's artificial intelligence arms race with China. As Kokotajlo now acknowledges, "Our timelines … are a bit longer still," indicating that superintelligence may take longer to arrive than the scenario predicted.

Revising the Timeline
Kokotajlo and his team initially predicted that AI would achieve "fully autonomous coding" by 2027, but they have now pushed that milestone into the early 2030s. The new forecast sets 2034 as the horizon for "superintelligence" and offers no estimate of when AI might destroy humanity. As Kokotajlo wrote, "Things seem to be going somewhat slower than the AI 2027 scenario. Our timelines were longer than 2027 when we published and now they are a bit longer still." The revision is significant: it suggests superintelligence may not be as imminent as some experts had predicted.

The Complexity of Real-World Problems
Experts are pushing their timelines for transformative artificial intelligence further out, citing the complexity of real-world problems and the limitations of current AI systems. As Malcolm Murray, an AI risk management expert, noted, "A lot of other people have been pushing their timelines further out in the past year, as they realise how jagged AI performance is." Henry Papadatos, the executive director of the French AI nonprofit SaferAI, added, "The term AGI made sense from far away, when AI systems were very narrow – playing chess, and playing Go. Now we have systems that are quite general already and the term does not mean as much." In other words, as AI systems have become more general, the milestone of "general intelligence" has become harder to define, let alone to date.

The Limitations of Current AI Systems
Building AI systems that can themselves do AI research remains a goal of leading AI companies, but the timeline for achieving it is uncertain. As Sam Altman, the CEO of OpenAI, noted, "Having an automated AI researcher by March 2028 is an internal goal of our company, but we may totally fail at this goal." Andrea Castagna, a Brussels-based AI policy researcher, added, "The fact that you have a superintelligent computer focused on military activity doesn’t mean you can integrate it into the strategic documents we have compiled for the last 20 years." Even a highly capable system, in other words, would still face the slow work of being absorbed into existing institutions and strategy.

The Future of AI Development
The revision of the AI 2027 scenario, together with the broader lengthening of timelines for transformative AI, suggests the path to superintelligence is less predictable than early forecasts assumed. As Kokotajlo noted, "The world is a lot more complicated than that." Automating AI research is still an explicit aim of leading AI companies, even if no one can say when it will be reached. As the field evolves, so will our understanding of AI's risks and benefits, and the emergence of superintelligence, if it comes at all, will be shaped by a complex interplay of technical, social, and economic factors.

https://www.theguardian.com/technology/2026/jan/06/leading-ai-expert-delays-timeline-possible-destruction-humanity
