‘Historian Predicts AI Will Surpass Humans Within Two Centuries’


Key Takeaways:

  • The world is misjudging the timescale of artificial intelligence (AI) and its potential impact on society
  • The real danger of AI lies not in its speed of development, but in how casually it’s being treated by decision-makers
  • The long-term effects of AI are impossible to grasp, even if its development were to stop today
  • Complacency and short-term thinking among powerful decision-makers are major concerns
  • AI’s social consequences will play out far beyond the current horizon, whether the world is ready or not

Introduction to the AI Conundrum
Yuval Noah Harari, the historian and author of "Sapiens: A Brief History of Humankind," has warned that the world is badly misjudging the timescale of artificial intelligence. Speaking at the World Economic Forum in Davos, Harari emphasized that the real danger lies not in how fast the technology is moving, but in how casually it’s being treated. As he put it, "A lot of the conversations here in Davos, when they say ‘long term’ they mean like two years. When I mean long term, I think 200 years." That gap in perspective, he argued, leaves decision-makers unprepared for AI’s real impact on society.

The Industrial Revolution Analogy
Harari compared the current moment in AI to the early days of the Industrial Revolution, arguing that humanity tends to misunderstand transformative technologies as they unfold. The deepest consequences of industrialization took generations to emerge, often through social, political, and geopolitical upheaval that no one could have predicted in advance. As Harari stated, "You can test for accidents. But you cannot test the geopolitical implications or the cultural implications of the steam engine in a laboratory. It’s the same with AI." On this analogy, AI’s deepest effects are likely to be equally far-reaching and equally impossible to foresee.

The Uncertainty of AI’s Long-Term Effects
Harari warned that even if AI development were to stop today, its long-term effects would still be impossible to grasp. "The stone has been thrown into the pool, but it just hit the water," he said. "We have no idea what waves have been created, even by the AIs that have been deployed a year or two ago." The ripples from systems already in the wild, in other words, have barely begun to spread. As Harari noted, "I’m mainly concerned about the lack of concern that we are creating the most powerful technology in human history."

The Complacency of Decision-Makers
Harari joins a growing number of senior AI researchers and tech leaders who have warned about AI’s risks, ranging from waves of job losses to human extinction. What worries him most, however, is not uncertainty but complacency: many of the most powerful decision-makers shaping AI are focused on short-term incentives rather than long-term consequences. As he stated, "Very smart and powerful people are worried about what their investors say in the next quarterly report. They think in terms of a few months, or a year or two."

The Need for Long-Term Thinking
AI’s social consequences, Harari said, will play out far beyond the current horizon, whether the world is ready or not. The remedy, as he sees it, is a change of timeframe: the world needs to start thinking in terms of centuries, rather than years or decades. Only on that scale, he argued, can societies hope to steer AI toward responsible and beneficial use.

Conclusion
Yuval Noah Harari’s warning is a reminder that the most consequential effects of AI will unfold over generations, not quarters. The stone, as he put it, has only just hit the water. Acknowledging the uncertainty of AI’s long-term effects, and resisting the complacency of short-term thinking, is the first step toward ensuring the technology is developed and used responsibly.

https://www.businessinsider.com/sapiens-author-ai-timeline-warning-lack-of-concern-2026-1
