Key Takeaways:
- The 2026 Consumer Electronics Show (CES) featured keynotes from Nvidia’s Jensen Huang and AMD’s Lisa Su, who offered differing visions for the future of artificial intelligence (AI)
- Huang emphasized the importance of "physical AI" and the development of AI "factories" that can perceive, reason, and act in the physical world
- Su focused on the need for massive scale and flexibility in computing infrastructure to support the growth of AI workloads
- Both executives agreed on the importance of pushing intelligence closer to the edge, where data is generated
- The future of AI will depend on the development of infrastructure that can turn compute into deployed intelligence
Introduction to the Future of AI
The 2026 Consumer Electronics Show in Las Vegas provided a platform for two of the world’s most influential chip executives, Jensen Huang and Lisa Su, to share their visions for the future of artificial intelligence (AI). Huang, the founder and CEO of Nvidia, and Su, the chair and CEO of AMD, presented sharply different roadmaps for AI’s next phase. According to Huang, AI has moved beyond software models running in data centers and into a new phase where systems must "perceive, reason and act in the physical world." This shift, he argued, is a structural change for the industry rather than an incremental improvement.
Physical AI and the Rise of AI Factories
Huang’s keynote emphasized the importance of "physical AI" and the development of AI "factories" that can produce intelligence at industrial scale. He highlighted Nvidia’s autonomous driving software and robotics platforms, arguing that AI systems trained in simulation can now generalize to complex, real-world scenarios. As Huang noted, "The ChatGPT moment for physical AI is here," referring to the point at which machines begin to understand real-world environments and operate within them. He also emphasized the importance of digital twins, which allow companies to train AI systems faster and deploy them more safely. By creating simulated replicas of factories, vehicles, and infrastructure, companies can move AI closer to the edge, where it can interact directly with physical environments.
Push for Compute at Unprecedented Scale
In contrast, Su’s keynote focused on the rapid growth in AI workloads and the scale of computing power required to sustain that trajectory. She warned that future AI systems could require levels of compute far beyond today’s supercomputers, using the term "yottaflop" to describe the aggregate compute capacity AI could demand over time. A yottaflop, she explained, refers to one septillion floating-point operations per second, or 10²⁴ calculations per second. Su presented AMD’s CPUs, GPUs, and adaptive silicon as modular infrastructure that customers can tune across data centers, PCs, and embedded systems. She also addressed energy constraints directly, noting that advances in performance per watt will determine how quickly the industry can scale, particularly as more AI workloads move closer to users and devices.
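To put the yottaflop figure in perspective, the following sketch uses standard SI prefixes to compare it against an exaflop-class supercomputer (the prefix values are definitional; the comparison is illustrative, not a vendor benchmark):

```python
# SI prefixes for floating-point operations per second (FLOPS).
FLOP_PREFIXES = {
    "gigaflop": 1e9,
    "teraflop": 1e12,
    "petaflop": 1e15,
    "exaflop": 1e18,
    "zettaflop": 1e21,
    "yottaflop": 1e24,  # one septillion (10^24) operations per second
}

# How many exaflop-class supercomputers would it take to reach
# one yottaflop of aggregate compute?
machines_needed = FLOP_PREFIXES["yottaflop"] / FLOP_PREFIXES["exaflop"]
print(f"1 yottaflop = {machines_needed:,.0f} exaflop-class machines")
# prints "1 yottaflop = 1,000,000 exaflop-class machines"
```

In other words, a yottaflop is a million times the throughput of today's fastest exascale systems, which is why Su framed it as an aggregate, industry-wide capacity rather than a single machine.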
Different Routes Toward the Edge
Despite their differing emphases, both Huang and Su converged on a shared argument: AI’s next growth phase depends on pushing intelligence closer to where data is generated. Su underscored the scale of the challenge by asking her audience, "How many of you know what a yottaflop is?" Huang, for his part, framed the same shift around AI "factories" that produce intelligence at industrial scale. Both executives made clear that AI’s future will not hinge on a single model or breakthrough, but on how effectively the industry builds and distributes the infrastructure that turns compute into deployed intelligence.
The Future of AI Infrastructure
The keynotes from Huang and Su underscored a common requirement: infrastructure capable of supporting rapidly growing AI workloads. As AI models grow larger and more complex, they will demand massive amounts of computing power and energy. Modular infrastructure, such as AMD’s CPUs, GPUs, and adaptive silicon, will be critical to meeting that demand, while AI "factories" that produce intelligence at industrial scale will be essential for deploying AI in real-world scenarios. As Huang noted, "Enterprises will install complete AI production stacks rather than assembling infrastructure component by component." This shift toward integrated systems will require significant investment and carries major implications for the future of AI.
Conclusion
The keynotes from Huang and Su at CES 2026 highlighted two distinct routes toward AI’s next phase. Huang emphasized "physical AI" and the development of AI "factories," while Su stressed the need for massive scale and flexibility in computing infrastructure. Both agreed, however, that AI’s future depends on pushing intelligence closer to the edge, where data is generated, and on building infrastructure that turns compute into deployed intelligence.
