NVIDIA CEO Jensen Huang Live GTC Paris Keynote at VivaTech 2025

Synthetic Minds -


NVIDIA CEO Jensen Huang announces major advances in AI technology, including new computing architectures, AI models, and partnerships aimed at breakthroughs in industries such as robotics, supercomputing, and autonomous vehicles.

 

Questions to inspire discussion

AI Computing and Infrastructure

🧠 Q: What is NVIDIA's CUDA-Q library and how does it enhance quantum computing?
A: NVIDIA's CUDA-Q library enables quantum-classical computing on GPUs, accelerating quantum algorithms alongside classical workloads such as deep neural networks and inference in AI factories.

🖥️ Q: How does NVIDIA's Grace Blackwell system function as a thinking machine?
A: Grace Blackwell is a thinking machine that reasons, plans, and generates tokens for agentic AI, enabling embodied robots to perform tasks like walking, reaching, and using tools.

🔗 Q: What are the capabilities of NVIDIA's NVLink interconnect?
A: NVLink links 144 Blackwell GPU dies across 72 GPU packages into one massive virtual GPU with 130 terabytes per second of all-to-all bandwidth, delivering 30-40 times the performance of the prior Hopper generation.
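The 130 TB/s figure squares with public per-GPU numbers. A quick arithmetic sanity check, assuming NVLink 5's published 1.8 TB/s of bandwidth per Blackwell GPU and the 72-GPU rack described above:

```python
# Rough sanity check on the NVLink spine figure, assuming NVLink 5's
# published 1.8 TB/s of bandwidth per Blackwell GPU and a 72-GPU rack.
per_gpu_tb_s = 1.8          # NVLink 5 bandwidth per GPU (TB/s)
num_gpus = 72               # GPU packages in one rack-scale system

aggregate_tb_s = per_gpu_tb_s * num_gpus
print(f"{aggregate_tb_s:.1f} TB/s aggregate")  # 129.6 TB/s, i.e. ~130 TB/s
```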

Digital Twins and Simulation

🌐 Q: How do NVIDIA's digital twins improve system design and operation?
A: Digital twins allow for complete digital design, planning, optimization, and operation of physical systems before deployment, enabling massive scalability and efficiency gains.
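At toy scale, the "optimize digitally before deployment" loop looks like this. Purely illustrative Python with hypothetical layout names and station rates; real digital twins, such as those built in Omniverse, use physically accurate 3D simulation:

```python
# Toy version of the digital-twin workflow: evaluate candidate factory
# layouts in simulation, deploy only the winner. Purely illustrative;
# the layout names and station rates below are made up.
def simulate_throughput(station_rates, hours=8.0):
    """A serial production line is limited by its slowest station (units/hour)."""
    return min(station_rates) * hours

candidates = {
    "layout_a": [12.0, 9.0, 14.0],   # hypothetical station rates (units/hour)
    "layout_b": [11.0, 11.0, 11.0],
}

best = max(candidates, key=lambda name: simulate_throughput(candidates[name]))
print(best, simulate_throughput(candidates[best]))  # layout_b 88.0
```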

🔬 Q: How does NVIDIA's CUDA-Q library facilitate quantum computing emulation?
A: CUDA-Q can emulate quantum computers on classical machines, allowing applications to run on quantum-classical accelerated computing before actual quantum hardware is available.
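The emulation idea can be illustrated in a few lines of plain Python: a statevector simulator applies gate matrices to a vector of complex amplitudes, which is what a quantum-classical stack does, at far larger scale and on GPUs, when no QPU is attached. This is an illustrative sketch, not CUDA-Q code:

```python
# Minimal illustration of emulating a quantum circuit on a classical
# machine. Not CUDA-Q code: a toy 2-qubit statevector simulator
# preparing a Bell state.
import math

def apply_h(state, q):
    """Hadamard gate on qubit q of a little-endian statevector."""
    h = 1 / math.sqrt(2)
    new = state[:]
    for i in range(len(state)):
        if (i >> q) & 1 == 0:            # visit each |..0..>/|..1..> pair once
            j = i | (1 << q)
            new[i] = h * (state[i] + state[j])
            new[j] = h * (state[i] - state[j])
    return new

def apply_cnot(state, control, target):
    """Flip `target` wherever `control` is 1 (a permutation of amplitudes)."""
    return [state[i ^ (1 << target)] if (i >> control) & 1 else state[i]
            for i in range(len(state))]

# Prepare a Bell state on 2 qubits: H on qubit 0, then CNOT(0 -> 1).
state = [1.0 + 0j, 0j, 0j, 0j]           # start in |00>
state = apply_h(state, 0)
state = apply_cnot(state, 0, 1)

probs = [abs(a) ** 2 for a in state]
print(probs)  # ~[0.5, 0.0, 0.0, 0.5]: only |00> and |11> are ever measured
```

The cost of this approach doubles with every added qubit, which is exactly why GPU acceleration matters for emulating useful circuit sizes.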

AI Model Deployment and Scaling

☁️ Q: What is NVIDIA's DGX Cloud Lepton and how does it simplify AI deployment?
A: DGX Cloud Lepton lets developers deploy AI models across multiple clouds (such as Lambda, AWS, GCP, Nebius, Yotta, and Scaleway) with one deployment and one model architecture, enabling universal deployment of AI models.

🌍 Q: How widespread is NVIDIA's AI architecture availability?
A: NVIDIA's AI architecture is available in every major cloud (AWS, GCP, Azure) and is the only AI architecture aside from x86 that is universally available.

AI Model Enhancement and Specialization

🚀 Q: How does NVIDIA's Nemotron effort improve open-source AI models?
A: Nemotron enhances models like Llama with post-training, neural architecture search, reinforcement learning, and reasoning capabilities, making them top performers on benchmarks.

🤖 Q: What are NVIDIA's Nemotron reasoning large language models used for?
A: Nemotron models are world-class and serve as the foundation for building specialized AI agents across tasks and industries.

AI Factory Design and Operation

🏭 Q: How does NVIDIA's Omniverse platform assist in AI factory design?
A: Omniverse allows designing, building, and operating AI factories digitally, including digital twins, to optimize their utilization and cost-effectiveness.

Language and Cultural AI

🌐 Q: What is the goal of NVIDIA's partnership with Perplexity?
A: The partnership aims to connect regional AI models to Perplexity's reasoning search engine, enabling users to ask questions and get answers in their native language, culture, and sensibility.

Cloud GPU Access and Training

☁️ Q: What does NVIDIA's DGX Cloud Lepton offer developers?
A: DGX Cloud Lepton provides on-demand access to a global network of GPUs across clouds, regions, and partners, letting developers scale up nodes quickly and start training on pre-integrated, training-ready infrastructure.

Autonomous Vehicles and Robotics

🚗 Q: How does NVIDIA's DRIVE platform enhance autonomous vehicle development?
A: NVIDIA DRIVE uses the Halos safety system to build safe AVs with diverse software stacks and sensors, starting from massive data-driven training and generating realistic synthetic data to cover edge cases.

🤖 Q: What is NVIDIA's Thor processor designed for?
A: Thor is a robotic computer for humanoid robots, containing sensors and a supercomputer chip for training and operating robots in virtual worlds before physical deployment.

Future AI Infrastructure

💡 Q: What is the purpose of NVIDIA's Blackwell system?
A: Blackwell is a thinking machine designed for reasoning and inference, powering new AI factories focused on generating tokens for the next wave of AI with exploding inference workloads.

🏭 Q: What is NVIDIA building in Europe?
A: NVIDIA is constructing the world's first industrial AI cloud in Europe for design, simulation, digital twins, and training robots for future applications like self-driving cars and humanoid robots.

 

Key Insights

AI Infrastructure and Computing Power

  1. 🖥️ NVIDIA's Grace Blackwell system is a giant virtual GPU with 1.2 million parts, 2 tons of components, manufactured in 150 factories with 200 technology partners and a $40 billion R&D budget.
  2. 🚀 The Grace Blackwell system delivers 30-40 times the performance of the Hopper generation, letting AI factories think faster and generate more revenue.
  3. 🔗 NVIDIA's NVLink system directly connects CPUs to GPUs in compute nodes, providing 130 terabytes per second of all-to-all bandwidth.
  4. 🌐 NVIDIA is partnering with 20 European countries to build indigenous AI infrastructure, including 20 AI factories and several gigawatt factories, increasing AI computing capacity by a factor of 10 in two years.

AI Model Development and Deployment

  1. 🧠 The Nemotron strategy involves post-training open-source AI models with better data, reinforcement learning, and reasoning capabilities to enhance performance.
  2. 🌍 NVIDIA's partnership with Perplexity will connect regional AI models to a reasoning search engine, enabling agentic AI agents in various languages and contexts.
  3. ☁️ DGX Cloud Lepton allows deployment of AI microservices across public, regional, and private clouds, enabling AI agents to run in different environments.
  4. 🏭 NVIDIA announced the world's first industrial AI cloud in Europe for design, simulation, virtual wind tunnels, digital factories, and robotics training in real-time.

AI Applications and Systems

  1. 🚗 NVIDIA's Halos safety system for autonomous vehicles ensures safe operation and emergency stops, using massive amounts of diverse data to cover edge cases.
  2. 🤖 The Thor devkit is a robotic computer with sensors and a supercomputer chip for humanoid AI that can learn from teaching using NeMo toolkits.
  3. 🏙️ NVIDIA's Omniverse platform creates digital twins of factories, warehouses, and train stations, enabling real-time design, simulation, and robotics training.
  4. 💻 The RTX Pro server is a new enterprise system that can run everything ever developed by NVIDIA, including AI, Omniverse, RTX for video games, and various operating systems.

AI Infrastructure Management and Development

  1. 🔄 The Lepton platform provides on-demand access to a global network of GPUs across clouds, regions, and partners with fast provisioning and real-time monitoring.
  2. 🛠️ NVIDIA's NeMo platform is a framework for the AI agent lifecycle: onboarding, fine-tuning, training, evaluation, and continuous improvement.
  3. 💰 AI data centers are now revenue-generating facilities that produce intelligent tokens through AI generation for various industries.

Technological Advancements

  1. 🧪 The Grace Blackwell system begins as a blank silicon wafer, with hundreds of chip processing steps building up 200 billion transistors layer by layer.
  2. ❄️ The system is fully liquid cooled with custom copper blocks to maintain optimal temperatures for the chips.
  3. 🌟 NVIDIA's systems are designed to enable geniuses to do their life's work, fueling discoveries and solutions that will shape our future.

 

#SyntheticMinds

XMentions: @HabitatsDigital @NVIDIA 

 

Clips

  • 00:00 🤖 NVIDIA CEO Jensen Huang announces advancements in AI, including token development, quantum computing, and a new wave of AI focused on reasoning and problem-solving, enabling robotics and supercomputing.
    • NVIDIA CEO Jensen Huang highlights the transformative power of tokens in AI, emphasizing their role in advancing technology, enhancing efficiency, and creating new opportunities across various fields.
    • NVIDIA has developed a suite of libraries, including CUDA-Q for quantum computing and cuDNN for deep neural networks, that accelerate applications and algorithms across domains, opening new opportunities in fields such as semiconductor design, AI, and medical imaging.
    • NVIDIA announces that its quantum algorithm stack is accelerated on GB200, enabling quantum-classical computing and collaboration between quantum processing units (QPUs) and GPUs for the next generation of supercomputers.
    • NVIDIA CEO Jensen Huang discusses the progression of AI in three waves: perception, generative AI, and a new wave focused on reasoning, planning, and problem-solving abilities that enable humans to apply learned rules to solve unfamiliar problems.
    • Agentic AI enables robotics and physical embodiment through generative motion, revolutionizing computer graphics and deep learning, all rooted in the advancements initiated by GeForce.
    • The speaker experiences discomfort and cramping due to heat.
  • 20:15 🤖 NVIDIA unveils new AI computing architecture, including the GB200 "thinking machine" and Blackwell node, with advanced interconnects and liquid cooling, to simulate and optimize the physical world.
    • Simulations enable the creation of digital twins for designing, planning, optimizing, and operating everything in the physical world digitally before implementation.
    • NVIDIA's new GB200 is a 2.5-ton, $3 million, 1.2-million-part "thinking machine" that functions as a single giant virtual GPU, composed of multiple components connected by low-power, high-efficiency interconnects.
    • NVIDIA's new Blackwell node, with 2 CPUs and 4 GPUs, outperforms the previous Hopper system, which cost around $500,000, and is fully liquid cooled with integrated CPUs and GPUs.
    • NVIDIA's new NVLink system connects compute nodes with a revolutionary interconnect, scaling out computing by extending memory semantics directly between chips, unlike traditional networks.
    • NVIDIA's new NVLink spine, a 100% copper coax interconnect, delivers 130 terabytes per second of bandwidth, more than peak global internet traffic, to connect the rack's GPU packages and Blackwell dies.
  • 27:02 🤖 NVIDIA unveils its new Blackwell architecture and AI supercomputer, achieving massive performance gains and scaling AI processing capabilities, enabling advanced AI models and applications.
    • NVIDIA's Blackwell architecture achieves 30-40 times more performance than Hopper, enabling advanced reasoning models that require massive computational capability to think, reflect, and generate multiple solutions.
    • Maximizing AI processing speed and factory output is crucial for increasing revenue in AI-driven operations.
    • NVIDIA's Grace Blackwell is built through a complex process involving 200 billion transistors, 32 Blackwell dies, and 128 HBM stacks assembled with custom liquid cooling and stress testing.
    • NVIDIA's new AI supercomputer, comprising 1.2 million components, 130 trillion transistors, and 2 miles of copper cable, achieves 130 terabytes per second of all-to-all bandwidth by connecting 144 GPU dies into one massive virtual GPU.
    • NVIDIA's Grace Blackwell systems, now produced at a rate of 1,000 systems a week, have reached unprecedented scale and performance, surpassing 2018's largest Volta-based system, the Sierra supercomputer.
    • NVIDIA offers a range of AI systems, including the Grace Blackwell system, with varying configurations to cater to different data center needs, from liquid-cooled systems to enterprise stacks compatible with traditional IT systems like Linux and VMware.
  • 36:43 🤖 NVIDIA unveils RTX Pro server, partners with European companies to build AI factories, revolutionizing industries and boosting AI computing capacity in Europe by a factor of 10 within two years.
    • NVIDIA unveiled the RTX Pro server, a new enterprise system that can run virtually any application, including AI, Omniverse, Windows, Linux, and video games, powered by eight Blackwell RTX Pro 6000 GPUs.
    • AI data centers, fundamentally different from traditional data centers, are designed to produce intelligent tokens, and should be viewed as revenue-generating factories rather than just storage facilities.
    • AI factories are now a crucial part of a country's infrastructure, driving a new industrial revolution that will transform every industry and become a growth manufacturing sector.
    • NVIDIA is partnering with European companies to build over 20 AI factories, increasing AI computing capacity in Europe by a factor of 10 within two years.
    • NVIDIA accelerates applications through partnerships with key software developers, and has reinvented the computing stack, transforming the way software is developed and integrated across various industries and regions.
    • NVIDIA has strong partnerships in the UK, Germany, Italy, and France.
  • 47:27 🤖 NVIDIA announces partnerships to enhance AI models, build a European AI cloud, and integrate regional language models, aiming to improve enterprise applications and user experiences.
    • NVIDIA partners with Mistral to build a sizable AI cloud in Europe, enabling delivery of AI models and applications for startups, utilizing digital twins and Omniverse for optimized operations.
    • NVIDIA enhances open-source AI models like Mistral, Llama, and others through post-training, neural architecture search, and reinforcement learning, an effort it calls Nemotron.
    • NVIDIA aims to enhance AI models with enormous context capabilities for enterprise applications, allowing users to download these capabilities as a NIM package from their website.
    • NVIDIA's Nemotron AI models can be improved tremendously through post-training, and will remain open and top-performing, with continuous updates and new generations.
    • NVIDIA is partnering with European model makers to adapt and enhance regional language models, emphasizing that data ownership and history belong to the people and companies, such as NVIDIA's 33 years of internal data.
    • NVIDIA partners with Perplexity to integrate regional AI models into Perplexity's search engine, enabling users to get questions answered in their local language and culture.
  • 54:38 🤖 NVIDIA CEO Jensen Huang announces the emergence of Agentic AI, a significant advancement in AI that enables agents to work together to solve complex problems using various tools and context.
    • Current AI systems have limitations, such as hallucinating, lacking access to up-to-date information, and struggling with reasoning, which critics have accurately pointed out.
    • NVIDIA's CEO Jensen Huang discusses the emergence of Agentic AI, a significant advancement over one-shot AI, enabled by integrating technologies like retrieval-augmented generation, multimodal understanding, and reinforcement learning.
    • NVIDIA's AI agents work together, using various tools and context from memory, to break down complex problems into multi-step plans and execute tasks, such as starting a food truck in Paris.
    • NVIDIA's Grace Blackwell is necessary for building high-performance agents that can generate vast amounts of tokens to solve complex problems, as exemplified by Perplexity's approach.
    • NVIDIA created a platform with tools and frameworks, including reasoning models and a multimodal search engine, to help build specialized AI agents for businesses.
    • NVIDIA's DGX Cloud and AI ops ecosystem enable deployment of AI microservices, composed of multiple specialized models, across various environments, including public, regional, and private clouds.
  • 01:02:58 🤖 NVIDIA CEO Jensen Huang announces AI model deployment across multiple clouds and platforms with a single process, enabling seamless training and inferencing with unified access to global GPUs.
    • NVIDIA's architecture allows deploying one AI model on multiple clouds, including AWS, GCP, and others, with a single deployment process.
    • NVIDIA's CEO Jensen Huang recounts how the company's first AI supercomputer, DGX-1, was initially met with confusion but found its first customer in OpenAI, teaching him to always say yes to developers in need of GPU resources.
    • NVIDIA's architecture, based on Grace Blackwell, enables seamless deployment of AI models across multiple clouds and platforms, including Lepton and Hugging Face, with one-click integration for training and inferencing.
    • NVIDIA's DGX Cloud Lepton provides on-demand access to a global network of GPUs across clouds and regions, enabling developers to quickly provision, train, and deploy AI models with a unified interface.
    • NVIDIA's NIM microservices package large language models and integrate with NeMo, a framework that manages the AI agent lifecycle, and are being adopted by various companies for diverse applications.
    • NVIDIA's Lepton enables deployment of AI models anywhere, including cloud, on-premises, and private cloud environments, across various platforms and virtual machines.
  • 01:10:50 🤖 NVIDIA CEO Jensen Huang announces major advancements in industrial AI, autonomous vehicles, and humanoid robotics, with new platforms and partnerships to revolutionize industries and enable widespread adoption of AI-driven technologies.
    • NVIDIA partners with companies like Siemens to drive the industrial AI revolution, combining European industrial capabilities with artificial intelligence to transform industries.
    • NVIDIA is building an industrial AI cloud in Europe for design, simulation, and digital twins, enabling real-time design and simulation of complex systems, such as factories, warehouses, and fusion reactors.
    • NVIDIA is working with major companies to develop AI-driven autonomous vehicles, leveraging its Halos safety system, Omniverse, and AI supercomputers to enable safe and efficient self-driving cars.
    • NVIDIA is on the verge of revolutionizing the robotics industry with humanoid AI that can learn from teaching, enabling small and medium-sized companies to easily deploy robots, potentially leading to a billion robots worldwide.
    • NVIDIA's Omniverse platform enables robots to learn and train in a virtual world that obeys the laws of physics, allowing them to adapt to real-world scenarios, as demonstrated with a humanoid robot named Greck that learned to walk and interact in a simulated environment.
    • NVIDIA's CEO Jensen Huang announces that the next wave of AI has started, requiring exponential growth in inference workloads, and introduces Blackwell, a thinking machine designed for AI factories that will generate tokens.

-------------------------------------


Publication Date: 2025-06-11T08:34:15Z

WatchUrl:https://www.youtube.com/watch?v=X9cHONwKkn4

-------------------------------------

