Tesla Kills Dojo for AI6! Here's Why

AI, AI5, AI6, Tesla

Tesla has discontinued its Dojo supercomputer project and is shifting its focus to a new AI chip, AI6, which will handle both training and inference, marking a strategic shift in the company's AI development.


Questions to inspire discussion

AI6 Chip Overview

🔧 Q: What is Tesla's AI6 chip?
A: AI6 is a convergence architecture for both training and inference that replaces Dojo. It will be built exclusively by Samsung in Texas as part of Tesla's vertical-integration strategy.

🚀 Q: How does AI6 compare to previous Tesla chips?
A: AI6 is expected to be roughly 10x faster than AI5, capable of 22,500 trillion operations per second while consuming around 800 watts of power.

⏰ Q: When will AI6 be released?
A: The AI6 chip is expected in 2028 or 2029; no release is planned for 2027.

AI6 Applications

🚗 Q: How will AI6 be used in Tesla vehicles?
A: AI6 will be used for FSD inference, with two chips in every car, enabling advanced autonomous driving capabilities.

🤖 Q: What role will AI6 play in Optimus?
A: AI6 will enable on-device learning and reinforcement learning in Optimus, enhancing its AI capabilities.

🔋 Q: Will AI6 be used in other Tesla products?
A: AI6 will be integrated into every edge device Tesla produces, including the Tesla Semi, Megapack, and security cameras.

Technical Specifications

💻 Q: What is the architecture of AI6?
A: AI6 will use a cluster model of individual chips with a software layer on top, similar to Dojo 3 for training.

⚡ Q: How does AI6 handle different precision levels?
A: AI6 uses a generalized block quantization method to dynamically switch between 64-, 32-, 16-, and 8-bit precision without separate math cores (a generic illustration of per-block precision selection follows this section).

๐Ÿญ Q: How will AI6 be manufactured?
A: AI6 will be built by Samsung using a 2 nanometer node technology with a focus on high yield and low latency.

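Neither Tesla nor the video details how this dynamic precision actually works, so the following is only a generic sketch of per-block quantization with precision selection. It uses integer grids for simplicity rather than Tesla's floating-point formats, and the function names, thresholds, and policy are illustrative assumptions, not Tesla's patented method.

```python
import numpy as np

def quantize_block(block: np.ndarray, bits: int):
    """Symmetric quantization of one block of floats onto an integer grid with a single shared scale."""
    qmax = 2 ** (bits - 1) - 1                        # e.g. 127 for 8-bit
    scale = max(float(np.max(np.abs(block))) / qmax, 1e-12)
    q = np.clip(np.round(block / scale), -qmax, qmax).astype(np.int32)
    return q, scale

def choose_precision(block: np.ndarray, budget: float = 0.03) -> int:
    """Illustrative policy: pick the narrowest width whose round-trip error stays within a
    budget expressed as a fraction of the block's mean magnitude."""
    for bits in (8, 16, 32):
        q, scale = quantize_block(block, bits)
        err = float(np.max(np.abs(q * scale - block)))
        if err <= budget * float(np.mean(np.abs(block))):
            return bits
    return 32

# Blocks with a modest dynamic range tolerate 8-bit; a block with an outlier needs a wider format.
rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 64))
weights[2, 5] = 80.0                                  # inject an outlier into one block
for i, block in enumerate(weights):
    print(f"block {i}: {choose_precision(block)}-bit")
```
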
Comparison to Dojo

🔄 Q: How does AI6 differ from Dojo?
A: Unlike Dojo, which was specialized for training, AI6 is a general-purpose architecture for both inference and training.

🧩 Q: What was Dojo's design?
A: Dojo used a unique system-on-wafer design with a 5x5 grid of chips and high-speed interconnects, stacked in cabinets of 10 tiles.

Tesla's Strategy

📈 Q: Why is Tesla moving towards vertical integration in chip design?
A: Vertical integration allows Tesla to control the entire value chain, accelerate innovation, and reduce costs.

🔬 Q: How does vertical integration benefit Tesla's AI development?
A: It enables Tesla to squeeze every ounce of performance out of its AI models, leading to superior products with high margins.

Future Implications

๐ŸŒ Q: How will AI6 impact Tesla's position in the market?
A: AI6 is expected to make Tesla a leader in autonomous driving and robotics markets in the future.

🔮 Q: What potential applications could AI6 enable beyond current Tesla products?
A: AI6 could power AI features in future Tesla products like drones and smart home devices with autonomous capabilities.

AI6 vs. Competitors

💪 Q: How does AI6 compare to third-party chips?
A: AI6's price-performance-power ratio is expected to be highly differentiated and superior to third-party alternatives.

🔗 Q: How does AI6 fit into Tesla's overall AI strategy?
A: AI6 represents Tesla's move towards pure vertical integration in AI, similar to their approach with batteries, controlling the entire value chain.


Key Insights

Revolutionary AI Architecture

🚀 Tesla's AI6 chip combines training and inference in a single architecture, replacing the Dojo system and enabling vertical integration in AI processing.

💡 The AI6 chip is expected to offer 10x or more headroom over AI5, with release planned for 2028-2029.

🔄 AI6 will be a convergence chip for both FSD and Optimus inference, as well as training in data centers.

Evolution of Tesla's AI Chips

📈 Tesla's AI chip evolution progressed from HW3 (2019) to HW4 (2023), with AI4 powering FSD versions 13 and 14.

🔮 AI5, due in 2026, will achieve 2,250 trillion operations per second for FSD and Optimus inference.

Dojo System and D1 Chip

๐Ÿ–ฅ๏ธ Dojo, a purpose-built training computer, was designed for video training of neural nets using accelerated computing with matrix multiply operations.

🧩 Dojo's system-on-wafer design featured tiles built from a 5x5 grid of D1 chips, with high-speed interconnects for efficient data transfer between chips.

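To make the matrix-multiply point concrete: a neural-network layer is essentially a matrix multiply followed by a simple nonlinearity, so both training and inference reduce to streams of matmuls, which is what accelerators like D1 (and, per the video, AI6) are built to run in parallel. A minimal, purely illustrative sketch:

```python
import numpy as np

# A tiny two-layer network forward pass: nearly all of the arithmetic is in the two
# matrix multiplies, which is why training/inference accelerators are built around
# fast, highly parallel matmul units.
rng = np.random.default_rng(0)
x  = rng.normal(size=(32, 256))     # a batch of 32 input vectors (e.g. image features)
w1 = rng.normal(size=(256, 512))    # layer 1 weights
w2 = rng.normal(size=(512, 10))     # layer 2 weights

hidden = np.maximum(x @ w1, 0.0)    # matmul + ReLU
logits = hidden @ w2                # matmul
print(logits.shape)                 # (32, 10)
```
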
AI6 Chip Design and Manufacturing

๐Ÿญ Samsung will build AI6 in Texas using 2nm node technology, with Tesla planning to eventually fabricate it in-house.

🔬 Dojo's system-on-wafer design, by contrast, arranged D1 chips into 5x5 tiles (grouped in a 2x2 grid on the wafer) and required essentially 100% yield.

Inference and Training Capabilities

🤖 Inference chips can be used for both inference and training, with AI6 enabling Optimus to learn on the fly at the edge (a generic cluster-training sketch follows this subsection).

โš–๏ธ AI6 is a generalized architecture chip that performs well in both inference and training, using dynamic shifting of data and computation.

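The video does not spell out how a cluster of inference-class chips would be used for training. One common pattern consistent with the "cluster of chips plus a software layer" description is data parallelism: each chip computes gradients on its own data shard, and a software layer averages them before one shared weight update. The sketch below is a generic illustration of that idea under those assumptions, not Tesla's software stack; the model, shard count, and learning rate are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0, 0.5])

# Synthetic data split into 4 shards, standing in for 4 chips in a cluster.
def make_shard(n: int = 256):
    x = rng.normal(size=(n, 3))
    y = x @ true_w + 0.01 * rng.normal(size=n)
    return x, y

shards = [make_shard() for _ in range(4)]
w = np.zeros(3)

for step in range(200):
    # Each "chip" computes a local gradient of the squared error on its own shard...
    grads = []
    for x, y in shards:
        err = x @ w - y
        grads.append(2.0 * x.T @ err / len(y))
    # ...then the software layer averages them (an all-reduce) and applies one shared update.
    w -= 0.05 * np.mean(grads, axis=0)

print(np.round(w, 3))   # converges toward [ 2.   -3.    0.5]
```
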
Vertical Integration and Performance Optimization

🔧 Tesla's vertical integration allows them to squeeze every ounce of performance from their models, achieving the best cycle time for inference.

📊 The quantization process enables massive performance optimization between training and inference by downsizing high-precision data sets (a back-of-the-envelope example follows this subsection).

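As a rough, back-of-the-envelope illustration of the trade-off (not Tesla's actual pipeline): quantizing a float32 weight tensor to int8 cuts its memory footprint by about 4x while giving up some precision, which is what lets large models fit and run quickly on in-vehicle hardware.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(1024, 1024)).astype(np.float32)   # stand-in for one trained layer

# Symmetric int8 quantization with a single scale for the whole tensor.
scale = float(np.max(np.abs(weights))) / 127.0
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
restored = q.astype(np.float32) * scale

print(f"float32 size: {weights.nbytes / 1e6:.1f} MB")               # ~4.2 MB
print(f"int8 size:    {q.nbytes / 1e6:.1f} MB")                     # ~1.0 MB, ~4x smaller
print(f"max abs error: {np.max(np.abs(restored - weights)):.4f}")   # the precision given up
```
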
Wide-ranging Applications

🚛 AI6 will be used in every edge device produced by Tesla, including the Tesla Semi, Megapack, and other products.

🎥 The chip will power intelligent and autonomous devices beyond cars and robots, including security cameras, drones, and household appliances.

Market Dynamics and Future Implications

📊 Inference is becoming the dominant workload in AI, with training being the smaller market.

๐ŸŒ AI6 represents a classic playbook for pure vertical integration, similar to Samsung and Panasonic in batteries.

🔬 The convergence of training and inference in AI6 could reshape the landscape of autonomy, robotics, and AI across various industries.


#AI6 #AI5 #Tesla

XMentions: @Tesla @HabitatsDigital @herbertong @pbeisel

Clips

  • 00:00 🤖 Tesla is ditching its Dojo system for the new AI6 chip, a more efficient convergence architecture that will handle both training and inference, marking a strategic shift in its AI development.
    • Tesla is replacing its Dojo system with the new AI6 chip, a convergence architecture that will handle both training and inference across its products, expected to be implemented around 2027-2029.
    • Tesla is shifting towards pure vertical integration for its AI chips, with plans to design and fabricate its own chips, making AI6 a key step in this process.
    • Elon Musk shut down the Dojo project and shifted focus to AI6, a new chip built exclusively with Samsung, which marks a significant change in architecture and potentially makes the Dojo name obsolete.
    • Elon Musk's decision to shut down Dojo and focus on AI6 marks a strategic convergence of training and inference architectures, driven by the growing dominance of inference in the AI market.
    • Neural networks are shifting from training to inference, with the majority of computations happening in inference, which will further increase with applications like Optimus that can learn and train on the fly at the edge.
    • Tesla's Dojo and AI6 chips are similar mathematically, featuring a large number of matrix multiply operations, and differ more in their architecture than in their function as parallel processing chips.
  • 11:50 🤖 Tesla discontinued its Dojo supercomputer project due to its specialized design being difficult to produce and no longer viable, shifting focus to a new AI chip with a single architecture for both training and inference.
    • An inference chip on a device, like those used in Tesla's Full Self-Driving (FSD) technology, can perform training and learning, but its application depends on the specific use case, with on-device learning being less crucial for FSD due to safety and validation concerns.
    • Tesla's new AI chip, likely used for both training and inference, will have a single architecture, with Optimus exhibiting different behaviors as it utilizes reinforcement learning and on-device training.
    • Tesla's Dojo supercomputer was a specialized computer architecture designed for fast training on large video data for full self-driving and Optimus, but was ultimately discontinued.
    • Tesla built Dojo around a unique "tile" design, a 5x5 grid of D1 chips with high-speed interconnects etched on silicon, allowing tiles to be stacked into "cabinets" of 10.
    • Tesla killed the Dojo project because it was a specialized computer that didn't advance quickly enough and was difficult to produce at a satisfactory level, making it no longer viable.
    • Tesla needed to act regardless; the landscape changed quickly, and the specialized design left little room to maneuver.
  • 19:12 🤖 Tesla shifts focus from Dojo to new AI chips, including AI5 (2026) and AI6 (2028/2029), to enhance FSD and robotics capabilities.
    • Tesla's in-vehicle AI chip was initially called Hardware 3; it was used until 2023, when it was replaced by Hardware 4, later renamed AI4.
    • The AI4 (Hardware 4) chip, produced by Samsung at 7nm, can handle FSD versions 13 and 14, and supports robotaxi and full self-driving capabilities.
    • Tesla is shifting focus from the Dojo supercomputer to a new AI chip, AI5, which will offer significantly faster performance at 2,250 trillion operations per second; it will be produced on a 3-nanometer node by Taiwan Semiconductor (TSMC) and is expected in late 2026.
    • Tesla's AI chip, likely to be used for FSD and Optimus, consumes around 800 watts of power and is expected to be used in most production vehicles, including Cybercabs and future Tesla vehicles.
    • Tesla likely won't reach full scale production of Cybercab until late 2026, which coincidentally aligns with the expected timeline for AI5, and early Cybercab testing units may still use Hardware 4 or early AI5 silicon.
    • Tesla's upcoming AI6 chip, expected around 2028 or 2029, will significantly outperform AI5, be built by Samsung using advanced technology, and serve multiple functions in vehicles and robotics.
  • 29:32 🤖 Tesla kills Dojo for AI6, shifting focus from high-bandwidth data processing to speed and low latency for applications like vehicle control.
    • The speaker credits a comparison table with teaching them a lot.
    • A Dojo tile consists of a 5x5 grid of interconnected D1 chips that work as a unit, not independently.
    • Tesla's Dojo AI computer uses a "system on wafer" design, requiring a 100% yield from each 5x5 grid of chips, as defective chips cannot be binned or sold separately.
    • The D1 chip is focused on high-bandwidth data processing with floating-point math, whereas the AI6 chip seems to prioritize speed and low latency, particularly for applications like vehicle control.
  • 34:20 🤖 Tesla kills Dojo for AI6, shifting focus to a general-purpose inference chip with dynamic precision, replacing the custom training chip with Nvidia clusters.
    • Tesla's new AI6 chip prioritizes reducing latency in matrix math operations using a generalized block quantization method, differing from the previous Dojo chip optimized for high-precision floating-point numbers.
    • The new AI chip can dynamically switch between different floating point operations (64, 32, 16, 8) without separate math cores, efficiently processing data blocks with varying precision.
    • Quantization maps high-precision numbers to a smaller set, losing data in the process, allowing large AI models to fit in vehicles like those using the AI4 chip.
    • Tesla's new AI6 chip uses a patented methodology to dynamically adjust data processing, allowing it to efficiently handle both training and inference with varied precision, from 64-bit to 16-bit floating-point numbers.
    • Tesla's new chip, optimized for inference, will be used in a cluster design for training, potentially replacing Dojo, and will be called Dojo 3.
    • Tesla is shifting focus from a custom chip for training AI to a general-purpose inference chip, as training can be done with existing Nvidia chips and coherent clusters can be built with many chips to achieve cohesive performance.
  • 42:14 🤖 Tesla kills Dojo project to focus on developing its own AI chips, specifically AI6, to control its autonomous technology future and prioritize inference capabilities.
    • Tesla's Dojo was replaced by AI6, which offers the best of both worlds for training and inference, making the specialized training computer obsolete.
    • Tesla is focusing on developing its own AI chips, specifically inference and training compute, to control its future in autonomous technology and avoid relying on external suppliers.
    • Tesla can continue to use Nvidia chips for training, as they've already demonstrated ability to work with mixed architecture, making Dojo less crucial.
    • Tesla prioritizes focus on inference for its AI, aiming to deeply integrate software and hardware to achieve low latency and better performance, rather than relying on Nvidia chips.
    • Tesla's increased processing rate, such as running inference 60 times per second, enhances safety and performance in its vehicles and in Optimus, allowing for better environmental perception and power management.
  • 48:02 🤖 Tesla kills Dojo for AI6, a crucial chip that will power all their products, enabling advanced AI capabilities and autonomy in vehicles, Semi, and other edge devices.
    • Tesla takes a model, quantizes it, and then the software team optimizes it to squeeze performance, reduce latency, and improve safety.
    • Tesla's vertical integration of hardware and software allows them to optimize their AI models effectively, giving them an advantage similar to companies like Apple.
    • Tesla's AI6 product is crucial as it will run every other product and the company is making significant progress in accelerated computing, specifically in GPUs for inference and training.
    • Tesla's future relies heavily on AI6, with every edge device produced, including vehicles, Semi, and other products, incorporating this chip to enable advanced AI capabilities.
    • Every device, no matter how mundane, will soon have an AI inference chip, effectively becoming intelligent and autonomous, as the trend of integrating microcontrollers and CPUs into everyday objects continues.

-------------------------------------

Duration: 0:54:02

Publication Date: 2025-08-17T15:10:22Z

WatchUrl: https://www.youtube.com/watch?v=9UFZVv2N7rc

-------------------------------------

