
Broadcom’s latest AI networking chip, the Tomahawk Ultra, targets the core of HPC and AI systems. The Ethernet switch is optimized to connect thousands of chips in large-scale data center environments and is built for lossless, low-latency communication.
Tomahawk Ultra was developed over three years on TSMC’s 5 nm process. Its highly efficient, open-standard design now positions it to challenge Nvidia’s dominance, solidifying Broadcom’s standing as a top AI infrastructure enabler, one that also supports Google’s AI initiatives.
Tomahawk Ultra Sets New AI Networking Benchmark
Broadcom’s Tomahawk Ultra, which delivers 250 ns latency at 51.2 Tbps throughput, represents a breakthrough in AI networking. The chip processes up to 77 billion packets per second and provides line-rate switching at 64-byte packet sizes, all while maintaining full Ethernet compliance. Its optimized Ethernet switch headers, downsized to 10 bytes, make this efficiency possible.
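A quick back-of-the-envelope check shows how the quoted figures fit together, assuming standard Ethernet per-packet overhead (an 8-byte preamble plus a 12-byte inter-frame gap, which the spec sheet may account for differently):

```python
# Sanity check: how many minimum-size packets fit into 51.2 Tbps?
throughput_bps = 51.2e12            # switch capacity in bits per second
packet_bytes = 64                   # minimum Ethernet frame size
overhead_bytes = 8 + 12             # assumed preamble + inter-frame gap
bits_on_wire = (packet_bytes + overhead_bytes) * 8  # 672 bits per packet
pps = throughput_bps / bits_on_wire
print(f"{pps / 1e9:.1f} billion packets/s")  # ~76.2, consistent with the ~77 B figure
```

The result lands within about 1% of the advertised 77 billion packets per second, which suggests the chip really does switch minimum-size frames at full line rate.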
The chip incorporates a lossless fabric and is designed for workloads in next-generation data centers. For large AI models and high-performance computing, it eliminates data loss through the use of Link Layer Retry and Credit-Based Flow Control. Furthermore, these features ensure a consistent and dependable network layer.
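Credit-based flow control prevents loss by letting a sender transmit only while it holds credits for free buffer slots at the receiver. A minimal sketch of the idea (a toy model, not Broadcom's implementation):

```python
from collections import deque

class CreditLink:
    """Toy model of credit-based flow control: the sender transmits only
    while it holds credits, and the receiver returns a credit each time it
    frees a buffer slot, so the receive buffer can never overflow."""

    def __init__(self, buffer_slots):
        self.credits = buffer_slots   # sender-side credit counter
        self.rx_buffer = deque()

    def send(self, packet):
        if self.credits == 0:
            return False              # back-pressure: sender waits, never drops
        self.credits -= 1
        self.rx_buffer.append(packet)
        return True

    def receive(self):
        packet = self.rx_buffer.popleft()
        self.credits += 1             # buffer slot freed, credit returned
        return packet

link = CreditLink(buffer_slots=2)
link.send("p1"); link.send("p2")
print(link.send("p3"))   # False: out of credits, transmission pauses
link.receive()
print(link.send("p3"))   # True: credit returned, transmission resumes
```

The key property is that congestion translates into back-pressure on the sender rather than dropped packets, which is what "lossless" means at the link layer.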
Additionally, the chip reduces the load on accelerators by performing in-network collectives, such as Broadcast and AllReduce. It supports multiple architectures regardless of endpoint type, reduces AI job times, and increases system utilization. Furthermore, the switch supports Dragonfly and Torus topologies, enhancing deployment flexibility for data centers.
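To see what the switch offloads, consider what AllReduce computes: every endpoint ends up with the element-wise sum of all endpoints' contributions. A hypothetical host-side sketch of the operation the switch performs in-network:

```python
def switch_allreduce(contributions):
    """Toy model of an in-network AllReduce: instead of each accelerator
    exchanging data with every peer, the switch sums the per-device
    vectors once and broadcasts a single reduced result to all endpoints."""
    total = [sum(vals) for vals in zip(*contributions)]
    return [total[:] for _ in contributions]  # each device receives the sum

# Four accelerators each contribute a gradient shard:
grads = [[1, 2], [3, 4], [5, 6], [7, 8]]
results = switch_allreduce(grads)
print(results[0])  # [16, 20] on every endpoint
```

Doing this reduction in the switch saves the accelerators both the compute and the pairwise traffic, which is where the reduced AI job times come from.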
Will Broadcom’s Chip Shift Future AI Architectures?
The Tomahawk Ultra, fully compatible with the previous Tomahawk 5, is already shipping, and its design slots directly into established platforms and racks. The chip plays a crucial part in enabling “scale-up” AI, which requires tight communication among dozens of chips to train large models.
Broadcom also unveiled SUE-Lite, a condensed version of its Scale-Up Ethernet specification, as part of its AI strategy. It uses less power and die area, making it well suited for accelerators, while preserving the lossless, low-latency characteristics. The move extends Ethernet switch technology further into data center AI systems.
Can Broadcom Dethrone Nvidia in AI Networking?
Broadcom is at the forefront of AI networking with the Tomahawk Ultra, which offers superior speed and scalability via standard Ethernet. Open, high-performance solutions like these could characterize the next wave of data center innovation as generative AI continues to evolve. With deployments already underway, Broadcom’s Ethernet bet appears well-timed.