Cisco Silicon One G300
The industry’s most advanced 102.4 Tbps scale-out switching ASIC, purpose-built for AI cluster networks with 1.6T ports, 200 Gbps SerDes, and 252MB fully shared packet buffer.
Cisco Silicon One G300 is the industry’s most advanced scale-out switching ASIC, delivering 102.4 terabits per second in a single device. With integrated 200 Gbps SerDes, 1.6T Ethernet ports, and a 252MB fully shared packet buffer, the G300 enables distributed AI workloads and backend fabrics for massive GPU clusters, with measurable improvements in throughput, job completion time, and power efficiency.
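The headline figures above fit together arithmetically. A quick back-of-the-envelope check, using only the numbers stated in this page (64-port configuration taken from the N9364F-SG3 described below):

```python
# Sanity check of the G300's stated numbers: 64 ports of 1.6T yield
# 102.4 Tbps aggregate, and each 1.6T port maps onto 8 lanes of the
# integrated 200 Gbps SerDes.

PORTS = 64               # 1.6T OSFP ports on the N9364F-SG3
PORT_SPEED_GBPS = 1600   # 1.6T per port, expressed in Gbps
SERDES_GBPS = 200        # per-lane SerDes rate

aggregate_tbps = PORTS * PORT_SPEED_GBPS / 1000
lanes_per_port = PORT_SPEED_GBPS // SERDES_GBPS

print(aggregate_tbps)   # 102.4
print(lanes_per_port)   # 8
```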
The G300 was engineered from the ground up to optimize total cost of ownership and drive higher profitability for AI network clusters through two strategic pillars: Intelligent Collective Networking and Future-Proofed Infrastructure.
Industry-leading switching capacity contained within a single device, enabling massive-scale AI cluster networking.
On-chip integrated 200 Gbps SerDes developed in-house at Cisco for low power consumption and high performance with longer reach.
Massive fully shared packet buffer embedded directly into silicon, allowing any packet from any port to occupy any available space.
High radix scaling for a flatter network that connects more compute resources closer to the edge, reducing latency and simplifying topology.
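The value of a fully shared buffer is easiest to see against the alternative, a statically partitioned buffer. The following is an illustrative model only (not Cisco's implementation, and the 100 MB burst size is a hypothetical): when a synchronized microburst targets one congested port, a shared pool can lend that port the entire buffer, while a static split caps it at its private slice.

```python
# Illustrative comparison (assumed numbers, not Cisco's design): how a
# fully shared 252MB buffer vs. an equal static per-port split handles
# a microburst aimed at a single egress port.

BUFFER_MB = 252
NUM_PORTS = 64
BURST_MB = 100          # hypothetical burst converging on one port

# Static partitioning: each port owns an equal, private slice.
static_slice = BUFFER_MB / NUM_PORTS            # 3.9375 MB per port
static_dropped = max(0, BURST_MB - static_slice)

# Fully shared: any packet from any port may occupy any free space,
# so an otherwise idle chip can devote the whole pool to the burst.
shared_dropped = max(0, BURST_MB - BUFFER_MB)

print(f"static split: {static_dropped:.1f} MB dropped")
print(f"fully shared: {shared_dropped:.1f} MB dropped")
```

Under this toy model the static split drops most of the burst while the shared buffer absorbs all of it, which is the intuition behind the burst-absorption claims below.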
The G300 powers the new Cisco N9364F-SG3 switches, offering 64 ports of 1.6T OSFP connectivity in a compact form factor. These switches deliver breakthrough performance for high-density AI clusters in both 100% liquid-cooled and air-cooled deployments.
Up to 2.5x greater burst absorption than industry alternatives, absorbing synchronized microbursts without packet loss to keep the network running at peak utilization.
Directs traffic across all available paths and reacts to congestion or faults in hardware, up to 100,000x faster than software-based tuning, eliminating manual optimization.
Rich programmable session-level diagnostics help proactively identify and address network faults and optimization opportunities with minimal software intervention at runtime.
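The hardware path-selection behavior described above can be sketched as a simplified software analogue (an assumption for illustration, not the G300's actual logic): each packet is steered to the least-loaded of several equal-cost paths, and a failed path is skipped immediately rather than waiting for a control-plane update.

```python
# Simplified sketch of congestion- and fault-aware path selection.
# In the G300 this decision happens in hardware at line rate; here it
# is modeled as a per-packet choice over equal-cost paths.

def pick_path(path_load, path_up):
    """Return the index of the least-loaded healthy path."""
    healthy = [i for i, up in enumerate(path_up) if up]
    if not healthy:
        raise RuntimeError("no healthy paths")
    return min(healthy, key=lambda i: path_load[i])

# Four equal-cost paths; path 1 is congested, path 2 has failed.
load = [30, 90, 10, 40]
up   = [True, True, False, True]

print(pick_path(load, up))   # 0 -> least loaded among healthy paths
```

Note that the failed path 2, despite having the lowest load counter, is never chosen; rerouting around it requires no operator intervention in this model.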
P4 programmability enables operators to roll out new features post-deployment, without a silicon respin. One hardware design can be optimized for back-end, front-end, and disaggregated scale-across roles, reducing unique SKUs and simplifying inventory.
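The core idea behind P4-style programmability, packet behavior defined by reprogrammable match-action tables rather than fixed-function silicon, can be sketched in a few lines of Python (a conceptual sketch only; the table entries, field names, and port numbers below are hypothetical, and real P4 pipelines are far richer):

```python
# Conceptual match-action pipeline: the processing loop is fixed, but
# the tables that define behavior can be replaced after deployment,
# just as a P4 target is reprogrammed without new hardware.

def run_pipeline(packet, table):
    """Apply the first matching rule; unmatched packets are dropped."""
    for match, action in table:
        if all(packet.get(k) == v for k, v in match.items()):
            return action(packet)
    return {**packet, "verdict": "drop"}

def forward(port):
    return lambda pkt: {**pkt, "verdict": "forward", "out_port": port}

# Hypothetical "back-end fabric" role: steer RDMA-style traffic.
backend_table = [({"proto": "rocev2"}, forward(7))]

pkt = {"proto": "rocev2", "dst": "10.0.0.5"}
print(run_pipeline(pkt, backend_table))   # forwarded out port 7
```

Swapping in a different table (or, on real hardware, a different P4 program) repurposes the same pipeline for a front-end or scale-across role, which is the SKU-reduction argument made above.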
Intelligent Collective Networking
Intelligent Collective Networking delivers measurable benefits for AI data centers at scale. In simulations, the G300 architecture achieves significant improvements across throughput, job completion time, and power efficiency.
The G300 is the latest innovation in the Cisco Silicon One portfolio, which now powers over 60 Cisco systems. Alongside the P200 for scale-across routing and switching, Silicon One delivers a multigenerational approach to AI networking that prioritizes network efficiency and lower total cost of ownership.
Silicon One Family
G300: 102.4 Tbps switching ASIC optimized for AI cluster backend fabrics with 1.6T ports, 252MB shared buffer, and Intelligent Collective Networking.
P200: 51.2T deep-buffer routing and switching with quantum-safe encryption, powering Cisco 8000 and Nexus 9000 systems for DCI, universal spine, and core routing.
G300-powered N9364F-SG3 and P200-powered N9364E-SP2R fixed switches deliver both scale-out and scale-across capabilities on the proven N9000 platform.
OSFP 800G Linear Pluggable Optics reduce per-module power by 50% and system power by 30% by integrating signal processing into the Silicon One ASIC.
Whether you are building out AI training clusters or scaling inference workloads, our certified Cisco specialists can help you design a Silicon One G300 solution optimized for your requirements.