Whilst I wasn’t able to make it this year, Cisco Live EMEA in Amsterdam was full of announcements and updates to Cisco’s product sets. The message to customers and partners goes back to their roots: “the network is now the backbone of the AI era”.
To that effect, Cisco unveiled a new generation of their silicon, systems, optics, software and collaboration devices designed to scale AI clusters, cut energy and operational costs, and bring real‑time AI and secure access into production.
For a mid-year event this was a huge set of announcements, so let’s take a look at these.
Cisco Live EMEA 2026 – Announcements at a glance
Cisco Silicon One G300 and the new systems
With Cisco’s latest home-grown silicon upgrade, the company can now deliver a whopping 102.4 Tbps programmable switching ASIC built to power gigawatt‑scale AI clusters, reduce job completion time and increase GPU utilisation.
Cisco positions the G300 as the networking foundation for “agentic” AI workloads, with features such as a fully shared packet buffer, path‑based load balancing and proactive telemetry.
This is important of course for the global Hyper Scalers but also for enterprise organisations running or building AI workloads on their own data center hardware, since AI training and inference are increasingly limited by data movement. The new G300 and the G300‑powered Nexus 9000 and Cisco 8000 systems have been designed to treat the network as part of the compute fabric, improving throughput and reducing stalls that waste GPU cycles. Cisco said that organisations should expect a 28% improvement in job completion time in their benchmarks.
Optics and cooling for energy efficiency
Still in the DC switching space, Cisco announced new high‑density optics (including 1.6T OSFP and 800G LPO) plus 100% liquid‑cooled system options for the Nexus 9000/Cisco 8000 family. Cisco claim these advances can improve energy efficiency by close to 70% for AI scale‑out deployments, which demand increasingly capable cooling technologies.
This is an important development given the growing awareness of, and concern about, the energy consumption that AI data centers command. On top of that, energy and cooling are major operational costs for any organisation running AI data centers. Higher‑density optics and liquid cooling reduce power per bit and allow denser GPU/network co‑location, lowering TCO for large AI clusters.
Nexus One: new Unified management for AI fabrics
Next up – Cisco announced that Nexus One will now deliver a unified management plane intended to simplify fabric deployment, scale predictably, and operate securely across both on‑premises and cloud environments. Cisco position this as a key step in removing operational friction when scaling AI data centers.
For organisations that run their own on-premises data centres, predictable scale and simpler operations are often the gating factors when moving from AI experiments to real production workloads. The move towards a true single management plane reduces integration risk and speeds time to value.
Updates to Cisco SASE and AI Defense
Cisco announced what it calls its biggest update yet to its AI Defense and SASE stacks (initially announced last year), which form part of the “Cisco Secure Cloud Suite”. This latest update brings AI‑driven detection, automated workflows and tighter integration between network telemetry and security enforcement across both “bought” AI and in-house developed AI tools and services.
Amongst the extra capabilities announced was the new ability to catalog all Model Context Protocol (MCP) servers a business uses, in-house and elsewhere. This helps SecOps teams spot systems and AI agents that might not be caught by existing security policies and AI guardrails.
As organisations move past the PoC and experimental phase, AI agents and workloads proliferate and attack surfaces change. Combined with the amount of Shadow AI (such as consumer AI services in the workplace) and AI built into everyday apps, embedding AI into Cisco’s core detection and response and tying that to SASE policy enforcement helps organisations maintain security posture at the speed of AI.
New Webex Devices – Room Kit Pro G2 and Desk Pro G2
Initially announced at ISE 2026, Cisco’s new AI‑ready collaboration endpoints come in the form of the Room Kit Pro G2 and Desk Pro G2. Cisco are bringing real‑time speech translation to Webex and boldly claim it also preserves tone and emotion (something many AI translation services fail to do). Cisco also emphasised real‑time, human‑centric collaboration features powered by their mix of on‑device and cloud AI.
As Cisco continues to reinvest in the Webex brand, hybrid work continues to demand better meeting experiences; real‑time translation and improved endpoint intelligence reduce friction and broaden collaboration across languages and locations.
Final Thoughts
Cisco’s announcements shift the conversation from “can we run AI” to “how do we run AI efficiently, securely and at scale.”
The core messaging from the keynotes – “the network is now the backbone of the AI era” – clearly reminded customers and partners of Cisco’s role in powering and securing networks for the AI era. AI was clearly the agenda and what Cisco aim to help customers achieve – shifting from speeds and feeds (although there was lots of speed and feed talk) to creating measurable business outcomes: faster model runs, lower operational cost and more sustainable IT, combined with stronger security for the AI era.
It certainly does feel like Cisco have got their innovation hats on and are once again doing what they did at the start of the cloud era – driving innovation and leading by example with a joined-up, integrated approach. It’s a fun time to be a Cisco Partner. In fact, it’s a great time to work in the tech world – even though it feels like it’s almost impossible to keep up-to-date!