
    Cisco Convenes AI Industry Leaders for February 3 Global Summit


    Quick Brief

    • The Event: Cisco hosts its second annual AI Summit on February 3, 2026, in San Francisco with ungated global livestream featuring Jensen Huang (NVIDIA), Sam Altman (OpenAI), and Marc Andreessen (a16z)
    • The Scale: McKinsey forecasts approximately $6.7 trillion in data center capital investment needed by 2030, with $5.2 trillion for AI-focused facilities alone
    • The Momentum: Cisco secured over $2 billion in AI infrastructure orders during fiscal 2025 and projects $3 billion in AI revenue through fiscal 2026
    • The Access: Free livestream with no registration required, opening strategic AI infrastructure conversations to a global audience

    Cisco announced its second annual AI Summit scheduled for February 3, 2026, assembling technology industry leaders to address AI’s evolution from experimental tool to critical infrastructure. Chair and CEO Chuck Robbins and President Jeetu Patel will host the event in San Francisco with simultaneous global streaming at 9:00 a.m. PT, granting unrestricted access to conversations shaping the AI economy.

    McKinsey forecasts approximately $6.7 trillion in data center capital investment by 2030, with $5.2 trillion required for AI-focused facilities and $1.5 trillion for traditional IT workloads. Global data center capacity could nearly triple to 219 gigawatts by 2030, with approximately 70 percent of new demand driven by AI workloads.

    Speaker Lineup Spans AI Infrastructure Stack

    The summit convenes executives controlling critical segments of AI development and deployment. Jensen Huang, NVIDIA’s Founder and CEO, will discuss “The AI Factory” concept, examining AI as an industrial stack requiring next-generation accelerated computing and open models designed to manufacture intelligence at planetary scale. Sam Altman, OpenAI’s Co-Founder and CEO, will examine frontier models and their implications for labor markets, institutions, and geopolitics.

    Marc Andreessen, Co-Founder and General Partner of Andreessen Horowitz, will analyze venture capital dynamics as AI capabilities approach commodity status at near-zero marginal cost. Matt Garman, AWS CEO, will address enterprise operational readiness for AI at scale, emphasizing secure, observable platforms that reduce complexity. Dr. Fei-Fei Li, CEO and Co-Founder of World Labs, will focus on trust architecture and aligning AI systems with human values through 3D spatial intelligence.

    Additional speakers include Lip-Bu Tan (Intel CEO), Amin Vahdat (Google Chief Technologist for AI Infrastructure), Dylan Field (Figma CEO and Co-Founder), Aaron Levie (Box CEO and Co-Founder), Mike Krieger (Anthropic Chief Product Officer), and Kevin Weil (OpenAI VP for Science). The summit also features Anne Neuberger (Cisco Strategic Advisor) and Brett McGurk (Cisco Special Advisor for International Affairs, Venture Partner at Lux Capital) examining geopolitical implications. Francine Katsoudas, Cisco’s Executive Vice President and Chief People, Policy & Purpose Officer, will address workforce transformation.

    | Speaker | Title/Position | Company/Organization | Focus Area |
    | --- | --- | --- | --- |
    | Jensen Huang | Founder and CEO | NVIDIA | The AI Factory: next-generation accelerated computing and open models |
    | Sam Altman | Co-Founder and CEO | OpenAI | Frontier models and implications for labor markets, institutions, and geopolitics |
    | Marc Andreessen | Co-Founder and General Partner | Andreessen Horowitz (a16z) | Venture capital dynamics as AI approaches commodity status |
    | Matt Garman | CEO | Amazon Web Services (AWS) | Enterprise operational readiness for AI at scale |
    | Dr. Fei-Fei Li | CEO and Co-Founder | World Labs | Trust architecture and aligning AI with human values through 3D spatial intelligence |
    | Lip-Bu Tan | CEO | Intel | Silicon supply chain strategy |
    | Amin Vahdat | Chief Technologist for AI Infrastructure | Google | Compute and network bottlenecks |
    | Dylan Field | CEO and Co-Founder | Figma | Design transformation in the AI era |
    | Aaron Levie | CEO and Co-Founder | Box | AI integration into enterprise workflows |
    | Mike Krieger | Chief Product Officer | Anthropic | AI product development and applications |
    | Kevin Weil | Vice President for Science | OpenAI | AI in scientific research |
    | Anne Neuberger | Strategic Advisor | Cisco | Geopolitical implications of AI infrastructure |
    | Brett McGurk | Special Advisor for International Affairs; Venture Partner | Cisco / Lux Capital | Geopolitical implications of AI infrastructure |
    | Francine Katsoudas | Executive Vice President and Chief People, Policy & Purpose Officer | Cisco | Workforce transformation in the AI economy |

    Cisco’s Financial Performance Reflects AI Infrastructure Demand

    Cisco reported revenue of $14.7 billion in its fourth fiscal quarter, ended July 26, 2025, up 8% year over year. Full fiscal year 2025 revenue reached $56.7 billion, up 5% year over year.

    AI infrastructure orders from webscale customers exceeded $800 million in Q4 fiscal 2025, bringing full-year AI orders to over $2 billion, more than double the company’s initial $1 billion target. For fiscal 2026, Cisco raised its revenue guidance to $60.2 billion to $61 billion following Q1 fiscal 2026 results in November 2025, up from the earlier estimate of $59 billion to $60 billion. The company projects $3 billion in AI infrastructure revenue from hyperscale customers by fiscal 2026.

    AI infrastructure orders reached $1.3 billion in Q1 fiscal 2026, balanced between Silicon One systems and optics. Cisco reported a pipeline exceeding $2 billion for high-performance networking products across sovereign cloud, neo-cloud, and enterprise customers.

    Distributed AI Infrastructure Strategy

    Cisco introduced the 8223 router in October 2025, delivering 51.2 terabits per second of Ethernet routing capacity in a 3-rack-unit form factor. Powered by Cisco’s Silicon One P200 chip, the system addresses challenges as AI compute requirements outgrow individual data center capacities, necessitating workload distribution across multiple facilities.

    The 8223 features 64 ports of 800G connectivity and supports coherent optics reaching up to 1,000 kilometers. The system processes more than 20 billion packets per second and scales to 3 exabits per second. Martin Lund, Executive Vice President of Cisco’s Common Hardware Group, stated that “AI compute is outgrowing the capacity of even the largest data center, driving the need for reliable, secure connection of data centers hundreds of miles apart.”
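    The headline figures above are internally consistent: 64 ports at 800 Gbps each account for exactly the 51.2 Tbps of stated routing capacity. A quick back-of-the-envelope check (illustrative only, not a Cisco tool):

    ```python
    # Sanity check of the 8223's published specs:
    # 64 ports x 800 Gbps should equal 51.2 Tbps of total capacity.
    ports = 64
    port_speed_gbps = 800

    total_gbps = ports * port_speed_gbps          # 51,200 Gbps
    total_tbps = total_gbps / 1000                # convert Gbps -> Tbps

    print(f"{total_tbps} Tbps")                   # prints "51.2 Tbps"
    ```

    The same per-port arithmetic is how vendors typically quote aggregate switching and routing capacity, so the two numbers in the spec sheet are a single figure expressed two ways.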

    Cisco expects to ship its one millionth Silicon One chip in Q2 of fiscal year 2026. Product orders for AI use cases beyond hyperscaler training grew double digits in Q1 fiscal 2026 as customers prepare networks for inferencing and agentic workflows.

    Market Dynamics and Infrastructure Investment

    Synergy Research Group analysis indicates hyperscalers control more than 1,000 large data centers globally, accounting for 41% of worldwide data center capacity. Hyperscalers are forecast to control 60% of worldwide data center capacity by 2029, compared to 20% for on-premises locations. This represents a shift from 2018, when nearly 60% of data center capacity resided in on-premises facilities.

    Hyperscalers currently spend 60% of their operating cash flows on capital expenditures including data centers and AI infrastructure. ABI Research projects that large and mega-sized colocation facilities, currently representing 28% of total worldwide data centers, will grow to 43% by 2030 to accommodate AI workloads.

    Frequently Asked Questions (FAQs)

    When is the Cisco AI Summit and how can I access it?

    The summit occurs February 3, 2026, at 9:00 a.m. PT with free global livestreaming requiring no registration at ciscoaisummit.com.

    Which executives are speaking at the Cisco AI Summit?

    Speakers include Jensen Huang (NVIDIA), Sam Altman (OpenAI), Marc Andreessen (a16z), Matt Garman (AWS), Fei-Fei Li (World Labs), and Lip-Bu Tan (Intel).

    What is Cisco’s AI infrastructure revenue forecast?

    Cisco projects $3 billion in AI infrastructure revenue from hyperscale customers by fiscal 2026, with over $2 billion in AI orders secured during fiscal 2025.

    How much investment is needed for AI data centers by 2030?

    McKinsey forecasts approximately $6.7 trillion in total data center investment by 2030, with $5.2 trillion specifically for AI-focused facilities.

    Mohammad Kashif
    Senior Technology Analyst and Writer at AdwaitX, specializing in the convergence of Mobile Silicon, Generative AI, and Consumer Hardware. Moving beyond spec sheets, his reviews rigorously test "real-world" metrics analyzing sustained battery efficiency, camera sensor behavior, and long-term software support lifecycles. Kashif’s data-driven approach helps enthusiasts and professionals distinguish between genuine innovation and marketing hype, ensuring they invest in devices that offer lasting value.
