
Power Your GenAI Ambitions with New Cisco AI-Ready Data Center Infrastructure

Oct 29, 2024 | Hi-network.com

Let's start with a staggering statistic: According to McKinsey, generative AI, or GenAI, will add somewhere between $2.6T and $4.4T per year to global economic output, with enterprises at the forefront. Whether you're a manufacturer looking to optimize your global supply chain, a hospital that's analyzing patient data to suggest personalized treatment plans, or a financial services company wanting to improve fraud detection, AI may hold the keys for your organization to unlock new levels of efficiency, insight, and value creation.

Many of the CIOs and technology leaders we talk to today recognize this. In fact, most say that their organizations are planning full GenAI adoption within the next two years. Yet according to the Cisco AI Readiness Index, only 14% of organizations report that their infrastructures are ready for AI today. What's more, a full 85% of AI projects stall or are disrupted once they have started.

The reason? There's a high barrier to entry. It can require an organization to completely overhaul infrastructure to meet the demands of specific AI use cases, build the skillsets needed to develop and support AI, and contend with the additional cost and complexity of securing and managing these new workloads.

We believe there's an easier path forward. That's why we're excited to introduce a strong lineup of products and solutions for data- and performance-intensive use cases like large language model training, fine-tuning, and inferencing for GenAI. Many of these new additions to Cisco's AI infrastructure portfolio are being announced at Cisco Partner Summit and can be ordered today.

These announcements address the comprehensive infrastructure requirements that enterprises have across the AI lifecycle, from building and training sophisticated models to widespread use for inferencing. Let's walk through how that would work with the new products we're introducing.

Accelerated Compute

A typical AI journey starts with training GenAI models on large amounts of data to build the model's intelligence. For this important stage, the new Cisco UCS C885A M8 Server is a powerhouse designed to tackle the most demanding AI training tasks. With its high-density configuration of NVIDIA H100 and H200 Tensor Core GPUs, coupled with the efficiency of NVIDIA HGX architecture and AMD EPYC processors, the UCS C885A M8 provides the raw computational power necessary for handling massive data sets and complex algorithms. Moreover, its simplified deployment and streamlined management make it easier than ever for enterprise customers to embrace AI.

Cisco UCS C885A M8 Server: High-density server for demanding AI training tasks
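
To make the training stage concrete, here is a minimal sketch of the kind of distributed training job a GPU-dense server like this would run. It uses PyTorch's DistributedDataParallel with a placeholder model, synthetic data, and assumed hyperparameters; nothing in it is specific to the Cisco or NVIDIA products announced here.

```python
# Illustrative multi-GPU training loop using PyTorch DistributedDataParallel (DDP).
# Launch with one process per GPU, e.g.: torchrun --nproc_per_node=8 train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets LOCAL_RANK for each worker process
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    dist.init_process_group(backend="nccl")  # NCCL handles GPU-to-GPU communication

    # Placeholder model; in practice this would be a large transformer
    model = torch.nn.Sequential(
        torch.nn.Linear(4096, 4096),
        torch.nn.GELU(),
        torch.nn.Linear(4096, 4096),
    ).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    loss_fn = torch.nn.MSELoss()

    for step in range(100):
        # Synthetic batch standing in for real training data
        x = torch.randn(32, 4096, device=local_rank)
        y = torch.randn(32, 4096, device=local_rank)
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()          # gradients are all-reduced across GPUs here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```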

Scalable Network Fabric for AI Connectivity

To train GenAI models, clusters of these powerful servers often work in unison, generating an immense flow of data that necessitates a network fabric capable of handling high bandwidth with minimal latency. This is where the newly released Cisco Nexus 9364E-SG2 Switch shines. Its high-density 800G aggregation ensures smooth data flow between servers, while advanced congestion management and large buffer sizes minimize packet drops, keeping latency low and training performance high. The Nexus 9364E-SG2 serves as a cornerstone for a highly scalable network infrastructure, allowing AI clusters to expand seamlessly as organizational needs grow.

The new Cisco Nexus 9364E-SG2 Switch provides 800G aggregation for AI connectivity
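
To see why a high-bandwidth, low-latency fabric matters, consider a rough back-of-envelope estimate of the gradient traffic that data-parallel training generates between nodes. The model size, gradient precision, and step time below are assumptions chosen only to illustrate the scale, not measured figures for any Cisco or NVIDIA system.

```python
# Rough, illustrative estimate of per-node network traffic from data-parallel training.
def allreduce_bytes_per_step(num_params: int, bytes_per_param: int = 2,
                             num_nodes: int = 8) -> float:
    # A ring all-reduce moves roughly 2 * (N - 1) / N of the gradient volume
    # into and out of each node on every training step.
    grad_bytes = num_params * bytes_per_param
    return 2 * (num_nodes - 1) / num_nodes * grad_bytes

params = 70e9          # assumed 70B-parameter model
step_time_s = 1.0      # assumed time per training step
per_node_bytes = allreduce_bytes_per_step(int(params))
gbits_per_s = per_node_bytes * 8 / step_time_s / 1e9
print(f"~{gbits_per_s:.0f} Gb/s of gradient traffic per node per step")
```

Under these assumptions the result is on the order of a few terabits per second per node, which is why 800G links, congestion management, and deep buffers are central to keeping GPUs busy rather than waiting on the network.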

Purchasing Simplicity

Once these powerful models are trained, you need infrastructure deployed for inferencing to provide actual value, often across a distributed landscape of data centers and edge locations. We have greatly simplified this process with new Cisco AI PODs that accelerate deployment of the entire AI infrastructure stack. No matter where you fall on the spectrum of use cases mentioned at the beginning of this blog, AI PODs are designed to offer a plug-and-play experience with NVIDIA accelerated computing. The pre-sized and pre-validated bundles of infrastructure eliminate the guesswork from deploying edge inferencing, large-scale clusters, and other AI inferencing solutions, with more use cases planned for release over the next few months.

Our goal is to enable customers to confidently deploy AI PODs with predictability around performance, scalability, cost, and outcomes, while shortening time to production-ready inferencing with a full stack of infrastructure, software, and AI toolsets. AI PODs include NVIDIA AI Enterprise, an end-to-end, cloud-native software platform that accelerates data science pipelines and streamlines AI development and deployment. Managed through Cisco Intersight, AI PODs provide centralized control and automation, simplifying everything from configuration to day-to-day operations.
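
As a simple illustration of the inferencing workloads such a stack hosts, the sketch below serves text generation with the open-source Hugging Face transformers library. The model and prompt are placeholders chosen for brevity; a production AI POD would typically serve a much larger LLM through the NVIDIA AI Enterprise tooling mentioned above.

```python
# Minimal sketch of an LLM inferencing workload; not part of the Cisco/NVIDIA stack itself.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="gpt2",   # small open placeholder model; production PODs would serve far larger LLMs
    device=0,       # first GPU; use device=-1 to run on CPU
)

prompt = "Summarize the key risks in our Q3 supply chain report:"
result = generator(prompt, max_new_tokens=64, do_sample=False)
print(result[0]["generated_text"])
```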

Cloud Deployed and Cloud Managed

To help organizations modernize their data center operations and enable AI use cases, we further simplify infrastructure deployment and management with Cisco Nexus Hyperfabric, a fabric-as-a-service solution announced earlier this year at Cisco Live. Cisco Nexus Hyperfabric features a cloud-managed controller that simplifies the design, deployment, and management of the network fabric for consistent performance and operational ease. The hardware-accelerated performance of Cisco Nexus Hyperfabric, with its inherent high bandwidth and low latency, optimizes AI inferencing, enabling fast response times and efficient resource utilization for demanding, real-time AI applications. Furthermore, Cisco Nexus Hyperfabric's comprehensive monitoring and analytics capabilities provide real-time visibility into network performance, allowing for proactive issue identification and resolution to maintain a smooth and reliable inferencing environment.

Cisco Nexus Hyperfabric delivers cloud-managed, high-performance AI networking
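
For a sense of what cloud-managed, API-driven visibility can look like in practice, here is a hypothetical sketch that polls a controller's REST endpoint for per-link latency and flags outliers. The URL, token, and response fields are invented for illustration only and do not describe the actual Hyperfabric API.

```python
# Hypothetical telemetry poll against a cloud-managed fabric controller.
# Endpoint, credential, and response schema are placeholders, not a real API.
import requests

CONTROLLER_URL = "https://controller.example.com/api/v1/fabric/health"  # placeholder URL
API_TOKEN = "REPLACE_ME"  # placeholder credential

def check_fabric_health(threshold_us: float = 10.0) -> None:
    resp = requests.get(
        CONTROLLER_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    for link in resp.json().get("links", []):      # assumed response shape
        latency = link.get("latency_us", 0.0)
        if latency > threshold_us:
            print(f"Link {link.get('id')}: {latency} us exceeds {threshold_us} us threshold")

if __name__ == "__main__":
    check_fabric_health()
```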

By providing a seamless continuum of solutions, from powerful training servers and high-performance networking to simplified inference deployments, we are enabling enterprises to accelerate their AI initiatives, unlock the full potential of their data, and drive meaningful innovation.

Availability Information and More

The Cisco UCS C885A M8 Server is orderable now and is expected to ship to customers by the end of this year. The Cisco AI PODs will be orderable in November. The Cisco Nexus 9364E-SG2 Switch will be orderable in January 2025, with availability beginning in the first calendar quarter of 2025. Cisco Nexus Hyperfabric will be available for purchase in January 2025 with 30+ certified partners. Hyperfabric AI will be available in May and will include a plug-and-play AI solution with Cisco UCS servers (with embedded NVIDIA accelerated computing and AI software) and optional VAST storage.

For more information about these products, please visit:

  • Cisco UCS C885A M8 Server
  • Cisco AI PODs
  • Cisco Nexus 9364E-SG2 Switch
  • Cisco Nexus Hyperfabric

If you are attending the Cisco Partner Summit this week, please visit the solution showcase to see the Cisco UCS C885A M8 Server and Cisco Nexus 9364E-SG2 Switch. You can also attend the business insights session BIS08, "Revolutionize tomorrow: Unleash innovation through the power of AI-ready infrastructure," for more details on the products and solutions announced.


