

Power of Cloud Application Centric Infrastructure (Cloud ACI) in Service Chaining

Jun 16, 2021 | Hi-network.com

It is a reality that most enterprise customers are moving from a private data center model to a hybrid multi-cloud model. They are either moving some of their existing applications or developing newer applications in a cloud-native way to deploy in the public clouds. Customers are wary of sticking to a single public cloud provider for fear of vendor lock-in. Hence, we are seeing a very high percentage of customers adopting a multi-cloud strategy; according to the Flexera 2021 State of the Cloud Report, that number stands at 92%. While a multi-cloud model gives customers flexibility and better disaster recovery and helps with compliance, it also comes with a number of challenges. Customers have to learn not just one, but all of the different public clouds' nuances and implementations.

Navigating the different islands of public cloud

When customers adopt a multi-cloud strategy, they often begin with one cloud and then expand to others. Though most public clouds were built with the overarching goal of providing instant access to resources at a lower cost, their individual implementations and corresponding cloud-native constructs differ. Hence, automation artifacts built for one public cloud provider cannot be reused for other clouds. As we see our customers undertake the multi-cloud journey, it is increasingly clear that an automated way to configure the constructs of the various clouds is a huge benefit for them.

Cisco provides this solution to our customers via Cloud ACI. Cisco Application Centric Infrastructure (ACI) is Cisco's premier Software Defined Networking (SDN) solution for the data center. The ACI solution now caters not only to the on-premises data center but to the public cloud as well, offering customers a seamless experience to orchestrate and manage consistent policies for their workloads irrespective of where those workloads reside. Cloud ACI provides the needed abstraction across multiple public clouds, giving customers a single policy model in which to define their intent. The Cisco ACI solution takes care of translating that intent into the required cloud-native constructs of each cloud.

The Cloud ACI solution achieves this by deploying the Cisco Cloud Application Policy Infrastructure Controller (Cloud APIC) in the cloud site, such as Amazon AWS or Microsoft Azure. The Cloud APIC is registered with the Cisco Nexus Dashboard Orchestrator (formerly Multi-Site Orchestrator), the master controller for managing the different ACI sites. The user defines policies on the Nexus Dashboard Orchestrator, which pushes them down to the sites where they need to be applied. The Cloud APIC at each site then takes care of configuring the right networking and security constructs for that cloud.
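
To make the workflow concrete, here is a minimal Python sketch of what "define intent once, let the orchestrator push it to every site" could look like over a REST API. The endpoint path, payload fields, and token handling are hypothetical stand-ins for illustration only, not the documented Nexus Dashboard Orchestrator API; the sketch assumes the requests library.

```python
# Illustrative only: the endpoint path, payload shape, and auth handling are
# hypothetical stand-ins, not the documented Nexus Dashboard Orchestrator API.
import requests

ORCHESTRATOR = "https://nd-orchestrator.example.com"  # placeholder address
TOKEN = "REPLACE_WITH_API_TOKEN"                      # obtained via the orchestrator's login flow

def push_schema(schema: dict) -> None:
    """Send one policy schema to the orchestrator, which deploys it to each registered site."""
    resp = requests.post(
        f"{ORCHESTRATOR}/api/v1/schemas",             # hypothetical path
        json=schema,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()

# The intent (a tenant and a VRF) is expressed once; the orchestrator pushes it to the
# AWS and Azure sites, where each Cloud APIC renders the cloud-native equivalent.
push_schema({
    "name": "prod-schema",
    "tenant": "acme",
    "vrfs": [{"name": "prod-vrf"}],
    "sites": ["aws-us-east-1", "azure-eastus"],
})
```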

Let us take the example of an enterprise that plans to deploy workloads in both AWS and Azure. Resources in AWS are deployed within a VPC, whereas Azure requires a Resource Group. AWS provides native load balancing via Elastic Load Balancers, whereas in Azure you would use an Application Gateway for L7 load balancing and a Network Load Balancer for L4 traffic. The native cloud constructs are different, and end users have to learn both the AWS and the Azure languages. If the enterprise uses Cloud ACI, configuring a VRF (Virtual Routing and Forwarding context) from the Nexus Dashboard Orchestrator translates into creating a VPC in the AWS site and a Virtual Network (VNet) in the Azure site. It's that simple!
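
For contrast, here is a rough sketch of what you would have to do yourself, per cloud, without that abstraction, assuming the boto3 and azure-mgmt-network SDKs. All names, IDs, and address ranges are placeholders.

```python
# What Cloud ACI abstracts away: the same "routing context" intent expressed twice,
# once per cloud, using each provider's own SDK and constructs.
import boto3
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# AWS: the routing context is a VPC.
ec2 = boto3.client("ec2", region_name="us-east-1")
vpc = ec2.create_vpc(CidrBlock="10.1.0.0/16")

# Azure: the routing context is a Virtual Network, created inside a Resource Group.
net = NetworkManagementClient(DefaultAzureCredential(), "SUBSCRIPTION_ID")  # placeholder id
net.virtual_networks.begin_create_or_update(
    "prod-rg",      # resource group (placeholder name)
    "prod-vnet",
    {
        "location": "eastus",
        "address_space": {"address_prefixes": ["10.2.0.0/16"]},
    },
).result()
```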

Load Balancers and More!

Cloud ACI is particularly powerful when automating applications behind native load balancing services. Both large web-scale applications and smaller enterprise applications are typically deployed behind a load balancer for high availability and elasticity, so all major public cloud providers offer load balancing as a native service. A load balancer has a frontend, the IP and port at which the application is reached, and a backend, the servers that serve the application. Depending on the load, the servers hosting the application can be scaled up or down elastically.
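
As a concrete illustration of the frontend/backend split, the following sketch creates a native AWS load balancer with boto3: a target group for the backend and a listener for the frontend. Names and IDs are placeholders; this is exactly the kind of per-cloud plumbing that Cloud ACI automates.

```python
# Frontend vs. backend, illustrated with AWS's native load balancing API (boto3).
# All names, subnet IDs, and the VPC ID below are placeholders.
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Backend: a target group that will hold the application servers.
tg = elbv2.create_target_group(
    Name="app-backend",
    Protocol="HTTP",
    Port=8080,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)["TargetGroups"][0]

# Frontend: the listener defines the port on which the application is reached.
lb = elbv2.create_load_balancer(Name="app-frontend", Subnets=["subnet-aaa", "subnet-bbb"])
elbv2.create_listener(
    LoadBalancerArn=lb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)
```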

Cloud ACI provides a neat way to automate the creation of native load balancers as well as to configure and manage their lifecycle. The solution offers an innovative way to add backend servers as targets of the load balancers dynamically: the servers are tagged and a service graph is created in ACI. A service graph represents the flow of data between consumers and providers via one or more service devices. Cloud ACI creates the load balancers and configures the frontend port based on the user's configuration. Once the user specifies the desired provider endpoint group (EPG) via a contract, the solution automatically adds the servers that belong to that provider EPG as the backend of the load balancer.
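
The sketch below illustrates the idea, under the assumption of AWS and boto3: instances carrying a given tag are treated as members of the provider EPG and registered as load balancer targets. The tag key, names, and ARN are placeholders, and this is a simplification of what Cloud APIC actually does.

```python
# A rough sketch of tag-driven classification: instances carrying a given tag are
# treated as members of the provider EPG and attached as load balancer backends.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Placeholder ARN of the backend target group.
TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/app-backend/abc123"

def epg_members(tag_key: str, tag_value: str) -> list[str]:
    """Return running instance IDs whose tags place them in the EPG."""
    ids = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[
            {"Name": f"tag:{tag_key}", "Values": [tag_value]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    ):
        for reservation in page["Reservations"]:
            ids.extend(i["InstanceId"] for i in reservation["Instances"])
    return ids

# Add every member of the "web" EPG as a backend target.
members = epg_members("epg", "web")
elbv2.register_targets(
    TargetGroupArn=TARGET_GROUP_ARN,
    Targets=[{"Id": instance_id} for instance_id in members],
)
```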

This is pretty powerful: as VMs scale up and down, there is no need to manually add or remove servers from the load balancer backend. Cloud APIC auto-detects the servers, classifies them into the right EPG, and dynamically adds or removes them from the backend of the load balancer.
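
Conceptually, that behavior looks like the small reconciliation loop sketched below, reusing the placeholder helper and ARN from the previous sketch. This is only an illustration of the idea; Cloud APIC performs this synchronization for you.

```python
# Conceptual reconciliation, along the lines of what Cloud APIC automates: keep the
# load balancer's backend in sync with whichever instances currently carry the EPG tag.
# Reuses epg_members(), elbv2, and TARGET_GROUP_ARN from the previous sketch.
import time

def reconcile_backend(target_group_arn: str, tag_key: str, tag_value: str) -> None:
    desired = set(epg_members(tag_key, tag_value))
    current = {
        t["Target"]["Id"]
        for t in elbv2.describe_target_health(TargetGroupArn=target_group_arn)["TargetHealthDescriptions"]
    }
    to_add = desired - current
    to_remove = current - desired
    if to_add:
        elbv2.register_targets(TargetGroupArn=target_group_arn,
                               Targets=[{"Id": i} for i in to_add])
    if to_remove:
        elbv2.deregister_targets(TargetGroupArn=target_group_arn,
                                 Targets=[{"Id": i} for i in to_remove])

while True:  # simple polling; an event-driven trigger would be the more realistic choice
    reconcile_backend(TARGET_GROUP_ARN, "epg", "web")
    time.sleep(30)
```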

Unleash the power of service chaining

For web applications reachable over the internet, it is paramount to build in additional security to protect the application and its backend servers from attacks. In such cases, it is common for customers to insert a firewall before traffic hits the load balancer. The firewall could be Cisco's Firepower Threat Defense (FTD) or a third-party firewall available in the public cloud marketplace, such as those from Check Point or Fortinet, or the VM-Series Next-Generation Firewall from Palo Alto Networks. Cloud ACI provides the perfect automation for this use case by letting users build a multi-node service graph. To provide high availability for the firewall, a load balancer may be placed in front of it.

Cloud ACI can automate the entire flow by managing the lifecycle of both the frontend and the backend load balancers. It automates the creation of the load balancers, configures the frontend port and protocol, and adds the right backend targets. As defined by the service chain, it adds the firewall instances as the targets of the frontend load balancer and the application servers as the targets of the backend application load balancer (ALB). Cloud APIC also configures the security groups at each layer with the right set of rules based on the contract, ensuring that no unintended traffic flows between the user and the backend application servers. Can it get better than this? The only configuration required from Cloud ACI is the following (a rough sketch of these three pieces appears after the list):

  • creation of the logical devices for the load balancers and firewall
  • creation of a service graph specifying the location of the service devices in the chain
  • configuration of a contract between the consumer and the backend application server endpoint group
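
To make the three pieces tangible, here is a purely illustrative way to express them as data. The field names and values below are invented for this sketch and are not the Cloud APIC or Nexus Dashboard Orchestrator schema.

```python
# Purely illustrative data model (not the actual Cloud APIC schema) showing the three
# pieces of configuration listed above: the logical devices, the service graph that
# orders them, and the contract that ties consumer and provider EPGs to the graph.
service_devices = {
    "frontend-nlb": {"type": "network-load-balancer", "scheme": "internet-facing"},
    "perimeter-fw": {"type": "third-party-firewall", "ha_pair": True},
    "backend-alb":  {"type": "application-load-balancer", "scheme": "internal"},
}

service_graph = {
    "name": "web-chain",
    # Traffic traverses the nodes in this order on its way to the provider EPG.
    "nodes": ["frontend-nlb", "perimeter-fw", "backend-alb"],
}

contract = {
    "name": "internet-to-web",
    "consumer_epg": "external",
    "provider_epg": "web-servers",
    "filters": [{"protocol": "tcp", "port": 443}],
    "service_graph": "web-chain",
}
```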

As you can see, this is extremely simple; it saves time and reduces configuration complexity for the user. What's more, the network admin can rest easy knowing that any dynamic scaling of the backend servers by the application or server admin will be handled by Cloud APIC.

And that's not all: the Cloud ACI service chain also supports redirecting traffic to a service device. Interested in learning more? Details to follow in a future blog! Meanwhile, check out these whitepapers on Cloud ACI and its integration with AWS and Azure:

Cloud ACI and AWS

Cloud ACI and Azure


Tags: Multi-Site, Service Chaining, Service Stitching, Cloud ACI, Nexus Dashboard Orchestrator, Cloud Load Balancing
