How to Improve Resilience in Kubernetes with Advanced Traffic Management

Original: https://www.nginx.com/blog/improve-kubernetes-resilience-with-advanced-traffic-management/

There’s a very easy way to tell that a company isn’t successfully using modern app development technologies – its customers are quick to complain on social media. They complain when they can’t stream the latest bingeworthy release. Or access online banking. Or make a purchase, because the cart is timing out.

Even if customers don’t complain publicly, a bad experience still has consequences. One of our customers – a large insurance company – told us that they lose customers when their homepage doesn’t load within 3 seconds.

All of those user complaints of poor performance or outages point to a common culprit: resiliency…or the lack of it. The beauty of microservices technologies – including containers and Kubernetes – is that they can significantly improve the customer experience by improving the resiliency of your apps. How? It’s all about the architecture.

I like to explain the core difference between monolithic and microservices architectures by using the analogy of a string of holiday lights. When a bulb goes out on an older‑style strand, the entire strand goes dark. If you can’t replace the bulb, the only thing worth decorating with that strand is the inside of your garbage can. This old style of lights is like a monolithic app, which also has tightly coupled components and fails if one component breaks.

But the lighting industry, like the software industry, detected this pain point. When a bulb breaks on a modern strand of lights, the others keep shining brightly, just as a well‑designed microservices app keeps working even when a service instance fails.

Kubernetes Traffic Management

Containers are a popular choice in microservices architectures because they are ideally suited for building an application using smaller, discrete components – they are lightweight, portable, and easy to scale. Kubernetes is the de facto standard for container orchestration, but there are a lot of challenges around making Kubernetes production‑ready. One element that improves both your control over Kubernetes apps and their resilience is a mature traffic management strategy that allows you to control services rather than packets, and to adapt traffic‑management rules dynamically or with the Kubernetes API. While traffic management is important in any architecture, for high‑performance apps two traffic‑management tools are essential: traffic control and traffic splitting.

Traffic Control

Traffic control (sometimes called traffic routing or traffic shaping) refers to the act of controlling where traffic goes and how it gets there. It’s a necessity when running Kubernetes in production because it allows you to protect your infrastructure and apps from attacks and traffic spikes. Two techniques to incorporate into your app development cycle are rate limiting and circuit breaking.
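Rate limiting caps how many requests a client can make in a given period, while circuit breaking stops sending requests to an unhealthy service until it has recovered. As a minimal sketch of the first technique, here is what rate limiting can look like with NGINX Ingress Controller’s Policy resource attached to a VirtualServer (the hostname and Service name are hypothetical):

    apiVersion: k8s.nginx.org/v1
    kind: Policy
    metadata:
      name: rate-limit-policy
    spec:
      rateLimit:
        rate: 10r/s                  # cap each client at 10 requests per second
        key: ${binary_remote_addr}   # track clients by IP address
        zoneSize: 10M                # shared memory for tracking client state
    ---
    apiVersion: k8s.nginx.org/v1
    kind: VirtualServer
    metadata:
      name: webapp
    spec:
      host: webapp.example.com       # hypothetical hostname
      policies:
      - name: rate-limit-policy      # apply the rate limit to this host
      upstreams:
      - name: webapp
        service: webapp-svc          # hypothetical Service name
        port: 80
      routes:
      - path: /
        action:
          pass: webapp

Requests above the limit are rejected before they reach the app, which protects it from attacks and innocent traffic spikes alike.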

Traffic Splitting

Traffic splitting (sometimes called traffic testing) is a subcategory of traffic control and refers to the act of controlling the proportion of incoming traffic directed to different versions of a backend app running simultaneously in an environment (usually the current production version and an updated version). It’s an essential part of the app development cycle because it allows teams to test the functionality and stability of new features and versions without negatively impacting customers. Useful deployment scenarios include debug routing, canary deployments, A/B testing, and blue‑green deployments. (There is a fair amount of inconsistency in the use of these four terms across the industry. Here we use them as we understand their definitions.)
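To make the idea concrete, here is a minimal sketch of a 90/10 canary split, using NGINX Ingress Controller’s VirtualServer resource as one example (the hostname and Service names are hypothetical):

    apiVersion: k8s.nginx.org/v1
    kind: VirtualServer
    metadata:
      name: app-canary-split
    spec:
      host: app.example.com          # hypothetical hostname
      upstreams:
      - name: app-stable             # current production version
        service: app-stable-svc      # hypothetical Service names
        port: 80
      - name: app-canary             # updated version under test
        service: app-canary-svc
        port: 80
      routes:
      - path: /
        splits:
        - weight: 90                 # 90% of requests stay on stable
          action:
            pass: app-stable
        - weight: 10                 # 10% go to the canary
          action:
            pass: app-canary

Promoting the canary, or rolling back, is then just a matter of adjusting the weights, which must sum to 100.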

How NGINX Can Help

You can accomplish advanced traffic control and splitting with most Ingress controllers and service meshes. Which technology to use depends on your app architecture and use cases. An Ingress controller, for example, is a good fit when you primarily need to manage the traffic entering your cluster (north‑south traffic).

If your deployment is complex enough to need a service mesh, a common use case is splitting traffic between services when testing or upgrading individual microservices. For example, you might want to do a canary deployment behind your mobile front‑end, between two different versions of your geo‑location microservice API.
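In a service mesh, that kind of split is commonly expressed with the SMI TrafficSplit resource, which NGINX Service Mesh implements. A hedged sketch for the geo‑location example, with hypothetical Service names:

    apiVersion: split.smi-spec.io/v1alpha3
    kind: TrafficSplit
    metadata:
      name: geo-location-split
    spec:
      service: geo-location-svc      # root Service the front end calls
      backends:
      - service: geo-location-v1     # current API version keeps most traffic
        weight: 90
      - service: geo-location-v2     # canary of the new API version
        weight: 10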

However, setting up traffic splitting with some Ingress controllers and service meshes can be time‑consuming and error‑prone.

With NGINX Ingress Controller and NGINX Service Mesh, you can easily configure robust traffic routing and splitting policies in seconds. Check out this livestream demo with our experts and read on to learn how we save you time with easier configs, advanced customizations, and improved visibility.

Easier Configuration with NGINX Ingress Resources and the SMI Specification

Two NGINX features make configuration easier: NGINX Ingress resources (custom resources such as VirtualServer and VirtualServerRoute), which let you define routing and splitting policies for NGINX Ingress Controller in native, validated Kubernetes YAML, and NGINX Service Mesh’s support for the Service Mesh Interface (SMI) specification, whose standard TrafficSplit resource appears in the sketch above.

Our tutorial, Deployments Using Traffic Splitting, walks through sample deployment patterns that leverage traffic splitting, including canary and blue-green deployments.

More Sophisticated Traffic Control and Splitting with Advanced Customizations

NGINX also supports more advanced customizations for controlling and splitting traffic, such as condition‑based routing that matches on attributes of a request like headers, cookies, or query arguments.
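As one illustration, debug routing can be implemented with the matches feature of the VirtualServer resource: only requests that carry a particular cookie reach the new version, while everyone else stays on the stable version. A sketch with hypothetical names:

    apiVersion: k8s.nginx.org/v1
    kind: VirtualServer
    metadata:
      name: app-debug-routing
    spec:
      host: app.example.com          # hypothetical hostname
      upstreams:
      - name: app-stable
        service: app-stable-svc      # hypothetical Service names
        port: 80
      - name: app-new
        service: app-new-svc
        port: 80
      routes:
      - path: /
        matches:
        - conditions:                # requests with cookie version=new
          - cookie: version          # are routed to the new build
            value: new
          action:
            pass: app-new
        action:
          pass: app-stable           # everyone else stays on stable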

Interpret Traffic Splitting Results with Dashboards

You’ve implemented your traffic splitting…now what? It’s time to analyze the result. This can be the hardest part because many organizations are missing key insights into how their Kubernetes traffic and apps are performing. NGINX makes getting insights easier with the NGINX Plus dashboard and pre‑built Grafana dashboards that visualize metrics exposed by the Prometheus Exporter. For more on improving visibility to gain insight, read How to Improve Visibility in Kubernetes on our blog.
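If you run your own Prometheus, the exporter’s metrics first have to be scraped before Grafana can chart them. The following is a generic sketch of a scrape job that discovers annotated pods (the job name and the annotation convention are assumptions; adjust them to how your exporter is actually deployed):

    scrape_configs:
    - job_name: nginx-kubernetes-pods
      kubernetes_sd_configs:
      - role: pod                    # discover pods via the Kubernetes API
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep                 # scrape only pods that opt in
        regex: "true"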

Master Microservices with NGINX

The NGINX Ingress Controller based on NGINX Plus is available for a 30-day free trial that includes NGINX App Protect to secure your containerized apps.

To try NGINX Ingress Controller with NGINX Open Source, you can obtain the release source code, or download a prebuilt container from Docker Hub.

The always‑free NGINX Service Mesh is available for download at f5.com.

Retrieved by Nick Shadrin from the nginx.com website.