nginMesh: NGINX Proxy in an Istio Service Mesh

Original: https://www.nginx.com/blog/nginmesh-nginx-as-a-proxy-in-an-istio-service-mesh/

This post is adapted from a presentation at nginx.conf 2017 by A.J. Hunyady, Senior Director of Product Management at NGINX, Inc. You can view the complete presentation, Deploying NGINX Proxy in an Istio Service Mesh, on YouTube. Also see the press release announcing nginMesh, the nginMesh GitHub repo, and Ed Robinson’s conference talk on NGINX open source projects.

Table of Contents

0:54 Agenda
1:32 Modern Apps
4:29 Cloud‑Native Apps
7:52 The Service Mesh
9:37 What Is Istio?
12:20 Istio with NGINX
14:25 Demo of Istio and NGINX in Action
31:48 nginMesh Roadmap
34:00 Q&A

Good afternoon, everyone. My name is A.J. Hunyady. I’m going to give a talk on NGINX as a proxy within an Istio service mesh. I’m the product owner and I’ll be joined on stage by Sehyo Chang, who’s the chief architect for this project. He’ll be doing a demo for us.

Before I get started, I’d like to ask a couple of questions. How many of you are familiar with service mesh? How about Istio, have any of you heard of Istio? Oh, that’s a fair number. All right, great.

0:54 Agenda

This is the agenda. I’ll speak about the evolution of microservices and what I perceive as the new stack. Then, I’ll briefly talk about the role of the service mesh, for those of you who haven’t run across it yet.

After that, I’ll do a brief intro on Istio and talk about how NGINX and Istio will work together in giving you a service mesh for enterprise – maybe I should call it an enterprise‑grade service mesh. Then Sehyo will be doing a demo for us. And finally, I’ll talk about roadmaps, since we’re going to deliver this in several phases.

1:32 Modern Apps

Let’s get started. You might have seen this slide live this morning [in Owen Garrett’s presentation of the NGINX Product Roadmap 2017]. I’m trying to set some context regarding the evolution of application architecture and the transitions it’s gone through, from client‑server apps in the 1990s to three‑tier apps, and from then on in the 2000s to Web 2.0 applications.

With the evolution of container technology, what we’re seeing now is the next transition: toward cloud‑native apps. If you look at cloud‑native apps, it’s all about microservices, which are defined as small, loosely coupled workloads that have a well‑defined API.

They’re portable and they communicate with each other through a networking layer. This is all nice and great for developers, because they have the ability to use any infrastructure they like. Since [microservices‑based applications] are portable, they can easily move from the laptop into the cloud.

Developers can use their favorite programming language – which might be C, Go, Java, and so forth – and they have the ability to work on smaller workloads, so they don’t really have to deal with the big, monolithic types of applications.

But there’s a downside, and the downside is that we’re [transferring] some of the complexity to IT Operations, which has to deal with deploying all these workloads across the ecosystem, across the data center.

You may deploy a thousand microservices, or maybe a million if you’re Netflix. (They just announced that they hit the one‑million mark of services deployed on the network in April of this year.) It’s one thing to [deploy large numbers of applications] in one data center, but it’s quite another to do it in multiple data centers and across geographical locations.

And you want to get the same type of reliability that you’ve seen before [with traditional application design], the same type of troubleshooting capabilities that you’ve seen for your apps. As a matter of fact, Gartner announced that by 2021, about half the enterprises worldwide will move from “cloud‑first” to “all in the cloud”, which gives us about four years. Not a big deal, right? But think of other transitions – in particular, the one to virtualization – those took about 10 to 15 years.

But there’s some good news. A lot of companies and vendors in this space are building innovative solutions.

4:29 Cloud-Native Apps

Let me talk a little bit about the new stack. If you look at cloud‑native apps and microservices, you may think, “Oh my gosh, service‑oriented architecture [SOA] all over again”. It’s becoming cool again, right? I would argue that that may be the case, but it’s quite a different type of environment. It’s no longer built around an enterprise service bus; it’s built on orchestration layers.

If you look at this new stack, it has several components. The first one is packaging – which is, I would argue, a pretty well‑solved problem by now. Docker has done a pretty good job of providing you the portability you need and the ability to push polyglots [applications written in different languages] within your system.

Docker has done something very interesting. In late 2014 or early 2015, CoreOS announced that they had their own container technology called rkt (Rocket), and then Red Hat announced Atomic. So Docker decided to start the OCI (Open Container Initiative).

What they’ve done is bring all these companies together to write a Version 1 specification. They’re trying to make sure that the packaging stays uniform across all enterprises, rather than just Docker‑specific packaging.

The next layer on the stack is orchestration. Once you figure out how to package – once you figure out how to get your workloads into testing – you have to deal with orchestration‑type challenges: how do you take containers and schedule them as computing jobs?

There are three major vendors there, maybe four if you include HashiCorp [which produces Nomad]: you have Kubernetes, Mesos, Docker Swarm, and Nomad. I would argue that, with about 40% of the market, Kubernetes is doing a good job trying to standardize that orchestration function.

The next layer, as Gus mentioned this morning, is interconnectivity between services. As you might have noted, when you’re deploying containers, networking them together is not a trivial task. It’s even harder to secure them. This is where service mesh connectivity comes in. I’ll talk a little bit more about this because it has a lot to do with Istio.

The last layer of the stack is the application platform. That’s where we bring in the policy layer, the workflows, and the deployment of applications across multiple environments. Those are the types of problems that NGINX Controller, which we introduced this morning, is designed to solve by giving you workflows, policy, and role‑based access. OpenShift is also solving some of these issues by giving you the ability to set up access control across your environments.

7:52 The Service Mesh

What is a service mesh? It’s a uniform infrastructure layer for service‑to‑service communication. It utilizes lightweight proxies and it can be deployed, typically, in two flavors.

It may be deployed side by side with the application. If you look at this diagram, it may look familiar, as it’s our reference architecture [the NGINX Microservices Reference Architecture] developed last year by the team led by Chris Stetson – I see him here in the first row. Some of our customers have tried it. It gives you the ability to run the proxy functionality as a process side by side with the application.

Another implementation is done through a sidecar proxy. That’s the approach Istio has taken, which I’m going to describe in more detail in the next set of slides.
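
To make the sidecar flavor concrete, here’s a minimal sketch of a Kubernetes Pod running an application container next to a proxy container. The names, images, and ports are illustrative assumptions, not configuration from the talk:

```yaml
# Sketch of the sidecar pattern: the proxy container shares the Pod's
# network namespace with the app, so it can handle the app's traffic.
apiVersion: v1
kind: Pod
metadata:
  name: reviews-v1              # hypothetical service name
  labels:
    app: reviews
spec:
  containers:
  - name: app                   # the application itself
    image: example/reviews:v1   # hypothetical image
    ports:
    - containerPort: 8080
  - name: proxy                 # the sidecar proxy
    image: nginx:1.13           # stock NGINX, shown only for illustration
    ports:
    - containerPort: 80
```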

Why is a service mesh required? It gives you uniform service‑to‑service routing and load balancing, resilience features such as circuit breaking, security through mutual TLS, and visibility into the traffic between your services.

9:37 What Is Istio?

What is Istio? Istio is an open platform that provides a uniform way to connect, manage, and secure microservices. It was introduced by Google in collaboration with IBM and other vendors only a few months ago, on May 23, 2017.

It’s currently in alpha, version 0.1.6. Google anticipates that they’ll release version 0.2 towards the end of the month [September 2017]. They’ve put up a website where there’s a lot more information. Istio has multiple layers that I’m going to talk to you about.

Istio provides its own control plane. As I mentioned in the previous slides, there are two approaches to deploying a proxy: as a sidecar or integrated.

Istio has chosen to give you a sidecar proxy which is transparent to the application, and it’s deployed on top of a Kubernetes environment, so each service that’s deployed by Kubernetes has a proxy side by side with it.

Then it enables the application services to communicate with each other transparently. It routes the traffic that moves from one service to another through the sidecar proxy, so the services are able to communicate with each other without the application being involved.
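
How does the sidecar stay transparent to the application? In Istio‑style meshes this is typically done with iptables rules installed by an init container, which redirect the Pod’s TCP traffic to the proxy’s port before the application starts. A rough sketch of the relevant Pod‑spec fragment, with the image name and port number as assumptions rather than nginMesh specifics:

```yaml
# Hypothetical init container: before the app starts, redirect inbound
# TCP traffic to the proxy's listening port (15001 here).
initContainers:
- name: proxy-init
  image: example/proxy-init:latest    # hypothetical image
  securityContext:
    capabilities:
      add: ["NET_ADMIN"]              # required to modify iptables rules
  command:
  - sh
  - -c
  - iptables -t nat -A PREROUTING -p tcp -j REDIRECT --to-port 15001
```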

It also takes on the identity of the application. Why is that important? You can do some really interesting things with regard to security by taking the identity of the application. You can set up things such as mTLS [mutual TLS], where you authenticate both the client and the server (both sides of the service), so you can ensure that only authorized services communicate with each other.
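
Istio’s configuration for this was still changing in the alpha releases discussed here, but to make the idea concrete: in current Istio versions, mesh‑wide mutual TLS is switched on with a resource like the following (shown as an illustration, not the 0.1‑era API):

```yaml
# Later-Istio illustration: require mutual TLS for every workload
# in the mesh, rejecting plaintext service-to-service connections.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # applying it here makes the policy mesh-wide
spec:
  mtls:
    mode: STRICT
```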

It also does things such as certificate authority automation. It enables you to rotate certificates; that’s no longer a manual operation. If you look at Istio, there are really three main components:

  1. Pilot, which holds the configuration for the routing domain and plugs into service discovery.
  2. Mixer, which does three things, in a sense: it makes sure that only services that should communicate will communicate; it does monitoring; and it also does quota management.
  3. Auth, which handles authentication, as I’ve already mentioned.

12:20 Istio with NGINX

Where does NGINX come into play? I think that’s kind of a giveaway. NGINX will be represented in this diagram by becoming the sidecar proxy in the Istio environment, which gives you the best‑in‑class features you already know: routing, load balancing, circuit‑breaker capabilities, caching, and encryption.

More importantly, you have the ability to bring in your own custom modules and third‑party modules. You can bring in your choice of authentication mechanisms. You can even bring in dynamic language support. For example, if you have Lua scripts or nginScript code, they can now be integrated into an Istio environment.

And we have a roadmap that we’re going to go through as we iterate on the Istio deployment. On one side, we have Unit – which we announced this morning – which gives you the ability to take the sidecar proxy and integrate it side by side with services.

There are some performance improvements from that: you deploy one component instead of two, and it gives you better compatibility. As you’re probably aware, within the Kubernetes environment there are Pods, but those don’t really exist in a Docker Swarm or Mesos environment. Having the Unit component perform the service‑mesh function as well as the application‑server function means you can easily port from one environment to another.

On top of that, you’re going to have NGINX Controller, which gives you the ability to control your workflows. Because, while Pilot enables you to set various routes for your application – for blue‑green, A/B, and canary types of workloads – you need a higher‑level abstraction that gives you the ability to map applications together, attach metadata to them, and also move those applications from one cloud to another, or from one environment to another.
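
As an illustration of the kind of routing Pilot manages, here’s what a canary weight split looks like. This uses the VirtualService API from later Istio releases (the alpha‑era route‑rule format differed), and the service and subset names are hypothetical:

```yaml
# Hypothetical canary rule: send 90% of traffic to v1 and 10% to v2.
# Assumes a DestinationRule elsewhere defines the v1 and v2 subsets.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews                 # hypothetical service
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
```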

14:25 Demo of Istio and NGINX in Action

I’m going to ask Sehyo Chang, NGINX’s Director of Engineering, to come up and show us a demo of Istio and NGINX in action. We’ll be running Kubernetes, Istio, and NGINX together.

See the video for Sehyo Chang’s demo, which begins at 14:25.

We have a few more slides that we’re going to go over. I think what Sehyo has demonstrated is that NGINX makes quite a capable proxy within an Istio environment. We actually have several components that we plugged in. As part of the Istio environment, we have a plugin which we bring into NGINX to communicate with the Mixer over gRPC.

We also have an agent that enables us to redirect all the traffic that comes into the service through the proxy. Traffic goes in, is funneled through the service engine that we’ve provided, and then goes up to the service.

It’s pretty much transparent to the application. You don’t have to make any changes to the application. You deploy the application as you would in a Kubernetes environment, and then you make changes to traffic routing without impacting the application itself.

It’s been a really interesting setup. We weren’t able to get everything to mirror.

Anyway, what we’re trying to say is that this project is available on GitHub today. We just open‑sourced it, so you can play around with it yourself. It comes with instructions.

31:48 nginMesh Roadmap

In terms of the nginMesh roadmap, as we said, it’s in Tech Preview today. We have it available on GitHub at github.com/nginmesh. This is our new product name. We’re going to have it available in beta, and at that time we’ll publish more than just the container image.

We’re going to do that with Istio 0.2 because there are several components that will be changing within the environment. We’ll also add OAuth. Then, toward the end of October, we’ll add the Ingress Controller part of this, so you’ll be able to have a full chain of information, and you’ll have full visibility across the ecosystem.
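
For context on that last item: an Ingress controller watches Kubernetes Ingress resources, which describe how traffic entering the cluster reaches services. A minimal example using the Kubernetes API of that era, with a hypothetical hostname and service:

```yaml
# Minimal Ingress resource (2017-era API): route requests for
# example.com to the hypothetical "reviews" service on port 80.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"   # select the NGINX controller
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: reviews
          servicePort: 80
```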

Last but not least, we have a bunch of future enhancements that we’d like to make. We’ll bring in gRPC support for upstreams in NGINX, and then we’ll integrate NGINX Plus support, as well as Unit and Controller.

In conclusion, we’d like to leave you with the statement that NGINX has joined CNCF, and we’re partnering with Istio to build innovative solutions that help enterprises transition to modern microservices architectures. The project, as I said, is available on GitHub. Thank you very much.

34:00 Q&A

Q: What’s the difference between an API gateway and a service mesh?

A: When you think of an API gateway, you’re dealing with high‑level abstractions. You’re assigning different types of authentication layers. Typically, it has to do with the traffic coming into your network. A service mesh deals with service‑to‑service communication. It provides you security.

It provides you mutual authentication. With an API gateway, it’s typically one‑way authentication: the server has an SSL certificate and authenticates itself to the client. Typically, an API gateway comes with a control plane that enables you to provide AVS for multiple types of applications.

A service mesh – and Istio itself – is more about interservice communication and abstracting applications from each other. They perform different functions.

Retrieved by Nick Shadrin from nginx.com website.