Containers, Kubernetes, and microservices form the foundation of a cloud-native architecture, but they’re not the only considerations. In fact, as I write this, the Cloud Native Computing Foundation (CNCF) is considering adding a fourth pillar to its cloud-native requirements: the service mesh. A service mesh architecture for microservices makes sense, and in this post we explain why.
What Is a Service Mesh?
When it comes to understanding and managing microservices, a service mesh is critical. Microservices are small and short-lived; instances appear, move, and disappear constantly, which makes them difficult to observe and track. At the service mesh layer, network operators can clearly see how microservices interact with one another (and with other applications), secure those interactions, and manage them based on customizable policies.
The Importance of Service Mesh Architecture
- Provides Load Balancing for Microservices
- Improves Microservices Security
- Provides Visibility of Microservices
Provides Load Balancing for Microservices
One of the functions that a service mesh provides is load balancing for microservices. Because microservices are instantiated dynamically and can appear and disappear quickly, traditional network management tools aren’t granular enough to manage these life cycle events. The service mesh, however, knows which microservices are active and which are related (and how), and can enforce policy at a granular level by deciding how workloads should be balanced. For example, if a microservice is upgraded, the service mesh decides which requests should be routed to instances running the stable version and which to instances running the upgraded version. This policy can be adjusted repeatedly during the upgrade process and is the basis for what the industry calls a “canary upgrade.”
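As a rough illustration, here is what such a weighted routing policy might look like as Istio configuration. This is only a sketch: the "checkout" service name, the version labels, and the 90/10 split are hypothetical values chosen for the example.

```yaml
# Hypothetical example: split traffic for a "checkout" service 90/10
# between the stable and canary versions during an upgrade.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: checkout
spec:
  hosts:
    - checkout            # the in-mesh service being load balanced
  http:
    - route:
        - destination:
            host: checkout
            subset: stable
          weight: 90      # most requests stay on the known-good version
        - destination:
            host: checkout
            subset: canary
          weight: 10      # a small slice goes to the upgraded version
---
# Subsets map to pod labels, so "stable" and "canary" resolve to the
# pods running each version of the microservice.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: checkout
spec:
  host: checkout
  subsets:
    - name: stable
      labels:
        version: v1
    - name: canary
      labels:
        version: v2
```

During the upgrade, the weights can be shifted step by step (for example, 50/50 and then 0/100) without touching the application itself.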
Improves Microservices Security
Another area where the service mesh plays a valuable role is microservices security. It is considered best practice to apply the same security guidelines to communications between microservices as to their communications with the “outside” world. This means authentication, authorization, and encryption need to be enforced for all intra-microservice communications. The service mesh enforces these security measures without touching application code, and it can also enforce security-related policies such as whitelists/blacklists or rate limiting in the event of a denial-of-service (DoS) attack. But the service mesh doesn’t stop at security between microservices; it extends those measures to inbound and outbound communications that pass through the ingress and egress API gateways connecting microservices to other applications.
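As a sketch of how this looks in Istio terms, the policies below require mutual TLS for every workload in a namespace and allow only a particular service account to call a given service. The "prod" namespace, the "checkout" service, and the "frontend" service account are assumptions made for the example.

```yaml
# Require mutual TLS (authentication + encryption) for all
# service-to-service traffic in the "prod" namespace.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: prod
spec:
  mtls:
    mode: STRICT
---
# Authorization: only the "frontend" service account may call "checkout".
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: checkout-allow-frontend
  namespace: prod
spec:
  selector:
    matchLabels:
      app: checkout
  action: ALLOW
  rules:
    - from:
        - source:
            principals: ["cluster.local/ns/prod/sa/frontend"]
```

Because these policies live in the mesh rather than in the application, they can be tightened or relaxed without redeploying the microservices themselves.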
Provides Visibility of Microservices
Finally, the service mesh provides much-needed visibility into the microservices themselves. There are several tools available today that help with this:
- Istio, which provides the control plane for the service mesh.
- Envoy, a sidecar proxy deployed alongside each microservice that handles its communications and also provides the ingress/egress API gateway functions.
- Kiali, which visualizes the service mesh architecture at a given point in time and displays information such as error rates between microservices.
If you’re unfamiliar with the sidecar concept, you can think of it as an adjunct container attached to the “main” microservice container that provides a supporting service—in the case of Envoy, intercepting both inbound and outbound REST calls.
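In Istio’s case you typically don’t add that sidecar by hand: labeling a namespace enables automatic injection of the Envoy container into every pod scheduled there. A minimal sketch follows; the "prod" namespace name is an assumption for the example.

```yaml
# Pods created in this namespace automatically get an Envoy sidecar
# container injected next to the application container.
apiVersion: v1
kind: Namespace
metadata:
  name: prod
  labels:
    istio-injection: enabled
```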
While the CNCF will likely decide in favor of adding the service mesh to its cloud-native requirements, you can get those benefits today with Affirmed Networks. It’s just another example of our forward-thinking approach: it makes a lot more sense to build those capabilities into our cloud-native architecture right from the beginning than to mesh around with it later.