What is a Service Mesh?

A service mesh is a new paradigm that provides containers and microservices-based applications with services delivered directly from within the compute cluster. A service mesh provides monitoring, scalability, and high availability services for modern applications through APIs instead of using discrete appliances.

 

What are Microservices?

Microservices is an architectural design for building an application as a set of small, independent services; in almost all practical implementations, the result is a distributed application. Microservices get their name because each function of the application operates as an independent service. This architecture allows each service to scale or be updated without disrupting other services in the application.

A microservices framework creates a massively scalable and distributed system, which avoids the bottlenecks of a central database. It also enables continuous integration / continuous delivery (CI/CD) pipelines and helps modernize the technology stack.

Companies like Amazon and Netflix have re-architected monolithic applications as microservices applications, setting a new standard for container technology.

Microservice Benefits

The biggest microservices benefit is simplicity. Applications are easier to build, optimize and maintain when they’re split into a set of smaller parts. Managing the code also becomes more efficient because each microservice can be built with the programming language, database and software ecosystem best suited to it. More microservice benefits include:

  • Independence — Small teams of developers can work more nimbly than large teams.
  • Resilience — An application will still function if part of it goes down because microservices allow for spinning up a replacement.
  • Scalability — Meeting demand is easier when only the necessary components have to scale, which requires fewer resources.
  • Lifecycle automation — The individual components of a microservices application fit more easily into continuous delivery pipelines, where monoliths bring complexity.

What are Containers?

Containers are a lightweight, efficient and standard way for applications to move between environments and run independently. Everything needed to run the application (except for the shared operating system on the server) is packaged inside the container object: code, runtime, system tools, libraries and dependencies.

Types of Containers

Stateless Microservices

Stateless microservices don’t save or store data; they handle requests and return responses, and any data required for a request is discarded once the request is complete. Stateless containers may use limited storage, but anything stored is lost when the container restarts.

Stateful Microservices

Stateful microservices require storage to run. They directly read from and write to data saved in a database, and that storage persists when the container restarts. However, stateful microservices don’t usually share databases with other microservices.
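
In Kubernetes terms, this distinction maps roughly onto Deployments (stateless) and StatefulSets (stateful). The following is a minimal, illustrative sketch of a stateful microservice; the names, image and storage size are hypothetical:

    # StatefulSet: each replica keeps a persistent volume that survives restarts.
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: orders-db
    spec:
      serviceName: orders-db
      replicas: 1
      selector:
        matchLabels:
          app: orders-db
      template:
        metadata:
          labels:
            app: orders-db
        spec:
          containers:
            - name: orders-db
              image: postgres:16              # hypothetical backing store
              volumeMounts:
                - name: data
                  mountPath: /var/lib/postgresql/data
      volumeClaimTemplates:                   # persistent storage per replica
        - metadata:
            name: data
          spec:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 1Gi

A stateless microservice, by contrast, would be a plain Deployment with no volume claims.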

Monolithic Architecture versus Microservices Architecture

Applications were traditionally built as monolithic pieces of software. Monolithic applications have long life cycles, are updated infrequently, and changes usually affect the entire application. Adding new features requires reconfiguring and updating the entire stack — from communications to security. This costly and cumbersome process delays time-to-market and slows the pace of application updates.

Microservices architecture was designed to remedy this problem. All services are created individually and deployed separately, which allows for autoscaling based on specific business needs. Containers and microservices require more flexible and elastic load balancing because container workloads are highly transient and must scale rapidly without affecting other parts of the application.

Monolithic Architecture

  • Application is a single, integrated software instance
  • Application instance resides on a single server or VM
  • Updates to an application feature require reconfiguration of entire app
  • Network services can be hardware based and configured specifically for the server

Microservices Architecture

  • Application is broken into modular components
  • Application can be distributed across clouds and data centers
  • Adding new features requires updating only the affected microservices
  • Network services must be software-defined and run as a fabric for each microservice to connect to

Why Microservices Architecture Needs a Service Mesh

Applications require a set of services from their infrastructure—load balancing, traffic management, routing, health monitoring, security policies, service and user authentication, and protection against intrusion and DDoS attacks. Traditionally, these services were implemented as discrete appliances, and providing an application with them required logging into each appliance to provision and configure the service.

This process was workable when managing dozens of monolithic applications, but as those monoliths are modernized into microservices-based applications, it isn’t practical to provision hundreds or thousands of containers in the same way. Observability, scalability, and high availability can no longer be provided by discrete appliances.

The advent of cloud-native applications and containers created the need for a service mesh to deliver vital application services, such as load balancing. Trying to place and configure a physical hardware load balancer at every location and server is prohibitively challenging and expensive, yet businesses need to deploy microservices to keep up with application demands and multi-cloud environments.

The solution to this problem is the service mesh: a new way to deliver service-to-service communication and application services through APIs, something discrete appliances cannot do.

Microservices Architecture Definition

“Microservices architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API,” according to authors Martin Fowler and James Lewis in their article Microservices. “These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.”

What Is a Service Mesh?

A service mesh provides monitoring, scalability, and high availability services for modern applications through APIs instead of using discrete appliances.

A service mesh deploys an array of network proxies alongside containers. Each proxy serves as a gateway for every interaction, both between containers and between clusters; the proxy accepts each connection and spreads the load across the service mesh. The term “mesh” comes from the woven pattern these many proxy-to-proxy connections form when illustrated.

A central controller, together with a container orchestration platform such as Kubernetes, provides application services for the containerized applications. While service traffic flows directly between proxies, the control plane knows about each interaction. The controller tells the proxies how to implement access control and collects performance metrics.
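
As a concrete illustration, assuming Istio as the control plane, an operator declares an access-control policy once and the controller distributes it to every affected proxy (the namespace and service names below are hypothetical):

    # Allow only the "frontend" service account to call the "payments" workload.
    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: payments-allow-frontend
      namespace: shop
    spec:
      selector:
        matchLabels:
          app: payments              # proxies in front of these pods enforce it
      action: ALLOW
      rules:
        - from:
            - source:
                principals: ["cluster.local/ns/shop/sa/frontend"]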

Service Mesh as an Architectural Pattern

As an architectural pattern, service mesh addresses the networking and application services challenges of microservices architecture. Effective delivery requires application and network services to be separated into different layers of the technology stack, but the side effect of that separation is a fragmented infrastructure ecosystem.

[Diagram: service mesh architectural pattern]

Service mesh avoids this fragmentation by providing a common communication layer at every infrastructure level, covering distributed load balancing, firewalling and visibility. Service mesh makes it possible to deliver application services such as traffic management, security, and observability at the granularity required by container-based applications.

While service mesh started with containers and microservices, its benefits can be applied to traditional applications. Service mesh addresses workloads that stretch across clusters with isolation and security. It extends beyond Kubernetes container clusters to bare metal servers.

Service mesh promotes the following changes in applications:

  • Monolithic applications have been broken down and disaggregated into loosely coupled, distributed microservices.
  • Deployment models have evolved from bare metal servers to virtual machines, and now to container clusters.
  • The operating environments span on-prem data centers and public clouds.

How Does a Service Mesh Work?

Prior to service mesh, a rudimentary service was deployed as follows:

  • Discrete appliances: A set of discrete appliances (usually proprietary hardware) delivers each service, and deployment re-routes traffic through them for service chaining. This requires either manual configuration or a set of adapters and plugins to automate service creation.

The deployment of service proxies with a service mesh happens in a variety of ways (sketches of the per-node and sidecar models appear under Service Mesh Implementation below):

  • Service proxy per node: Every node in the cluster has its own service proxy. Application instances on the node always access the local service proxy.
  • Service proxy per application: Every application has its own service proxy. Application instances access their own service proxy.
  • Service proxy per application instance: Every application instance, often in the form of a container, has its own “sidecar” proxy.

Benefits of a Service Mesh

  • Smaller companies can create application features that only larger companies could afford under the traditional model of using customized code and reconfiguring every server.
  • Faster development, testing and deployment of applications.
  • Faster, more efficient application updates.
  • A lightweight, disaggregated data plane of proxies alongside the container cluster that efficiently manages the delivery of network services.
  • More freedom to create truly innovative apps with container-based environments.

A service mesh benefits applications because, no matter the environment, they need a set of services from the infrastructure: load balancing, traffic management, routing, health monitoring, application monitoring, security features and policies, service and user authentication, protection against intrusion and DDoS attacks, support for graceful application upgrades, and error handling for microservices.

Traditionally, these services were implemented as discrete appliances, where each appliance was managed individually. Monitoring, scaling, and providing high availability for these appliances is hard and impractical at container scale.

Service mesh is a new architectural paradigm that delivers these services integrated within the compute cluster. It scales with the cluster, delivers highly available services, and is programmable through APIs.

Service Mesh Implementation

There are two ways to implement a service mesh: with a host shared proxy or with a sidecar container.

Host Shared Proxy

In Kubernetes, this is implemented as a DaemonSet, which runs a copy of the proxy pod (pods are the smallest, most basic deployable objects in Kubernetes) on every node. Deployment by host shared proxy uses fewer resources when many containers run on the same host, but if the shared proxy fails, every container on that host loses its network services. This risk can be avoided with a sidecar proxy.
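
A minimal sketch of the host shared proxy model, assuming a generic Envoy-style proxy (the image and port are illustrative):

    # DaemonSet: exactly one shared proxy pod runs on every node in the cluster.
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: mesh-proxy
    spec:
      selector:
        matchLabels:
          app: mesh-proxy
      template:
        metadata:
          labels:
            app: mesh-proxy
        spec:
          hostNetwork: true                    # reachable by all pods on the node
          containers:
            - name: proxy
              image: envoyproxy/envoy:v1.29.0  # illustrative proxy image
              ports:
                - containerPort: 15001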

Sidecar Container

The proxy runs alongside the main service, injected into each pod. Sidecar deployments can require substantial additional memory per pod, but a proxy failure is contained to a single pod and does not jeopardize other pods that share the host.
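
A minimal sketch of the sidecar model, with the proxy container sharing the pod (in Istio, injecting this container is typically automated; the application image here is hypothetical):

    # Sidecar: application and proxy run as two containers in the same pod,
    # sharing the pod's network namespace.
    apiVersion: v1
    kind: Pod
    metadata:
      name: catalog
      labels:
        app: catalog
    spec:
      containers:
        - name: app
          image: example/catalog:1.0           # hypothetical application image
          ports:
            - containerPort: 8080
        - name: proxy
          image: envoyproxy/envoy:v1.29.0      # illustrative sidecar proxy
          ports:
            - containerPort: 15001             # proxy listener (traffic redirection not shown)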

Open Source Service Mesh

Istio is an open source service mesh that provides operational control and performance insights for a network of containerized applications. Istio provides service mesh capabilities such as load balancing, authentication and monitoring.

Istio open source service mesh provides the following benefits:

  • Traffic management — Controls the flow of traffic and application program interface (API) calls between services.
  • Observability — Provides insights on performance. A dashboard offers visibility to quickly identify issues.
  • Policy enforcement — Ensures policies are enforced and allows policy changes without modifying application code.
  • Security — Secures service communications and enforces policies consistently across all protocols. These include authentication, authorization, rate limiting and a distributed firewall for both ingress and egress.

Elastic Service Mesh

An elastic service mesh provides a flexible framework for an array of network services such as load balancing, monitoring and security. It also provides traffic management for containerized applications with a microservices architecture. It removes the operational complexity associated with modern microservices applications by forming a fabric connecting containers and microservices so they can communicate and interoperate securely.

Elastic service mesh provides the following benefits:

Global and Local Traffic Management

An array of network service proxies on each node in the container cluster serves as a gateway between containers and servers.

Application Monitoring and Analytics

Collects, aggregates and stores metrics and logs for containerized applications.

Dynamic Service Discovery

Bridges the gap between a service’s name and access information (IP address) by providing dynamic mapping.
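
A Kubernetes Service is a familiar example of this mapping: clients address a stable name while the platform keeps track of the changing pod IPs behind it (names and ports are illustrative):

    # Clients call "payments.shop.svc.cluster.local"; Kubernetes resolves the
    # name to whichever pods currently carry the matching label.
    apiVersion: v1
    kind: Service
    metadata:
      name: payments
      namespace: shop
    spec:
      selector:
        app: payments            # dynamic mapping: a label, not fixed IPs
      ports:
        - port: 80
          targetPort: 8080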

Deployment of Microservices

In a modern microservices architecture, the deployment of microservices plays an important role in the effectiveness and reliability of the application infrastructure. The following guidelines should be considered in the deployment strategy:

  • Each microservice should be deployable and un-deployable independently of other microservices.
  • Each microservice should be scalable individually.
  • Failure in one microservice must not affect any of the other services.

Docker is a standard way to deploy microservices using the following steps:

  • Package the microservice as a container image.
  • Deploy each service instance as a container.
  • Scale by changing the number of container instances.

Kubernetes provides the software to build and deploy reliable and scalable distributed systems. Large-scale microservice deployments rely on Kubernetes to manage a cluster of containers as a single system. It also lets enterprises run containers across multiple hosts while providing service discovery and replication control. Red Hat OpenShift is a commercial offering based on Kubernetes for enterprises.
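
A minimal sketch of these steps in Kubernetes terms (the image and names are hypothetical): the packaged container image is referenced in a Deployment, and scaling is just a change to the replica count.

    # Deploy three instances of the packaged microservice.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: orders
    spec:
      replicas: 3                          # scale by changing this number
      selector:
        matchLabels:
          app: orders
      template:
        metadata:
          labels:
            app: orders
        spec:
          containers:
            - name: orders
              image: example/orders:1.0    # image built for the microservice

Scaling up is then a single command, such as kubectl scale deployment orders --replicas=5, and touches no other service.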

Service Mesh Comparison

Istio Service Mesh

Istio service mesh began in 2017 as an open-source collaboration between Lyft, IBM and Google. Istio pairs with Envoy by default and provides a universal control plane to manage the underlying service proxies. Avi Networks’ Universal Service Mesh integrates with Istio service mesh to provide comprehensive application services, from traffic management and security to observability and performance management, in a single platform.

AWS App Mesh

AWS App Mesh uses the open source Envoy proxy. It configures each service to export monitoring data and implements consistent communications control logic across an application.

Google Service Mesh

Google service mesh is also known as GCP service mesh. The Google Cloud Platform (GCP) uses the open-source Istio project. It is automatically installed and upgraded on Kubernetes Engine clusters as a part of the Cloud Services Platform.

Envoy Service Mesh

Envoy service mesh began in 2016 as an open-source project at Lyft. It provides a universal data plane for service mesh architectures and is also designed to be used as a standalone proxying layer. When Envoy is paired with the Istio control plane, it provides a complete offering of service mesh features and solutions.

Kubernetes Service Mesh

There are many different kinds of Kubernetes service mesh providers. Istio is the most well known. Istio service mesh is used when an organization adopts container applications on Kubernetes and microservices architectures. Other types of Kubernetes service mesh providers include Linkerd and Consul.

Azure Service Mesh

Azure service mesh was created by Microsoft. Azure Service Fabric Mesh is a highly scalable distributed systems platform designed to manage scalable microservices and container-based applications for Windows and Linux.

Conduit Service Mesh

Conduit service mesh began in 2017 as an open-source project by Buoyant. Conduit is designed to optimize the Kubernetes user experience. It contains a data plane written in Rust and a control plane written in Go, and its minimalist architecture is considered easy to understand and use.

Red Hat OpenShift Service Mesh

Red Hat OpenShift Service Mesh is based on the open source Istio project. It adds a transparent layer to existing distributed applications without requiring changes to the service code, and it is deployed with a special sidecar proxy.

Linkerd Service Mesh

Linkerd popularized the term “service mesh.” It began in 2016 as an open-source project sponsored by Buoyant. Linkerd was built on Twitter’s Finagle library. Linkerd service mesh is written in Scala and combines a proxying data plane and the “Namerd” control plane in one package.

Does Avi Networks Support Service Mesh?

Yes. Universal Service Mesh is optimized for North-South (ingress) and East-West traffic management, including local and global server load balancing (GSLB), web application firewall (WAF) and performance monitoring, across multi-cluster, multi-region, and multi-cloud environments. Avi integrates with Istio service mesh, OpenShift and Kubernetes for container orchestration and security.

Service Mesh Architecture

Avi Universal Service Mesh architecture is built upon five fundamental blocks:

  • Software-defined — The separation of control and service mesh data planes to centralize policy storage and control.
  • Elasticity — A service mesh data plane composed of a single fabric of service engines that scales out and in elastically.
  • Multi-cloud — A modern architecture that supports both traditional and cloud-native applications.
  • Intelligence — Infrastructure can react seamlessly based on real-time workloads and metrics of the service fabric mesh.
  • Automation — Integration into any ecosystem and end-to-end automation through the lifecycle of the mesh app and service architecture.

Service mesh architecture also provides three core capabilities to application services: traffic management, security and observability.

Service Mesh - Traffic Management

The key functionality of a service mesh is traffic management. This includes routing traffic from external sources into the cluster through an ingress gateway (or out of the cluster through an egress gateway), and routing traffic within the cluster(s) between microservices. These are called north-south and east-west traffic management, respectively; a configuration sketch follows the capability list below.

Service mesh traffic management capabilities include:

  • Ingress gateway with integrated IPAM/DNS, blacklist/whitelist and rate limiting
  • L4-7 load balancing with SSL/TLS offload
  • Automated service discovery and application map
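
As an illustration of east-west traffic management, assuming Istio, a routing rule that splits traffic between two versions of a service might look like the following (the service name and subsets are hypothetical):

    # Send 90% of traffic to v1 of "reviews" and 10% to a v2 canary.
    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: reviews
    spec:
      hosts:
        - reviews                  # in-cluster (east-west) destination
      http:
        - route:
            - destination:
                host: reviews
                subset: v1
              weight: 90
            - destination:
                host: reviews
                subset: v2         # canary version
              weight: 10

A companion DestinationRule would define the v1 and v2 subsets by pod labels.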

Service Mesh - Security

The next step in the application lifecycle is to secure the application, especially when there are thousands of microservices. The connectivity is dynamic, and each service-to-service communication needs to be encrypted, authenticated and authorized. A configuration sketch follows the capability list below.

Service mesh security capabilities include:

  • Zero trust security model and encryption
  • Distributed WAF for application security
  • SSO for enterprise-grade authentication and authorization
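
For example, with Istio, encrypting and authenticating all service-to-service traffic in a namespace takes a single policy object (the namespace name is illustrative):

    # Require mutual TLS for every workload in the "shop" namespace.
    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: default
      namespace: shop
    spec:
      mtls:
        mode: STRICT               # reject plaintext service-to-service traffic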

Service Mesh - Observability

Observability in a service mesh is important because most enterprises replace monolithic applications incrementally. As microservices are introduced, many different applications need to communicate with each other and interact with the monolithic applications that remain. Service mesh observability is key to understanding this complicated architecture and to root-causing problems when failures happen. It allows for health checks with a broad view of application interactions. A monitoring sketch follows the capability list below.

Service mesh observability capabilities include:

  • Real-time application and container performance monitoring with tracing
  • Big data driven connection log analytics
  • Machine learning-based insights and app health analytics
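
As one concrete sketch, assuming Istio’s Envoy sidecars (which expose Prometheus-format metrics on port 15090), a minimal Prometheus scrape job for proxy metrics might look like:

    # Scrape Envoy sidecar metrics from every pod exposing port 15090.
    scrape_configs:
      - job_name: envoy-sidecars
        metrics_path: /stats/prometheus      # Envoy's Prometheus endpoint
        kubernetes_sd_configs:
          - role: pod                        # discover pods via the K8s API
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_container_port_number]
            regex: "15090"                   # keep only sidecar metric ports
            action: keep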

Service Mesh Tutorial

The following tutorial videos by Avi Networks demonstrate how universal service mesh works:

Universal Service Mesh Demo

This video shows how Avi Networks integrates with Istio to provide a highly secure, scalable and enterprise-grade ingress gateway. Manish Chugtu, Avi CTO for cloud infrastructure and microservices, demonstrates a per-tenant (per-namespace) ingress gateway and autoscaling based on rich traffic metrics.

Universal Service Mesh Walkthrough

This video introduces Universal Service Mesh by Avi Networks. Avi CTO Ranga Rajagopalan discusses how applications have evolved and how services are evolving over time.
