White paper written by Joel Engardio for Avi Networks
Microservices and Service Mesh 101
What are Containers and Microservices?
Containers are a lightweight, efficient and standard way for applications to move between environments and run independently. Everything needed to run the application is packaged inside the container: code, runtime, system tools, libraries and settings. Microservices is an architectural design for building a distributed application using containers. Microservices architecture treats each function of the application as an independent service that can be altered, updated, or taken down without affecting the rest of the application.
“Microservices architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API,” according to authors Martin Fowler and James Lewis in their article Microservices. “These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.”
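To make this concrete, below is a minimal sketch of what one such service might look like: a single, hypothetical “inventory” service that owns one business capability, runs in its own process, and exposes it over an HTTP resource API. The service name, route, and data are illustrative assumptions, not part of any particular product.

```go
// inventory.go - a minimal sketch of a single microservice (a hypothetical
// "inventory" service) exposing one business capability over an HTTP
// resource API. Names, port, and data are illustrative only.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// Item is the resource this service owns; no other service touches its storage.
type Item struct {
	SKU   string `json:"sku"`
	Stock int    `json:"stock"`
}

func main() {
	// In-memory data stands in for whatever data store this service chooses.
	items := map[string]Item{"A100": {SKU: "A100", Stock: 42}}

	http.HandleFunc("/inventory/", func(w http.ResponseWriter, r *http.Request) {
		sku := r.URL.Path[len("/inventory/"):]
		item, ok := items[sku]
		if !ok {
			http.NotFound(w, r)
			return
		}
		json.NewEncoder(w).Encode(item)
	})

	// The service runs in its own process (and typically its own container).
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

In a microservices application, dozens or hundreds of small services like this one are built, deployed, scaled, and retired independently of one another.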
Companies like Amazon and Netflix have re-architected monolithic applications into microservices applications, setting a new standard for container technology.
Monolithic Architecture versus Microservices Architecture
Applications were traditionally built as monolithic pieces of software. Adding new features requires reconfiguring and updating everything from communications to security within the application. Monolithic applications have long lifecycles, are updated infrequently, and changes usually affect the entire application. This costly and cumbersome process delays advancements and updates in application development.
Microservices architecture was designed to solve this problem. All services are created individually and deployed separately from one another. This allows for scaling services based on specific business needs. Services can also be rapidly changed without affecting other parts of the application.
Monolithic Architecture
- Application is a single, integrated software instance
- Application instance resides on a single server or VM
- Updates to an application feature require reconfiguration of the entire app
- Network services can be hardware based and configured specifically for the server
Microservices Architecture
- Application is broken into modular components
- Application can be distributed across clouds and datacenters
- Adding new features only requires the affected microservices to be updated
- Network services must be software-defined and run as a fabric for each microservice to connect to
Why Microservices Architecture Needs a Service Mesh
When monolithic applications are separated into dozens or hundreds of microservices, containers become the best means to improve the speed of developing, deploying and scaling applications. However, containerized applications and microservices create new challenges in managing service provisioning.
Providing each container with application and network services in the same way as monolithic applications isn’t efficient or realistic. Containers, by their very nature, are portable, so the application may be distributed across local, on-premises and various cloud environments during its lifecycle. The services must also be spread across multiple hosts. This creates logistical and operational challenges that, if not addressed, can increase vulnerabilities and cost.
The solution for this problem is a service mesh — a persistent layer of services across all environments that containerized applications and microservices can tap into as needed.
What Is a Service Mesh?
A service mesh is a layer of communication infrastructure that efficiently handles service discovery for container-based applications and microservices. A service mesh allows applications to be developed without worrying about the underlying connectivity and network services. It gives organizations a flexible framework of network services for deploying applications built on container technology.
The advent of cloud-native applications and containers created a need for a service mesh to deliver vital application services, such as load balancing. By contrast, placing and configuring a physical hardware load balancer appliance at each location and every server would be overly challenging and expensive.
A service mesh provides an array of network proxies alongside containers. Each proxy serves as a gateway for each interaction that occurs, both between containers and between servers. The proxy accepts the connection and spreads the load across the service mesh. The term “mesh” comes from the woven effect these many connections create when illustrated.
A central controller orchestrates the connections. While the service traffic flows directly between proxies, the control plane knows about each interaction. The controller tells the proxies to implement access control and collects performance metrics. The controller also integrates with platforms like Kubernetes and Mesos, which are open-source systems for automating the deployment and management of containerized applications.
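The sketch below illustrates the role such a proxy plays, assuming a simple round-robin load-spreading policy and hard-coded instance addresses. In a real service mesh, the control plane would push the endpoint list, access-control policy, and metrics configuration to the proxy rather than having them written into code.

```go
// proxy.go - a minimal sketch of the kind of proxy a service mesh places
// alongside workloads: it accepts connections on the service's behalf and
// spreads requests across the known instances. The addresses and the
// listening port are hypothetical.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"sync/atomic"
)

func main() {
	// Instance addresses the control plane would normally push to this proxy.
	backends := []string{"10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"}
	var next uint64

	proxy := &httputil.ReverseProxy{
		Director: func(r *http.Request) {
			// Round-robin load spreading across the service's instances.
			i := atomic.AddUint64(&next, 1)
			r.URL.Scheme = "http"
			r.URL.Host = backends[i%uint64(len(backends))]
			// A real mesh proxy would also enforce access-control policy
			// and report metrics about this call back to the controller.
			log.Printf("routing %s %s to %s", r.Method, r.URL.Path, r.URL.Host)
		},
	}

	// The application talks to this local proxy; the proxy talks to the mesh.
	log.Fatal(http.ListenAndServe(":15001", proxy))
}
```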
How Does a Service Mesh Work?
Service proxies in a service mesh can be deployed in a variety of ways:
- Discrete appliances: A set of discrete appliances (usually proprietary hardware) that traffic is re-routed through for service chaining. This requires manual configuration or a set of adapters and plugins to automate service creation.
- Service proxy per node: Every node in the cluster has its own service proxy. Application instances on the node always access the local service proxy.
- Service proxy per application: Every application has its own service proxy. Application instances access their own service proxy.
- Service proxy per application instance: Every application instance has its own “sidecar” proxy (see the sketch after this list).
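As a minimal sketch of the sidecar-per-instance model, the application below hands every outbound HTTP call to a proxy assumed to be listening on localhost; the sidecar, not the application, then handles discovery, load balancing and security for the call. The sidecar port and the “orders” service name are hypothetical.

```go
// client.go - a minimal sketch of an application instance using its own
// "sidecar" proxy: all outbound HTTP calls are handed to the proxy on
// localhost, which resolves and load-balances the destination service.
// The sidecar port (15001) and the "orders" service name are assumptions.
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	sidecar, _ := url.Parse("http://127.0.0.1:15001")

	// Route every request through the local sidecar instead of dialing
	// the destination service directly.
	client := &http.Client{
		Transport: &http.Transport{Proxy: http.ProxyURL(sidecar)},
	}

	// The application addresses the logical service name; it never needs
	// to know where the "orders" instances actually run.
	resp, err := client.Get("http://orders/v1/orders/123")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```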
Benefits of a Service Mesh
- Smaller companies can create application features that only larger companies could afford under the traditional model of using customized code and reconfiguring every server.
- Faster development, testing and deployment of applications.
- More efficient and quick application updates.
- A lightweight, disaggregated data plane of proxies located alongside the container cluster can be highly effective in managing the delivery of network services.
- More freedom to create truly innovative apps with container-based environments.