Microservices are a trending topic among software engineers and cloud consultants today, hand in hand with container technologies like Docker and with DevOps. But what is a microservice?
The microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating through lightweight mechanisms, often an HTTP resource API.
Microservices architecture
These services are built around business capabilities and are independently deployable by fully automated deployment machinery. We can think of microservices as a variant of service-oriented architecture (SOA), an architectural style that structures an application as a collection of loosely coupled services.
In a microservices architecture, services are fine-grained and the protocols are lightweight. The benefit of decomposing an application into smaller services is that it improves modularity. This makes the application easier to understand, develop, and test, and more resilient to architecture erosion.
Microservices are:
- Highly maintainable and testable
- Loosely coupled
- Independently deployable
- Organized around business capabilities
Microservices can be implemented in different programming languages and might use different infrastructures. The most important technology choices are therefore the way microservices communicate with each other (synchronous, asynchronous, UI integration) and the protocols used for that communication (REST, messaging, …).
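To make the synchronous, HTTP-based style concrete, here is a minimal sketch of two services talking over an HTTP resource API. The "pricing" service, its `/price` endpoint, and the response fields are all illustrative names, not part of any real system:

```python
# One microservice exposes an HTTP resource; another calls it synchronously.
# Service name, endpoint, and payload fields are hypothetical examples.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class PricingService(BaseHTTPRequestHandler):
    """A tiny "pricing" microservice exposing one HTTP resource."""
    def do_GET(self):
        body = json.dumps({"sku": "ABC-1", "price": 9.99}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# Run the pricing service in a background thread on an ephemeral port.
server = HTTPServer(("127.0.0.1", 0), PricingService)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# A second service (say, "checkout") consumes it over plain HTTP.
with urlopen(f"http://127.0.0.1:{port}/price") as resp:
    quote = json.load(resp)

print(quote)
server.shutdown()
```

An asynchronous variant would replace the direct HTTP call with a message published to a broker, decoupling the two services in time as well as in deployment.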
OK, so now that we have answered the question "What is a microservice?", we need to look at how they communicate with each other. That leads us to the definition of a Service Mesh.
What is a Service Mesh?
In a Service mesh, each service instance is paired with an instance of a reverse proxy server, called a service proxy, sidecar proxy, or sidecar. The service instance and sidecar proxy share a container, and the containers are managed by a container orchestration tool such as Kubernetes.
We can think of a Service Mesh as a dedicated infrastructure layer that controls service-to-service communication over a network. It provides a method by which separate parts of an application can communicate with each other. Service meshes commonly appear in concert with cloud-based applications, containers, and microservices.
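The sidecar idea can be sketched in a few lines: the application service never changes, while a co-located proxy intercepts its traffic and adds cross-cutting behaviour (here, simple request counting as a stand-in for the mesh's observability features). This is a toy illustration of the pattern, not how a real sidecar such as Envoy is implemented:

```python
# A plain service plus a "sidecar" proxy in front of it. Callers talk to
# the proxy, which forwards to the service and records observability data.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class AppService(BaseHTTPRequestHandler):
    """The application service itself: no mesh logic at all."""
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello from the service")
    def log_message(self, *args): pass

app = HTTPServer(("127.0.0.1", 0), AppService)
threading.Thread(target=app.serve_forever, daemon=True).start()
app_port = app.server_address[1]

request_count = 0  # metric collected by the sidecar, not the service

class SidecarProxy(BaseHTTPRequestHandler):
    """Forwards every request to the service, counting calls on the way."""
    def do_GET(self):
        global request_count
        request_count += 1
        with urlopen(f"http://127.0.0.1:{app_port}{self.path}") as upstream:
            body = upstream.read()
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args): pass

proxy = HTTPServer(("127.0.0.1", 0), SidecarProxy)
threading.Thread(target=proxy.serve_forever, daemon=True).start()
proxy_port = proxy.server_address[1]

# Clients address the sidecar, never the service directly.
with urlopen(f"http://127.0.0.1:{proxy_port}/") as resp:
    reply = resp.read()

print(reply, request_count)
proxy.shutdown()
app.shutdown()
```

In a real mesh the proxy would also handle retries, TLS, and routing policy, all configured centrally rather than in application code.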
A service mesh is:
- Configurable
- Low‑latency
- Designed to handle a high volume of network‑based interprocess communication
- Fast
- Reliable
- Secure
The mesh provides critical capabilities including service discovery, load balancing, encryption, observability, traceability, authentication and authorization, and support for the circuit breaker pattern.
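Of these capabilities, the circuit breaker pattern is easy to show in isolation. The sketch below is a minimal, assumption-laden version of the idea: after a threshold of consecutive failures the circuit "opens" and subsequent calls fail fast instead of hitting the unhealthy service (class and threshold names are illustrative):

```python
# Minimal circuit breaker sketch: consecutive failures open the circuit,
# after which calls fail fast without reaching the downstream service.
class CircuitOpenError(Exception):
    pass

class CircuitBreaker:
    def __init__(self, failure_threshold=3):
        self.failure_threshold = failure_threshold
        self.failures = 0  # consecutive failure count

    @property
    def open(self):
        return self.failures >= self.failure_threshold

    def call(self, func, *args):
        if self.open:
            raise CircuitOpenError("circuit open: failing fast")
        try:
            result = func(*args)
        except Exception:
            self.failures += 1  # one more failure toward the threshold
            raise
        self.failures = 0       # any success resets the count
        return result

# Example: a flaky downstream call trips the breaker after three failures.
breaker = CircuitBreaker(failure_threshold=3)

def flaky():
    raise ConnectionError("downstream unavailable")

for _ in range(3):
    try:
        breaker.call(flaky)
    except ConnectionError:
        pass

print(breaker.open)
```

Production breakers also add a "half-open" state that periodically lets a trial request through to detect recovery; that refinement is omitted here for brevity.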
Istio, backed by Google, IBM, and Lyft, is currently the best‑known service mesh architecture. Kubernetes, which was originally designed by Google, is currently the only container orchestration framework supported by Istio.