Moving Traditional Microservices into Service Mesh
By Polerio T. Babao III, Sr. Technology Architect, U.S. Bank
In the year 2000, the technology landscape was already following the three-tier architectural approach, and the idea of how medium and large-scale companies could migrate away from monolithic applications was clear. Some companies still use monolithic applications to jumpstart their ideas and get a fast turnaround time for their products or proofs of concept.
Microservices are an architectural style that structures an application as a collection of services that are highly maintainable and stable. Microservices are also loosely coupled, independently deployable, organized around business capabilities, and owned by a small team. Traditional microservices also rely heavily on resiliency features to ensure that applications are always available: circuit breakers, retries, and fallbacks are resiliency features that are usually coded directly inside the microservices themselves.
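To make concrete what "coded within the microservices" means, a hand-rolled circuit breaker with a fallback might look like the following minimal Python sketch (the class and parameter names are illustrative, not from any particular library):

```python
import time

class CircuitBreaker:
    """Hand-rolled circuit breaker: after max_failures consecutive
    errors the circuit 'opens' and calls fail fast to the fallback
    until reset_after seconds pass, when one trial call is allowed."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, fallback=None, **kwargs):
        # While the circuit is open, fail fast with the fallback.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return fallback
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
            self.failures = 0  # a success closes the circuit
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            return fallback
```

Every team that needed this behavior had to write and maintain a variant of it inside each service, which is exactly the duplication a service mesh later removes.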
By 2010, while microservices were being made more resilient, the infrastructure landscape had changed: companies were dealing with multi-cloud and multi-datacenter installations. On-premises applications interact with cloud-based applications and vice versa. The introduction of smartphones and tablets also contributed to infrastructure management complexity as the number of transactions increased: applications can be accessed not just on workstations, but also on smartphones and tablets. The requirements for microservice resiliency became more complicated. One of the resilience mechanisms introduced was the use of load balancing and autoscaling capabilities, which helps companies save infrastructure cost because capacity can be scaled up or down dynamically during peak and non-peak hours.
At present, the technology and infrastructure landscape that the developer and operations communities maintain has become even more complicated. Companies maintain virtual machines in the cloud and bare-metal machines in on-premises installations. In addition, they now deal with containers. A container is a standard unit of software that packages code together with all of its dependencies.
A new architectural pattern was born out of the need to modernize the management and maintenance of microservices and their infrastructure. This technology is called Service Mesh. It removes the resiliency capabilities that developers used to code by hand inside each microservice, since these can now be handled within the service mesh. A service mesh also improves infrastructure processes by simplifying the management of networking capabilities such as weighted routing and traffic splitting.
What is a Service Mesh?
It is a configurable, low-latency infrastructure layer designed to handle a high volume of network-based interprocess communication among application infrastructure services using application programming interfaces (APIs). It ensures that communication among containerized and often ephemeral application infrastructure services is fast, reliable, and secure. It also provides critical capabilities including service discovery, load balancing, encryption, observability, traceability, authentication and authorization, and support for the circuit breaker pattern.
A service mesh works by using a proxy instance called a sidecar. Sidecars handle interservice communication, monitoring, and security-related concerns. The sidecar proxy manages security, which includes encryption, data integrity, and authentication. The fault-tolerance capabilities implemented in the sidecar proxy include circuit breaking, rate limiting, bulkheading, automatic retries, and response caching.
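To illustrate one of those fault-tolerance features, rate limiting in a proxy is commonly implemented as a token bucket: tokens refill at a steady rate, and each request consumes one or is rejected. A minimal sketch of the idea (names and parameters are illustrative, not tied to any specific proxy):

```python
import time

class TokenBucket:
    """Token-bucket rate limiter of the kind a sidecar proxy applies
    to requests: tokens refill at `rate` per second up to `capacity`;
    each request consumes one token or is rejected."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Credit tokens earned since the last check, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # the proxy would answer with HTTP 429
```

Because this logic lives in the sidecar rather than the application, the same limits can be enforced uniformly across every service without touching application code.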
Converting microservices to use Service Mesh
Service Mesh divides its architecture into two categories: the control plane and the data plane. The control plane manages the configuration of the data plane; its features include a control plane UI/CLI, workload scheduling, service discovery, and sidecar proxy configuration APIs. Data plane capabilities include application health checks, routing, load balancing, authentication and authorization, and observability. Most Service Mesh implementations use Kubernetes as their platform, and a service mesh improves on the load balancing and autoscaling capabilities that Kubernetes provides to microservices.
There are multiple service mesh implementations of control planes and data planes. Control plane implementations include Istio, Nelson, HashiCorp Consul Connect, NGINX Controller, and AWS App Mesh. Data plane implementations include HAProxy, Linkerd, Traefik, Envoy, HashiCorp Consul, and NGINX sidecar proxies. On top of the data and control planes, service mesh orchestration tools are also available, including Aspen Mesh, Flagger, SuperGloo, and Solo.io.
Converting to a service mesh would again change the technology and infrastructure landscape of microservices, but its sole purpose is to improve and simplify it. It can be treated as a modern way of implementing microservices because of its inherent features, which include weighted routing, circuit breaking, and traffic shifting and limiting. Developers can continue to focus on software development rather than the complexity of re-implementing resilience features that the service mesh already provides.
The diagram below shows a service mesh implementation using Istio. Observability and traceability are represented by the tracing and monitoring capabilities, while the weighted routing, circuit breaking, and traffic shifting functionalities are handled by the sidecar proxies.
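As a concrete illustration of the weighted routing handled by the sidecar proxies, a traffic-splitting rule in Istio is expressed declaratively as a VirtualService resource. The sketch below uses hypothetical service and subset names; it shifts 10 percent of traffic to a canary version:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-route        # hypothetical rule name
spec:
  hosts:
    - reviews                # hypothetical service name
  http:
    - route:
        - destination:
            host: reviews
            subset: v1       # 90% of traffic stays on v1
          weight: 90
        - destination:
            host: reviews
            subset: v2       # 10% canary traffic goes to v2
          weight: 10
```

Note that this is pure configuration applied by the control plane and enforced by the sidecars; no application code changes are required to shift traffic.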
The developer community does not have to build load balancing, autoscaling, weighted routing, traffic splitting, or circuit breaking into their microservices; these capabilities are provided by the Service Mesh technology. With a service mesh, companies can focus on solving business requirements, software development, and, finally, product delivery.