I’ve been in software development for three decades now. Ouch! That’s hard to admit because it means I’m a lot older than I was when I started coding. On the other hand, participating in the evolutionary change that has taken place in our industry has been one of the great joys of my life. Today, the software world is changing so quickly that you don’t have to be around for 30 years to see it taking shape.
It wasn’t that long ago that one of our technology partners would run their deployments in planned maintenance windows every other Saturday. The company would communicate this to customers just in case there were unexpected issues during the deployment.
(The author shivers)
Enter a new era
Times have changed.
We no longer deploy to a machine down the hall every other weekend. Companies, including ModusBox, continuously deploy lightweight containers to cloud servers at any hour of any day; it’s a 24/7/365 operation. Additionally, uptime requirements for deployed services continue to rise. With today’s high bar for frequent, error-free deployments, maintenance windows with downtime are becoming an unacceptable, archaic practice.
However, running zero-downtime deployments in the middle of a business day, under heavy production traffic, is a new challenge that teams must solve. This fundamental shift in how the industry handles application deployment has redefined the tooling we need to keep pace.
Poppin’ the question
Kubernetes was purpose-built to make the continuous deployment process easier to manage. Kubernetes is an open source platform (created by Google) that automates the deployment, scaling, and management of containerized workloads and services. It also makes techniques like canary deployments far more manageable: the updated application is exposed to only a portion of traffic, validated in production, and then rolled out to all consumers.
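As a rough sketch of the simplest canary pattern on plain Kubernetes, two Deployments can share a single Service selector so that the replica ratio approximates the traffic split between the stable and candidate versions. (All names, images, and counts below are hypothetical, invented for illustration.)

```yaml
# Canary via replica ratio: ~90% of traffic hits the stable track,
# ~10% hits the canary, because the Service selects both sets of pods.
# All names and images here are hypothetical examples.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-stable
spec:
  replicas: 9
  selector:
    matchLabels: { app: myapp, track: stable }
  template:
    metadata:
      labels: { app: myapp, track: stable }
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.0.0
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-canary
spec:
  replicas: 1                      # the ~10% canary slice
  selector:
    matchLabels: { app: myapp, track: canary }
  template:
    metadata:
      labels: { app: myapp, track: canary }
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.1.0   # candidate release
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp                     # matches both tracks, splitting traffic
  ports:
    - port: 80
      targetPort: 8080
```

If the canary validates cleanly, promotion is just a matter of updating the stable Deployment’s image and scaling the canary track back to zero; an Envoy-based gateway (discussed below) can split traffic by explicit weights instead, independent of replica counts.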
When ModusBox designed the infrastructure for the PortX Platform, we faced a fundamental question resulting from this industry evolution:
With the advent of microservices architecture and Kubernetes-driven deployments, does it still make sense to have a non-cloud-native API gateway that isn’t fully integrated into the Kubernetes infrastructure?
Our team decided that it no longer made sense to continue doing things the same ole way. Now, every development team must answer the same question for themselves.
The microservices revolution
Kubernetes is at the heart of the microservices revolution. It handles the routing between services and can also control “ingress” calls made into any service hosted inside the Kubernetes cluster. Using a traditional API Gateway outside of Kubernetes only adds complexity to the overall architecture, preventing you from achieving true end-to-end “Infrastructure-as-Code” (IaC).
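To make routing-as-code concrete, here is a minimal, hypothetical sketch using the standard Kubernetes Ingress resource (the hostname and service names are invented). The rules governing how external calls reach services inside the cluster live in a manifest that is versioned and deployed like everything else:

```yaml
# Ingress routing declared as code: path-based rules mapping external
# calls to in-cluster Services. The hostname and service names are
# hypothetical examples.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: orders-service
                port:
                  number: 80
          - path: /customers
            pathType: Prefix
            backend:
              service:
                name: customers-service
                port:
                  number: 80
```

Because the gateway’s behavior is declared in the same repository and pipeline as the services themselves, a routing change is reviewed, versioned, and rolled out exactly like an application change.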
You could say the confluence of Docker, Kubernetes, microservices architecture, DevOps, and their associated tools and practices represents this revolutionary change in how software is built and delivered. However, as with any revolutionary change, many of the old tools and practices become obsolete. Several existing tools for implementing API gateways deserve re-evaluation in this world of modern application deployment.
Re-evaluating the status quo
For years, Nginx was perhaps the most popular industry-standard, open source HTTP server and reverse proxy, long known for its low memory usage and high concurrency. However, Nginx was designed and built in an era of traditional deployments to physical servers, and it can’t keep up with the requirements of today’s dynamic applications, making it a weak link in the deployment process. Specifically, updating routing configuration in Nginx (for example, after scaling the number of pods, changing headers, or rewriting URLs) typically means reloading its configuration, and the resulting latency can cause errors during a deployment, undermine operational reliability, and ultimately produce a bad user experience.
That’s why we use an Envoy-based ingress controller as our API Gateway. Envoy (which originated at Lyft) was designed from the start to be reconfigured dynamically through its xDS APIs, so routing changes take effect in a running proxy without a restart or reload.
In today’s environment, where 99.99% reliability (less than an hour of downtime per year) is the expected benchmark, companies simply cannot afford that delay. By combining Kubernetes with Envoy-based ingress control, you can let users dynamically deploy and secure new services with near-zero reconfiguration latency.
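As an illustration, here is a minimal sketch assuming Contour, one of several Envoy-based ingress controllers (the post doesn’t name a specific product, and every name and weight below is hypothetical). Editing the weights re-programs the running Envoy proxies through their dynamic xDS APIs, shifting traffic between stable and canary services without a reload:

```yaml
# Weighted routing with Contour's HTTPProxy CRD (a hypothetical example).
# Changing the weights takes effect in the running Envoy proxies via xDS;
# no process restart or configuration reload is involved.
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: myapp
spec:
  virtualhost:
    fqdn: api.example.com
  routes:
    - conditions:
        - prefix: /
      services:
        - name: myapp-stable
          port: 80
          weight: 90             # 90% of requests
        - name: myapp-canary
          port: 80
          weight: 10             # 10% canary slice
```

Because the change is pushed to proxies that are already running, rather than written to a file and picked up by a reload, in-flight requests keep flowing while the split shifts.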
What do you think?
We would love to get your feedback on this topic. Do you agree? How has your team addressed the modern approach to API management in the microservices revolution?
Please leave us a note below or on our LinkedIn post on the topic.