Migrating to Kubernetes
It was recently reported that T-Mobile runs over 20,000 containers across its virtualized cloud infrastructure and that it will save $30 million annually as its workloads move away from traditional environments. While the direct cost savings are impressive on their own, T-Mobile is also accelerating key deliverables in a company-wide initiative to become an “uncarrier”: a move away from an inflexible, stodgy “phone company” toward a highly agile, customer-oriented technology company.
Shyam Sundar⁴, in a superb two-part article, outlines the migration approach that Upday, the leading news service, used to move to Kubernetes. Upday had a strong, multi-pronged rationale for the move:
- Infrastructure designed for auto-scaling (seconds vs. minutes)
- Infrastructure optimized for cost
- A platform with managed/native [Java] features
- Infrastructure that supports advanced features
Kubernetes has a well-documented history, from its origins as an open-source descendant of Google’s internal Borg system to its v1.0 release, which coincided with the launch of the Cloud Native Computing Foundation (CNCF).
Since 2013, the team behind Kubernetes has partnered successfully across industries and disciplines to make the open-source container orchestration software the clear choice for building, deploying, and managing containers at any scale, on-premises or in the cloud. Many companies, including T-Mobile, can attest to its business and economic benefits.
However, not all companies can pull off such a “big bang” migration successfully. Kloudone’s expertise in containerization and automation on platforms such as Kubernetes helps enterprises truly leverage these capabilities and craft robust delivery pipelines.
(Sample GKE architecture; source: https://cloud.google.com/blog/products/containers-kubernetes)
Christian Melendez² of Equinix recommends that organizations take a phased, hybrid approach in which they first learn how to run systems with Kubernetes. For instance, a company can run Kubernetes on-premises or in the public cloud side by side with an existing on-premises VM-based production system by mirroring traffic. From an architecture perspective, organizations can use a load balancer such as NGINX, an API gateway, or a managed traffic manager such as AWS Route 53 or Azure Traffic Manager to direct a portion of traffic to a Kubernetes cluster. These load-balancing mechanisms are pivotal in keeping applications, infrastructure, and networking in sync during the transition.
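As a sketch of that traffic-splitting idea, a weighted NGINX upstream can send a small share of requests to the Kubernetes cluster while the VM fleet continues to serve the rest. The hostnames and ports below are hypothetical placeholders, not taken from any of the cited migrations:

```nginx
# Weighted split: roughly 1 in 10 requests goes to the Kubernetes ingress.
# Both backend hostnames are hypothetical placeholders for illustration.
upstream app_backend {
    server legacy-vms.internal.example.com:8080 weight=9;  # existing VM-based system
    server k8s-ingress.example.com:80 weight=1;            # new Kubernetes cluster
}

server {
    listen 80;
    location / {
        proxy_pass http://app_backend;
    }
}
```

Gradually increasing the cluster’s weight shifts production traffic over as confidence in the new environment grows.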
Another approach is to front traffic with a service mesh such as Istio. Istio injects a sidecar proxy alongside each workload (and is commonly installed via Helm charts), enabling granular monitoring and logging along with a secured container environment. Istio combined with Anthos can be an attractive option for enterprises that want to operate seamlessly both on-premises and in the cloud.
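With Istio in place, the same gradual cutover can be expressed declaratively. The sketch below assumes two in-cluster Services with hypothetical names and routes 90% of traffic to the legacy backend and 10% to the new deployment:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: news-app              # hypothetical name
spec:
  hosts:
    - news.example.com        # hypothetical external host
  http:
    - route:
        - destination:
            host: news-app-legacy   # Service (or ServiceEntry) fronting the VM backend
          weight: 90
        - destination:
            host: news-app-k8s      # Service for the new in-cluster deployment
          weight: 10
```

Adjusting the weights over successive releases lets the team rehearse a full cutover without a single risky switchover event.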
In parallel, VMware announced Tanzu, a product line for operating multiple Kubernetes clusters from one place. Google’s Anthos Migrate allows organizations to migrate virtual machines to GCP (Google Cloud Platform), and a number of open-source projects, such as KubeVirt, Kata Containers, and Weave Ignite, make it possible to run virtual machines on Kubernetes.
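As an illustration of the VM-on-Kubernetes projects mentioned above, a minimal KubeVirt manifest declares a virtual machine as an ordinary cluster resource. This is a sketch only; the VM name and disk image are placeholders:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: legacy-vm              # hypothetical name
spec:
  running: true                # start the VM as soon as it is created
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest  # placeholder image
```

Managing VMs this way lets teams schedule, monitor, and network legacy workloads with the same tooling they use for containers during the migration.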
Working with a cluster from your workstation requires kubectl, the Kubernetes command-line tool. kubectl controls the Kubernetes cluster manager: it lets you inspect cluster resources; create, delete, and update components; and much more. You will use it to look at your new cluster and bring up example apps. On Google Cloud, a quick way to get started is:
gcloud components install kubectl
Apply the cluster-admin role to the Jenkins service account:
kubectl create clusterrolebinding jenkins-deploy \
  --clusterrole=cluster-admin \
  --serviceaccount=default:cd-jenkins
Tinder³ is another organization that has migrated successfully: over 200 services now run on a Kubernetes cluster of 1,000 nodes, 15,000 pods, and 48,000 running containers. The prospect of containers scheduling and serving traffic within seconds rather than minutes was appealing to Tinder.
Why do organizations move to Kubernetes? Here are a few good reasons:
- Plan for unplanned surges of internet traffic. With more than 500 million downloads, Pokémon GO exceeded Niantic’s traffic expectations by 50x.
While dozens of Google Cloud services made this possible, Google Kubernetes Engine was at the heart of it, enabling Niantic to absorb the load through both vertical and horizontal scaling of the cluster.
- One of the biggest reasons companies can adopt Kubernetes confidently is its strong base of contributors and its user community. Everyone today understands how Kubernetes has revolutionized container technology, but it is that community of users and contributors that gives companies confidence it will only get better from here.
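The horizontal scaling that carried Niantic through its launch surge is typically expressed in Kubernetes as a HorizontalPodAutoscaler. A minimal sketch follows; the Deployment name and thresholds are hypothetical, not Niantic’s actual configuration:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: game-frontend          # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: game-frontend        # the Deployment to scale
  minReplicas: 3
  maxReplicas: 100             # headroom for a large traffic surge
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```

With this in place, the cluster adds pods automatically as load climbs and scales back down when the surge subsides.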
In summary, organizations small and large, from the Global 1000 to cloud-native firms, are migrating their legacy assets to Kubernetes at an unprecedented rate. The benefits are immense, but to achieve the best results on the optimal schedule and budget, one needs a sound, systematic set of migration architectures and tools, along with an equally sound project approach.
1. Sydney Sawaya, SDX Central, April 2, 2020
2. Christian Melendez, Equinix, December 5, 2019
3. Chris O’Brien et al., “Tinder migrates to Kubernetes,” Medium, April 2019
4. Shyam Sundar, “Kubernetes migration,” Medium, September 2019
5. John Wilkes, “Cluster Management at Google with Borg,” GOTO 2016