Benefits of managed Kubernetes, e.g. AWS EKS, over DIY.
I’m Mark Ross, Lead Cloud Architect at Atos, specialising in AWS, and an AWS Ambassador. I’ve been in the industry for 22 years now, starting out as a field service engineer fixing laptops and desktops as the ‘filling’ in the sandwich of my Computer Science degree, moving on to server management, on-premises infrastructure architecture and private cloud, before making the leap into AWS at the start of 2017.
During that time I’ve learnt heaps of technical and non-technical things, via formal training and industry qualifications, online training, and on-the-job experience. My highlight is achieving ‘all 12’ of the AWS Certifications, and then maintaining that with the release of the SAP on AWS certification, where I wrote up my findings to help others whilst we awaited training courses aimed at helping to pass the cert.
I’m always looking to widen my knowledge, regardless of whether I’m using something day to day. Even though I know the specific details may be forgotten over time (that’s what Google’s for, right?), I think I retain enough to recall where to look, which techniques apply and so on, to make it worthwhile. As more customers move to containerised applications and services I felt I would benefit from increasing my knowledge of containers and Kubernetes, and I always like to validate my knowledge with a certification, so I set about targeting the Linux Foundation CKAD, CKA and CKS certifications.
I realised during my training just how complicated you can make a Kubernetes deployment: you can add a myriad of components from the cloud native ecosystem, and configure a multitude of things within the cluster itself that impact areas like security.
Setting up and managing Kubernetes yourself is of course doable, but I hear countless stories of people saying that whilst they set off on a journey of maximum flexibility, they’ve ended up creating a support requirement, and a team, around managing that platform and keeping it up to date. You can of course offload that overhead to a partner such as Atos, but I think you need a genuine reason for a self-managed cluster and a willingness to make that additional investment.
If you compare a self-managed cluster with a managed service like AWS EKS, you avoid a significant amount of overhead. The control plane is managed for you, has a good security posture, is highly available, and so on. It’s also managed on an ongoing basis, although you retain enough control to choose the Kubernetes version. As an experiment, I ran a quick build and then deployed a kube-bench pod to check the default, out-of-the-box compliance with the CIS Kubernetes Benchmark, and there were no failures, so you know you’re good to go from the off.
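If you want to repeat that check yourself, kube-bench can be run as a one-shot Kubernetes Job using the manifest published in the project’s repository (the repo also carries an EKS-specific `job-eks.yaml`; the exact URL and branch may change over time, so check the kube-bench README first). A minimal sketch of my experiment:

```shell
# Run kube-bench as a one-shot Job against the current cluster.
# Manifest URL is from the kube-bench repo and may move between branches.
kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml

# Wait for the Job to finish, then read the CIS benchmark report.
kubectl wait --for=condition=complete job/kube-bench --timeout=120s
kubectl logs job/kube-bench

# Quick summary: count failed checks. On my fresh EKS cluster this was 0.
kubectl logs job/kube-bench | grep -c '\[FAIL\]'
```

These commands need a live cluster and a configured kubeconfig; on EKS you’d typically have run `aws eks update-kubeconfig` first.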
Some people may cry ‘lock-in’, but Azure (AKS) and GCP (GKE) both have equivalent services, and given you can deploy to the clusters with native techniques, lock-in can be avoided if that’s your primary concern: you can deploy with Helm, and your YAML specification files for kubectl can all be identical. Personally I just look at ‘lock-in’ as ‘what’s the cost of change?’, and I’ve heard others refer to a ‘portability time objective’ (PTO) to drive a meaningful discussion on how long you’re willing to take to move an application somewhere else. It takes the emotion out of the conversation, in a similar way to how people talk about recovery time objective and recovery point objective when discussing business continuity.

One thing worth noting regarding lock-in, though, is that you really need to think about what the code inside your containers is ultimately doing, and how you create your architecture, if your ultimate aim is to minimise the cost of change. It’s no good thinking you’re not locked in at all because you’ve used EKS if you’ve also used a load of native services for queuing, secrets management and so on within the containers. You probably want to look for options in the CNCF landscape if your aim is to minimise your ‘PTO’.

If you’re less concerned about your ‘PTO’ and want to maximise availability, elasticity and all that other goodness, you may well want to venture further up the stack than even a managed Kubernetes service like AWS EKS. Then it’s worth considering AWS Fargate for serverless container compute (so there’s no underlying cluster to worry about), or AWS Lambda, so there’s not even a container to worry yourself with! Many born-in-the-cloud businesses are going from zero to hero without a server, or potentially even a container, in sight!
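To illustrate the portability point: the same chart and manifests can be deployed unchanged to EKS, AKS or GKE, with only the kubeconfig context differing. A minimal sketch, where the context names, release name, chart path and file names are all hypothetical placeholders:

```shell
# Deploy the identical Helm release to two different managed clusters.
# Context names ('my-eks-cluster', 'my-aks-cluster'), the chart path and
# the values file are made up for illustration -- substitute your own.
kubectl config use-context my-eks-cluster
helm upgrade --install my-app ./chart -f values.yaml

kubectl config use-context my-aks-cluster
helm upgrade --install my-app ./chart -f values.yaml

# Plain kubectl works the same way: the YAML itself needs no changes.
kubectl --context my-eks-cluster apply -f deployment.yaml
kubectl --context my-aks-cluster apply -f deployment.yaml
```

The point isn’t that this makes you perfectly portable, but that the deployment tooling layer, at least, carries no cloud-specific cost of change; your ‘PTO’ is then driven by what the containers themselves depend on.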