AWS Builders Day Part 3 – Kubernetes on AWS with Amazon EKS

So next up, after lunch at AWS Builders Day was another deep dive session, again with Paul Maddox. This time we moved up the stack a little from ECS to EKS, AWS’ managed Kubernetes service.

Paul started the session by asking who was running Kubernetes, who was running it in production, and finally who was happy with it in production. There were a few hands up at the beginning, but not many remained for the last question.

EKS is not yet production ready and is still in preview. The best way to get onto the preview is to fill out the sign-up form or speak to your account management team. It's still a slow rollout.

Why an AWS Managed Kubernetes Service

63% of Kubernetes workloads run on AWS today – CNCF survey

There are some common methods for deploying Kubernetes on AWS. For example, kops is a community tool that manages things like deployment and upgrades (https://github.com/kubernetes/kops). However, there are a lot of steps to get kops running, e.g. tool installation, IAM users, Route 53 zones, and S3 buckets for state storage.

AWS customers repeatedly asked AWS to run all of this for them, including replication, backups, high availability, upgrades, etc.

Tenets for Amazon EKS

  1. Must be a platform for enterprises to run production-grade workloads.
  2. Provide a native and upstream Kubernetes experience.
  3. Provide seamless integration with additional AWS services.
  4. The EKS team actively contributes to the Kubernetes project.

Overview of Kubernetes

Master nodes schedule pods/containers onto worker nodes. For quorum and resiliency, the masters should be spread across at least three nodes, ideally across data centres or availability zones. Kubernetes also uses etcd for state and configuration storage, which must likewise be deployed across three or more servers.

EKS takes care of the masters and etcd; the customer is responsible for the worker nodes.

EKS currently has just four API calls. This simplicity is deliberate: AWS wants to get out of your way as quickly as possible, so you drop into native Kubernetes as soon as possible.

Master nodes and etcd nodes will run in auto scaling groups, and etcd will be encrypted at rest. These nodes will automatically scale in and out as load changes. Masters will run active/active across availability zones; if a zone fails, no connectivity should be lost, and auto scaling will replace nodes when possible.

EKS will support the current version of Kubernetes and a couple of versions back, although the preview currently only supports 1.7 (not 1.9, the latest version of Kubernetes at the time of writing). Minor updates will be applied automatically, though not to worker nodes, which remain the customer's responsibility.

Major upgrades can be triggered by the customer, or EKS deployments can be pinned to a specific version. Pinning will only be supported for a couple of versions back; if you pin and don't upgrade, eventually AWS will warn you of scheduled maintenance and then automatically upgrade you.

You can’t SSH to master or etcd nodes in EKS. Logs from the Kubernetes components running on these nodes will be visible in CloudWatch. There is no ability to install software on the masters, although add-ons for EKS masters, such as Prometheus, might be offered as an option in future.

For assistance in deploying worker nodes, there will be a Kubernetes-optimised AMI. It is built with Packer, and the Packer scripts will be published. CloudFormation templates can be used to spin up worker nodes.
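As a rough illustration of that approach (not the published EKS template — every name and value below is a placeholder), a worker-node CloudFormation stack might look something like:

```yaml
# Hypothetical sketch only. Real templates also need an instance
# profile, security groups, and the actual EKS-optimised AMI ID.
Resources:
  NodeLaunchConfig:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      ImageId: ami-00000000        # placeholder for the EKS-optimised AMI
      InstanceType: m4.large       # illustrative instance type
      IamInstanceProfile: !Ref NodeInstanceProfile   # defined elsewhere
  NodeGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: 1
      MaxSize: 4
      LaunchConfigurationName: !Ref NodeLaunchConfig
      VPCZoneIdentifier: !Ref Subnets                # defined elsewhere
```

The key point from the session is that workers are just EC2 instances in an auto scaling group that you own, unlike the AWS-managed masters.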

Kubernetes currently uses a modular plugin architecture for networking, with plugins such as Calico and Flannel. However, this can be complex on AWS, so AWS has created an open-source native VPC plugin, which gives each pod its own IP address via an ENI.

For network security, AWS is working with and recommending Tigera to implement Kubernetes Network Policies. These are not tied to AWS, so they can be applied to Kubernetes clusters anywhere.
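Because Network Policies are standard Kubernetes objects, a policy like the sketch below (pod labels and names are illustrative) works on any cluster whose network plugin enforces policies:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend   # illustrative name
spec:
  podSelector:
    matchLabels:
      app: backend                  # policy applies to backend pods
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend         # only frontend pods may connect
```

This is exactly the portability point from the session: the same manifest applies on EKS or on a self-managed cluster.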

For IAM, AWS is using kube2iam, an open source project that provides IAM roles to Kubernetes pods.
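With kube2iam, the role a pod should assume is declared via a pod annotation. A minimal sketch, where the pod name, image, and role name are assumptions for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: s3-reader                         # illustrative name
  annotations:
    iam.amazonaws.com/role: my-app-role   # kube2iam annotation: IAM role to assume
spec:
  containers:
    - name: app
      image: my-app:latest                # illustrative image
```

The kube2iam agent on each node then intercepts calls to the EC2 metadata endpoint and hands the pod credentials for that role.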

For individual users, AWS has integrated kubectl with IAM, so you can use your AWS credentials to access Kubernetes clusters without needing separate access credentials in your kubectl config. This is being developed with Heptio, and it's open source.
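In practice this works through kubectl's exec credential plugin mechanism: the kubeconfig tells kubectl to run the Heptio authenticator, which generates a token from your AWS credentials. A sketch, with the user and cluster names as placeholders:

```yaml
# Fragment of a kubeconfig "users" section -- names are illustrative.
users:
  - name: eks-admin
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1alpha1
        command: heptio-authenticator-aws
        args:
          - token
          - -i
          - my-cluster    # the cluster name the token is scoped to
```

Your normal AWS credential chain (environment variables, profiles, instance roles) is used; nothing Kubernetes-specific is stored.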

On the roadmap is Fargate support, so customers don't have to worry about worker node setup at all.

The final takeaway was to check out the collection of workshops AWS provides on GitHub to help learn about Kubernetes in general. When EKS comes out of preview, EKS material will be added there as well.

https://github.com/aws-samples/aws-workshop-for-kubernetes
