AWS Builders Day Part 2 – ECS Deep Dive


So next up at AWS Builders Day in London, after the Containers State of the Union, Paul Maddox (@paulmaddox) kicked off the deep dive sessions with a look at Amazon ECS.

The brief to presenters had been to avoid marketing slides, so Paul’s presentation walked through building a Twitter analyser using microservices on AWS, covering deployment, availability, cost optimisation, security, scaling, monitoring and logging. Basically, ‘How do I take a container from my laptop and move it properly into production?’

Paul started with a quick background on ECS. You feed ECS a task definition in JSON, which defines the containers you want to run. ECS then goes off and decides where to run those containers: on EC2 instances, or on Fargate.
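A minimal task definition might look like the following sketch (the family name, image URI, region and resource sizes are all illustrative, not from Paul’s demo):

```json
{
  "family": "tweet-collector",
  "containerDefinitions": [
    {
      "name": "tweet-collector",
      "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/tweet-collector:latest",
      "cpu": 256,
      "memory": 512,
      "essential": true
    }
  ]
}
```

Registering this with `aws ecs register-task-definition` gives ECS everything it needs to place and run the container.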

There is also an AWS-provided container registry, ECR (Elastic Container Registry).

Paul covered the shared responsibility model between AWS and the customer: AWS is responsible for operations of the cloud, and the customer is responsible for operations in the cloud. For example, AWS worries about the availability of ECS, while the customer worries about cost control, scaling, logging and app security. For container instances themselves, responsibility for pretty much everything sits with the customer, although the customer can use AWS services to help, e.g. auto scaling. This is where Fargate can help shift some of this responsibility to AWS.


Deploying a cluster on ECS. You can use the AWS Management Console, which is good to get started, but deployment should be automated, especially when multiple clusters are used (dev vs production, for example). Basic scripting is a next step, but it lacks error handling, lacks asynchronous task handling (waiting for a cluster to be created), and lacks the ability to handle rollbacks and upgrades. Deployments should be like cookie cutters: self-contained, repeatable, auditable and self-documenting. Use infrastructure as code, such as CloudFormation or Terraform, which gives you audit logging of changes and lets you easily make updates or deploy new clusters.
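As an infrastructure-as-code sketch, the smallest possible CloudFormation template for an ECS cluster is just a few lines (the cluster name is illustrative; a real template would also define the EC2 capacity, services and task definitions):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal ECS cluster sketch
Resources:
  Cluster:
    Type: AWS::ECS::Cluster
    Properties:
      ClusterName: dev-cluster
```

Because it is a template, the same file can be deployed repeatedly for dev and production stacks, and every change goes through a tracked stack update.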

ECR tip: use the ECR Credential Helper to avoid having to authenticate every time.
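Once the `docker-credential-ecr-login` binary from the Amazon ECR Credential Helper is on your PATH, you wire it up in `~/.docker/config.json` (the account ID and region below are illustrative):

```json
{
  "credHelpers": {
    "123456789012.dkr.ecr.eu-west-1.amazonaws.com": "ecr-login"
  }
}
```

After that, `docker pull` and `docker push` against that registry fetch fresh credentials automatically, with no manual login step.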

ECS Example Infrastructure Overview

Paul ran through a reference architecture for a possible customer ECS infrastructure. It starts with an Internet gateway in front of an ALB, which uses path-based routing to send /customers to one ECS cluster and /catalog to another. NAT gateways let containers get out to the web if required. Finally, ECS integrates with CloudWatch for logging.
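The path-based routing piece of that architecture can be sketched as an ALB listener rule in CloudFormation (the listener and target group references are assumed to be defined elsewhere in the template; names are illustrative):

```yaml
Resources:
  CustomersRule:
    Type: AWS::ElasticLoadBalancingV2::ListenerRule
    Properties:
      ListenerArn: !Ref Listener              # assumed defined elsewhere
      Priority: 1
      Conditions:
        - Field: path-pattern
          Values: ["/customers*"]
      Actions:
        - Type: forward
          TargetGroupArn: !Ref CustomersTargetGroup   # assumed defined elsewhere
```

A second rule with `/catalog*` forwarding to a different target group completes the split between the two services.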

Reference architecture is available on Github;

You can go to the reference architecture and click a button to deploy it straight into AWS via CloudFormation. Paul ran this as a demo.

Demo app

After a slight AV hiccup, Paul moved on to his demo application. An overview of the simple app can be seen in the slide photo above. One of the first areas Paul tackled was credential storage, using AWS Parameter Store.
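From the CLI, storing and retrieving a secret in Parameter Store looks roughly like this (the parameter name is illustrative, not the one from Paul’s demo):

```shell
# Store the Twitter API key as an encrypted SecureString
aws ssm put-parameter --name /twitter-analyser/api-key \
    --type SecureString --value "your-api-key-here"

# Read it back, decrypted, when the container starts
aws ssm get-parameter --name /twitter-analyser/api-key \
    --with-decryption --query Parameter.Value --output text
```

This keeps credentials out of container images and environment files; the app only needs IAM permission to call `ssm:GetParameter`.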

Paul then gave an overview of the Twirp framework, made by Twitch, which helps with inter-service communication in microservices. It’s used in this demo between the two microservices in Paul’s app.
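Twirp services are defined in standard proto3; a hypothetical definition for this demo’s archiver (service and field names are illustrative, not from Paul’s code) might look like:

```protobuf
syntax = "proto3";
package tweets;

// The collector calls the archiver over plain HTTP via Twirp.
service TweetArchiver {
  rpc Archive(ArchiveRequest) returns (ArchiveResponse);
}

message ArchiveRequest {
  string tweet_id = 1;
  string text = 2;
}

message ArchiveResponse {}
```

Twirp’s generator then produces Go client and server stubs from this file, so the collector calls the archiver through a typed client rather than hand-rolled HTTP.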

There are two microservices: the Tweet Collector service, written in Go, passes tweets on to the Tweet Archiver, the second microservice, which then passes the tweets into Kinesis Firehose.

To develop this, you can run locally with Docker Compose, then use ECS for ‘prod’. The code for the microservices involved is available on Paul’s GitHub;

Paul ran through the files involved, including Dockerfiles and Docker Compose files. He suggested that using Docker Compose makes networking between containers really easy.
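The networking point can be seen in a minimal Compose file (service names, paths and the port are illustrative): Compose puts both services on a shared network, so each one can reach the other by its service name.

```yaml
version: '3'
services:
  tweet-archiver:
    build: ./tweet-archiver
  tweet-collector:
    build: ./tweet-collector
    environment:
      # The archiver is reachable by its service name on the shared
      # Compose network; hostname and port here are illustrative.
      ARCHIVER_URL: http://tweet-archiver:8080
```

No port publishing or manual DNS is needed for container-to-container traffic; the service name acts as the hostname.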

To demo the app, Paul configured it to use a Twitter search term of ‘Amazon’ and ran it locally. He then used Docker Compose to push directly to ECS. He ran through the CloudFormation file used for ECS, including the use of CloudWatch Logs for logging. Using CloudFormation, he walked through stack creation, then moved on to using CloudWatch Logs and CloudWatch metrics to trace errors in your ECS-deployed applications.

Containers in Production

Cost Optimisation

  • Use reserved instances – ECS runs on EC2 instances, so the standard cost optimisation tools apply
  • Use spot fleet
  • Example of different clusters with different configurations: dev on spot, prod on reserved instances, batch on GPU-backed ECS


Scaling

  • Use EC2 auto scaling – Paul suggested using the memory reservation metric
  • Use application auto scaling, which works at the container level rather than the EC2 host level
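Container-level scaling can be sketched in CloudFormation with Application Auto Scaling (the cluster/service names, role and thresholds are illustrative; the IAM role is assumed to exist elsewhere in the template):

```yaml
Resources:
  ScalableTarget:
    Type: AWS::ApplicationAutoScaling::ScalableTarget
    Properties:
      ServiceNamespace: ecs
      ResourceId: service/dev-cluster/tweet-collector
      ScalableDimension: ecs:service:DesiredCount
      MinCapacity: 2
      MaxCapacity: 10
      RoleARN: !GetAtt AutoScalingRole.Arn   # assumed defined elsewhere
  ScalingPolicy:
    Type: AWS::ApplicationAutoScaling::ScalingPolicy
    Properties:
      PolicyName: cpu-target-tracking
      PolicyType: TargetTrackingScaling
      ScalingTargetId: !Ref ScalableTarget
      TargetTrackingScalingPolicyConfiguration:
        TargetValue: 60.0
        PredefinedMetricSpecification:
          PredefinedMetricType: ECSServiceAverageCPUUtilization
```

This scales the service’s desired task count between 2 and 10, independently of how the underlying EC2 hosts scale.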


Security

  • Patching: use the AutoScalingRollingUpdate policy, so CloudFormation performs a rolling upgrade of the container instances
  • Scan as part of the CI/CD pipeline. Twistlock and Aqua Security, for example, will scan for vulnerabilities, CVEs etc., and will stop the pipeline if issues are found
  • Minimise the attack surface on containers, e.g. minimise the size of containers by using the Docker scratch image
  • IAM roles – don’t bake IAM credentials into containers, ever. IAM roles can be applied to EC2 instances under ECS, but this applies the role to all containers on the host. Use IAM roles for ECS tasks to assign a role per task
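Assigning a per-task role is a single field in the task definition; a hypothetical sketch (the account ID, role name and container details are illustrative):

```json
{
  "family": "tweet-archiver",
  "taskRoleArn": "arn:aws:iam::123456789012:role/tweet-archiver-task-role",
  "containerDefinitions": [
    {
      "name": "tweet-archiver",
      "image": "tweet-archiver:latest",
      "memory": 512,
      "essential": true
    }
  ]
}
```

Only the containers in this task receive the role’s credentials, so the archiver can be granted Kinesis Firehose access without handing that permission to every container on the host.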

Monitoring and Logging

  • CloudWatch for monitoring – CPU etc.
  • Prometheus (a plugin for ECS is available)
  • CloudWatch for logging, or ECS allows other logging solutions to be easily plugged in

Final takeaway – go look at the “Awesome ECS” GitHub repository, curated by Nathan Peck. It has loads of examples for ECS and Fargate: building a service mesh, performing blue/green deployments, etc.;
