London VMUG – January 2018

Today I attended the first London VMUG of the year.

Work commitments meant I missed the intro session and the first sponsor session from BitDefender. I arrived in time to catch the end of Paul Northard’s session on what to expect from VMware in 2018. Paul talked about HCX and its history in a previous form as the VMware Hybrid Cloud Manager. HCX looks very interesting for helping with migrations to VMware incarnations in the cloud, especially as it combines functions similar to vSphere Replication and NSX Standalone Edge so that VMs can be pre-seeded and then migrated without downtime. To wrap up Paul’s session, Nico Vibert took over, in his new VMC presales role, to talk about VMC on AWS, which is clearly going to have a big year in 2018, both for features and for expansion at some point into Europe and perhaps beyond.

After a break, Sam McGeown presented on NSX and vRA. This was a great session which gave a good overview of constructing security policies and populating security groups in NSX, and how those can be automated in vRA. Sam suggested two different approaches in vRA: one more out of the box, the other requiring a little more customisation. Sam did an excellent job of outlining the merits and pitfalls of each approach.

After lunch it was time for a difficult decision between Chris Bradshaw’s ‘Coding for vSphere Admins’ session and Nico Vibert’s full session on VMC. As I plan to pilot VMC this year I chose Nico’s session in the end – sorry Chris!

Nico skipped the high level VMC on AWS overview as I think by now most people have seen it. He ran through various customer plans for using VMC, for things like data centre evacuations and DR. Nico then ran through a number of demos. First up was a basic demo of VMC itself, which was actually a good way of bringing some of the audience up to speed if they hadn’t previously caught up with what VMC on AWS actually is. Nico then demoed Python integration, followed by integrating VMC with Slack. He finished up by demoing the use of an AWS IoT button to trigger adding a new host to a VMC cluster, which was very cool and got a well earned round of applause.

It was then time for the second vendor session of the day. Tegile presented a case study of their implementation at Honda Trading, migrating them away from groaning HP MSAs.

And to wrap things up, Gareth Edwards got everyone excited by demoing the administration of a vSphere environment using a VR headset. It was really interesting to see the same type of demo from the VMworld keynote working straight from a laptop on stage. The demo sparked an interesting debate about the future of VR for this type of work, and whether it has a valid future or is just a gimmick.

With all the day’s sessions done it was time for Simon Gallagher to wrap things up with the annual London VMUG awards. Well done to all the winners. And then it was off to the pub.

Hopefully I’ll be able to make it to the next event in March. See you then!


VCP 6.5 DCV Study Spreadsheet

I’ve knocked up a spreadsheet to track my study for the VCP 6.5 DCV Exam. It covers the objectives as outlined by the VMware Exam Guide. I thought it might be of use to others so I’m sharing it here;

VCP6.5 DCV Study Tracker.xlsx



Go through each section of the spreadsheet and change the value against each sub-topic to 1, 0.5 or 0.

  • 1 = No study required
  • 0.5 = General review of topic required (just review the docs, blogs etc)
  • 0 = Full study required (lab, read and make notes on docs etc)

As you complete study in each area, update the value. You’ll then see the summary sheet progress towards 100% coverage for each section.
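The summary sheet is simply averaging those per-topic values. As a rough sketch of the same logic outside the spreadsheet, the shell snippet below builds a hypothetical topics.csv (the topic names and scores are made up for illustration) and computes the coverage percentage:

```shell
# Build a hypothetical topics.csv mirroring one section of the spreadsheet
cat > topics.csv <<'EOF'
vCenter HA,1
DRS Clusters,0.5
vSAN,0
EOF

# Average the scores and express them as a percentage, like the summary sheet
awk -F, '{sum += $2; n++} END {printf "coverage: %.0f%%\n", 100 * sum / n}' topics.csv
```

With the example values above this prints ‘coverage: 50%’; as you change 0s to 0.5s and 1s, the figure moves towards 100%.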

My personal opinion: 100% coverage is nice to have, but the pass mark isn’t 100%, so you probably don’t need that level of coverage before sitting the exam.

If you have any feedback, let me know and I’ll look to update the spreadsheet.




The idea for using the three different values and averaging that out came from Travis Wood’s similar spreadsheet for the VCP6-DT exam;

The official VCP 6.5 DCV Exam Guide;



Virtual Design Master – VDM005!

TL;DR – I’m honoured to say I won Virtual Design Master Series 5! Woop Woop!

So for the longer version….

Last year I heard about Virtual Design Master for the first time from my fellow #LonVMUG alumni Gareth Edwards. Gareth was participating in Season 4 of the competition at the time, and he eventually finished in 2nd place. It was a great result for Gareth and meant the UK had representation throughout the competition last year.

Fast forward to earlier this year and I decided I would take part in the next season of VDM. I knew it would be a challenge just to fit it in around work and family, but I thought even if I stuck with it for just one week I might as well give it a go. The competition gives you a great opportunity to practise infrastructure architecture skills and to take a look at technologies you might have been meaning to play with but haven’t got around to.

Keep going with the fast-forward button and we arrive at 2am last Thursday night: after surviving three rounds of challenges, it’s the live final of Virtual Design Master Season 5. After an entertaining and nerve-wracking hour, with defences from the other finalists, Adam Post and Kyle Jenner, the time came to announce 1st, 2nd and 3rd place. I’m sure this was a difficult decision as all three of the finalists had worked hard and been complimented on their designs throughout the season. With the obligatory suspense from Angelo on the creative team, the placings were slowly announced – Kyle in 3rd, Adam in 2nd, which meant I had been awarded 1st place! An unexpected but great way to complete the 5-week competition after initially hoping just to make it past week 1!

So let’s take a step back and ask: what is Virtual Design Master (VDM)? It’s basically a knock-out reality show for enterprise infrastructure geeks. Over the course of about 5 weeks, the participants are given a new challenge to complete each week – 4 challenges in all. The competition is run by a group of fantastic volunteers; Eric Wright, Angelo Luciani and Melissa Palmer. Every season there is a set of judges who volunteer their time to review the designs and decide who is unfortunately eliminated each week. This year the judges were Byron Schaller, Lior Kamrat and Rebecca Fitzhugh. Rather dauntingly for us participants, all the judges hold the VCDX certification! A huge thanks to the Creative Team for their work in preparing and running the competition, and to the judges for giving up their time to review and challenge the submissions.

The challenges are issued on a Thursday evening and you have until Tuesday evening to submit your response to the challenge. That submission can take the form of an architectural design document, a security incident response, code examples or, in the case of the final challenge, most of those plus building out an actual live environment which must be submitted for review. We did get an extra 6 days for the final challenge though, with it being issued on the Thursday and not needing to be submitted until 10 days later.

Once you have submitted, you get Wednesday to sleep (which has spawned the hashtag #wesleeponwednesdays), and to prepare for Thursday night’s live show. Every Thursday at 8pm ET, all the participants join a live recording and take it in turns to defend their submission for that week. The defence is made up of a 45 second statement (2 minutes for the final) from each contestant about their design and then 2 out of the 3 judges will challenge the contestant with some questions about their submission (although all 3 judges asked questions in the final two challenges).

Once everyone has finished their defence, and with the judges having conferred in the background throughout the show, one or more of the contestants will be eliminated. Then the next challenge is issued and read out, and the show finishes.

All the shows are available on YouTube and all of the designs can be found on GitHub.

For me, participating in VDM was a fantastic experience, and has taught me a great deal in a very short space of time. I would encourage anyone to throw their hat in the ring, especially any seasoned admins and those tempted by infrastructure architecture.

Thanks for reading!




Working with Terraform – Git and Visual Studio Code

I’m becoming a huge fan of Terraform, having started using it at work to manage our AWS environment and more recently with Virtual Design Master. When working with Terraform there are a few simple things that can make a big difference: using a good development environment and using a source code repository. These are obvious things for any developer, but as an infrastructure guy I’ve grown into using them as I’ve used Terraform. The following is how to get all of this set up on Windows.

Installing Terraform

Installing Terraform is simple – it’s a self-contained binary which just needs adding to your PATH variable.

  1. Download Terraform from
  2. Unzip it anywhere, but as we are going to add this to the PATH environment variable, make it somewhere fairly permanent. For me that’s c:\users\chris\tools\terraform
  3. Add the location where you unzipped Terraform to your PATH environment variable. To do that, hit Start, type ‘environment’ and select the result shown as ‘Edit the system environment variables’.
  4. Click the ‘Environment Variables…’ button to the bottom right of the settings box that appears.
  5. Under System variables select Path and click on Edit
  6. Click New and enter the path to where you unzipped terraform.exe.
  7.  Click OK, then OK, then OK.
  8. Open up command prompt and type terraform. You should see the terraform help appear.
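If you’re on macOS or Linux instead of Windows, the equivalent of the PATH steps above is a short shell sketch (the ~/tools/terraform location is just an example, mirroring the Windows layout used above):

```shell
# Pick an unzip location (example only) and unzip the binary there
mkdir -p ~/tools/terraform
# unzip ~/Downloads/terraform_*.zip -d ~/tools/terraform

# Add it to PATH for the current session; add this line to ~/.profile to persist
export PATH="$PATH:$HOME/tools/terraform"

# Confirm the directory is now on PATH
echo "$PATH" | grep -q "tools/terraform" && echo "PATH updated"
```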

Installing Visual Studio Code with the Terraform Extension

  1. Download Visual Studio Code (VSC) from
  2. Run the installer, ticking the option to open folders with VSC.
  3. Installer will complete and VSC can now be opened from the Start Menu.
  4. Open VSC
  5. Click the extension button from the left hand menu.
  6. Type terraform in the search bar
  7. Click the green ‘Install’ button next to the extension from Mikael Olenfalk.
  8. That’s it, you now have a development environment which also includes syntax highlighting and automatic formatting in Terraform files.

Setting up Git in VSC

Git integration is included out of the box with VSC, but we do need to install Git for Windows first to provide the git.exe executable VSC uses.

  1. Download Git from
  2. Run the Git installer. There are lots of options for the install of Git but the defaults are fine. For VSC, ensure the options ‘Use Git from Windows Command Prompt’ and ‘Enable Git Credential Manager’ are selected so that VSC can find git.exe and use secure credential storage.
  3. Set up some basic required user config for Git. Open command prompt and run the following commands;
    git config --global user.email ""
    git config --global user.name "Your Name"

Once Git is installed and configured, we need to create a folder and initialise it as Git repository.

  1. Create a folder for this Terraform project. Terraform projects (technically known as modules) live within a folder, so create something specific for this tutorial. For me this is c:\users\chris\projects\terraform-tutorial.
  2. Open VSC
  3. Click File > Open Folder
  4. Open the folder you created in Step 1
  5. Click View > SCM
  6. At the top of the middle narrow pane, click ‘Initialize Repository’.

That’s Git set up for this project – easy, right? Let’s create our first file and commit the changes.

  1. Click File > New File
  2. Click File > Save
  3. As the file has not been saved before, the Save As dialog opens. It should be in the same folder as we opened earlier, but double check. Name the file and click Save.
  4. The Source Control icon on the left hand pane should change to show 1 uncommitted change for the repository.


  5. Click on Source Control icon.
  6. Hover over the file and click the ‘+’ to stage the file. This means it will be included in the next commit, allowing you to decide which files are committed and which are not.
  7. Enter a short message to describe the commit and hit the ‘tick’ to perform the commit.


From now on, as you update files in the folder, you should get into the habit of periodically staging and committing your changes.
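For reference, the same init/stage/commit flow can be done entirely from the command line. The folder and file names below are just examples, not part of the tutorial above:

```shell
# Create and initialise a project folder (names are examples)
mkdir -p terraform-tutorial && cd terraform-tutorial
git init

# A fresh environment needs the user config from earlier
git config user.email "you@example.com"
git config user.name "Your Name"

# Create a file, stage it (the '+' in VSC) and commit it (the 'tick')
echo '# Terraform tutorial' > main.tf
git add main.tf
git commit -m "Add main.tf"

# The history should now show one commit
git log --oneline
```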

A good place to start with Terraform is to follow Hashicorp’s Getting Started guide, which can be found here;

That’s the basics of Git, Terraform and VSC covered!


Setting Up and Securing an AWS Account

This blog post walks through setting up an AWS account for the first time. The following is a checklist for setting up a new account;

  1. Account Sign Up
  2. Root Account MFA
  3. $1 Billing Alert
  4. IAM Password Policy
  5. IAM Administrators Group Creation
  6. IAM Admin User Creation
  7. IAM Admin User MFA

The goal is to have a secure account with no nasty billing surprises outside of the free tier, so that you can use AWS with some peace of mind.

So let’s get cracking on that list:

Account Sign Up


First we're gonna need an AWS account; if you've already set this up then skip this section.

  1. Head over to
  2. Click the 'Sign In to the Console' button in the top right hand corner
  3. Enter your email address and tick 'I am a new user', then click 'sign in using our secure server'
  4. The next step is to enter some basic login details. Enter your name, confirm your email address and ensure you create a complex password. This account will have full access so it needs to be secure.
  5. We now need to set up some contact info. Tick that this is a Personal Account and enter the required details. Ensure the phone number is correct as this is what AWS support will use to contact you if there are issues with your account, such as with your MFA access.
  6. You now need to enter some payment details. As you can start using resources outside of the free tier from the outset, AWS need a way to bill you so these payment details are mandatory.
  7. We now need to verify the account via phone. An automated system will call you to complete this step, which is pretty cool! In the UK I had to remove the first '0' from my mobile number for this to work.
  8. Almost there. The final step is to select a support plan. As this is a personal account and we want to limit cost, select 'Basic', which is free. The Basic plan only includes support for account-related issues (billing, logon etc); no Technical Support is included. For more info on the different types of support AWS offer, go here.
  9. Now sit back and wait for AWS to verify your account. You can click on the Launch Management Console button to jump straight in, but be aware that you may get errors launching resources until your account is verified.

Root Account MFA

First, with the account set up, let's set up multi-factor authentication for the root account.

  1. Before we set things up in AWS, install an MFA application on your phone. Google Authenticator will do the trick and is available for Android, iOS and BlackBerry.
  2. In the AWS console, click on your name in the top right hand corner and then click on 'My Security Credentials'
  3. If prompted about getting started with IAM users, just click the 'Continue to Security Credentials' link.
  4. Leave Virtual MFA device selected and click Next Step
  5. Click Next at the next step as we installed the MFA app in step 1.
  6. Follow the instructions to configure the MFA app on your phone with your AWS account. This involves scanning a QR code from the MFA app and entering in codes that are generated
  7. Once set up, you should get a prompt saying the MFA device was successfully associated.
  8. That's it. You now need your password and the MFA app to log in to your account, meaning it should be much less susceptible to being compromised.

$1 Billing Alert

Before we continue with further securing the account, let's quickly set up a billing alert.

  1. In your account, click on your name in the top right hand corner and then click on 'My Billing Dashboard'
  2. From the left hand menu, click Preferences
  3. Tick the 'Receive Billing Alerts' option and click Save preferences
  4. Click the Services drop down in the top left hand corner and click on CloudWatch, under Management Tools.
  5. In the left hand pane click Billing
  6. Click Create Alarm.
  7. Set the amount to alarm on in dollars. For example, if you want to ensure you don't exceed what's available in the free tier, enter $1 here. Enter your email address for where alerts should be sent and click Create Alarm.
  8. You'll receive an email with a link to confirm the address. Once you've done that the alarm is in place.

IAM Password Policy, IAM Administrators Group Creation & IAM Admin User Creation

AWS strongly advise avoiding use of the root account whenever possible. Instead, an IAM user should be set up with administrative access. The IAM account can have its access revoked, or even be deleted, if it has been compromised. This cannot be done for the root account.

  1. In the top left corner of the AWS console, click Services > IAM (it's under the 'Security, Identity & Compliance' section)
  2. Click Account settings
  3. Set up a strong password policy, ideally requiring long passwords (16+ characters). Customise the policy to your preference and ensure 'Allow users to change their own password' is ticked. Once you are happy with the settings, click Apply password policy
  4. In the left hand panel, click Groups.
  5. Click 'Create New Group'
  6. Call the group 'Administrators' and click Next Step.
  7. Tick the 'AdministratorAccess' policy and click Next Step
  8. Review the name and that the policy attached is AdministratorAccess and then click Create Group
  9. In the left hand pane, click Users
  10. Click Add user
  11. Enter a username. This will be the account you use to interact with AWS so make the username easy to remember and use. I went with 'chris'.
  12. Select the access type. I would advise selecting only AWS Management Console access for this account, and then setting up a user with limited permissions for programmatic access.
  13. After selecting the access type, if you selected Console access you will be prompted for the Console password. Enter a strong password or let AWS auto-generate one
  14. As you have already created a strong password and this account will be used by yourself, untick the 'Require password reset' option. You can use that if you add other users to your account and you want them to generate their own password which you do not have knowledge of. Click 'Next: Permissions'
  15. Select the Administrators group we created earlier in this process and click 'Next: Review'
  16. Review the settings and then click 'Create user'
  17. If you auto generated the password, ensure you take a copy of it on the next screen or download the CSV provided which includes the password. You cannot get the password after clicking close
  18. Once you have a copy of the password, click Close.

With all of that done, you should have all green ticks on the dashboard at Services > IAM > Dashboard;


IAM Admin User MFA

This is not included in the Security Status but is just as important as activating MFA on your root account. We've just created an IAM user which has administrative access to our AWS account, so we need to ensure that user is also protected by MFA.

  1. In the top left corner of the AWS console, click Services > IAM (it's under the 'Security, Identity & Compliance' section)
  2. Click on Users in the left hand pane
  3. Click on the user you created in the last section
  4. Click on the Security credentials tab.
  5. Click the pen icon next to 'Assigned MFA device'
  6. Follow the same process as we did to setup MFA for the root account.
  7. Once MFA is set up, click on Dashboard in the left hand pane, then copy the 'IAM users sign-in link'.
  8. Log out of the root account and open the link you copied. You should now be able to sign in as your IAM user at that link.

That's it. Keep your root credentials safe but don't use them; instead log in as your IAM user. If you need API access, create a new IAM group and user, and grant them only the rights needed for what you want to access via the API.



Getting Started with Cloud Foundry – Application Deployment Pipelines with Concourse – Part 1

In the first two blog posts of Getting Started with Cloud Foundry, we set up a local development environment for Cloud Foundry with PCF Dev and showed how easy it is to deploy a sample app. Now we’re going to look at taking our sample app through a basic deployment pipeline, first deploying it to ‘test’ and then to ‘production’.

To build the deployment pipeline, we are going to use a tool called Concourse. Concourse is built by Pivotal to create and run deployment pipelines, and as it’s built by Pivotal it has excellent support for Cloud Foundry. You can read more about Concourse at

  1. The quickest way to get started with Concourse is to spin it up with Vagrant. If you don’t have Vagrant installed, go grab it from here and install it before proceeding. That will require a restart.
  2. Before starting Concourse with Vagrant locally, depending on how much memory you have in your machine, you might want to stop PCF Dev if you still have it running. Run “cf dev stop”
  3. Once you have Vagrant installed, run “vagrant init concourse/lite” then “vagrant up” to get it started. As this will download a Vagrant box of about 700MB it will take a few moments depending on your internet speed.
  4. Once completed, Concourse will then be running at


  5. Click on the relevant link to download the CLI used for Concourse, which is called ‘fly’. The Concourse web UI is only for viewing pipelines, all the config is done via fly.
  6. Finally, once you have downloaded fly, as it comes as a binary with no installer, let’s add it to the PATH environment variable so we can use it from any directory. Do that from Control Panel > System and Security > System > Advanced system settings > Advanced > Environment variables.

Ok, so we have Concourse and fly installed, so let’s go and set up our first, basic pipeline. For this we will follow the ‘Hello, world!’ tutorial from the Concourse documentation. For my own learning I’m duplicating their steps in my own words.

  1. Before using fly for managing pipelines, you need to configure fly to point at a Concourse instance. In fly this Concourse instance is called a target, and you can give it an alias to make it easy to use going forward. Run the following command to set the Concourse instance we have running in Vagrant as our target and give it an alias of ‘lite’; ‘fly -t lite login -c’


  2. Now we have set up an alias to target our Concourse instance, we can use the YAML file below to create a pipeline. Save the below as hello.yml. This is a simple file which defines a pipeline that has a job with a plan comprising one task, and uses a Docker container to echo the words “Hello, world!”. The image_resource and source configuration tell Concourse which Docker image to grab (from Docker Hub by default), but you can point this at any registry.
    jobs:
    - name: hello-world
      plan:
      - task: say-hello
        config:
          platform: linux
          image_resource:
            type: docker-image
            source: {repository: ubuntu}
          run:
            path: echo
            args: ["Hello, world!"]
  3. Once you have that saved, run the following to create your pipeline: ‘fly -t lite set-pipeline -p hello-world -c hello.yml’. You now have your first pipeline in Concourse, albeit with one job, so just one square turns up in the GUI. Arguably this isn’t even a pipeline, it’s just one ‘thing’ on its own.


  4. The top bar is blue because any new pipeline by default is paused. You can unpause it by running ‘fly -t lite unpause-pipeline -p hello-world’
  5. Because we don’t have a resource to trigger the job, it won’t run until we manually trigger it. Click on the hello-world box and click the ‘+’ icon in the top right corner. The job will run and you will see it download the docker image and then echo ‘Hello, world!’.


  6. If you run it again by clicking the ‘+’ button, it won’t download the image again and so the task will just echo ‘Hello, world!’
  7. Now let’s create a pipeline with a resource which triggers the job. Create a file called navi-pipeline.yml with the following contents. This creates a ‘time’ resource (resources can be things like Git repositories, but in this instance it’s just a timer) and then creates a job which triggers whenever the ‘every-1m’ resource produces a new version. The job does basically the same as our hello-world pipeline, except this time it says “Hey! Listen!”.
    resources:
    - name: every-1m
      type: time
      source: {interval: 1m}

    jobs:
    - name: navi
      plan:
      - get: every-1m
        trigger: true
      - task: annoy
        config:
          platform: linux
          image_resource:
            type: docker-image
            source: {repository: ubuntu}
          run:
            path: echo
            args: ["Hey! Listen!"]
  8. Update the hello-world pipeline with the above YAML by running the command ‘fly -t lite set-pipeline -p hello-world -c navi-pipeline.yml’
  9. Take a look at the GUI and you’ll see the hello-world job has been replaced by the navi job with a resource. Every minute the ‘every-1m’ resource produces a new version, which triggers the navi job.


  10. If you click on the navi job you’ll see all the times it has triggered and the output from the latest run.


Now we have Concourse up and running, with the fly CLI powering some simple pipelines, the next step is to set up a pipeline to run our deployment. That’ll be covered in part 2 of this blog post.
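As a rough preview of where part 2 is heading, a deployment pipeline for the sample app might look something like the sketch below. It uses Concourse’s git and cf resource types; the repository URI, API endpoint, credentials and space names are placeholders for illustration, not a working config:

```yaml
resources:
- name: app-source
  type: git
  source: {uri: https://github.com/cloudfoundry-samples/spring-music}

- name: pcf-test
  type: cf
  source:
    api: https://api.local.pcfdev.io    # placeholder API endpoint
    username: user                      # placeholder credentials
    password: pass
    organization: pcf-dev-org
    space: pcf-dev-space
    skip_cert_check: true

jobs:
- name: deploy-test
  plan:
  - get: app-source     # new commits trigger the job
    trigger: true
  - put: pcf-test       # cf push using the app's manifest
    params:
      manifest: app-source/manifest.yml
```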


Getting Started with Cloud Foundry – Sample App

In the previous post we got Pivotal’s version of Cloud Foundry running locally and successfully logged in via the CLI and the web page. Now we want to deploy a sample app.

  1. Download and install the Java SDK
  2. Download and extract the PCF Spring-based sample application, Spring Music.
  3. Open a command prompt and navigate to the directory where you extracted the sample app.
  4.  Run “gradlew.bat assemble”. This will build the sample application. Once complete, it will show “BUILD SUCCESSFUL”


  5. The sample application has now been built and saved as a .jar file to ..\build\libs\spring-music.jar.


  6. In the extracted files, there is a file called manifest.yml. This is needed to tell Cloud Foundry where to find the build that needs to be deployed and other parameters such as memory required for the instances that run the application.


  7. Now we have built our application and checked there is a manifest file, we are ready to push to PCF. Login to PCF Dev using the following command “cf login -a --skip-ssl-validation”. At the Select an Org screen, press 1 to select the pcf-dev-org so this org is used for push operations.


  8. Now run “cf push --hostname spring-music” to push the sample app to PCF Dev.

    This first creates an empty app, a route (dns name for the app), then binds the two together. It uploads the app files and starts the app.


  9. At this point the uploaded app files aren’t actually deployed to the app itself, so if you browse the route that was created you’ll get the following error;


  10. However, the ‘cf push’ command hasn’t finished, as it’s still running in the background. CF will analyse what’s in the app files and what dependencies are required, and then download them. It will then spin up a container in a staging area, combine these files together into a ‘droplet’, then destroy the container.


  11. It then starts the app proper using the droplet created in staging.


  12. To show the status of the app, run ‘cf apps’


  13. You can now browse to the route address and the sample app will display 🙂


There is lots more to do after that, but it’s a great example of being able to spin up an app really quickly.

These steps are based on Pivotal’s own getting started tutorial which can be found here.
