Getting Started with Cloud Foundry – Application Deployment Pipelines with Concourse – Part 1

In the first two blog posts of Getting Started with Cloud Foundry, we set up a local development environment for Cloud Foundry with PCF Dev and showed how easy it is to deploy a sample app. Now we’re going to take our sample app through a basic deployment pipeline, first deploying it to ‘test’ and then to ‘production’.

To build the deployment pipeline, we are going to use a tool called Concourse. Concourse is built by Pivotal to create and run deployment pipelines, and as such it has excellent support for Cloud Foundry. You can read more about Concourse at concourse.ci.

  1. The quickest way to get started with Concourse is to spin it up with Vagrant. If you don’t have Vagrant installed, go grab it from here and install it before proceeding. Note that installing Vagrant will require a restart.
  2. Before starting Concourse with Vagrant locally, depending on how much memory your machine has, you might want to stop PCF Dev if you still have it running. Run “cf dev stop”.
  3. Once you have Vagrant installed, run “vagrant init concourse/lite” then “vagrant up” to get it started. As this will download a Vagrant box of about 700 MB, it will take a few minutes depending on your internet speed.
  4. Once completed, Concourse will then be running at http://192.168.100.4:8080/

    Capture01

  5. Click on the relevant link to download the CLI used for Concourse, which is called ‘fly’. The Concourse web UI is only for viewing pipelines; all the configuration is done via fly.
  6. Finally, once you have downloaded fly, as it comes as a binary with no installer, let’s add it to the PATH environment variable so we can use it from any directory. Do that from Control Panel > System and Security > System > Advanced system settings > Advanced > Environment variables.

OK, so we have Concourse and fly installed; let’s go and set up our first, basic pipeline. For this we will follow the ‘Hello, world!’ tutorial from concourse.ci. For my own learning I’m duplicating their steps in my own words.

  1. Before using fly for managing pipelines, you need to configure fly to point at a Concourse instance. In fly this Concourse instance is called a target, and you can give it an alias to make it easy to use going forward. Run the following command to set the Concourse instance we have running in Vagrant as our target and give it an alias of ‘lite’: ‘fly -t lite login -c http://192.168.100.4:8080’

    Capture02

  2. Now we have set up an alias to target our Concourse instance, we can use the YAML file below to create a pipeline. Save the below as hello.yml. This is a simple file which defines a pipeline that has a job with a plan comprising one task, which uses a Docker container to echo the words “Hello, world!”. The image_resource and source configuration pull the required Docker image from Docker Hub by default, but you can point this at any registry.
    jobs:
    - name: hello-world
      plan:
      - task: say-hello
        config:
          platform: linux
          image_resource:
            type: docker-image
            source: {repository: ubuntu}
          run:
            path: echo
            args: ["Hello, world!"]
  3. Once you have that saved, run the following to create your pipeline: ‘fly -t lite set-pipeline -p hello-world -c hello.yml’. You now have your first pipeline in Concourse, albeit with one job, so just one square turns up in the GUI. Arguably this isn’t even a pipeline, it’s just one ‘thing’ on its own.

    Capture03

  4. The top bar is blue because any new pipeline is paused by default. You can unpause it by running ‘fly -t lite unpause-pipeline -p hello-world’.
  5. Because we don’t have a resource to trigger the job, it won’t run until we manually trigger it. Click on the hello-world box and click the ‘+’ icon in the top right corner. The job will run and you will see it download the Docker image and then echo ‘Hello, world!’.

    Capture04

  6. If you run it again by clicking the ‘+’ button, it won’t download the image again, so the task will just echo ‘Hello, world!’.
  7. Now let’s create a pipeline with a resource which triggers the job. Create a file called navi-pipeline.yml with the following contents. This creates a ‘time’ resource (resources can be things like Git repositories, but in this instance it’s just a timer) and then creates a job which triggers whenever the resource named ‘every-1m’ produces a new version. The job does basically the same as our hello world pipeline, except this time it says “Hey! Listen!”.
    resources:
    - name: every-1m
      type: time
      source: {interval: 1m}
    
    jobs:
    - name: navi
      plan:
      - get: every-1m
        trigger: true
      - task: annoy
        config:
          platform: linux
          image_resource:
            type: docker-image
            source: {repository: ubuntu}
          run:
            path: echo
            args: ["Hey! Listen!"]
  8. Update the hello-world pipeline with the above YAML by running the command ‘fly -t lite set-pipeline -p hello-world -c navi-pipeline.yml’
  9. Take a look at the GUI and you’ll see the hello world job has been replaced by the navi job with a resource. Every minute the ‘every-1m’ resource produces a new version, and the navi job triggers in response.

    Capture05

  10. If you click on the navi job you’ll see all the times it has triggered and the output from the latest run.

    Capture06

Now we have Concourse up and running, and have the fly CLI powering some simple pipelines, the next step is to set up a pipeline to run our deployment. That’ll be covered in part 2 of this blog post.
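As a taste of how single jobs become a real pipeline, Concourse chains jobs together by adding a ‘passed’ constraint to a ‘get’ step, so a downstream job only runs against versions that made it through the upstream job. A minimal sketch is below; the ‘deploy-test’ and ‘deploy-prod’ job names and the Git repository URL are hypothetical, not from the tutorial:

    resources:
    - name: app-source
      type: git
      source: {uri: https://github.com/example/app.git}  # hypothetical repo

    jobs:
    - name: deploy-test
      plan:
      - get: app-source
        trigger: true
      # ... task/put steps to deploy to the 'test' space would go here ...
    - name: deploy-prod
      plan:
      - get: app-source
        trigger: true
        passed: [deploy-test]  # only versions that got through deploy-test
      # ... steps to deploy to 'production' would go here ...

In the GUI this draws a line between the two job boxes, which is what makes it look (and behave) like an actual pipeline.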


Getting Started with Cloud Foundry – Sample App

In the previous post we got Pivotal’s version of Cloud Foundry running locally and successfully logged in via the CLI and the web page. Now we want to deploy a sample app.

  1. Download and install the Java SDK
  2. Download and extract the PCF Spring-based sample application, Spring Music.
  3. Open a command prompt and navigate to the directory where you extracted the sample app.
  4. Run “gradlew.bat assemble”. This will build the sample application. Once complete, it will show “BUILD SUCCESSFUL”.

    PCFDev07

  5. The sample application has now been built and saved as a .jar file to ..\build\libs\spring-music.jar.

    PCFDev08

  6. In the extracted files, there is a file called manifest.yml. This tells Cloud Foundry where to find the build that needs to be deployed, along with other parameters such as the memory required for the instances that run the application.

    PCFDev09

  7. Now we have built our application and checked there is a manifest file, we are ready to push to PCF. Log in to PCF Dev using the following command: “cf login -a api.local.pcfdev.io --skip-ssl-validation”. At the Select an Org screen, press 1 to select pcf-dev-org so this org is used for push operations.

    PCFDev10

  8. Now run “cf push --hostname spring-music” to push the sample app to PCF Dev.

    This first creates an empty app, a route (dns name for the app), then binds the two together. It uploads the app files and starts the app.

    PCFDev11

  9. At this point the uploaded app files aren’t actually deployed to the app itself, so if you browse the route that was created you’ll get the following error;

    PCFDev12

  10. However, the ‘cf push’ command hasn’t finished, as it’s still running in the background. CF will analyse what’s in the app files and what dependencies are required, and then download them. It will then spin up a container in a staging area, combine these files together into a ‘droplet’, then destroy the container.

    PCFDev13

  11. It then starts the app proper using the droplet created in staging.

    PCFDev14

  12. To show the status of the app, run ‘cf apps’

    PCFDev15

  13. You can now browse to the route address and the sample app will display 🙂

    PCFDev16

There is lots more to do after that, but it’s a great example of being able to spin up an app really quickly.

These steps are based on Pivotal’s own getting started tutorial which can be found here.
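For reference, the manifest.yml that ships with Spring Music looks roughly like the sketch below. Exact values vary between versions of the sample, so treat the numbers as illustrative:

    ---
    applications:
    - name: spring-music
      memory: 1G                         # memory per instance
      random-route: true                 # avoids route collisions on a shared domain
      path: build/libs/spring-music.jar  # the jar produced by 'gradlew.bat assemble'

This is why a plain ‘cf push’ works with no extra arguments: the CLI reads the app name, memory and jar path from the manifest in the current directory.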


Getting Started with Cloud Foundry – Installing PCF Dev

As part of the Virtual Design Master competition I’ve taken the opportunity to play with something I have been meaning to look at for a while: Cloud Foundry. You can set this up locally to start playing with it by using Pivotal’s local version of Cloud Foundry, PCF Dev.

  1. Make sure your processor is capable of hardware-assisted virtualisation and this feature is turned on. You can use the free tool SecurAble to tell you the current status. If it’s not enabled, you’ll need to enable it in your computer’s BIOS.
  2. Install VirtualBox
  3. Create a Pivotal Network account
  4. Download and install PCF CLI
  5. Download PCF Dev
  6. Run the pcfdev executable from the command line so you can see if it generates any errors or if it is successful. If successful it will show the following;

    PCFDev01

  7. You can check the status of PCF Dev by running “cf dev status”;

    PCFDev02

  8. To create the PCF Dev environment, run “cf dev start”. You’ll need your Pivotal Network credentials to start the setup. This command downloads a roughly 4.5GB OVA from Pivotal and starts it in VirtualBox.

    PCFDev03

  9. Once it has completed successfully you’ll see the following;

    PCFDev04

  10. You can then access PCF Dev via the command line using “cf login -a https://api.local.pcfdev.io --skip-ssl-validation”

    PCFDev05

  11. Or you can access it via your web browser using https://local.pcfdev.io

    PCFDev06

I did struggle to get PCF Dev running at the cf dev start phase. This turned out to be an issue with my local firewall (Bitdefender), which I had to disable temporarily.

That’s it for now, next step is to get something running in PCF Dev.

Cheers

Chris


Upcoming Event – North East England VMUG June 2017

On Thursday (22nd June) I will have the pleasure of presenting at the North East England VMUG (NEVMUG) on the topic of AWS for VMware Admins. It will be a great event with an outstanding lineup despite my attendance, so I’d recommend attending if you can. I’ll be presenting solo this time as Alex Galbraith, my co-presenter for the last two events, is unable to join this time.

Sign up here!


Scottish VMUG April 2017

Two months ago in April (yes, it’s taken me far too long to write things up!), I managed to attend the Scottish VMUG in Glasgow, and was also privileged enough to co-present a repeat of the AWS for Beginners presentation with Alex Galbraith. Thanks guys for having us! It was a long day as I was up at 4.30am and not back home until gone midnight, but absolutely well worth it.

Chris Storrie; Intro

The day was kicked off by Chris Storrie from the Scottish VMUG team. A quick straw poll from Chris showed that there was a large number of first time attendees.

Meet The Leaders

Chris introduced the leadership team, comprising Sandy Bryce, Ian Balmer, James Cruickshank and himself, and joked that the slide listing the team was ordered from worst to best beard!

Chris went on to state that the agenda was perhaps the most community-driven agenda that the Scottish VMUG have run, something I am happy to have helped with and proud to have been a part of. He then encouraged attendees to continue the community spirit by trying to meet someone new outside of the sessions. Chris wrapped up the session by advertising the new Slack team, scottishvmug.slack.com, noting that to sign up you should drop the team an email at scotland@vmug.com. Finally, he thanked the sponsors, who were Zerto, Morpheus Data and Pure/Capito.

Joe Baguley; Keynote

After Chris, VMware’s VP and CTO for EMEA, Joe Baguley, presented the keynote. Joe’s keynote was very informative and made a lot of sense. He started by talking about industry buzzwords and how some conversations VMware have with their customers involve a desire from customers to ‘do’ digital transformation, digital strategy and digital business, without really understanding what those terms mean and why these things are desirable. Joe said that successful transformation starts with the user, which means VMware’s discussions are increasingly with their customer’s customer.

Digital Transformation Agenda

From VMware’s perspective, in 2017, digital transformation is about business outcomes, in particular Business Agility and Innovation, Exceptional Mobile Experiences and Protection of Brand and Customer Trust. Those outcomes when distilled down to 4 priorities for IT end up being to Modernise Data Centres, Integrate Public Clouds, Empower Digital Workspaces and Transform Security. Joe then outlined how each of these priorities aligns with VMware’s solution set.

Application Cycle

Joe talked about the cycle of how changes occur in business IT, and that many organisations only go through this loop every 4-5 years when they are forced to, perhaps due to an application going EOL (e.g. Exchange 2010). When they do go through this loop, it can take 18 months. Joe contrasted that to more forward thinking organisations that see the ever increasing benefits of going through this loop as quickly as possible, sometimes many times a day.

State of Cloud - 2021

Joe moved on to discuss how back in 2006 it was predicted that by 2015 everything would be running in the public cloud. That clearly hasn’t happened, which shows that the benefits of cloud don’t suit every application and that moving applications to the cloud is not straightforward. Joe discussed recent analysis which predicts cloud (both public and private) will cover 50% of all workloads by 2021, with 30% of that being public cloud, split roughly 50/50 between IaaS and SaaS. Clearly on-premises infrastructure will be with us for some time.

Cross Cloud Architecture

On the back of the public and private cloud growth, Joe stated that VMware’s goal is to provide ‘RAID’-like availability for services across datacentres and, in future, across clouds, and that NSX and Cross Cloud Architecture are a big part of that goal.

He then wrapped up by talking about what is next for VMware, which includes NFV, AI, IoT, Serverless, PaaS and Unikernels.

Zerto (Sponsor Session)

Joe was followed by one of the day’s sponsors, Zerto. Zerto ran through an overview of their data protection and disaster recovery product, which I’m sure most of the community are aware of as Zerto have been around for a while now. You can find more info on Zerto at their website.

After all of that, it was then time for a coffee break.

Adrian Hornsby; AWS Session on Integrating AWS and VMware Cloud on AWS

The main session tracks then began. I headed back to the main room to listen to Adrian Hornsby, who is a Technical Evangelist for AWS. He asked the room who was using AWS, which showed a mix of AWS users and complete novices, leaning mainly towards people who haven’t used AWS before. Adrian outlined AWS’ ‘customer backwards’ approach, where features are delivered based on customer requests and demand. He identified the growing demand from customers for hybrid cloud as one of the key drivers behind the AWS and VMware relationship. Adrian outlined the common challenges with hybrid cloud and how VMware Cloud on AWS will allow AWS and VMware to overcome them. The challenges being the following;

  • Multiple virtual machine formats
  • Different networks
  • Operational inconsistencies
  • Differing security baselines
  • Multiple monitoring and control mechanisms

Whilst I agree with much of that sentiment, hybrid cloud has many different definitions, and the AWS/VMware approach is tackling one perhaps more focused but restricted view of hybrid cloud. VMware Cloud on AWS will allow you to run VMware workloads in the ‘cloud’ without retooling or importing virtual machines, but it still remains a traditional vSphere setup, albeit one that is now very close to an AWS datacentre (i.e. inside it). You do get Elastic DRS and on-the-fly cluster resizing, very cool features, but ultimately it’s still vSphere workloads, albeit easy-to-consume ones, that are very close to AWS native services. And you have to manage the two separately: vCenter for vSphere, the AWS console and API for AWS. VMware are also tackling hybrid cloud in another way with the very interesting Cross Cloud Architecture: being able to perform holistic management across multiple public clouds and on-premises infrastructure.

Anyway, back to Adrian’s session. After the intro, he outlined some common scenarios which AWS perceive customers will look to use VMware Cloud for;

  • Maintain and expand
  • Consolidate and migrate
  • Workload flexibility

With the intro out of the way, Adrian then did a bit of a technical drill-down, outlining how the account structure would work. VMware Cloud gets its own VPC, which is managed by VMware, and the customer needs their own standard VPC in an AWS account, which is used for transit connectivity into the VMware Cloud VPC. There is a new type of AWS endpoint for VMware Cloud, and it will be interesting to see how it works, as there is currently no transitive routing from one VPC through another VPC unless you use a third-party router within the VPC. Perhaps that is what the new endpoint is doing?

Deploy and Consume native AWS Services

Adrian talked through some examples of mixing VMware VMs with AWS native services, such as using S3 and an S3 endpoint to keep this all within AWS with no internet breakout required. Other examples included being able to quickly get data into Amazon’s data warehousing service, Redshift, and being able to migrate database workloads to RDS whilst keeping the application components on VMs in VMware Cloud.

Amazon AI Demo

He then ran through an overview of Aurora, Redshift and some of the new services such as Polly and Rekognition. To wrap things up, Adrian performed a cool demo where he created a webpage in S3 and uploaded a photo to the page, which was then analysed by Rekognition, after which the results were read out by Polly.

Community Round Table on vCenter Updates

The next session for me was a community round table on vCenter updates, hosted by Sandy and Chris from the VMUG leadership team, with a VCDX in attendance in the form of Rebecca Fitzhugh, and representation from VMware in the form of Paul Nottard. It was a very useful session with input from a number of people in the room, with the topic moving between certificates in vCenter, 6.5 upgrades, the use of management clusters, and vCenter HA and its requirement for an external PSC. I also banged on about Auto Deploy for a short while :).

It was then time for lunch.

Mark Brookfield; vRealize Automation with SRM and Puppet

Following lunch the session tracks continued, and I headed to Mark Brookfield’s session on automation. Mark’s session was on using vRealize Automation in conjunction with SRM and with Puppet. It was interesting to see vRA in action, something I’m not at all familiar with. The SRM piece was useful to see, as SRM has traditionally been hard to work with outside of the GUI, although Mark did caveat that you need to be careful who you give automated DR tools to, giving an example of a customer doing a full invocation rather than a test failover. The two demos Mark ran with vRA and SRM were picking up replicated datastores so these could be filtered upon and selected when creating a new VM, and using vRA to trigger an SRM failover.

Mark also covered using Puppet in vRA. He used vRA to spin up a VM and have the Puppet agent installed with the configuration in place so that VM could then connect to a Puppet Master and continue its configuration from there.

5934B430-1EBC-4319-BB72-72FF94C77672

Mark started the session with a slide joking that on average only 8% of his demos successfully work. Unfortunately for Mark, none of his demos worked in this session. He took it with good humour though, bringing up the 8% slide and changing it to show 0%! Although the demos didn’t work, it was still an informative session and there was good interaction from the audience.

James Smith; Morpheus Data (Sponsor Session)

Then it was time for a session from one of the day’s sponsors, Morpheus Data. James Smith showed one slide and then jumped straight into a live demo of the product, which is essentially a very powerful multi-cloud/on-prem provisioning portal that also supports logging and backups. I wrote about Morpheus earlier this year in my London VMUG post, so I won’t repeat myself. Having not had the chance to see the actual demo in London, it was good to see it this time around. Definitely something to look at later in the year.

Wrap up

And for the final session of the day it was time for Alex and I to step up and present our own session. We had about 20-30 attendees, of which only a couple really had any prior AWS knowledge. I think the session went well and we had some good questions at the end, even though we overran. The slide deck for the session can be viewed here.

With all the session tracks complete, we headed back up to the main room for a wrap-up and prize giving, then it was off to the pub for vBeers.

After vBeers, the final part of the day was the short bus/flight/train/bus home.

Slides from the day’s event can be found here.

Thanks for reading! Next time I hope to publish a little quicker! For some reason I have been persevering with the Windows WordPress App, which keeps crashing on right clicks and editing images. I’ve switched back to using the WordPress site directly and everything works as expected, funny that!

Cheers

Chris


Two Recent Podcast Appearances

A quick post to mention my first two podcast appearances, on Open TechCast and ExploreVM. This is something I have been interested in doing for a while but hadn’t had the opportunity; then suddenly, two come along at once!

Podcasts

Firstly, I took part in a roundtable with the Open TechCast crew at the London VMUG last month. You can read about the event in my blog post here. We discussed VMware’s various cloud offerings and recent news, plus my real desire to try and avoid having to rack and cable datacentre kit!

The podcast can be found at either http://www.opentechcast.com/2017/05/04/spe-london-vmug-april-vmware-in-the-cloud/ or on iTunes via https://itunes.apple.com/gb/podcast/open-tech-cast/id1149366895?mt=2&ls=1#

My second appearance was a chat between myself and Paul Woodward Jr, for an episode of the ExploreVM podcast which Paul hosts. We chatted about VMware Cloud on AWS, exploring the definitions of the different types of cloud and where VMWonAWS might fit, and whether those definitions are useful at all (preview; not really!). We also discussed becoming a vExpert and participating in the VMUG scene. I had great fun and hopefully it made for good listening.

The episode can be found on the ExploreVM website at http://www.explorevm.com/2017/05/explorevm-podcast-episode-5-vmware-on.html or on iTunes via https://itunes.apple.com/us/podcast/explorevm-podcast/id1226483860?mt=2#.

As these are my first podcast appearances, I’d really appreciate any feedback if you have time to listen and drop me a comment here or via Twitter.

Cheers

Chris


London VMUG April 2017

C8vCPe2WAAE1z6A
photo by @LonVMUG

I was lucky enough to attend the London VMUG last Thursday, which makes probably a record three VMUGs in a row for me (if I include the UK VMUG last November!). It was as ever an informative and fun event, and I’d encourage anyone to attend in future if they can.

The event was kicked off by Simon Gallagher, London VMUG leader, with the normal housekeeping and also a good recap of recent VMware-related news, such as the vCloud Air sale (good piece on that here) and the removal of support for third-party switches in ESXi (KB article here). Simon also explained that due to a scheduling clash with the Easter half term, a lower than normal attendance meant they would only be running one session track, rather than the normal two. However, in place of the second track would be a number of roundtable sessions run by the guys from Open TechCast. More on that later.

After the intro, it was time for a session from one of the event’s sponsors, Bitdefender, represented by Andrei Ionescu. Somewhat unfortunately for Bitdefender, a quick straw poll at the start of the session showed there didn’t seem to be anyone in the room actually using their product, although in another way that presents an opportunity for them as well.

Whilst AV isn’t really my area of concern/influence, there were still a couple of interesting takeaways from the presentation. Firstly, something I wasn’t aware of is that Bitdefender are a huge OEM software provider, with many vendors taking part or all of their technology and relabelling it. Check out the ‘third party engine’ column on the list of AV vendors over at AV Comparatives. This means Bitdefender have over 400 million endpoints running their software worldwide, more than any other AV vendor.

The second interesting takeaway was that they have taken the idea of a centralised scanning appliance with lightweight local clients and expanded it to support multiple hypervisors, rather than just NSX. Admittedly the scanning traffic for non-NSX machines travels over the network rather than within the hypervisor, but it’s interesting to see the expansion of this type of architecture outside of ESXi.

One item of clarification for Andrei and Bitdefender. I asked Andrei during the session if an NSX license is required to run AV guest introspection and he said that it was. On further investigation, I don’t think this is the case. Bitdefender will be using vShield Endpoint, which comes free with vSphere Essentials Plus and up. NSX Manager comes with an embedded, unlimited license for vShield Endpoint, so NSX licenses are not required for AV guest introspection. More info from VMware here.

Next up was vSAN, more specifically, Mr vSAN! Simon Todd from VMware gave a great update and roadmap overview for vSAN. He talked about how vSAN now has over 7,000 customers and continues to grow impressively. He also mentioned a couple of interesting customer deployments. He has talked about Sky’s use of vSAN before, but mentioned it again as it’s a massive vindication of the trust that can be placed in the product. Sky has over a petabyte of data in vSAN, running all their streaming and catch-up services in the UK, including Now TV and Sky Q.

Two new customer deployments he talked about were a large retailer deploying 2-node ROBO using Direct Connect to reduce the costs associated with a 10Gb-capable switch, and an airline which amasses 100,000s of data points during the flights of its A380 fleet, all stored on vSAN.

Simon did a refresh of some of the new features in vSAN 6.5, such as iSCSI and two node direct connect. He then talked about some of the advances coming with the next vSAN release, which to me look to be some very important new capabilities which will allow vSAN to meet feature parity with any enterprise class array. The release should be out soon, so there will be more on those features shortly I hope.

After Simon’s session I skipped the vendor session by Morpheus, although I did talk to them later in the vendor room. Instead I headed off to take part in a community round table session that Gareth and Amit from the Open TechCast podcast were running. Along with those guys and Erik Bussink, I chatted about VMware and cloud, including the announcement of VMware Cloud on AWS and the recent sale of vCloud Air. I think the Open TechCast guys were planning on putting our discussion, and the other two on certifications and homelabs, out as podcasts soon.

After that session it was lunch, and then on to the first community session of the day, from Gary Williams on his experience with Docker. I was a little fearful at first due to slides showing code snippets, which in some presentations can struggle to hold a room, but there was nothing to fear as everyone paid attention and it was a thoroughly useful session. Gary described how VMs and containers differ, always useful for a VM-centric crowd. He then walked through the concepts in Docker and how it all fits together, before kicking off a live demo. In the demo he built a container image and spun up a container from that newly created image, then attempted to upload the image to a repository in AWS ECR. Unfortunately for Gary, at that point the demo gods did not shine on him, but fair play for being brave enough to perform a live demo in the first place!

It was good for me to see how Gary had been successfully working with containers in the workplace and gaining the benefits of doing so, without first trying to plan everything before implementation, so he could work through ideas as they occurred. For example, next on his list was how to establish trusted registries and regulate the use of Docker Hub, and then to look at schedulers such as Swarm.

Part way through the session, the room had a good discussion on the merits of Docker containers on Windows, and that despite all the fanfare of support in Windows 2016, few had seen any headline grabbing reports of interesting or large-scale use of Windows containers – I guess it’s still early days.

After Gary’s session there was a break, so I took the time to catch up with James from Morpheus in the vendor room. Morpheus have an interesting back story (as does James, being recently ex-Pernix), and an interesting product, of which you can see a demo by James on YouTube here. Morpheus provide a portal for provisioning and operating servers and databases across a number of different infrastructures, including on-premises VMware and OpenStack, and public clouds such as AWS and Azure. They also provide monitoring, logging and backups of systems on those infrastructures, all within the same tool. It looks like a very nice product and something I will be checking out in more detail in the future. Morpheus do operate in a rapidly crowding market, competing against the likes of Rightscale, Scalr, Cisco CloudCenter (aka Cliqr), ServiceNow and the upcoming Cross Cloud Services from VMware, plus vRealize Automation, to name a few.

For the final session of the day I attended a vendor session by Stan from Runecast. Runecast have a very interesting product, and it is one of those ‘why did no one else think of that before’ ideas. In a nutshell, it comes as a virtual appliance which scans your vSphere infrastructure and cross-references your environment against VMware’s knowledge base, Runecast’s own catalogue of best practices and the hardening guides for vSphere. This is especially useful for the VMware KB articles: the idea being that you don’t normally look at these until after an issue occurs, even though they were out there telling you about an issue you were going to encounter, yet you knew nothing about them. Runecast are another product I will be investigating in the future.

Perhaps the best was saved for last, with the day being wrapped up by a community session from Sam McGeown about his use of Amazon Alexa in his home lab. Sam’s session was engaging throughout and it was interesting to see his thought process in finding a suitable solution which avoided exposing his homelab to the internet. The use of ha-bridge made this all work, ha-bridge being an emulator of the bridge used to control Philips Hue lights. Due to not exposing his solution to the internet, Sam was unable to run a live demo, but he did show two videos of it all working back at his house, to which he received a well-deserved applause. You can read about Sam’s setup in much more detail over on his blog; http://www.definit.co.uk/2017/04/alexa-turn-on-my-workload-cluster/. Definitely something to add to the personal project backlog.

Once Sam had finished it was time for Simon to leap back on stage and wrap things up. Of particular mention was that he managed during the day to put the videos of November’s UK VMUG UserCon up on YouTube. Simon requested people go subscribe to the channel so that UK VMUG can qualify for a proper shortened YouTube URL. You can find the channel here (go subscribe now!), and the playlist for the 2016 UK UserCon here.

The day wrapped up with an almost entirely great vBeers, at which I got to have a lengthy chat with Stan from Runecast and caught up with Gregg Robertson about design and VCDX. The evening was not 100% good, as it unfortunately ended with some laptops being stolen – definitely a reminder to keep your bags with you at all times when in busy pubs.

So that rounded up a busy day, thanks to Simon, Linda and Dave for all the hard work in putting on another fantastic event. Hopefully I’ll make the next event in London on 22nd June, which if it follows the last two years will be followed by a luxury vBeers. My next VMUG is actually a lot sooner, as I’m flying up to Scotland next week (20th) to present on AWS with Alex Galbraith at the Scottish VMUG in Glasgow. Which should be a lot of fun.

Thanks for reading this far. Next time I’ll try and make some outline notes during the event itself and take some photos, should make writing the next post a little quicker! Maybe!

 
