For the final session of the day I jumped from the Containers track over to the last session of the AI track.
Julien Simon (@julsimon) introduced the audience to DeepLens, a deep learning-enabled camera. It runs Linux and uses AWS Greengrass, an IoT service that lets devices synchronise back to the AWS cloud whenever they have connectivity, and execute Lambda functions locally. With plain AWS IoT, devices always need to be connected for Lambda to run; Greengrass is a way around this.
With Greengrass you write Lambda functions and push them down to the device, so they can run locally. This is important, for example, where you don't want devices to depend on being connected to the cloud.
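To make the buffer-and-sync idea concrete, here is a minimal sketch of the pattern, not actual Greengrass code: all the names (`LocalDevice`, `handler`, `sync`) are illustrative. The handler runs regardless of connectivity, and results queue up until a connection is available.

```python
import json
from collections import deque


class LocalDevice:
    """Illustrative device that runs its handler locally and syncs
    buffered results to the cloud only when a connection is available."""

    def __init__(self):
        self.outbox = deque()   # results waiting to be synced
        self.connected = False  # no cloud dependency by default

    def handler(self, event):
        """Stand-in for a Lambda function pushed down to the device:
        it executes whether or not the device is online."""
        result = {"device": "camera-01", "reading": event["value"] * 2}
        self.outbox.append(result)
        return result

    def sync(self, publish):
        """When connectivity returns, flush buffered results to the
        cloud via the supplied publish callable (e.g. an MQTT client).
        Returns the number of messages sent."""
        if not self.connected:
            return 0
        sent = 0
        while self.outbox:
            publish(json.dumps(self.outbox.popleft()))
            sent += 1
        return sent
```

In real Greengrass the buffering and MQTT publishing are handled by the Greengrass core software rather than your own code; the sketch just shows why local execution decouples the function from connectivity.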
Julien ran through a demo in AWS, with an actual DeepLens camera connected to his 4G hotspot. He connected to the camera via SSH; it runs Ubuntu. He plugged the camera into the projector's HDMI to check it was displaying, then switched back to his laptop to show the template projects in the AWS Console. He ran through the face detection template project, including walking through its Lambda function.
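The face detection Lambda follows a simple shape: grab a frame from the camera, run the model over it, and keep detections above a confidence threshold. The sketch below captures only that shape; `get_frame` and `run_model` are stand-ins, not the real DeepLens device API.

```python
# Hedged sketch of a DeepLens-style inference loop. The stubs below
# simulate the camera and model so the filtering logic is runnable.

def get_frame():
    """Stub camera: returns a dummy frame object."""
    return object()


def run_model(frame):
    """Stub model: pretends to return (label, confidence) detections."""
    return [("face", 0.92), ("chair", 0.40), ("face", 0.81)]


def detect_faces(threshold=0.5):
    """One pass of the loop: capture, infer, then keep only
    face detections at or above the confidence threshold."""
    frame = get_frame()
    detections = run_model(frame)
    return [(label, conf) for label, conf in detections
            if label == "face" and conf >= threshold]
```

On the device this loop runs continuously, and the template projects overlay the results on the camera's video output, which is what Julien showed on the projector.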
Julien then connected back to the DeepLens and, after struggling with a password (English vs French keyboard), eventually got in. He ran the video output of the object detection model, showing real-time identification of objects in the presentation room: people, chairs, lights and so on. The weak Wi-Fi signal meant a new facial recognition project couldn't be downloaded.
Julien explained that DeepLens includes an Intel model optimiser, which takes the trained MXNet model and converts it to run more efficiently on the Intel chip inside the camera. You can now trigger this optimisation from your own Lambda code, so it takes place when the inference Lambda function runs.
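Since the optimisation runs inside the inference Lambda, a sensible pattern is to run it once and cache the converted artefact. This is a sketch of that pattern only: `optimise_model` is a labelled stand-in for the real Intel optimiser, whose actual API was not covered in the talk.

```python
import os
import tempfile


def optimise_model(source_path, optimised_path):
    """Stand-in for the Intel model optimiser: in reality this converts
    a trained MXNet model into a form tuned for the Intel chip."""
    with open(optimised_path, "w") as f:
        f.write("optimised:" + os.path.basename(source_path))
    return optimised_path


def load_model(source_path, cache_dir):
    """Optimise on first run and reuse the cached artefact afterwards,
    so the conversion cost is paid once, when the inference Lambda
    first executes on the device."""
    optimised_path = os.path.join(
        cache_dir, os.path.basename(source_path) + ".opt")
    if not os.path.exists(optimised_path):
        optimise_model(source_path, optimised_path)
    with open(optimised_path) as f:
        return f.read()
```

The point of the cache check is that a Lambda on the device may be invoked many times, but the expensive model conversion should only ever happen on the first invocation.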
And with that, Builders Day was over – it had been a pretty epic day!