Project VILMA

Sometimes I get home and just wish I could say to my virtual machine in the cloud, “magico presto, turn on!” and have it on and ready for me to remote into. I had wanted to build something to make that happen, but time and procrastination happen to the best of us. Thankfully, there was an upcoming developer gathering that I used as the catalyst to actually build a system that would work, almost like magic.

So, last Sunday, the Port of Spain chapter of GDG (Google Developer Groups) held a developer event, #GDGPOS #DevFest. They reached out to the local developer community for interesting projects and I responded with a proposal to build something that would work in the way I described.

GDGPOS Presenters

My proposal got accepted and I spent a few weeks building out the idea. My whole solution involved using my Google mini to turn my virtual machine on or off.

To do that, I created a Google Action on the Google Actions console. I had played around with Actions before, but this would be different. I have been making most of my conversational agents using Microsoft’s Bot Framework, so a lot of the concepts were familiar to me, from Intents and Utterances to the use of webhooks. For this Action, I largely had to focus on just one Intent – the one that would hear a command for a VM state change and execute it. Overall, the system would look like this:

VILMA system diagram

Creating the action

Creating that custom Intent took me to Dialogflow, Google’s interactive tool for building conversational interfaces. There, I created a custom intent, ChangeVMState.

ChangeVMState would receive messages and figure out whether to turn the VM on or off. The messages could be in a range of formats like:

  • turn on/off
  • power on/off
  • shutdown/start up the vm

They would resolve to the ChangeVMState intent. All messages sent to ChangeVMState were then forwarded to my webhook. I deployed the webhook as a function in Azure.

The code for the functions is pretty straightforward. One function receives the request and queues it on an Azure Storage Queue. Azure Functions provides a really simple infrastructure for doing just that.

I mean, this is the whole method: 
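Since that method is essentially an HTTP trigger handing a value to a queue output binding, a minimal sketch of the shape could look like this (the queue name, the Dialogflow parameter name and the class names below are placeholders, not the actual VILMA code):

using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json.Linq;

public static class ChangeVMStateWebhook
{
    [FunctionName("ChangeVMState")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        [Queue("vm-state-requests")] IAsyncCollector<string> stateQueue, // illustrative queue name
        ILogger log)
    {
        // Pull the desired state ("on"/"off") out of the Dialogflow fulfillment payload.
        // Using "vmstate" as the parameter name is an assumption for this sketch.
        var payload = JObject.Parse(await new StreamReader(req.Body).ReadToEndAsync());
        var desiredState = (string)payload.SelectToken("queryResult.parameters.vmstate") ?? "off";

        // The "whole method" really is just setting the queue item to the desired state.
        await stateQueue.AddAsync(desiredState);
        log.LogInformation($"Queued a request to turn the VM {desiredState}.");

        return new OkObjectResult($"Working on turning the VM {desiredState}.");
    }
}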

The item being put on the queue – the desired VM state – is just a variable being set. 

Another function in Azure will then take up the values in the queue and will start or stop the VM based on state. Again, a pretty simple bit of code. 

I’m using the Azure Fluent Management SDK to start/stop the VM.
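As a rough illustration of that second function – a queue trigger plus the Fluent SDK – something along these lines would do the job (the queue name, auth file, resource group and VM name are all placeholders):

using Microsoft.Azure.Management.Fluent;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ProcessVMStateRequest
{
    [FunctionName("ProcessVMStateRequest")]
    public static void Run(
        [QueueTrigger("vm-state-requests")] string desiredState, // same illustrative queue as above
        ILogger log)
    {
        // Authenticate with a service principal auth file and use the default subscription.
        IAzure azure = Azure.Authenticate("my.azureauth").WithDefaultSubscription();

        // Look up the VM to control; names here are placeholders.
        var vm = azure.VirtualMachines.GetByResourceGroup("vilma-rg", "vilma-vm");

        if (desiredState == "on")
        {
            vm.Start();        // boot the VM
        }
        else
        {
            vm.Deallocate();   // stop the VM and release the compute so it stops billing
        }

        log.LogInformation($"VM has been turned {desiredState}.");
    }
}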

So, finally, after the VM is put into the desired state, an email is sent either saying the VM is off or that it’s on and an RDP file is included. Ideally, I wanted to have the Google Assistant I was using notify me when the VM got up and running, but I just couldn’t get push notifications working – which is why I ended up with email. 

Thus, I ended up with a Google Action that I eventually called VILMA Agent (at first, I was calling it Shelly-Ann). I could say to my Google Mini, “OK Google, tell VILMA Agent, turn on” and I’d get an email with an RDP file.

The code for the functions part of VILMA is up here on GitHub.


“Pacers”

In my running group I declared I was going to do this year’s UWI Half Marathon at 11:00-minute miles. My friend roundly berated me for aiming so slow – we’ve run together and he just knows I can do a bit better.

Nothing’s wrong with that time if that’s where your training and skill have taken you. But he was like, you do the thing at 9:00 last year and reach 11:00 this year?! LOL, he was right. But I had a bag of excuses: less time to train, distractions, tiredness.

The man said don’t come and talk to me if you run this thing at no 11.

I had a fairly safe race planned. Because truly, my training wasn’t where I wanted it this year (also, lol, this is usually the case). But I planned to play some nice chill music, run at 10:30 for the first half and then try and maintain an even 10 on the way back down.

I started off too quickly.

By the time I got to my first half-mile, with my chill music playing, Endomondo told me I was at 8:something a mile. 🙂 Not good – based on my safe outlook on the race. A little after the mile, I realized I was not going to play this according to plan at all.

This is just after mile 5. Me and the pacer became friends this day.

I ran with the pacer. The 9:00-mile pacer. I saw him coming up with his jolly, flag-bearing self just after the first mile; I did a quick self-diagnostic check, determined I felt good at that pace and just went with it.

Pacers feel like the best innovation to come to the UWI half in a long time! They aren’t there to win, not there to get a medal, just there to be a beacon, and not even a silent one. Dude was hailing out people he knew, encouraging his growing following and just being, literally, a living flag of hope.

So, I ran with the pacer all the way to the half-mark. He had to turn off for a bit and I did not sustain my 9:00. I ran too fast, I ran too slow, but I had enough gas and my new music kicked in at the right time. I listened to both recent hip hop albums from Congress Music Factory that seem to be perfect for running as well as spiritual sustenance.

But all that delight started to wane at mile 10. It became a familiar slog as the sun came up, my muscles’ tiredness became more vocal and essentially, I started to lose focus. My pacer buddy came back, like a good Pied Piper with his merry crew. But this time, I couldn’t answer his call.

By mile 11, I did get to that 11:00-mile pace, with a single goal: keep running. Or jogging. But not walking. I knew from experience that the walk would feel REALLY GOOD, but do a lot of damage to my momentum. So I kept running and by mile 12 got the surge I needed to finish the race.

I ended up with a really decent average pace of 9:17 a mile. Not a bad result.


PS:

At the end of the race in the runner’s tent, I was waiting for a massage. Next to me sat this “elderly” dude. He told me he’s been running for longer than I’ve been alive. His race time? 1:37:00.

 

Danger Zones

TL;DR

I’ve built a map of the location updates from the Ministry of Works and Transport of Trinidad and Tobago on flooding and which roads were/are impassable. You can view it here.

“Technical” details

That tweet above is kind of how I got the idea in my head to build out an example of the approach.

When I sat down to create a version of a good approach, I had all kinds of options in my mind. Should it be rendered on the client or server side? React or Angular? Should I use Google Maps, Leaflet & MapBox or something else? How would I generate the data? Should I try and parse some tweets? What’s the fastest way to get data? Who has the data?

Since I didn’t want to spend all evening in analysis paralysis, I just dove in and began pulling things together. I had recently set up a new dev environment, so my regular tools for some ideas weren’t restored yet. No node, npm or React was set up. So I started downloading packages, installers and tools.

And then I remembered Glitch! I literally paused mid environment setup and jumped into searching on Glitch. Glitch is like an online development environment that comes prepackaged with the resources you need to get up and running with minimal fuss. You do have to have a sense of what you want to build and what tech to use – which I did. A few searches later, I found a great starting point, something that already had the Leaflet stuff built in.

Having the base I wanted, I needed to get the content of these tweets represented as GeoJSON:

Again, numerous options, parsers to write and just ideas swirling around. But while spelunking online for stuff to use, I found geojson.io – a WYSIWYG for generating GeoJSON. I had to hand-code the data, switching between Google Maps, OpenStreetMap and Waze, but I just wanted an early result.
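For a sense of what that hand-coded output looks like, a single impassable point could be represented with a feature along these lines (the coordinates and properties here are invented for illustration):

{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "properties": { "status": "impassable", "source": "@mowtgovtt tweet" },
      "geometry": { "type": "Point", "coordinates": [-61.45, 10.58] }
    }
  ]
}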

And I got it: a map that presents the information that @mowtgovtt tweeted about the state of impassable regions in the country.

 

Cloud, fluently.

So, I really dig the Azure Fluent SDK. It feels incredibly intuitive. Once you’re familiar with the lay of the land in terms of Azure resources, following on from examples of using the Fluent SDK feels as easy as using LINQ to get data access queries done.

It looks like the team behind it is ensuring the SDK stays up to date with Azure resources as they are released. Prior to being introduced to Azure Fluently (my name, lol), I was trying to find a way to create Azure Function applications on demand.  One of my recent Stack Overflow questions was in that vein.

But then along came this SDK. Now, I could do something like this:


// Authenticated Fluent SDK entry point (GetAzure() wraps the auth details).
IAzure azure = GetAzure();

// Generate a unique, length-limited name for the new function app.
var newName = (fnNamePrefix + DateTime.Now.Ticks).Substring(0, 19);

// Find the existing storage account that will back the function app.
var storageAccount = azure.StorageAccounts.List()
    .First(x => x.Name.Equals(storageAccountName));

// Zip up the function's files (index.js and function.json) and upload the archive.
MemoryStream stream = CreateZip(indexJs, functionJson);
var functionUrlZip = UploadZip(storageAccount, newName, stream);
stream.Position = 0;

// Create the function app itself, pointed at the uploaded zip via WEBSITE_USE_ZIP.
var websiteApp = azure.AppServices.FunctionApps.Define(newName)
    .WithRegion("East US")
    .WithExistingResourceGroup(resourceGroup)
    .WithExistingStorageAccount(storageAccount)
    .WithAppSetting("WEBSITE_USE_ZIP", functionUrlZip)
    .Create();

This lets me programmatically create an archive of the bits for a function (the index.js and function.json above), upload it and then create the actual function app. Notice that the app is powered by that experimental feature of pointing a web app at a zip file (WEBSITE_USE_ZIP).

I could have used this creation step to instead get the function’s publish profile and then upload the files via FTP to the newly created app as well.
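As a rough sketch of that alternative (not code from the post), the Fluent SDK exposes the publishing profile on the created app, which carries the FTP endpoint and credentials:

// Grab the publishing profile of the function app created above.
var profile = websiteApp.GetPublishingProfile();

// The profile carries an FTP endpoint and credentials; any FTP client could then
// push index.js and function.json under site/wwwroot/<function-name>/ on that host.
Console.WriteLine($"FTP endpoint: {profile.FtpUrl}");
Console.WriteLine($"FTP username: {profile.FtpUsername}");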

This versatile way of engaging with Azure resources from a creation/management perspective is really compelling, and I’m looking forward to using it more in the future.

#TheFutureIsFluent

 

See a flood, tweet a flood

Brandon was a student of mine in 2016. He did the Cloud Technologies course as an elective in his GIS programme at UWI.

During the course, one of the assignments is to develop a proposal for a cloud service. The proposal should address service model, delivery model and deployment. It also needs to talk about how each of the 5 characteristics of cloud services would be delivered.

Brandon and his team proposed flood identification as a service. That is, it would grab user-generated content and use it to identify whether floods are happening in real time. He has since continued refining the proposal and is now testing it. He published this video to explain how it works:

I dig how he used a Twitter bot to receive the feedback as well. I hope his findings reveal a productive solution.

Good job, Brandon!

Hack for nutrition

Last weekend, I attended the Trinidad and Tobago leg of the WSIS’ Hack Against Hunger event.

I was talking with Dr. Bernard about a new Teleios Code Jam initiative and she let me know what was going on at the weekend.

So, I went on Saturday to hear what it was about and wondered if I’d have any time to build something simple.  The hackathon had a really nice premise:

Hack Against Hunger premise

Hackathons tend to be pressure cookers, so I wasn’t game to spend all night and day building something. Largely because my wife and child would not have been impressed, but I could have carved out some space to put an idea together.

“Carving out some space” really meant getting three hours of sleep while stumbling around datasets, doing the dishes and taking care of the baby. A good solution came together, though.

I tackled nutrition, using my own experiences with trying to find the best food for my family. Best, of course, being relative. One might think that means most expensive, when really it can mean most appropriate. For example, our pediatrician told us to lay off the flour-based spaghetti and dive into more ground provisions for our baby girl. That stuff can be pretty cheap in the local market.

Thus, I spent my time hacking together a virtual assistant that helps with finding both locally produced foods and their nutritional content. I called the bot Miss Mary, largely because of the old lady in the market whom I ask questions like “what’s this thing?” and “how do you know that pepper’s good?” (1. It was cassava yam, and 2. because she ate it raw.) I don’t know her name, but she reminds me of a shopkeeper in a place I used to live who was called Miss Mary.

Come presentation time, I didn’t have one prepared, so I put this together to help tell the story.

I wasn’t able to stay for the remaining presentations, but I was told they were really good. I’m looking forward to hearing more about what was built! Ultimately, the first prize went to Sterling & Keshav. For their troubles, they’ll be headed to Geneva later in March to compete once more.

All the best, guys! 🙂 #KeepHacking

PS: I’ll release a version of Miss Mary a bit later on; I was just excited to share the story! 🙂

One line made all projects better

For the past few years, the final project for the course COMP6905 has been a research write-up. This year was no different, but there was a key addition to the requirements:

Design a cloud service based on research being done on campus

Each proposal had to use current research or support research work being done. As an approach, it’s something we’ve explored over at Teleios Code Jam before, but with a bit less rigor. One year, we required teams to base their submissions on articles that appeared in the media. It produced a lot of solutions with disaster preparedness/flooding as the focus.

But this class, they went to town with this requirement. We saw proposals for cloud services focused on the Seismic Research Center, on diabetes research, on alternative energy and even on cocoa research optimization.

There was no requirement to involve the actual researchers in the proposals as their published findings would have been sufficient evidence for my needs. However, there are already a few researchers expressing interest in taking these proposals further.

One goal of teaching cloud is to produce a set of people who understand the technology and are willing to build cool stuff with it. I’m looking forward to seeing what comes of these proposals.