Lightning in a Hansard bottle

Some of the technology team that brings the Hansard online in Trinidad & Tobago

When we built the Hansard Speaks chatbot in 2017, I was super excited and told all my friends about it. One of them now works in IT at the Parliament, and he invited me to talk with the team about it.

At the brief talk, I spoke about the motivations for building the chatbot, how we thought it was a great way to win arguments about who said what in Parliament, and how much we liked the ease of bringing an automated conversational experience into that world.

I think the team at the Parliament does a great job. I’ve always liked that they were among the early movers in bringing Internet technology into governance. They’ve been online for a long time, they make copies of the Hansard available on their site and they stream proceedings. They’re also on Twitter and are pretty active.

We spoke about how much Hansard Speaks leverages cloud technology, and how, though the government is progressing, the policy on public cloud means they have to find ways to use on-prem tech to accomplish similar functionality. HS uses the Microsoft Bot Framework, Azure Media Services and Azure App Services. If they wanted to do all of that off the cloud, they could, but it would be a bit harder.

I’d love if they shared more about what they do, in terms of all the teams that go into making the Hansard come alive. There’s a real production team around ensuring those documents get generated and that the parliament team can handle requests that come down from MPs about who said what, when.

Since it’s been two years since we first built the chatbot, I described to them one key change we’d make if we were doing it again: we would use a tool like Video Indexer on the videos they create.

Video Indexer’s insights on a Parliament video from Trinidad and Tobago.

It would let us do more than simply get a transcript: we would be able to see who said what, how much each member contributed to a session, and the key topics that were discussed.
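For a rough sense of how that could work: assuming an insights document shaped loosely like Video Indexer's output (a speakers list, plus transcript lines tagged with a speaker id and timestamps; this is a simplified shape, not the exact schema), tallying talk time per member is straightforward:

```python
from datetime import timedelta

def parse_ts(ts: str) -> timedelta:
    # "H:MM:SS" -> timedelta
    h, m, s = ts.split(":")
    return timedelta(hours=int(h), minutes=int(m), seconds=float(s))

def talk_time(insights: dict) -> dict:
    """Tally seconds spoken per speaker from a Video Indexer-style
    insights document (field names assumed for illustration)."""
    names = {s["id"]: s["name"] for s in insights["speakers"]}
    totals = {name: 0.0 for name in names.values()}
    for line in insights["transcript"]:
        for inst in line["instances"]:
            dur = parse_ts(inst["end"]) - parse_ts(inst["start"])
            totals[names[line["speakerId"]]] += dur.total_seconds()
    return totals

sample = {
    "speakers": [{"id": 1, "name": "Speaker #1"}, {"id": 2, "name": "Speaker #2"}],
    "transcript": [
        {"speakerId": 1, "instances": [{"start": "0:00:00", "end": "0:00:30"}]},
        {"speakerId": 2, "instances": [{"start": "0:00:30", "end": "0:00:45"}]},
        {"speakerId": 1, "instances": [{"start": "0:00:45", "end": "0:01:00"}]},
    ],
}
print(talk_time(sample))  # {'Speaker #1': 45.0, 'Speaker #2': 15.0}
```

From totals like these, "who dominated the session" and "who barely spoke" fall out for free.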

So, it was great to speak with some of the guys behind the Hansard and share with them ideas on services they can leverage to make their offering even more compelling, insightful and relevant.


Learning to scale.

My friend Christopher shared this on Facebook. It’s a queue for CXC results.

Tomfoolery. That’s the word that bubbled into my mind when I saw this screenshot and understood what it was describing.

Admittedly, that was a bit harsh. I mean, someone built this; they took the time to craft a way to deal with the rush of users trying to get their examination results from the Caribbean eXamination Council (CXC). CXC has a classic seasonal scale problem.

During the year, the number of users hitting their servers is probably at a manageable level. Then, when results come out, they probably see something like a literal 10-fold increase in access requests. They’d have the real numbers, but I think that guess is reasonable.

I was curious about the traffic data on cxc.org, so I got this from similarweb.com.

Their solution may have been reasonable in 2012, when scaling on the web was more of a step-wise move in terms of resource management than the flexible thing it is today. Scaling may have meant buying new servers and software and building a wider team to manage it all.

But there are solutions today that don’t necessarily mean a whole new IT investment.

I really dig this chart from Forbes demonstrating the seasonality challenge.

So, how do you contend with seasonal IT resource demand without breaking the bank? Design your solution to leverage “The Cloud” when you need that extra burst.

“The Cloud” in quotes because if things were as easy as writing the line “leverage The Cloud”, we wouldn’t be this deep into a blog post about it. So, to get specific, here’s what I mean:

Plan the resources needed. In this case, it might be a solution that uses load balancing, where some of the load is handled on-prem and capacity from the cloud is used as needed. Alternatively, a whole new approach to sharing the resources at play might be worth investigating: keeping authentication local, but sending users to access results stored in the cloud via a serverless solution, is a great consideration.
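To make "burst to the cloud" concrete, here's a toy routing sketch (the capacities and numbers are illustrative, not CXC's): fill on-prem capacity first, and send only the overflow to cloud resources:

```python
def route(requests: int, on_prem_capacity: int) -> dict:
    """Split incoming load: fill on-prem first, burst the rest to cloud."""
    on_prem = min(requests, on_prem_capacity)
    cloud = requests - on_prem
    return {"on_prem": on_prem, "cloud": cloud}

# Off-season: everything fits on the existing servers.
print(route(800, 1000))     # {'on_prem': 800, 'cloud': 0}

# Results day: a ~10x spike, with the excess bursting to the cloud.
print(route(10_000, 1000))  # {'on_prem': 1000, 'cloud': 9000}
```

The point isn't the arithmetic; it's that the cloud half only costs you money during the spike, instead of owning 10x the servers year-round.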

I don’t want to spec out the entire solution in two paragraphs. I do want to say CXC can and should do better, since they serve the needs of much of the Caribbean and beyond.

One Marvelous Scene – Tony & Jarvis

My video editing skills are what you might call “Hello World” level. So, I’m not interested in bringing down the average, as it were, of the quality content over on the official “One Marvelous Scene” playlist on YouTube.

I dig what those storytellers did a lot. But as far as I can tell, even though there are 67 videos on that list, they missed a key one: the scene where Tony has returned home in Iron Man 1 and is building out the new suit, working with Jarvis.

Back when IM1 came out, I remember being delighted by the movie, just for the flow of it. As a young developer then, seeing a portrayal of a “good programmer” was refreshing. And he was getting things wrong: making measurement mistakes, pushing things out too quickly. He had to return to the drawing board a lot. It was messy. Just like writing code.

And while doing this, he was working along with Jarvis. An AI helper with “personality”. One that he could talk to in a normal way and move his work along.

In 2008, that still seemed fanciful as a developer experience, but even then I was somewhat seriously considering its potential, which is why I asked about voice commands for VS on StackOverflow 👇🏾

Jump forward to 2019: the Bot Framework is a thing, the StackOverflow bot is also a thing, and the Visual Studio team introduced IntelliCode at this year’s Build conference.

So, as a scene in a movie, Tony talking with Jarvis helps us understand Tony’s own creative process, but for me, it gave glimpses into the near future for how I could be building applications. And that’s marvelous.

A short note on value-based pricing

This is a short note because I’m not an economist and don’t pretend to be one. I think areas like pricing, value and cost determination are complex and should be given their just consideration. However, I recently saw a question on Caribbean Developers and wanted to share what insights I have on it.

When I first saw this, I knew immediately I wanted to say, “Aye Roger, lean away from thinking hourly rates”, but that felt a bit too curt.

It’s been my experience that freelance developers tend to think in terms of charging hourly rates and costing work based on that rate. The “better” or more experienced they get, the more the rate reflects their growth and maturity in the space. Then they learn about markups, based on their understanding of customer risk and all that good stuff. But I’ve come to appreciate that thinking in hourly rates for freelance work is a trap. The actual term for what I think is a better approach is value-based pricing.
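To illustrate the difference with made-up numbers (none of these rates or percentages are real guidance, just arithmetic):

```python
def hourly_price(hours: float, rate: float) -> float:
    """The hourly-rate trap: price is capped by the time you put in."""
    return hours * rate

def value_based_price(client_value: float, share: float = 0.1) -> float:
    """Price as a fraction of the value the client expects to derive.
    The 10% default here is purely illustrative, not a rule."""
    return client_value * share

# 40 hours of work at $50/hr, vs. a solution the client expects
# to save them $60,000 a year.
print(hourly_price(40, 50))       # 2000
print(value_based_price(60_000))  # 6000.0
```

Same work, very different number, because one price is anchored to your time and the other to the client's outcome.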

I found a great explanation of it on Quora:

Value-Based Pricing means presenting a price to the purchaser which is based on their perception of the value they will derive from the result being discussed.

More from David Winch, Pricing Coach, on Quora.

So, I think Roger needs to spend time understanding how to build a firm that uses value-based pricing. It’ll help much more in the long run.

After Roger posted his question, I saw a great point from @sehelburt on the matter too:

Making a map.

I like how WordPress’ default on these blog posts is “tell your story”. LOL.

Recently, I described wanting to use Azure App Insights for dependency tracking. I started by looking at documentation on it. I ended up using this, along with this video.

I started looking into the first place to drop in dependency tracking, which was where I create my Azure Container Instance. The tutorial’s code suggested putting this fragment in a finally block around the dependency call, like so:

From: https://docs.microsoft.com/en-us/azure/azure-monitor/app/api-custom-events-metrics#trackdependency

Then I found myself wondering: for the dependencies I was going to track, wouldn’t a C# attribute be a really good fit? Then I fell down a rabbit hole of when to create attributes.

And then I emerged out of that hole, attribute-less.

So, after I applied the guidance from the Application Insights documentation, I decorated my external operations with TrackDependency calls and ended up with this image:

Azure Application Insights application map for my solution

This does what I wanted, which is to visualize the components and dependencies of my solution, with information on success/failure and duration of the calls.

I’d like to do more with this.

Project VILMA

Sometimes, I get home and just wish I could say to my virtual machine in the cloud, “magico presto, turn on!” and it’s on and ready for me to remote into and do things. I wanted to build something to make that happen, but time and procrastination happen to many of us. Thankfully, there was an upcoming developer gathering that I used as the catalyst to actually build a system that would work, almost like magic.

So, last Sunday, the Port of Spain chapter of GDG (Google Developer Groups) held a developer event, #GDGPOS #DevFest. They reached out to the local developer community for interesting projects and I responded with a proposal to build something that would work in the way I described.

GDGPOS Presenters

My proposal got accepted and I spent a few weeks building out the idea. My whole solution involved using my Google Mini to turn my virtual machine on or off.

To do that, I created a Google Action on the Google Actions console. I had played around with Actions before, but this would be different. I have been making most of my conversational agents using Microsoft’s Bot Framework, so a lot of the concepts were familiar to me, from Intents to Utterances and even the use of webhooks. For this action, I largely had to focus on just one Intent – the one that would hear a command for a VM state change and execute it. Overall, the system would look like this:

VILMA-diagram

Creating the action

So, creating the custom Intent took me to Dialogflow, Google’s interactive tool for building conversational interfaces. There, I created the intent, ChangeVMState.

ChangeVMState would receive messages and figure out whether to turn the VM on or off. The messages could come in a range of formats, like:

  • turn on/off
  • power on/off
  • shutdown/start up the vm

They would all resolve to the ChangeVMState intent. All messages sent to ChangeVMState were then forwarded to my webhook, which I deployed as a function in Azure.
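For illustration, here's roughly how a webhook could pull the desired state out of a Dialogflow v2 request body. The `state` parameter name is my assumption for this sketch, not something Dialogflow mandates:

```python
from typing import Optional

def desired_vm_state(body: dict) -> Optional[str]:
    """Extract the requested VM state from a Dialogflow v2-style
    webhook body; returns None if it isn't a ChangeVMState request."""
    result = body.get("queryResult", {})
    if result.get("intent", {}).get("displayName") != "ChangeVMState":
        return None
    state = str(result.get("parameters", {}).get("state", "")).lower()
    return state if state in ("on", "off") else None

sample = {
    "queryResult": {
        "queryText": "turn on the vm",
        "parameters": {"state": "on"},
        "intent": {"displayName": "ChangeVMState"},
    }
}
print(desired_vm_state(sample))  # on
```

Everything else the webhook does hangs off that one extracted value.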

The code to execute the functions is pretty straightforward. One function receives the request and queues it on an Azure Storage Queue. Azure Functions provides a really simple infrastructure for doing just that.

I mean, this is the whole method: 

The item being put on the queue – the desired VM state – is just a variable being set. 
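The real method is a C# Azure Function with a queue output binding; as a rough stand-in, here's the same shape in plain Python, with an in-memory queue playing the part of the Azure Storage Queue (the names are mine):

```python
from queue import Queue

# Stand-in for the Azure Storage Queue. In the real function, the
# queue item is just an output binding variable being set.
state_queue: Queue = Queue()

def receive_request(desired_state: str) -> str:
    """HTTP-triggered half: validate and enqueue the desired VM state."""
    if desired_state not in ("on", "off"):
        raise ValueError(f"unknown state: {desired_state}")
    state_queue.put(desired_state)
    return f"queued request to turn VM {desired_state}"

print(receive_request("on"))  # queued request to turn VM on
```

That really is the whole job of the first function: take the request, drop the state on the queue, reply.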

Another function in Azure then takes up the values on the queue and starts or stops the VM based on the state. Again, a pretty simple bit of code.

I’m using the Azure Fluent Management SDK to start/stop a VM
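Again as a sketch rather than the actual code: the queue-triggered half, with a fake VM client standing in for the Fluent Management SDK's start/stop calls:

```python
class FakeVMClient:
    """Stand-in for the Azure Fluent Management SDK's VM operations."""
    def __init__(self):
        self.running = False

    def start(self):
        self.running = True

    def power_off(self):
        self.running = False

def handle_queue_item(vm: FakeVMClient, desired_state: str) -> str:
    """Queue-triggered half: put the VM into the requested state."""
    if desired_state == "on":
        vm.start()
    else:
        vm.power_off()
    return "running" if vm.running else "stopped"

vm = FakeVMClient()
print(handle_queue_item(vm, "on"))   # running
print(handle_queue_item(vm, "off"))  # stopped
```

Splitting receive and act across a queue like this also means a slow VM start never makes the webhook time out.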

So, finally, after the VM is put into the desired state, an email is sent saying either that the VM is off, or that it’s on, with an RDP file included. Ideally, I wanted the Google Assistant to notify me when the VM got up and running, but I just couldn’t get push notifications working – which is why I ended up with email.

Thus, I ended up with a Google Action that I eventually called VILMA Agent (at first, I was calling it Shelly-Ann). I could say to my Google Mini, “OK Google, tell VILMA Agent, turn on” and I’d get an email with an RDP file.

The code for the functions part of VILMA is up here on GitHub.

“Pacers”

In my running group, I declared I was going to do this year’s UWI Half Marathon at 11:00-minute miles. My friend roundly berated me for aiming so slow, since we’ve run together and he just knows I can do a bit better.

Nothing’s wrong with that time if that’s where your training and skill have taken you. But he was like, you do the thing at 9:00 last year and reach 11:00 this year?! LOL, he was right. But I had a bag of excuses: less time to train, distractions, tiredness.

The man said don’t come and talk to me if you run this thing at no 11.

I had a fairly safe race planned, because truly, my training wasn’t where I wanted it this year (also, lol, this is usually the case). The plan: play some nice chill music, run at 10:30 for the first half and then try to maintain an even 10:00 on the way back down.

I started off too quickly.

By the time I got to the first half-mile, with my chill music playing, Endomondo told me I was at 8:something a mile. 🙂 Not good, based on my safe outlook on the race. A little after the first mile, I realized I was not going to play this according to plan at all.

This is just after mile 5. Me and the pacer became friends this day.

I ran with the pacer. The 9:00-mile pacer. I saw him coming up with his jolly, flag-bearing self just after the first mile; I did a quick self-diagnostic, determined I felt good at that pace and just went with it.

Pacers feel like the best innovation at the UWI half in a long time! They aren’t there to win or to get a medal, just there to be a beacon, and not even a silent one. Dude was hailing out people he knew, encouraging his growing following and just being, literally, a living flag of hope.

So, I ran with the pacer all the way to the half-mark. Then he had to turn off for a bit, and I did not sustain my 9:00. I ran too fast, I ran too slow, but I had enough gas and my new music kicked in at the right time. I listened to both recent hip hop albums from Congress Music Factory, which seem to be perfect for running as well as spiritual sustenance.

But all that delight started to wane at mile 10. It became a familiar slog as the sun came up, my muscles’ tiredness became more vocal and, essentially, I started to lose focus. My pacer buddy came back, like a good Pied Piper with his merry crew, but this time I couldn’t answer his call.

By mile 11, I did get to that 11:00-mile pace, with a single goal: keep running. Or jogging. But not walking. I knew from experience that walking would feel REALLY GOOD, but do a lot of damage to my momentum. So I kept running, and by mile 12 I got the surge I needed to finish the race.

I ended up with a really decent average pace of 9:17 a mile. Not a bad result.


PS:

At the end of the race, in the runners’ tent, I was waiting for a massage. Next to me sat this “elderly” dude. He told me he’s been running for longer than I’ve been alive. His race time? 1:37:00.