Advocacy Cloud

Disaster Recovery

We watched in horror as our neighbour’s SUV was pushed down the road by the flood. Having seen this kind of thing on screens before, witnessing it so close was a different matter entirely. There’s an instinct some people have to stick their hand into a burning flame to save a grain of rice. It’s primal, almost automatic, and unfortunately, can result in loss of life.

When I saw the SUV, stuck on the railing, I grabbed the rope. And my wife was like, where this man going with that? It wasn’t even the right kind of rescue rope. Just something I had at home for God-knows-what-reason. Those waters were not to be trifled with. Good sense prevailed and we waited for things to subside.

Sometimes, when it rains too much, too quickly, we lose power in the lines. I half-jokingly told my colleagues in a work meeting that I might drop off the call… moments later, current gone, current came and the SUV was rolling on down.

Thankfully, there was no one in the vehicle.


In my conversations with IT folk over the years, I learnt about setting the Recovery Time Objective (RTO) in disaster recovery planning. It’s a nifty concept. The RTO is the target time within which you go from a state of disaster back to normalcy. Depending on what system is down, it can require several subsystems to be restored before the main system is back up and recovery has been achieved. You can read up more about RTO and its sibling concerns, RPO and RTA, here.
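As a rough sketch of the distinction between the objective (RTO) and the actual recovery time (RTA) – the figures below are hypothetical, loosely inspired by the outage described here – the comparison is just simple arithmetic on timestamps:

```python
from datetime import datetime, timedelta

# Hypothetical figures for illustration only.
rto = timedelta(hours=6)  # Recovery Time Objective: the published target

disaster_start = datetime(2021, 1, 5, 7, 0)     # tree falls, power goes out
service_restored = datetime(2021, 1, 5, 12, 0)  # lights come back on

# Recovery Time Actual: how long restoration really took
rta = service_restored - disaster_start
print(f"RTA: {rta}, within RTO: {rta <= rto}")
```

If utilities published an RTO, residents could make exactly this comparison after every event.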

In this given disaster, several “systems” went down.

The first to be restored was the road. As soon as the waters subsided, the guys from the hotel up the road, Courtenay Rooks et al, turned up with electric saws, a pickup and other implements. That log was removed in about an hour. People could now safely walk up to their homes, albeit in the mud.

By the time the fire services had arrived, it was to assist with hooking up the SUV to Rooks’ pickup so it could be towed out of the way.

Road now free, the fire tender proceeded up the road to assist others.

Normally when there’s a power outage, I call the electricity company, just to make sure that we’re going to be dealt with. Once my call connected, I was greeted with an automated “We’re aware of issues in St. Ann’s …”

I had no idea how long we would be without service, and it was getting dark. In the twilight, I saw the characteristic yellow of our power company’s service vehicle making its way through the mud. That was about two hours into the outage. It was a good sign. But with fallen poles, I expected it would be a wait. About 5 hours in, the lights came on.

I wonder if, like the community response, T&TEC has a specific RTO for events like this. Perhaps 6 hours, maybe a day. Maybe it varies based on access, time of day, or type of disaster. It would be good to know.

So, that’s two key subsystems – transport and power – back up in about 6 hours. There’s one other that most of us in the community would need to get normal again. The Internet. Thankfully, it would seem that I only had intermittent LTE outages throughout the period. But land-based Internet connectivity was down. During a lockdown that means we literally can’t work.

Of the two providers I’m familiar with here, Flow seemed to be back up in the morning. But Digicel – my provider. Gosh. I wouldn’t see bytes across my wire until the afternoon. That is, almost 24 hours later.

We need published RTOs for disaster recovery by utility service providers. Figures that they can be held accountable for. Now, more than ever, the mix of services we use at home are critical, as we’re still living under a pandemic, with several limitations.

I know there is a Regulated Industries Commission in Trinidad and Tobago. I think this should be something they treat with, for all of our benefit.

Cloud virtual reality

Along came a squid

Finalists in a VR challenge in Trinidad and Tobago

Explore Tobago – Underwater.

That’s the whole idea. Last year, marine researchers out of Trinidad and Tobago produced some amazing imagery capturing numerous locations off the coast of Tobago.

A slice of life from the Maritime Ocean Collection

Their project is called the Maritime Ocean Collection and it features many 360-degree images. So with the right device, you could look all around in a given image and get a decent appreciation of a particular spot.

As a scuba diver, I was enrapt. These images came out right after I had a really good dive that I couldn’t properly record. My camera gave out on us and we were super disappointed. The images let me re-live those recent experiences, especially as they were still very fresh in my mind. And they showed me how much more there was to go.

Literally a month after I saw the Collection, the Caribbean Industrial Research Institute (CARIRI) announced a VR competition.

My ideas as a developer, experiences as a diver and curiosity about the work of those researchers gave me that push to participate in CARIRI’s competition.

The result was Explore Tobago – Underwater – a prototype that lets you do just that. It’s web-based, can be used with something as simple as a Google Cardboard and uses images from the Collection. The core goal is the idea of “walking around” underwater, clicking on an interesting object to learn more, and getting even a small sense of that world.

Explore Tobago – Underwater. Proof of concept.

This VR project made it all the way to the finals of the CARIRI competition. The finals. We didn’t win. I was legit sour for a whole minute.

But my team had decided to collaborate with the Collection’s researchers to build this out regardless of the result. The value of the idea as a tool for education, exploration and just a very cool way of seeing our natural resources was much greater than the estimation of a competition’s judges.

As the developers and researchers who met because of the competition started to talk and explore collaboration to make it a reality, Microsoft Ignite dropped an amazing bomb.

The Squid, in VR at Microsoft Ignite.

The explanation for that squid starts at about 71 minutes into the video below. Researchers Edie Widder and Vincent Pieribone demonstrated mixed reality solutions focused on underwater exploration.

I mean. My jaw dropped. It was so cool. It was also a great point of validation. Watching them talk about that kind of inspiration, and about the way VR can be a doorway to education and excitement, I heard the same beats I flowed with when talking about Explore Tobago – Underwater.

There’s something the government representative said in their remarks in the first video above: that the VR solutions proposed can stand up with any in the world. As I wrote, we’re exploring how to make the experimental version real. It’s a tough journey, but we can already see that making it both connects us to a global movement and demonstrates to the world the beauty of our home.

Cloud TrinidadAndTobago

If a tree falls in the forest…

At about 5:00 am, the fans stopped spinning. And we knew there was a power outage. We rolled back to sleep in the embers of the night and thought, “oh well, they’ll sort themselves out”.

We were jolted out of sleep two hours later, by the loud noise of a crash down the road.

A massive tree had fallen. It made the electricity company seem far more prescient than I had ever given it credit for.

The tree that collapsed pulled down wires from two poles, caused one of them to fold over at an acute angle and pushed cables into the nearby river.

Early morning, early January disaster.

By the time I walked down to check out what was going on, with only my phone in hand, the community response was well underway.

The community seemed battle-hardened by these events. My wide-eyed, city-boy confusion melted away. A man in a van turned up with not one, not two, but three chainsaws. Others turned up with rope, and van man sent for gasoline.

The army was on the scene relatively quickly too. They closed the road and essentially kept people who weren’t helpful at a useful distance. Me? They kept me away.

The men of the neighbourhood started cutting, and when the fire services arrived, with their coordination and support, the tree was eventually moved aside.

Cars could pass once again, though of course, slowly. By the time the electricity company arrived, the road was clear enough to let them begin the repair process.

The situation reminded me about the need for status updates from utilities. There’s clearly a chain of events needed here. The community response was an amazing, welcome first step. But it seemed like a proactive neighborhood. The baton was passed to the fire services, who made way for the team from the electricity company.

Who would tell the other service providers? I didn’t see any communication utilities on the scene. Were they aware? Would they spring into action like the men with the chainsaws? This is doubtful.

Also, my family and I temporarily decamped to a place to get power, Internet and some chill. When should we go back? Again, it would be great to be able to check something like “” to find out.

For now, I’d actually settle for an SMS or WhatsApp from the providers. To date, we’ve gotten none. It seems like the best response will remain that of individuals and neighbours, who proactively set up their own systems, limited as they are, until better can be done.

Cloud Tracks

Save (your data from) Endomondo Month!


I hereby dub December, 2020, “Save your data from Endomondo” month. Why?

Endomondo’s retiring from the game.

So, given this state of affairs, it would be wise to ensure your data on the Endomondo platform is exported somewhere. I made a request via their site to get all 789 of my workouts, and a few days later, I got an archive of JSON documents organized into a Workouts/ folder.

I wanted to do some analysis on my workout data, so I created a really simple ingestion tool that takes the data from the json documents in Workouts/ and inserts them into a SQL Server database.

The tool can be found in this repo.

The key thing about this tool is that I had to fiddle with Endomondo’s JSON output to get it to play nice with my approach to serialization.
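To give a flavour of that fiddling (my actual tool is the C# in the repo above; this is just an illustrative Python sketch, with a made-up miniature document, under the assumption that the export represents each workout as an array of single-key objects rather than one flat object):

```python
import json

# A made-up miniature of a workout document: an array of
# single-key objects, which most serializers dislike.
raw = """
[
  {"sport": "RUNNING"},
  {"duration_s": 1800.0},
  {"distance_km": 5.2}
]
"""

def flatten(doc):
    """Merge a list of single-key dicts into one flat dict."""
    flat = {}
    for item in doc:
        flat.update(item)
    return flat

workout = flatten(json.loads(raw))
print(workout)  # {'sport': 'RUNNING', 'duration_s': 1800.0, 'distance_km': 5.2}
```

Once the document is one flat object per workout, mapping it onto a database row is straightforward.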

I’m not super-proud of it, because it could be very finicky, but it got the job done for my purposes. I deliberately rejected pulling in the available lat-lon data from the runs, because I wasn’t interested in it for the moment, but a slight modification to the approach I’ve taken will accommodate that.

So, I’m glad the data is ingestible now, and I hope to do some cool stuff with it soon.

Advocacy Cloud Collaboration

Back to the Sky: Processing Satellite Data Using Cloud Computing

From time to time, I work with researchers on projects outside of the day-to-day, forms-over-databases stuff. Mind you, I like a nice form over a cool data repository as much as the next guy, but it’s definitely cool to stretch your arms and do something more.

So, when Dr. Ogundipe approached me about cloudifying her satellite data processing, I had to do a lot of research. She had a processing pipeline that featured ingesting satellite images, doing some data cleanup, then analyzing those results using machine learning. Everything ran on local machines in her lab, and she knew she would run into scaling problems.

Early on, I decided to use a containerized approach for some of the scripting. The Python scripts were meant to run on Windows, but I had an easier go at the time getting Linux containers up and running, so I went with that. Once the container was in good order, I stored the image in the Azure Container Registry and then fired it up using an Azure Container Instance.

Like a good story, I had started in the middle – with the data processing. I didn’t know how I would actually get non-test data into the container. Eventually, I settled on using Azure Files. Dr. Ogundipe would upload the satellite images via a network drive mapped to storage in the cloud. Since I got to have some fun with the fluent SDK in Azure a while back, I used it to build an orchestrator of sorts.

Once the orchestrator had run, it would have fed the satellite images into the container. Output from the container was used to run models stored in Azure ML. Instead of detailing all the steps, this helpful diagram explains the process well:

Super simple.

No, not that diagram.

The various cloud resources used to process satellite data in Azure.

So, I shared some of this process at a seminar Dr. Ogundipe held to talk about the work she does, and how her company, Global Geo-Intelligence Solutions Ltd, uses a pipeline like this to detect locust movement in Kenya, assess the impact of natural disasters, and support a host of other applications of the data available from satellite images.


Jumping over hurdles to get to insights using Azure

On the way to getting some data into an Azure DB, I explored a strategy for using Entity Framework that was such a cool timesaver, I thought I’d share it for #GlobalAzureBootcamp.

I could have done this years ago. It’s just that sometimes a task feels so complex, daunting or mind-numbingly boring that you make do with alternatives until you just have to bite the bullet.

We work in SMPP at Teleios. It’s one of the ways you can connect to mobile carriers and exchange messages.  From time to time, we need to analyze the SMPP traffic we’re sending/receiving and typically, we use WireShark for this. That’s as easy as setting it up to listen on a port and then filtering for the protocol we care about. Instead of actively monitoring with WireShark, we may at times use dumpcap to get chunks of files with network traffic for analysis later on.

What we found was that analyzing file by file was tedious; we wanted to analyze a collection of files at once. We’d known of cloud-based capture analysis solutions for a while, but they tended to focus on TCP and maybe HTTP. Our solution needed to be SMPP-based. So, we decided to build a rudimentary SMPP database to help with packet analysis.

That’s where the mind-numbingly boring work came in. In the SMPP 3.4 spec, a given SMPP PDU can contain 150 fields. The need for analysis en masse was so great that we had to construct that database. But this is where the ease of using modern tools jumped in.

I got a list of the SMPP fields from the WireShark site.  In the past, I would have then gone about crafting the SQL that represented those fields as columns in a table. But now, in the age of ORM, I made a class in C#. And if you’re following along from the top, I created a project in Visual Studio, turned on Entity Framework migrations and added a DataContext. From there, it was plain sailing, as I just needed to introduce my new class to my DataContext and push that migration to my db.
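As a rough illustration of why this beats hand-crafting SQL (sketched here in Python with SQLite rather than the C#/Entity Framework setup described above, and with only a handful of field names standing in for the full list of about 150), generating a table from a field list can be automated entirely:

```python
import sqlite3

# A handful of SMPP PDU field names standing in for the full
# list of roughly 150 taken from the Wireshark site.
fields = ["command_id", "command_status", "sequence_number",
          "source_addr", "destination_addr", "short_message"]

# Build the DDL from the list instead of typing 150 column definitions.
columns = ", ".join(f"{name} TEXT" for name in fields)
ddl = f"CREATE TABLE smpp_pdu ({columns})"

conn = sqlite3.connect(":memory:")
conn.execute(ddl)

# Confirm the table exists with the expected columns.
cols = [row[1] for row in conn.execute("PRAGMA table_info(smpp_pdu)")]
print(cols)
```

Entity Framework migrations do the same translation, from a class with 150 properties to a table with 150 columns, without anyone writing the DDL by hand.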

It probably took me 30 minutes to go from the list of 150 fields on WireShark to being able to create the database with the necessary tables. Now, where does Azure come into all of this?

Each capture file we collect could contain thousands of messages. So, in my local development environment, when I first tried to ingest all of those into my database, my progress bar told me I’d be waiting a few days. With Azure, I rapidly provisioned a database and had its structure in place with one gem of a method call.


That is, from the Entity Framework DataContext class, I called the Database.Migrate() method and any newly set up db would have my latest SMPP table. From there, I provisioned my first 400 DTU Azure SQL database and saw my progress bar for ingestion drop from days to hours.

With a database that size, queries over thousands of rows went by reasonably fast and gave us confidence that we’re on the right path.

We’re also working on a pipeline that automates ingestion of new capture files, very similar to what I did last year in my Azure Container Instances project.

So, for #GlobalAzureBootcamp 2019, I’m glad that we were able to step over the hurdle of tedium into richer insights in our message processing.


Conquering complexity with a map

Last year, I worked with a researcher to develop a really cool, complex Azure solution to augment her workflow with some data. She needed to ingest a large volume of data, clean it up, run some AI on that and then present it. Typical data science activities that she wanted to run in the cloud.

I implemented the flow using several components including Azure Container Instances, Event Grid, Azure ML, Cosmos DB and Azure Functions. My daily drive at work doesn’t necessarily let me play in all those spaces at once, so I felt really glad to see all of those pieces work together.

Deploying took a bit more work as I wanted to make that as straightforward as possible. Thus, I used the Azure Fluent SDK that I was fanboying about across a few posts in 2018.

After everything was deployed though, I found visibility into the system was a bit of a stretch. My researcher colleague wanted to easily know where things were at in a given iteration of the process. I didn’t have a lot of solutions for that, so it was mostly email alerts.

That is, until I learnt about Azure Application Map from two of my colleagues at Teleios – Ikechi, in Ops, and Anand, in Engineering.

It’s a part of Application Insights and lets you track dependencies between the various services in an Azure solution. So, just out of the box, you can view the success of calls between websites, web services, databases and other components. Going further, you can even add other components and dependencies to your app. That got me thinking: maybe I can use Azure Application Map to display the various components of the solution and track issues in a visual, at-a-glance way?

I’m going to check it out.

Advocacy Cloud

Funky Azure Functions

Let’s talk about watering plants.

When I was younger, in my family, I was assigned the task of watering the flowering plants around the house. Thinking back on it now, there were easily 50 plants of all shapes and sizes. So, I would have to shuffle around the yard, bucket in hand, dipping and watering. Some plants would get two dips, others one. I couldn’t use the hose, because that might damage the roots of the younger plants. I hated it.

Ever the creative, I used to come up with outlandish ideas to solve the predicament. Sadly, I never implemented any of them. Thus, I was left to water these plants by hand.

Last week, for Caribbean Developer Week, I came up with a demo, featuring Azure Functions, that is the closest I have ever come to a solution for my plant-watering needs back then.

I built three Azure Functions:

  1. Setup Waterer
  2. GuidEnqueuer
  3. Plant Waterer

Setup Waterer actually created more Azure Functions. Those would be Timer functions, each potentially able to run on its own schedule.

GuidEnqueuer (alas, poorly named, but good at pretending to be a plant food source) would receive an HTTP POST and enqueue it. Plant Waterer would pick this up and display it on a console. No actual plants benefited from this demo.
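Stripping away the Azure specifics, the hand-off between GuidEnqueuer and Plant Waterer can be simulated in a few lines of plain Python (a local sketch only: the payload is invented, and there are no real function bindings or plants involved):

```python
import queue
import uuid

# Stands in for the Azure storage queue between the two functions.
plant_food = queue.Queue()

def guid_enqueuer(payload):
    """Pretend HTTP-triggered function: receive a post and enqueue it."""
    message_id = str(uuid.uuid4())
    plant_food.put((message_id, payload))
    return message_id

def plant_waterer():
    """Pretend queue-triggered function: pick up a message and 'water'."""
    message_id, payload = plant_food.get()
    print(f"Watering with {payload} (message {message_id})")
    return payload

guid_enqueuer("two dips for the ixora")
watered = plant_waterer()
```

In the real demo, the queue is Azure Storage and the triggers wire the two functions together; the flow of messages is the same.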

As I gushed previously, I created the Setup Waterer function on top of the Azure Fluent SDK and it worked fine. Functions making functions. That’s what I wanted to show really, and things worked well.

The code is available on my repo here.

Cloud teaching Uncategorized

Provisioning some test storage accounts for class

I wanted to create a few storage accounts for students in my class to complete an assignment featuring Event Sourcing and Materialized Views.

So, here’s what I did.

Download/install the latest Azure command line interface (CLI).
(While doing this, I realized I could have just used the Cloud Shell. I soldiered on with the download.)

Create a resource group to contain the accounts we’d need.

#Prior to doing this, ensure that the user is logged in:
# 'az login'
#Then, if you have multiple subscriptions attached to the account, select the appropriate one using:
# 'az account set --subscription <name or id>'
#command below (a location is required):
az group create --name COMP6905A2Storage --location eastus #name I used

Create the accounts and output the storage account keys
The command to make a single storage account is pretty straightforward:

#ensure logged in to azure
#ensure default subscription is the desired one
#test storage account in the test resource group
az storage account create --name comp69052017a2test \
  --resource-group COMP6905A2Storage \
  --location eastus --sku Standard_LRS \
  --encryption blob

But I wanted to also output the keys and display them on a single line. The command to get the keys after the account is created is this:

az storage account keys list --account-name comp69052017a2test --resource-group COMP6905A2Storage

So, I used the jq program in bash to parse the json result and display both keys on a line. Thus, I created a script that would create the accounts and then output their storage account keys.
This is the script that produced the accounts and keys:

for number in {1..20}
do
  #account name assumed to follow the single-account example above
  account="comp69052017a2test$number"
  az storage account create --name "$account" --resource-group COMP6905A2Storage --location eastus --sku Standard_LRS --encryption blob | jq ".name"
  az storage account keys list --account-name "$account" --resource-group COMP6905A2Storage | jq '.[].value' | tr -s '\n' ','
done

Overall, the longest part of the exercise was dealing with the way the files were being saved in Windows versus how they were being read by bash. But the accounts were created, and the class can get on with assignment 2.

Cloud teaching

Exploring the differences between SaaS, PaaS and IaaS

In Cloud Technologies class today, we used both the course outline and the notes from MSFTImagine’s Github repo to talk through the differences in service offering.

I used the canonical service model responsibility chart to start the conversation off.

Service Model Division of Responsibility, via MSFTImagine on Github.

It’s fairly straightforward to talk through these divisions, of course. I often use the chart to drive home the NIST 2011 definition of cloud services, with emphasis on the service delivery models.

In today’s presentation, one of the things that jumped out at me was the slide that provided a distinction between SaaS Cloud Storage and IaaS.

SaaS or IaaS, via MSFTImagine on Github.

Finally, talking about the ever-versatile Salesforce and how its PaaS solution works reminded me of the Online Accommodation Student Information System (OASIS 🙂 ) that I had built when I was in undergrad.

I’d built OASIS as a commission for the Office of Student Advisory Services. It was a tool to help off-campus students more easily find accommodation. Prior to OASIS, all the information was in a notebook in an office. It was built before I learnt about the utility-based computing of the cloud. I’m thinking about using it as the basis of an exploration of the architectural changes needed to move an old service to the cloud.

Hopefully, I’ll be able to revisit it when we touch on Cloud Design Patterns.