Danger Zones


I’ve built a map of the location updates from the Ministry of Works and Transport of Trinidad and Tobago, based on flooding reports and which roads were/are impassable. You can view it here.

“Technical” details

That tweet above is kind of how I got the idea in my head to build out an example of the approach.

When I sat down to create a version of a good approach, I had all kinds of options in mind. Should it be rendered on the client or the server? React or Angular? Should I use Google Maps, Leaflet & Mapbox, or something else? How would I generate the data? Should I try to parse some tweets? What’s the fastest way to get data? Who has the data?

Since I didn’t want to spend all evening in analysis paralysis, I just dove in and began pulling things together. I had recently set up a new dev environment, so my regular tools for some ideas weren’t restored yet. No node, npm or React was set up. So I started downloading packages, installers and tools.

And then I remembered Glitch! I literally paused mid-environment-setup and jumped into searching on Glitch. Glitch is an online development environment that comes prepackaged with the resources you need to get up and running with minimal fuss. Now, you do have to have a sense of what you want to build and what tech to use, which I did. A few searches later, I found a great starting point: something that already had the Leaflet stuff built in.

Having the base I wanted, I needed to get the content of these tweets represented as GeoJSON:

Again, there were numerous options, parsers to write and ideas swirling around. But while spelunking online for stuff to use, I found geojson.io – a WYSIWYG editor for generating GeoJSON. I had to hand-code the features, switching between Google Maps, OpenStreetMap and Waze, but I just wanted an early result.
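
To make that concrete, here’s a minimal sketch of the kind of GeoJSON feature I was hand-coding. The coordinates, road name and "status" property are illustrative, not the real tweet data:

```javascript
// A minimal GeoJSON FeatureCollection describing one impassable road segment.
// All values here are made up for illustration.
const impassableRoads = {
  type: "FeatureCollection",
  features: [
    {
      type: "Feature",
      properties: {
        name: "Example road segment", // hypothetical name
        status: "impassable"          // illustrative status flag
      },
      geometry: {
        type: "LineString",
        // Note: GeoJSON positions are [longitude, latitude]
        coordinates: [
          [-61.40, 10.60],
          [-61.39, 10.61]
        ]
      }
    }
  ]
};
```

A structure like this drops straight onto a Leaflet map as a layer, which is exactly why having the Leaflet base already wired up saved so much time.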

And I got it: a map that presents the information that @mowtgovtt tweeted about the state of impassable regions in the country.



Cloud, fluently.

So, I really dig the Azure Fluent SDK. It feels incredibly intuitive. Once you’re familiar with the lay of the land in terms of resources in Azure, following on from examples of using the Fluent SDK looks as easy as using LINQ to get data-access queries done.

It looks like the team behind it is ensuring the SDK stays up to date with Azure resources as they are released. Prior to being introduced to Azure Fluently (my name, lol), I was trying to find a way to create Azure Function applications on demand.  One of my recent Stack Overflow questions was in that vein.

But then along came this SDK. Now, I could do something like this:

IAzure azure = GetAzure();

// Unique-ish name: prefix plus timestamp ticks, trimmed to a legal length
var newName = (fnNamePrefix + DateTime.Now.Ticks).Substring(0, 19);

var storageAccount = azure.StorageAccounts.List()
    .First(x => x.Name.Equals(storageAccountName));

MemoryStream stream = CreateZip(indexJs, functionJson);

var functionUrlZip = UploadZip(storageAccount, newName, stream);
stream.Position = 0;

var websiteApp = azure.AppServices.FunctionApps
    .Define(newName)
    .WithRegion("East US")
    .WithNewResourceGroup(newName)
    .WithExistingStorageAccount(storageAccount)
    .WithAppSetting("WEBSITE_USE_ZIP", functionUrlZip)
    .Create();

Which lets me programmatically create an archive of the bits for a function (its index.js and function.json), upload it and then create the actual function. Notice that this function is powered by an experimental feature: pointing your web app at a zip file via the WEBSITE_USE_ZIP app setting.
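
For context, the `indexJs` and `functionJson` bits being zipped up are just the two files a Node.js Azure Function needs. A minimal sketch, under the classic context/req Node.js programming model, with a hypothetical handler body:

```javascript
// function.json — declares an anonymous HTTP trigger plus an HTTP output binding.
const functionJson = JSON.stringify({
  bindings: [
    { type: "httpTrigger", direction: "in", name: "req", authLevel: "anonymous" },
    { type: "http", direction: "out", name: "res" }
  ]
});

// index.js — the handler itself. The body is illustrative;
// a real function would do actual work before signalling completion.
const handler = function (context, req) {
  context.res = { status: 200, body: "Hello from a generated function!" };
  context.done();
};
```

Zip those two files into a folder named after the function, upload the archive, and the app setting above does the rest.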

I could have used this creation step to instead get the function’s publish profile and then upload the files via FTP to the newly created app as well.

This versatile way of engaging with Azure resources from a creational/management perspective is really compelling, and I’m looking forward to using it more in the future.



See a flood, tweet a flood

Brandon was a student of mine in 2016. He did the Cloud Technologies course as an elective in his GIS programme at UWI.

During the course, one of the assignments is to develop a proposal for a cloud service. The proposal should address service model, delivery model and deployment. It also needs to talk about how each of the 5 characteristics of cloud services would be delivered.

Brandon and his team proposed flood identification as a service.  That is, it would grab user generated content and use that to identify if floods are happening in real time. After the proposal, he continued refining the proposal and is now testing it. He published this video to explain how it works:

I dig how he used a Twitter bot to receive the feedback as well. I hope his findings reveal a productive solution.

Good job, Brandon!

Hack for nutrition

Last weekend, I attended the Trinidad and Tobago leg of the WSIS’ Hack Against Hunger event.

I was talking with Dr. Bernard about a new Teleios Code Jam initiative and she let me know what was going on at the weekend.

So, I went on Saturday to hear what it was about and wondered if I’d have any time to build something simple.  The hackathon had a really nice premise:


Hackathons tend to be pressure cookers, so I wasn’t game to spend all night and day building something. Largely because my wife and child would not have been impressed, but I could have carved out some space to put an idea together.

“Carving out some space” really meant getting three hours of sleep while stumbling around datasets, doing the dishes and taking care of the baby. A good solution came together, though.

I tackled nutrition, using my own experiences with trying to find the best food for my family. Best, of course, being relative. One might think that means most expensive, when really, it can mean most appropriate. For example, our pediatrician told us to lay off the flour-based spaghetti and dive into more ground provisions for our baby girl. That stuff can be pretty cheap in the local market.

Thus, I spent my time hacking together a virtual assistant that helps with finding both locally produced foods and their nutritional content. I called the bot Miss Mary, largely because of the old lady in the market whom I ask questions like “what’s this thing?” and “how do you know that pepper’s good?” (1. It was cassava yam and 2. Because she ate it raw.) I don’t know her name, but she reminds me of a shopkeeper in a place I used to live, who was called Miss Mary.

Come presentation time, I didn’t have one, so I put this together to help tell the story.

I wasn’t able to stay for the remaining presentations, but I was told they were really good. I’m looking forward to hearing more about what was built! Ultimately, first prize went to Sterling & Keshav. For their troubles, they’ll be headed to Geneva later in March to compete once more.

All the best, guys! 🙂 #KeepHacking

PS: I’ll release a version of Miss Mary a bit later on, I was excited to share the story! 🙂

One line made all projects better

For the past few years, the final project for the course, COMP6905 has been a research write-up.  This year it was no different, but there was a key addition to the requirements:

Design a cloud service based on research being done on campus

Each proposal had to use current research or support research work being done. As an approach, it’s something we explored over at Teleios Code Jam before, but with a bit less rigor. One year, we required teams to base their submissions on articles that appeared in the media. It produced a lot of solutions with disaster preparedness/flooding as the focus.

But this class went to town with the requirement. We saw proposals for cloud services focused on the Seismic Research Centre, on diabetes research, on alternative energy and even on cocoa research optimization.

There was no requirement to involve the actual researchers in the proposals as their published findings would have been sufficient evidence for my needs. However, there are already a few researchers expressing interest in taking these proposals further.

One goal of teaching cloud is to produce a set of people who understand the technology and are willing to build cool stuff with it. I’m looking forward to seeing what comes of these proposals.

Who are some of the regional cloud service providers?

Find three regional [From Belize to Guyana] cloud service providers. One must be IaaS [must  be unique]. Detail their service offering. Other two must be unique across class. Groups of two.

This was essentially the meat of the first class assignment for this year’s Cloud Technology course.

I’ve got ten pairs of students sending in reports. These were the providers they focused on:

| IaaS Service | Other Service 1 | Other Service 2 |
| --- | --- | --- |
| Cloud Carib | Teleios Systems (PaaS) | Grupo Eximo |
| T-Tech (a cloud adoption agent) | CCIHosting | NAAP Global Solutions BVI (IaaS) |
| CP Enterprise (an IT service provider) | infoexchangeja (cloud service reseller/consulting firm) | TSTT (data centre provider) |
| Curacao Technology Exchange (IaaS) | Digi-Data (cloud service reseller) | Digicel Business (IaaS/SaaS) |
| Racklodge (IaaS) | HRplus | Scudetto Software & Design Company (SaaS) |
| Costa Rica Servers (IaaS) | Claro cloud (IaaS) | Rack Nation |
| fastCloud | HeadOffice | Cariri |
| Link Bermuda | Secure Hosting Ltd. | Brac Informatics Systems |
| Global Nexus | Idea Lab’s Orchid ORS | LoanCirrus |
| Kryonyx | IslandNet (managed services provider) | Fresh Mango |

You’ll notice there are some companies from Costa Rica & Curacao there; I added those countries after there were complaints that there weren’t enough providers to be found from Belize to Guyana. I called cheese on that, but still opened things up to a few more countries not typically associated with the Caribbean.

Doing this exercise brought to mind the need to create something of a Caribbean Cloud Registry. As I wrote that, the word “Foundry” came to mind, but the folks at Pivotal sort of have that locked down, I think. The point remains: a place where you can see at a glance who the cloud players are and what they offer.

One of the things I wanted from those doing the evaluations was a discerning eye on whether the service providers in question were offering actual cloud services they built or if they were cloud journey enablers. As an enabler, they may have been either reselling or consulting and supporting an organization’s journey to the cloud.

Cloud enablers provide a necessary service, leading the interested to the cloud. Without them, organizations can easily get lost in the details of implementation too soon, or make costly missteps when starting out. However, this assignment is not about them. It’s about those who are practitioners in the world of building cloud solutions. This assignment largely involves finding and understanding the context of the cloud service builders.

Not every -as-a-service marketing brochure means the seller is a cloud provider.


Just completed the 2017 edition of the University of the West Indies’ International Half Marathon.

This was a brutal run in beautiful weather. The “race” starts at 5:30 am. Last week, stormy weather was on the cards. Some might believe running in the rain is a joy, but not for 13.1 miles. Thankfully, it was very cool for the duration of my run, no sun, no rain & a light breeze.

I often start off too quickly. The PhillyNet Half Marathon training started in earnest just before the middle of the year, and through a series of long and short runs, strength work and other training, the team and I got into racing shape. Eleven of us ran the race, but at times more than double that number trained with the team. We had many supporters, on the course and online. But anyway, I still start off too quickly. I may have some overall race pace that I want to maintain, and invariably, my first mile is off. That’s where Endomondo comes in. I start it up on my phone, and at the end of mile 1, the Endomondo Lady announces, “Aye! Stop! You’re going too fast, slow down!” OK, no, she doesn’t do that. Instead, she announces that I’m running at 8-something a mile or some such, and I know, “Oh, if I want to finish this race in a decent overall time, I have to slow down”. Today, E. Lady was quiet. I got to mile 1 and heard nothing.

Immediately disappointed, I figured that even though I had started it at the start of the race, bad GPS or some android-y thing had thrown it off. Since a mile had passed, I wasn’t interested in restarting it. Anyway, I felt good. I just didn’t know what time it was.

I would have run the entire race time-blind if not for this year’s UWI half innovation: race pacers. Whether they were provided by UWI or some other club, they were golden. Well, not literally. I saw a 1:50:00 pacer and a 2:00:00 one. Keeping up with one would essentially give you a race result of their stated time. The 1:50:00 pacer flew past me after mile 2, and since I hadn’t seen the 2:00:00 one for the first half, I judged myself to be between those finishing times. But having a pacer in no wise means having the power to finish at those times. So, I had to be careful.
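
As a back-of-the-envelope check (my arithmetic, not anything official from the race), a pacer’s stated finish time maps to a per-mile pace over the 13.1 miles:

```javascript
// Convert a target half-marathon finish time into an approximate per-mile pace.
// 13.1 miles is the half-marathon distance; the times below are the pacers' targets.
function paceMinPerMile(totalMinutes, miles) {
  const pace = totalMinutes / miles;
  const min = Math.floor(pace);
  const sec = Math.round((pace - min) * 60);
  return { min, sec };
}

// A 2:00:00 finish works out to roughly 9:10 per mile,
// and a 1:50:00 finish to roughly 8:24 per mile.
const twoHourPace = paceMinPerMile(120, 13.1);
const oneFiftyPace = paceMinPerMile(110, 13.1);
```

So sitting between those two pacers meant holding somewhere between about 8:24 and 9:10 a mile, which squares with the 8-something the Endomondo Lady usually scolds me about.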

With no E. Lady in my ear, improvisation became necessary. Since there were so many runners, finding and keeping pace with someone on the course became a fun, mini-game. At first there was the lady from +One a Week, I kept up with her for about half a mile. Two ladies overtook me and ran on that spot I started to like – the exact middle of the road on the white line. So, I overtook them to reclaim my land. That cat and mouse lasted for a mile and then they bested me.

Coming up to mile 4, there was loud music. I liked it, then it got too loud, then I liked it again. The DJ seemed very proud to remix “Full Extreme” into “We Runnin Still”.

“The buildings,

Could fall down,

We running still… we runnin still.

The treasury,

Could fall down,

We running still… we runnin still…”

I mean, it sorta worked. Something about the loud music added some energy to my run that I intended to use later.

I met up with someone from TT Road Runners and that partnership lasted probably two miles. We got up to the turn together. Mind you, these aren’t announced things. It’s just that as a runner, you realize someone is next to you, they’re keeping pace and either you increase yours to leave them in your dust or you chill and maybe see how far it goes.

She eventually outran me and I was alone. The 2:00:00 pacer passed me at mile 7 or so, but then my family, immediate and extended, was there at mile 8, being raucous and extremely encouraging. Sophie, my 10-month-old, even deigned to give me a look and a small smile. All this probably being a bit much for her thankyouverymuch. I knew the pressure miles for me would be between miles 10 and 11 and a little after. By the time I got past the Eddie Hart ground, about a quarter mile after the 10-mile mark, I got it into my head that I needed something to distract me. So, I started to count. I just picked a number and counted up to it.

Somewhere between 300 and 400 “counts”, I met up with the TT Road Runners lady again. She seemed to be having a hard time, so I tried to return the favor. Shouting out encouragement to other runners gives me a slight jolt myself. I yelled things like “keep it up”, “this is the hardest one” and such. I don’t know if it worked. Running is a physical thing, not a talking thing. I yelled some more at a guy I was catching up to after he had passed me. For some reason, I was legit feeling OK coming into mile 11. There’s a hill up to mile 11 and another one up to mile 12.

To keep pace up the mile 12 hill, I pulled out an old trick: Father Abraham. I just sang the song and sort of performed the steps. Wey, it’s a long song. It worked, though. All the while, I kept the 2:00:00 pacer in sight. By mile 12, I returned to counting, this time down from 900, because it would take me about 10 minutes to get to 1. I saw my wife, Sophie and #TeamPhillyNetSupporters again, again providing a burst of joy.

I settled in my mind, a few weeks before the race, that the half marathon is a 12.75-mile jog and a quarter-mile race. I only really care to finish ahead of people in the very last piece of the run. So that’s what I did. I essentially pelted through the straightaway and, for my effort, copped a really good time.

Going into the run, I was expecting, and said as much, to do the race in 2:10:00 or 2:15:00. This result was very surprising, and I think the combination of decent training with PhillyNet and random race partnerships helped me discover some new things about running.

PhillyNet Runners and Supporters

PS: Oh, as it turns out, Endomondo was fine. I had my audio settings too low, so she was talking, just at a whisper.