Categories
Advocacy, Bots

#MSBuild 2021 Table Talk: How it went

In a recent post I mentioned moderating a table talk at this year’s MSBuild along with Erik Kleefeldt.

By design, it’s meant to be like a hallway chat with a bunch of people about tech we’ve been using/playing with/excited about. This hallway was huge. 318 people turned up on Teams to talk about extending it in some way or other.

Erik and I met with Anna, Andy and Joey from Microsoft in the preceding weeks to get oriented, nail down a flow and get a backstage peek at some of the coordination it takes to pull off an event like Build. (Hint: it’s a lot!)

We had a good idea of what we would do: I’d introduce, he’d conclude, and so on. Then, when the meeting started, I had jitters, my machine had jitters and I wondered if we would actually get started at all. But once I told all my new friends that I was super nervous, everything just settled down.

Top things we spoke about:

  • Fluid Framework
  • Viva Connections
  • Upcoming Meetings and Calling API

As with a hallway chat in real life, there’s no recording, but those topics are great areas to explore further. I’m definitely looking forward to the new media API – it might get me out of being stuck in the middle of my current project.

Overall, this was a lot of fun, Build was plenty vibes & plenty action and I’ve got a lot of unpacking to do from my backpack.

Categories
Bots

In medias res

I’ve been playing with an idea in Microsoft Teams for a few months now. It hasn’t yet borne fruit, but I’ve decided it’s time to move on, and maybe revisit it in a few more months with fresh eyes.

This sort of breaks with how I’ve approached writing posts on here. I like each post related to a creative activity to represent a singular finished work. I can refine things and come back, but each should say, “I did this and here’s the result”. This time, though, I’m writing about being in the middle of things, largely as an exercise in discipline, but also as a clear record for myself of what I’ve done.

So what was it?

Scott Hanselman had this post on Twitter about making credits appear at or near the end of a Teams meeting. From the time he put it up, I was pretty clear on how he might have done it.

The genesis of my idea for a Microsoft Teams End Credits bot came from this.

I think that, apart from us all becoming video presenters last year, we also became much more familiar with OBS. The approach he used could be done with OBS, which he described here.

In fact, I think I saw an earlier tweet from him about a dude doing essentially a transparent-board overlay with OBS, too. OBS, to me, feels easy to use, but when you put everything together, it can feel like a bit of work. You have to set up your scenes, fill them with the right layers and hang everything together, just so.

So, not hard, but somewhat involved. Since I’d been experimenting with the audio and video streams of Teams calls, I could see how a similar thing could possibly be done in Teams directly. Which would let me yell, “Look ma! No OBS!”, while achieving the same functionality.

Quite a few of these experiments begin with messing around with a sample, here and there. Call it SDD – Sample Driven Development. I picked up where the HueBot Teams sample left off. It’s one that lets you create a bot that grabs a speaker’s video stream in a call and overlays it with a given hue – red, green or blue. I’d gotten that to work. And the last time I played with that sample, I was able to send music down into a Teams meeting using a set of requests to a chatbot.

Now, I wanted to overlay on a given video stream the same credits info that I saw in Scott’s OBS trick.

I am currently still in that rabbit hole.

Yes, I was able to access the video stream based on the sample. I even got to the point of overlaying text. But pushing that back down to the call for everyone? Various shades of failure.
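The overlay itself was the more tractable half, done with plain System.Drawing once a frame was decoded to a Bitmap. A minimal sketch of what I mean (the class name and styling are mine, not the sample’s):

using System.Drawing;

public static class CreditsOverlay
{
    // Draws credit lines over the bottom of a decoded video frame.
    // The frame comes from the NV12-to-Bitmap conversion shown below.
    public static void DrawCredits(Bitmap frame, string[] lines)
    {
        using (var g = Graphics.FromImage(frame))
        using (var font = new Font("Segoe UI", 24))
        using (var shade = new SolidBrush(Color.FromArgb(160, Color.Black)))
        {
            // Darken a band at the bottom so the text stays readable.
            var bandHeight = (lines.Length + 1) * 36;
            var band = new RectangleF(0, frame.Height - bandHeight, frame.Width, bandHeight);
            g.FillRectangle(shade, band);

            for (var i = 0; i < lines.Length; i++)
            {
                g.DrawString(lines[i], font, Brushes.White, 20, band.Y + (i * 36) + 10);
            }
        }
    }
}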

The first image is essentially straight out of the sample, where guidance was provided on how to extract a bitmap image from the video stream, which is otherwise formatted in NV12. The other images in the carousel are what appeared in Teams, with various degrees of resizing but always having a blue hue.

/// <summary>
/// Transforms NV12 to a bmp image so we can view what it looks like. Note it's not a true NV12-to-RGB conversion.
/// </summary>
/// <param name="data">NV12 sample data.</param>
/// <param name="width">Image width.</param>
/// <param name="height">Image height.</param>
/// <param name="logger">Log instance.</param>
/// <returns>The <see cref="Bitmap"/>.</returns>
public static Bitmap TransformNv12ToBmpFaster(byte[] data, int width, int height, IGraphLogger logger)
{
    Stopwatch watch = new Stopwatch();
    watch.Start();
    var bmp = new Bitmap(width, height, PixelFormat.Format32bppPArgb);
    var bmpData = bmp.LockBits(
        new Rectangle(0, 0, bmp.Width, bmp.Height),
        ImageLockMode.ReadWrite,
        PixelFormat.Format32bppRgb);
    var uvStart = width * height;
    for (var y = 0; y < height; y++)
    {
        var pos = y * width;
        var posInBmp = y * bmpData.Stride;
        for (var x = 0; x < width; x++)
        {
            // The interleaved UV plane starts at uvStart and is subsampled
            // 2x in both dimensions, hence the shift and masking below.
            var vIndex = uvStart + ((y >> 1) * width) + (x & ~1);
            //// https://msdn.microsoft.com/en-us/library/windows/desktop/dd206750(v=vs.85).aspx
            //// https://en.wikipedia.org/wiki/YUV
            var c = data[pos] - 16;
            var d = data[vIndex] - 128;
            var e = data[vIndex + 1] - 128;
            c = c < 0 ? 0 : c;
            var r = ((298 * c) + (409 * e) + 128) >> 8;
            var g = ((298 * c) - (100 * d) - (208 * e) + 128) >> 8;
            var b = ((298 * c) + (516 * d) + 128) >> 8;
            r = r.Clamp(0, 255);
            g = g.Clamp(0, 255);
            b = b.Clamp(0, 255);
            Marshal.WriteInt32(bmpData.Scan0, posInBmp + (x << 2), (b << 0) | (g << 8) | (r << 16) | (0xFF << 24));
            pos++;
        }
    }
    bmp.UnlockBits(bmpData);
    watch.Stop();
    logger.Info($"Took {watch.ElapsedMilliseconds} ms to lock and unlock");
    return bmp;
}
This code essentially does the transformation.
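For context, the transformation runs inside the bot’s video socket handler. Going from what I remember of the HueBot sample, the hook-in looks roughly like this – treat the types and property names as a sketch from memory, not gospel:

// From the sample's media handling (Microsoft.Skype.Bots.Media), from memory –
// verify names against the actual sample before trusting this.
private void OnVideoMediaReceived(object sender, VideoMediaReceivedEventArgs e)
{
    // Copy the raw NV12 bytes out of the unmanaged media buffer.
    var nv12 = new byte[(int)e.Buffer.Length];
    Marshal.Copy(e.Buffer.Data, nv12, 0, nv12.Length);

    var format = e.Buffer.VideoFormat;
    var bmp = TransformNv12ToBmpFaster(nv12, format.Width, format.Height, this.logger);

    // ... overlay the credits on bmp, convert back to NV12, send it down the call ...

    e.Buffer.Dispose();
}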

I’m currently stuck with that blue hue. 😕.

So, since the sample only had a one-way transformation of NV12 to bitmap, and I had no experience with that, I spelunked around the web for a solution. Normally that means some drive-by StackOverflowing for a whole method, but that only got me as far as those blue hues.

Literally, the method I got from S/O let me convert a BMP to some kind of YUV, but not something that Teams quite liked.

private byte[] getYV12(int inputWidth, int inputHeight, Bitmap scaled) {
    int[] argb = new int[inputWidth * inputHeight];
    scaled.getPixels(argb, 0, inputWidth, 0, 0, inputWidth, inputHeight);
    byte[] yuv = new byte[inputWidth * inputHeight * 3 / 2];
    encodeYV12(yuv, argb, inputWidth, inputHeight);
    scaled.recycle();
    return yuv;
}

private void encodeYV12(byte[] yuv420sp, int[] argb, int width, int height) {
    final int frameSize = width * height;
    int yIndex = 0;
    int uIndex = frameSize;
    int vIndex = frameSize + (frameSize / 4);
    int a, R, G, B, Y, U, V;
    int index = 0;
    for (int j = 0; j < height; j++) {
        for (int i = 0; i < width; i++) {
            a = (argb[index] & 0xff000000) >> 24; // a is not used obviously
            R = (argb[index] & 0xff0000) >> 16;
            G = (argb[index] & 0xff00) >> 8;
            B = (argb[index] & 0xff) >> 0;
            // well known RGB to YUV algorithm
            Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
            U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
            V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;
            // YV12 has a plane of Y and two chroma (U, V) planes, each subsampled
            // by a factor of 2: for every 4 Y pixels there are 1 V and 1 U. Note the
            // sampling is every other pixel AND every other scanline. YV12 stores
            // the V plane first, hence V is written at the first chroma offset.
            yuv420sp[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
            if (j % 2 == 0 && index % 2 == 0) {
                yuv420sp[uIndex++] = (byte) ((V < 0) ? 0 : ((V > 255) ? 255 : V));
                yuv420sp[vIndex++] = (byte) ((U < 0) ? 0 : ((U > 255) ? 255 : U));
            }
            index++;
        }
    }
}
I converted this Java method to C#.

Part of the conversion meant reading up on YUV. The Java method produced YV12; Teams needed the stream to be NV12. Their differences are summarized here:

NV12

Related to I420, NV12 has one luma “luminance” plane Y and one plane with U and V values interleaved.

In NV12, chroma planes (blue and red) are subsampled in both the horizontal and vertical dimensions by a factor of 2.

For a 2×2 group of pixels, you have 4 Y samples and 1 U and 1 V sample.

It can be helpful to think of NV12 as I420 with the U and V planes interleaved.

Here is a graphical representation of NV12. Each letter represents one bit:

For 1 NV12 pixel: YYYYYYYY UVUV

For a 2-pixel NV12 frame: YYYYYYYYYYYYYYYY UVUVUVUV

For a 50-pixel NV12 frame: Y×8×50 (UV)×2×50

For a n-pixel NV12 frame: Y×8×n (UV)×2×n

FROM: VideoLan on YUV#NV12
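To make that concrete: a 1280×720 NV12 frame is 921,600 bytes of Y followed by 460,800 bytes of interleaved UV – 1,382,400 bytes in all, or 1.5 bytes per pixel, which is where the width * height * 3 / 2 buffer allocation above comes from.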
public void BMPtoNV12(byte[] yuv420sp, int[] argb, int width, int height)
{
    int frameSize = width * height;
    int yIndex = 0;
    int uvIndex = frameSize;
    int R, G, B, Y, U, V;
    int index = 0;
    for (int j = 0; j < height; j++)
    {
        for (int i = 0; i < width; i++)
        {
            // Alpha, (argb[index] >> 24) & 0xff, is not used.
            R = (argb[index] & 0xff0000) >> 16;
            G = (argb[index] & 0xff00) >> 8;
            B = argb[index] & 0xff;
            // well known RGB to YUV algorithm
            Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
            U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
            V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;
            // NV12 (see above): one luma plane, then one interleaved UV plane,
            // subsampled 2x in both dimensions – 4 Y samples per U/V pair.
            yuv420sp[yIndex++] = (byte)((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
            if (j % 2 == 0 && i % 2 == 0)
            {
                yuv420sp[uvIndex++] = (byte)((U < 0) ? 0 : ((U > 255) ? 255 : U));
                yuv420sp[uvIndex++] = (byte)((V < 0) ? 0 : ((V > 255) ? 255 : V));
            }
            index++;
        }
    }
}
Converted the YV12 approach to NV12.

Even though I modified the method to produce NV12 from a bitmap’s pixel array, no joy. And this after much tinkering.
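If you’re in the same hole, one check worth running before blaming Teams: feed the converter a frame of a single known colour and inspect the chroma bytes directly, since a blue cast is the classic symptom of swapped U and V. A sketch using the BMPtoNV12 method above (FrameConverter is just a stand-in name for whatever class holds it; the expected values fall out of plugging pure red into its formulas):

using System;

public static class Nv12SanityCheck
{
    public static void Main()
    {
        const int width = 4, height = 4;

        // A 4x4 frame of pure red, packed as ARGB ints.
        var argb = new int[width * height];
        for (var i = 0; i < argb.Length; i++) argb[i] = unchecked((int)0xFFFF0000);

        // NV12 needs 1.5 bytes per pixel: W*H luma plus W*H/2 interleaved chroma.
        var nv12 = new byte[width * height * 3 / 2];
        new FrameConverter().BMPtoNV12(nv12, argb, width, height);

        // Pure red through the formulas above gives Y=82, U=90, V=240.
        // If this prints U=240 V=90, the chroma bytes are swapped - hello, blue hue.
        Console.WriteLine($"Y={nv12[0]} U={nv12[width * height]} V={nv12[width * height + 1]}");
    }
}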

Eventually, I even tried using the OpenCV project, but that just led to green splotches all over.

Thus, I’m stuck. I still love the idea, but I’ve poured way too many hours into the experiment at this stage. I’m looking forward to Microsoft’s Build this week. Maybe I’ll find some helpful soul to set me on the straight and narrow.

Categories
Advocacy, reference counting

A quick take on a hot mess

I saw that response from Dona Sarkar, whom I follow on Twitter. Given what she shares, I could have guessed her response. Dona leads advocacy for Microsoft’s Power Platform while also running her own fashion house, PrimaDonaStudios.

My own first response was, “wow, talk about a horrible take”.

The thread that grew out of Jack Forge’s post refused to quietly exit my mind. It wasn’t a massive controversy or anything, but there was something more to it.

Then I remembered that the 99 Percent Invisible podcast had a series of episodes looking at the history of design in fashion, clothing and textiles. And in the very first episode, they identified the relationship between garment construction and engineering.

That’s a tweet I shared about it some time ago.

That first episode reveals that punch cards, among the earliest storage media for computing, were used for – get this – design patterns in making clothes.

A snippet from 99Pi’s “Articles of Interest”, episode 1.

I remember driving to the office listening to that episode and doing everything I could not to pull over and call my wife – she’s a costume designer – to say, “AYE!” for no reason at all.

So, when Jack came online to forge a post that revealed ignorance about the history of Jacquard Looms, I felt I had to help untangle the truth.

Fashion and code share such a close history that even if you don’t personally care about what you wear, their relationship can’t be ignored. How those clothing articles are made, and why they look and feel the way they do, is precisely why one might even say fashion is a form of output, written in a programming language used by designers around the world.

One more snippet. Programming owes a debt to the fashion industry. We shouldn’t forget it.

Categories
Advocacy, CUI

#MSBuild 2021: Teams Table Talk

I saw this tweet 👆🏾 and thought, I should send in a topic.

Since I’ve recently been building bots and extensions in Teams, I focused my topic on just that – extending Teams. I hadn’t heard about table talks before; Microsoft had started running them at a few conferences ahead of Build.

My topic was accepted, and Erik Kleefeldt and I will be hosting a table talk on “Extending the Microsoft Teams Experience”.

Erik and I have met a few times and we’re excited to share the experience. Table Talks are meant to be like those hallway conversations you might have on your way to a session about topics you dig. They should be welcoming, open and good-natured, really.

This should be fun!

Extending the Microsoft Teams Experience – May 26, 2021 9:30 AM AST (6:30 AM PT).
“Build the next generation of productivity experiences for hybrid work”
RSVP early to join.

Categories
IoT

Birbicam.

When I posted that video on IG, I knew I wanted to come back to it. My mechanics for capturing that story were a bit precarious: I propped my phone up on a rock and hoped that a bird or two would feature in the video before I ran out of space.

So then I saw Alex Ellis’ tweet about using an RPi to track plant growth and I remembered I had a Raspberry Pi just lying there, waiting to get used.

Thus, my mind went into overdrive. I started to focus on the hardest part of the mini-project: bird detection using Python and TensorFlow on the Raspberry Pi. I hadn’t even turned the thing on yet. No OS installed. I didn’t even know if those super cheap sports cameras I had lying around would work.

I just mentally swam around in the deep end of the setup, maybe even getting some OpenCV involved.

Eventually, I calmed down. And began the pedestrian work of setting up the Pi, finding a working camera and getting the networking right.

When I had everything put together, I cracked my knuckles to dive into deep learning. Before I did, though, I thought I’d explain to my wife what I was going to do:

  1. Point the RPi at the birds
  2. Write a script to stream the camera’s output
  3. Find a machine learning model to take the video and detect the birds
  4. Send detected birds somewhere

“Why not use a motion sensor?”, my wife queried.

Maybe literally the first result on Google, I found this article that walks you through, in very clear steps, how to set up a motion sensor using your camera on the Raspberry Pi.
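Conceptually, that kind of camera-based motion sensing boils down to frame differencing – compare successive frames and flag when enough pixels change. A minimal sketch of the idea (not the article’s actual code; the class and thresholds are mine):

using System;
using System.Drawing;

public static class MotionDetector
{
    // Returns true when enough of the frame changed between two captures.
    public static bool HasMotion(Bitmap previous, Bitmap current, int pixelThreshold = 25, double areaThreshold = 0.01)
    {
        var changed = 0;
        for (var y = 0; y < previous.Height; y++)
        {
            for (var x = 0; x < previous.Width; x++)
            {
                // Compare a rough luminance of corresponding pixels.
                var p = previous.GetPixel(x, y);
                var c = current.GetPixel(x, y);
                var lumaP = (p.R * 3 + p.G * 6 + p.B) / 10;
                var lumaC = (c.R * 3 + c.G * 6 + c.B) / 10;
                if (Math.Abs(lumaP - lumaC) > pixelThreshold) changed++;
            }
        }

        // Motion when more than ~1% of the pixels moved past the threshold.
        return changed > previous.Width * previous.Height * areaThreshold;
    }
}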

I was getting emails and videos of birds in half an hour.

Categories
Uncategorized

In the forest, the poui falls different

I leaned on a tree after my seventh rope, it felt like. I was good and proper bun.
Then the neighbourhood watch appeared. A beetle here, some ants that clearly grew up on cassava and two wasps. I had to get up.
I put my hand on a tree to stabilize and a member of the bush welcoming committee bit me. So now, I’m cross, angry and bun out.
I didn’t even see the offending animal afterwards. It’s like he was saying, “Get a move on, interloper”.
To the understanding, this thing was the hardest hard. The thing is, I was already 5.7 miles in. You can’t go back, and have zero inclination to go forward. If I saw some teak, I might have built a house in the forest.
But one foot in front the other, one hand over the next and eventually I made it to a beautiful flower.


While doing what one does when it’s no longer a race and more of a series of questions about your life choices, an old man deftly, wordlessly ran past me.
In the normal world, a runner such as me would treat that as an invitation to share some linx out. But in the forest, the poui falls different. I just kept taking my photos and returned to more rope.
(Yes, Irwin, you do need those gloves)
I made it out. Barely.

Categories
Advocacy, teaching

Bootcamp

I’ve never been to bootcamp. I wasn’t even in the Scouts growing up. So, unlike most of the posts on this site, which feature a story about something I did or was involved in, this is largely my views on a question.

Here’s the question: “I’m a twentysomething-year-old with regular computing skills, I have a non-IT career, but want to make a switch – what should I do?”

A few preliminaries:

  • I’m not trying to convince you to do IT
  • You’re willing and able to devote time to make a switch
  • Everything following this is a suggestion mostly based on opinion, with a dash of experience

I heard that question and immediately thought, “Not a degree”. It’s not that degrees are bad, or that I’m in the anti-degree movement. I think degrees have their place, but for adults, who are probably in a clearer place with respect to their needs, and who don’t need too much handholding, a degree feels like the wrong approach. Note, feels. Some might tell you to go ahead and do a degree of some sort, and that’s OK – if you have the inclination and time (and money), go ahead.

So, if not a degree, what?

There are many voices online about why you might choose an alternative to a degree when considering a career switch. I’m taking this from one of two starting points:

  1. You’ve advanced to some degree in a non-IT career, and you would love to add some form of IT as seasoning on top of that. For example, you’ve been in banking and finance, and have been hearing about the wonders that can be done if you get a handle on data science.
  2. OR, you hate what you currently do. Every day is a slog, and though it pays the bills, which is important, you’d love to get out and do something else. The something else you’ve settled on, is something in IT.

If you’re in camp 1, then I think it’s good to look first for people who have already made the switch. Depending on your industry, they’re easy to find: they might have blogs or tweets. They might be in your office, or across the world. You might know them from the books they’ve written, or you use something they’ve created, like a tool to get work done.

Find a few of these people, and create a matrix that tracks how their career has evolved. See what they studied and when, look at the order of growth for them. Did they take a few courses? Did they blog about their journey? Did they join any groups? You’re not trying to copy their path necessarily, but it would be good to open your eyes to the kinds of pathways you can explore.

People in camp 1 tend to want to use the aspects of IT they like as ways to accomplish their overall life goals. They don’t see programming or data science or some aspect of development as their new passion, but rather as a way to further their existing skills in their current field. That sort of person is looking for a bridge between what they know and what they need to know.

In the past, they might have done a master’s to fill that need, but now it might be a menu of courses that closely relate to their existing field – the specific list should become clear if they did the work of selecting a few people to study and glean good ideas from.

Now, if you’re in camp 2… that’s something else. You’re starting over or maybe even picking back up from a long time ago. Your first step doesn’t have to be daunting. As opposed to looking at people, you might want to look at areas in IT. Even the term “Information Technology” is a bit long in the tooth. But it still tracks. Look at broad areas, and do some YouTube surfing for talks that describe how those areas work in real life. It might be on the design side, or security, or something called back-end. You’re trying to get a sense of why the area is important and whether you feel a broad pull to dig at it more.

IT is hard. Maybe you haven’t made any real investment yet, so let’s get that out of the way. But I hear that any career that you want to do really well at is hard. You generally have to figure out if the hardness of an area lines up well with what you want to spend your time doing.

Once you find a few areas you dig, there are quite a few reputable online providers with courses – or rather, collections of courses – that can give you a sense of what working in that area can be like. I don’t know of any course list that will just give you everything you need to simply be a professional in your chosen area. What most courses should aim to do is make you conversant in the area you care about. That’s usually enough to help you “Google your way to success”.

Let’s say you chose software development. Googling “bootcamp software development” will yield way too many results. It’s good to talk to working software engineers to help weed out some of the starting results. At first, that query yielded this great article, essentially saying “be wary of bootcamps”. It’s good advice and paints a decent picture. Bootcamps aren’t a cure-all. But as I said, you’re probably working and don’t have the luxury of doing a full-time degree, yet may be interested in getting into the field.

Since I use a site called StackOverflow a lot, I searched that network for some perspective. This was a good Q&A on the question of a bootcamp versus something longer.

In both articles above, an important takeaway was that any education in software development is necessarily only a start, and it can take a while for the way (to paraphrase Mando) to even make sense.

Yet, using a decent bootcamp or starter experience to understand more of the field you’re trying to switch into as an adult is a good strategy. You have to keep your eyes open and trust the instinct you’ve developed; it will help you know when what you’re trying isn’t working and when you need to switch things up.

I don’t have a lot of experience with actual providers, but I like what CodeNewbie has been doing in the space of getting new people into the field.

This whole post was a suggestion, filled with opinion. HTH.

Categories
CUI

The Air IQ Agent

Last year, at the Caribbean Developers virtual gathering, I demonstrated a bot that displays air quality information based on sensors at a few spots in Trinidad & Tobago. My remarks on it can be seen here.

This week, I went ahead and pulled together the work to publish that bot as an Action on Google.

The bot surfaces the same information the Environmental Management Authority has on their site.

I tried to show it to a friend and found it a bit difficult to invoke. You have to say, “OK, Google, talk to Air IQ Agent” just so for it to work, and then it just says, “Hi there! What area are you interested in?”

If you look at your phone or other assistant screen, it’ll show the options.

But it won’t actually say anything else. You have to just know the area choices. Should I have caught this in testing? Yes, but I was probably so focused on getting it all to work that I paid less attention to how it actually works in use.

So, that’ll be fixed. Nevertheless, I’m glad to explore yet another channel for virtual assistants, and in cases like this, I feel it’s much easier to talk to your assistant than to remember which site to check for factoids like this.
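For the curious, the shape of that fix is simple: put the choices into the spoken prompt itself. A minimal sketch, assuming a Dialogflow-style webhook behind the agent (the class name and the area list are illustrative, not my production setup):

using System.Text.Json;

public static class AirIqFulfillment
{
    // Illustrative area list - the real agent's areas come from the EMA's sensors.
    private static readonly string[] Areas = { "Port of Spain", "Point Lisas", "San Fernando" };

    // Builds a fulfillment response that speaks the choices,
    // so voice-only users aren't left guessing.
    public static string BuildWelcomeResponse()
    {
        var prompt = $"Hi there! Which area are you interested in? You can say {string.Join(", ", Areas)}.";
        return JsonSerializer.Serialize(new { fulfillmentText = prompt });
    }
}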

Categories
Cloud, virtual reality

Along came a squid

Finalists in a VR challenge in Trinidad and Tobago

Explore Tobago – Underwater.

That’s the whole idea. Last year, marine researchers out of Trinidad and Tobago produced some amazing imagery capturing numerous locations off the coast of Tobago.

A slice of life from the Maritime Ocean Collection

Their project is called the Maritime Ocean Collection and it features many 360-degree images. So with the right device, you could look all around in a given image and get a decent appreciation of a particular spot.

As a scuba diver, I was enrapt. These images came out right after I’d had a really good dive that I couldn’t properly record. My camera gave out on us and we were super disappointed. The images let me re-live those recent experiences, especially as they were still very fresh in my mind. And they showed me how much more there was to go.

Literally a month after I saw the Collection, the Caribbean Industrial Research Institute (CARIRI) announced a VR competition.

My ideas as a developer, experiences as a diver and curiosity about the work of those researchers gave me that push to participate in CARIRI’s competition.

The result was Explore Tobago – Underwater – a prototype that lets you do just that. It’s web-based, can be used with something as simple as a Google Cardboard and uses images from the Collection. The idea of “walking around” underwater, clicking on an interesting object, learning more and getting even a sense of that world is the core goal.

Explore Tobago – Underwater. Proof of concept.

This VR project made it all the way to the finals of the CARIRI competition. The finals. We didn’t win. I was legit sour for a whole minute.

But my team had decided to collaborate with the Collection’s researchers to build this out regardless of the result. The value of the idea as a tool for education, exploration and just a very cool way of seeing our natural resources was much greater than the estimation of a competition’s judges.

As the developers and researchers who met because of the competition started to talk and explore collaboration to make it reality, Microsoft Ignite dropped an amazing bomb.

The Squid, in VR at Microsoft Ignite.

The explanation for that squid starts at about 71 minutes into the video below. Researchers Edie Widder and Vincent Pieribone demonstrated mixed reality solutions focused on underwater exploration.

I mean. My jaw dropped. It was so cool. It was also a great point of validation. Watching them talk about that kind of inspiration, and the way VR can be a doorway for education and excitement, hit the same beats I flowed with when talking about Explore Tobago – Underwater.

There’s something the government representative said in their remarks in the first video above: that the VR solutions proposed can stand up to any in the world. As I wrote, we’re exploring how to make the experimental version real. It’s a tough journey, but we can already see that making it both connects us to a global movement and demonstrates to the world the beauty of our home.

Categories
Cloud, TrinidadAndTobago

If a tree falls in the forest…

At about 5:00 am, the fans stopped spinning. And we knew there was a power outage. We rolled back to sleep in the embers of the night and thought, “oh well, they’ll sort themselves out”.

We were jolted out of sleep two hours later, by the loud noise of a crash down the road.

A massive tree had fallen. It made the electricity company seem far more prescient than I had ever given it credit for.

The tree that collapsed pulled down wires from two poles, folded one of them over into an acute angle and pushed cables into the nearby river.

Early morning, early January disaster.

By the time I walked down to check out what was going on, with only my phone in hand, the community response was well underway.

The community seemed battle-hardened by these events. My wide-eyed, city-boy confusion melted away. A man in a van turned up with not one, not two, but three chainsaws. Others turned up with rope, and van man sent for gasoline.

The army was on the scene relatively quickly too. They closed the road and essentially kept people who weren’t helpful at a useful distance. Me, they kept away.

The men of the neighbourhood started cutting, and when the fire services arrived, with their coordination and support, the tree was eventually moved aside.

Cars could pass once again, though of course, slowly. By the time the electricity company arrived, the road was clear enough to let them begin the repair process.

The situation reminded me of the need for status updates from utilities. There’s clearly a chain of events needed here. The community response was an amazing, welcome first step – but that seemed to come down to a proactive neighbourhood. The baton was passed to the fire services, who made way for the team from the electricity company.

Who would tell the other service providers? I didn’t see any communication utilities on the scene. Were they aware? Would they spring into action like the men with the chainsaws? This is doubtful.

Also, my family and I temporarily decamped to a place to get power, Internet and some chill. When should we go back? Again, it would be great to check something like “status.utility.co.tt” to find out.

For now, I’d actually settle for an SMS or WhatsApp from the providers. To date, we’ve gotten none. It seems like the best response will remain that of individuals and neighbours, who proactively set up their own systems, limited as they are, until better can be done.