Advocacy Cloud

Disaster Recovery

We watched in horror as our neighbour’s SUV was pushed down the road by the flood. Having seen this kind of thing on screens before, witnessing it so close was a different matter entirely. There’s an instinct some people have to stick their hand into a burning flame to save a grain of rice. It’s primal, almost automatic, and unfortunately, can result in loss of life.

When I saw the SUV, stuck on the railing, I grabbed the rope. And my wife was like, where this man going with that? It wasn’t even the right kind of rescue rope. Just something I had at home for God-knows-what-reason. Those waters were not to be trifled with. Good sense prevailed and we waited for things to subside.

Sometimes, when it rains too much, too quickly, we lose power in the lines. I half-jokingly told my colleagues in a work meeting that I might drop off the call… moments later, current gone, current came and the SUV was rolling on down.

Thankfully, there was no one in the vehicle.


In my conversations with IT folk over the years, I learnt about setting the Recovery Time Objective (RTO) in disaster recovery planning. It’s a nifty concept. The RTO is the time it takes to go from a state of disaster back to normalcy. Depending on what system is down, it can require several subsystems to be restored before the main system is back up and recovery has been achieved. You can read up more about RTO and its sibling concerns, RPO and RTA, here.
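To make the idea concrete, here’s a toy sketch. The figures and subsystem names are mine, purely illustrative, not anything published: given when a subsystem went down and when it came back, you can check the actual recovery time against a stated RTO.

```python
from datetime import datetime, timedelta

# Hypothetical published RTOs per subsystem (illustrative numbers, not real figures).
RTO = {
    "roads": timedelta(hours=2),
    "power": timedelta(hours=6),
    "internet": timedelta(hours=12),
}

def met_rto(subsystem: str, went_down: datetime, restored: datetime) -> bool:
    """Did the actual recovery time come in under the published RTO?"""
    return (restored - went_down) <= RTO[subsystem]

down = datetime(2020, 11, 19, 14, 0)
print(met_rto("power", down, down + timedelta(hours=5)))      # power back in 5 hours
print(met_rto("internet", down, down + timedelta(hours=24)))  # internet back in 24 hours
```

With published figures like these, “did the provider meet its RTO?” becomes a yes/no question anyone can answer.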

In this given disaster, several “systems” went down.

The first that got restored was the road. As soon as the waters subsided, the guys from the hotel up the road, Courtenay Rooks et al, turned up with electric saws, a pickup and other implements. That log was removed in about an hour. People could now safely walk up to their homes, albeit in the mud.

By the time the fire services had arrived, it was to assist with hooking up the SUV to Rooks’ pickup so it could be towed out of the way.

Road now free, the fire tender proceeded up the road to assist others.

Normally when there’s a power outage, I call the electricity company, just to make sure that we’re going to be dealt with. Once my call connected, I was greeted with an automated “We’re aware of issues in St. Ann’s …”

I had no idea how long we would be without service, and it was getting dark. In the twilight, I saw the characteristic yellow of our power company’s service vehicle making its way through the mud. That was about two hours into the outage. It was a good sign. But with fallen poles, I expected it would be a wait. About 5 hours in, the lights came on.

I wonder if, like the community response, T&TEC has a specific RTO for events like this. Perhaps 6 hours, maybe a day. Maybe it varies based on access, time of day, type of disaster. It would be good to know.

So, that’s two key subsystems – transport and power – back up in about 6 hours. There’s one other that most of us in the community would need to get normal again. The Internet. Thankfully, it would seem that I only had intermittent LTE outages throughout the period. But land-based Internet connectivity was down. During a lockdown that means we literally can’t work.

Of the two providers I’m familiar with here, Flow seemed to be back up in the morning. But Digicel – my provider. Gosh. I wouldn’t see bytes across my wire until the afternoon. That is, almost 24 hours later.

We need published RTOs for disaster recovery by utility service providers. Figures that they can be held accountable for. Now, more than ever, the mix of services we use at home are critical, as we’re still living under a pandemic, with several limitations.

I know there is a Regulated Industries Commission in Trinidad and Tobago. I think this should be something they treat with, for all of our benefit.


Where everybody knows your name

Like most Trinbagonians, I’ve learned to dread going into any licensing office. Who could forget the signage up at the Wrightson Road office that said, “We’re closing at 1:00 today” – when you’ve turned up at 12:45, and the people you need have gone to lunch?

And then, the game of ping-pong: from the “information desk” to the cashier to window #3, to outside, to the waiting room, and back to window #2 (the following week)?

And yet, forget we must.

Maybe not forget, but at least hope, that the future we imagined could one day be real. A future where you could make an appointment, not fill out the same form about yourself three times, and be in and out inside of an hour.

That future is now. Almost. But for what we have, I’m here for it. Because we’re moving, and once that progress continues, we can imagine even more.

I needed to change some details of my Driver’s Permit. So I made the necessary appointment online. The confirmation email arrived and I expected to not need it. When I arrived at licensing, I was a few minutes late. Sometimes, we can be a bit punitive. Thus, I readied myself for “the guard” to send me packing.

However, the concierge was gracious; he didn’t even mention it. Instead, he waved me towards an actual good place. A kiosk.

A kiosk? In a licensing office? Na.

A kiosk in the Licensing Office

I got so excited to see it, that apart from taking the above photo, I’m sure I caused my procedure to take longer than it should. Beyond that, a licensing officer, on seeing me taking a photo and just being a glad man, came outside to see what the commotion was about. When he realized I was actually celebrating the coolness, he encouraged me – “share it on Facebook eh, is only complaints we does see from people”

After the official explained how to use it, “… Just enter your confirmation number, no dashes, no caps”, I was just glad to be in the future – no cap. He gave me two forms to fill out. I was only a little worried. It was almost like, they could have used my data one time eh, but like I said, we’re moving forward. Streams become rivers.

While filling out the form, I heard my name. Williams? Honestly, I was confused. I was like, I don’t know anybody here or back there. Cannot be me. But when the lady repeated it, I looked to where she was, Irwin Williams? I was like, but how does she know my name?

Yes, me, a creator of systems and wielder of technologies small and medium, needed to take a minute to realize that the Licensing Office was using the information from my appointment to relate my presence there to what I was trying to do. It’s not very difficult to implement something like that. But this was one of the few times I saw a realization of an easy, simple idea that could be so impactful.

In less than an hour, I was able to complete my business and get on with my day. I’m happy to see these service improvements and I’m looking forward to seeing more of this all over the country.


Dirt. Mud. Progress.

Jesus was born in a manger, so I cannot be too sour that to register the birth of my child I have to stand up under a tent in the rain.

Waiting in the rain.

So, under that tent, ducking the wet seats, were a few parents. The instructions we received were to book an appointment on the site to register the fact that a new player had entered humanity.

Registering on the site to me was a breeze. No really, I felt the breeze blowing on my verandah in the comfort of my home when I filled in my details and promptly got a date to turn up. It felt easy. Maybe too easy.

Indeed, no tents or mud was mentioned.

When the day came, I wended my way around the Port of Spain General Hospital to arrive at the location in the appointment.

I know where the Blood Bank is located. When I saw “Blood Bank Compound”, I interpreted that to mean that the registrars were sharing the facility of the Blood Bank. Same building, maybe with their own office. The Bank is a small building, so I was curious as to the lay of the land in there.

As I walked up to the guard, he seemed to sense that I was going in the wrong direction.

“Going to give blood?”

“Na, I -,”

“Oh, you’re Blood Bank staff.”

“No, I’m here to register a birth”

He spun me around with a nod of his head. He was confusing me while giving me direction. The area he pointed behind me was the carpark. But in that carpark was a nondescript shipping container. A big box. In that big box were the registrars for births.

A different breeze started to blow.

I stiffly walked over to the box, er, container, and began the process. Very soon into the interaction, I was made to understand that though I had followed the guidelines on the registration site, I was still underprepared. I needed to go back home.

When I returned, the breeze became a Port of Spain storm. A Port of Spain storm is not a real storm. But the flash flooding, garbage flow and general concern for your car is as real as going through any actual storm.

And we waited in the tent for our names to be called.

While I was waiting, all documents in check, I was just about to check out, that is, while away the time on TikTok. What stopped me was a couple that walked into the tent, looking more confused than me. They asked no one in particular about the process. A lady with a Latin accent explained it to them. You have to register on “the website” to be able to receive service.

The website. These days, there’s always a website. Few know about it, and as I was about to learn, even fewer can actually use it.

The couple were a bit put off. The wife of the pair said she had tried the website but it didn’t work. There was no crowd waiting to be dealt with so we all thought, they might have had a chance to get through.

They did not get through. The staff told them they needed to make an appointment on, you guessed it, the website. I knew what the site was, had had a relatively straightforward process using it and was ready to scroll silently when it hit me: the couple might need help.

I offered to walk them through it and, as is often the case, looking through the eyes of the end user, I saw things that you never see as a developer or denizen of these kinds of systems.

Sharing the good news…

The carousel was a bad design choice here. The site rendered well on mobile, but was too dense. Sign-up and login are sometimes confusing steps for users. “Forgot your password” should not appear to work if you don’t have an account. A really important question when you’re dealing with a broad class of users is: “How can someone struggle to complete this step? How can they get unstuck?” Essentially, how do we get users back on the happy path?
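To sketch what I mean by the happy path (this is a hypothetical, invented flow, not the registration site’s actual code): whatever state a visitor is in, route them to the one action that moves them forward.

```python
# A hypothetical "get the user unstuck" router for a sign-up/appointment site.
# All names here are invented for illustration.

def next_step(has_account: bool, knows_password: bool, has_appointment: bool) -> str:
    """Return the single action that moves a stuck visitor forward."""
    if not has_account:
        return "sign_up"          # don't offer "forgot password" to someone with no account
    if not knows_password:
        return "reset_password"
    if not has_appointment:
        return "book_appointment"
    return "attend_appointment"

# The couple in the tent: no account yet, so the site should lead with sign-up.
print(next_step(has_account=False, knows_password=False, has_appointment=False))
```

The point isn’t the code; it’s that every dead end on the page should map to exactly one obvious next step.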

Getting them to get an appointment took much longer than I anticipated. And yet, because they could probably sense my own confidence, and were motivated to finish, the process did not feel frustrating. It felt like we were on a hike, maybe through the dirt and some mud, but we were definitely going somewhere and our destination would be rewarding.

And rewarded they were, with a fresh appointment two days out. They knew they had to come back, but they were sure that they would not be turned away again.

So, my own visit resulted in more registrations than I planned. It’s normal to go through a range of service delivery challenges, especially with government services, but my own frustrations fell away when I saw that a more fundamental access problem existed. That turned my negative experience into a positive one for a new family.


GitHub Copilot: A young Jarvis?

I recently watched yet another time travel TV series on Netflix. This time out of South Korea. I forgot the name, so in looking it up I realized that time-travel K-dramas are very much a thing, lol. Anyway, it’s called, “Sisyphus: The Myth”. It comes to mind because the writers of the show were fond of having characters repeat the quote,

The future is already here – it’s just not evenly distributed

William Gibson

I mean, for a time travel show, it’s fine. And I think they probably got what they were going after, since here I am, quoting them, quoting Gibson. But look, I wrote about Jarvis about two years ago, sort of wondering when that kind of tech would be available: a sort of programmatic au pair who would come alongside and help not just with syntax and formatting, which admittedly has existed for a long time, but with coming up with the actual code based on your intent.

In that blog post, I didn’t put a time horizon on it, but I thought, “maybe soonish”. This week though, I realized it’s already here.

I just got access to the still-in-beta GitHub Copilot.

It’s been blowing my mind and I didn’t even read everything on the tin, I just dove right in. The first example I followed involved creating a method to get the difference between two dates:

First example … writing a method with GitHub Copilot’s help.

Again, I didn’t read everything, so I wasn’t sure what would happen when I began the method definition, but things fell into place quickly. I typed “function calculateBetweenDates(” and sure enough, just like I have grown accustomed to with IntelliSense, I saw a prompt.

But, unlike IntelliSense, these prompts weren’t based on any methods I had written or that were available in some framework/library that had been loaded. These prompts came from Copilot’s body of knowledge.
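For the record, Copilot’s actual suggestion was JavaScript, and it’s in the screenshot above. The gist of the calculation, sketched here in Python rather than as Copilot’s literal output, is simply:

```python
from datetime import date

def calculate_between_dates(start: date, end: date) -> int:
    """Whole days between two dates (a sketch of the idea, not Copilot's output)."""
    return abs((end - start).days)

print(calculate_between_dates(date(2021, 1, 1), date(2021, 7, 1)))  # 181
```

Trivial, yes, but that’s the point: I typed an intent-revealing name, and the tool filled in a working body.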

Now, excited, I wanted to use an example of something I would do from time to time.

For a lot of my current projects there’s a “sign in to your cloud provider” step. So, that’s the method I went with. I wrote “function signInToAzure(” and after a tab, Copilot had inserted the full method.

Playing just a bit more, in the generated method, I noticed the need for something related to the AAD_AUTH_URL. So, I started a method, “function GetAzureADAuthUrl” and voila, Copilot understood and gave me the method.

These are simple examples, but I really dig how it works.

I’ve got an AFRAME project I wanted to take this on a spin with, so I’ll see it in action in more detail, but this is very groovy already.

Now, just as Iron Man 2 raised the question of whether Tony should even be the hero, wielding the suit while being the child and prime beneficiary of a massively successful arms company, the moral questions are in no short supply with Copilot. They include the licenses of the code used to train Copilot, and whether those licenses permit it to exist in its current form.

Nat Friedman, GitHub’s CEO, weighed in on some of those issues here, essentially concluding,

We expect that IP and AI will be an interesting policy discussion around the world in the coming years, and we’re eager to participate!

Nat Friedman, HackerNews

I’m still in experiment-and-play mode with Copilot, observing where the technology goes, but I’m mindful of the arguments about its existence and will keep tracking that.

Here’s to Jarvis, he’s a teenager now, but already showing promise.

Advocacy Bots

#MSBuild 2021 Table Talk: How it went

In a recent post I mentioned moderating a table talk at this year’s MSBuild along with Erik Kleefeldt.

By design, it’s meant to be like a hallway chat with a bunch of people about tech we’ve been using/playing with/excited about. This hallway was huge: 318 people turned up on Teams to talk about extending Teams in some way or other.

Erik and I met with Anna, Andy and Joey from Microsoft in the preceding weeks, to get oriented, nail down a flow and just get a backstage peek at some of the coordination it takes to pull off an event like Build. (Hint: it’s a lot!)

We had a good idea about what we would do, I’d introduce, he’d conclude and stuff like that. And then when the meeting started, I had jitters, my machine had jitters and I wondered if we would actually get started at all. But then, I told all my new friends that I was super nervous and everything just settled down.

Top things we spoke about:

  • Fluid Framework
  • Viva Connections
  • Upcoming Meetings and Calling API

As a hallway chat, like in real life, there’s no recording, but those topics are great areas to explore further. I’m definitely looking forward to the new media API – to get me out of being stuck in the middle of my current project.

Overall, this was a lot of fun, Build was plenty vibes & plenty action and I’ve got a lot of unpacking to do from my backpack.


In media res

I’ve been playing with an idea in Microsoft Teams for a few months now. It hasn’t yet borne fruit, but I’ve decided it’s time to move on, and maybe revisit it in a few more months with fresh eyes.

This sort of breaks with how I’ve considered writing posts on here. I like each post related to a creative activity to represent a singular finished work. I can refine things and come back, but each should say, “I did this and here’s the result”. But I’m writing about being in the middle of things, largely as an exercise in discipline, but also, as a clear record for myself of what I’ve done.

So what was it?

Scott Hanselman had this post on Twitter about making credits appear at or near the end of a Teams meeting. From the time he put it up, I was pretty clear on how he might have done it.

Genesis of my idea for a Microsoft Teams End Credits bot came from this.

I think that apart from the fact that we all became video presenters last year, we also became much more familiar with OBS. And the approach he used could be done with OBS, which he described here.

In fact, I think I saw an earlier tweet from him about a dude doing essentially a transparent board overlay with OBS, too. OBS, to me, feels easy to use, but when you put everything together, it can feel like a bit of work. You have to set up your scenes, fill them with the right layers, and have everything hang together, just so.

So, not hard, but somewhat involved. Since I’d been experimenting with the audio and video streams of Teams calls, I could see how a similar thing could possibly be done in Teams directly. Which would let me yell, “Look ma! No OBS!”, while achieving the same functionality.

Quite a few of these experiments begin with messing around with a sample, here and there. Call it SDD – Sample Driven Development. I picked up where the HueBot Teams sample left off. It’s one that lets you create a bot that grabs a speaker’s video stream in a call and overlays it with a given hue – red, green or blue. I’d gotten that to work. And the last time I played with that sample, I was able to send music down into a Teams meeting using a set of requests to a chatbot.

Now, I wanted to essentially overlay on a given video stream, the same credits info that I saw from Scott’s OBS trick.

I am currently still in that rabbit hole.

Yes, I was able to access the video stream based on the sample. I even got to the point of overlaying text. But pushing that back down to the call for everyone? Various shades of failure.

The first image is essentially straight out of the sample, where guidance was provided on how to extract a bitmap image from the video stream, which is otherwise formatted in NV12. The other images in the carousel are what appeared in Teams, with various degrees of resizing but always having a blue hue.

/// <summary>
/// Transform NV12 sample data to a bmp image so we can view how it looks. Note it's not an NV12 to RGB conversion.
/// </summary>
/// <param name="data">NV12 sample data.</param>
/// <param name="width">Image width.</param>
/// <param name="height">Image height.</param>
/// <param name="logger">Log instance.</param>
/// <returns>The <see cref="Bitmap"/>.</returns>
public static Bitmap TransformNv12ToBmpFaster(byte[] data, int width, int height, IGraphLogger logger)
{
    Stopwatch watch = Stopwatch.StartNew();
    var bmp = new Bitmap(width, height, PixelFormat.Format32bppPArgb);
    var bmpData = bmp.LockBits(
        new Rectangle(0, 0, bmp.Width, bmp.Height),
        ImageLockMode.WriteOnly,
        bmp.PixelFormat);
    var uvStart = width * height;
    for (var y = 0; y < height; y++)
    {
        var pos = y * width;
        var posInBmp = y * bmpData.Stride;
        for (var x = 0; x < width; x++)
        {
            // Each 2x2 block of Y samples shares one interleaved U/V pair.
            var vIndex = uvStart + ((y >> 1) * width) + (x & ~1);

            // YUV to RGB, integer approximation.
            var c = data[pos + x] - 16;
            var d = data[vIndex] - 128;
            var e = data[vIndex + 1] - 128;
            c = c < 0 ? 0 : c;
            var r = ((298 * c) + (409 * e) + 128) >> 8;
            var g = ((298 * c) - (100 * d) - (208 * e) + 128) >> 8;
            var b = ((298 * c) + (516 * d) + 128) >> 8;
            r = r.Clamp(0, 255);
            g = g.Clamp(0, 255);
            b = b.Clamp(0, 255);
            Marshal.WriteInt32(bmpData.Scan0, posInBmp + (x << 2), (b << 0) | (g << 8) | (r << 16) | (0xFF << 24));
        }
    }
    bmp.UnlockBits(bmpData);
    watch.Stop();
    logger.Info($"Took {watch.ElapsedMilliseconds} ms to lock and unlock");
    return bmp;
}
This code essentially does the transformation.

I’m currently stuck with that blue hue. 😕.

So, since the sample only had a one-way transformation of NV12 to bitmap, and not having any experience with that, I spelunked around the web for a solution. Normally that would mean some drive-by [StackOverflow]ing for a whole method, but that got me only as far as those blue hues.

Literally, the method I got from S/O let me convert a BMP to some kind of NV12, but not something that Teams quite liked.

private byte[] getYV12(int inputWidth, int inputHeight, Bitmap scaled) {
    int[] argb = new int[inputWidth * inputHeight];
    scaled.getPixels(argb, 0, inputWidth, 0, 0, inputWidth, inputHeight);
    byte[] yuv = new byte[inputWidth * inputHeight * 3 / 2];
    encodeYV12(yuv, argb, inputWidth, inputHeight);
    return yuv;
}

private void encodeYV12(byte[] yuv420sp, int[] argb, int width, int height) {
    final int frameSize = width * height;
    int yIndex = 0;
    int uIndex = frameSize;
    int vIndex = frameSize + (frameSize / 4);
    int a, R, G, B, Y, U, V;
    int index = 0;
    for (int j = 0; j < height; j++) {
        for (int i = 0; i < width; i++) {
            a = (argb[index] & 0xff000000) >> 24; // a is not used obviously
            R = (argb[index] & 0xff0000) >> 16;
            G = (argb[index] & 0xff00) >> 8;
            B = (argb[index] & 0xff) >> 0;

            // well known RGB to YUV algorithm
            Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
            U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
            V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;

            // YV12 has a plane of Y and two chroma planes (U, V), each subsampled by a factor of 2,
            // meaning for every 4 Y pixels there are 1 V and 1 U. Note the sampling is every other
            // pixel AND every other scanline.
            yuv420sp[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
            if (j % 2 == 0 && index % 2 == 0) {
                yuv420sp[uIndex++] = (byte) ((V < 0) ? 0 : ((V > 255) ? 255 : V));
                yuv420sp[vIndex++] = (byte) ((U < 0) ? 0 : ((U > 255) ? 255 : U));
            }
            index++;
        }
    }
}
I converted this Java method to C#.

Part of the conversion meant reading up on YUV. The Java method focused on YV12. Teams needed the stream to be NV12. Their differences are summarized here:


Related to I420, NV12 has one luma “luminance” plane Y and one plane with U and V values interleaved.

In NV12, chroma planes (blue and red) are subsampled in both the horizontal and vertical dimensions by a factor of 2.

For a 2×2 group of pixels, you have 4 Y samples and 1 U and 1 V sample.

It can be helpful to think of NV12 as I420 with the U and V planes interleaved.

Here is a graphical representation of NV12. Each letter represents one bit:

For 1 NV12 pixel: YYYYYYYY UVUV


For a 50-pixel NV12 frame: Y×8×50 (UV)×2×50

For a n-pixel NV12 frame: Y×8×n (UV)×2×n

FROM: VideoLan on YUV#NV12
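Before converting any pixels, it helped to pin down the buffer layout itself. A small Python sketch of the NV12 plane sizes and offsets (plain arithmetic, nothing Teams-specific):

```python
def nv12_layout(width: int, height: int) -> dict:
    """For a width x height NV12 frame: a full-resolution Y plane, followed by
    one interleaved UV plane subsampled 2x in both dimensions."""
    y_size = width * height                      # 1 byte of Y per pixel
    uv_size = (width // 2) * (height // 2) * 2   # 1 U + 1 V byte per 2x2 pixel block
    return {
        "y_offset": 0,
        "uv_offset": y_size,                 # the UV plane starts right after Y
        "total_bytes": y_size + uv_size,     # = width * height * 3 / 2
    }

print(nv12_layout(1280, 720))
```

So a 1280×720 NV12 frame is 1,382,400 bytes: 921,600 of Y, then 460,800 of interleaved UV. Getting those offsets wrong is one easy way to end up with the kind of colour shifts I was seeing.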
public void BMPtoNV12(byte[] yuv420sp, int[] argb, int width, int height)
{
    int frameSize = width * height;
    int yIndex = 0;
    int uvIndex = frameSize;
    uint a;
    int R, G, B, Y, U, V;
    int index = 0;
    for (int j = 0; j < height; j++)
    {
        for (int i = 0; i < width; i++)
        {
            a = (uint)argb[index] >> 24; // alpha is not used
            R = (argb[index] & 0xff0000) >> 16;
            G = (argb[index] & 0xff00) >> 8;
            B = (argb[index] & 0xff) >> 0;

            // well known RGB to YUV algorithm
            Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
            U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
            V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;

            // NV12: a full Y plane, then a single plane of interleaved U and V values,
            // with each U/V pair shared by a 2x2 group of pixels (see the layout above).
            yuv420sp[yIndex++] = (byte)((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
            if (j % 2 == 0 && i % 2 == 0)
            {
                yuv420sp[uvIndex++] = (byte)((U < 0) ? 0 : ((U > 255) ? 255 : U));
                yuv420sp[uvIndex++] = (byte)((V < 0) ? 0 : ((V > 255) ? 255 : V));
            }
            index++;
        }
    }
}
Converted the YV12 approach to NV12.

Even though I modified the method to produce NV12 from a BMP array, no joy. And this after much tinkering.
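One sanity check worth running on a conversion like this (sketched here in Python, using the same integer constants as the two methods above) is a round trip: encode an RGB pixel to YUV, decode it back, and confirm the colour survives within rounding error.

```python
def rgb_to_yuv(r, g, b):
    # Same BT.601-style integer approximation as the encode method above.
    y = ((66 * r + 129 * g + 25 * b + 128) >> 8) + 16
    u = ((-38 * r - 74 * g + 112 * b + 128) >> 8) + 128
    v = ((112 * r - 94 * g - 18 * b + 128) >> 8) + 128
    return y, u, v

def yuv_to_rgb(y, u, v):
    # Same constants as the NV12-to-bitmap decode method above.
    c, d, e = y - 16, u - 128, v - 128
    clamp = lambda x: max(0, min(255, x))
    r = clamp((298 * c + 409 * e + 128) >> 8)
    g = clamp((298 * c - 100 * d - 208 * e + 128) >> 8)
    b = clamp((298 * c + 516 * d + 128) >> 8)
    return r, g, b

print(yuv_to_rgb(*rgb_to_yuv(200, 100, 50)))  # → (200, 101, 50): within rounding error
```

A round trip that’s off by more than a couple of units per channel usually means a sign, constant, or plane offset is wrong, which, given my blue hues, was exactly the class of bug I was hunting.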

Eventually, I even tried using the OpenCV project, but that just led to green splotches all over.

Thus, I’m stuck. I still love the idea, but I’ve poured way too many hours into the experiment at this stage. I’m looking forward to Microsoft’s Build this week. Maybe I’ll find some helpful soul to set me on the straight and narrow.

Advocacy reference counting

A quick take on a hot mess

I saw that response from Dona Sarkar, whom I follow on Twitter. Having followed her for a while, I could have guessed her response from what she shares. Dona leads advocacy on Microsoft’s Power Platform, while also running her own fashion house – PrimaDonaStudios.

My own first response was, “wow, talk about a horrible take”.

The thread that grew out of Jack Forge’s post refused to quietly exit my mind. It wasn’t a massive controversy or anything, but there was something more.

Then I remembered the 99 Percent Invisible podcast had a series of episodes looking at the history of design in fashion, clothing and textile. And in the very first episode, they identified the relationship between garment construction and engineering.

That’s a tweet I shared about it sometime ago.

That first episode reveals that punch cards, among the earliest storage media for computing, were used for (get this) design patterns in making clothes.

A snippet from 99Pi’s “Articles of Interest”, episode 1.

I remember driving to the office listening to that episode and doing everything I could to not pull over and call my wife (she’s a costume designer) to say, “AYE!” for no reason at all.

So, when Jack came online to forge a post that revealed ignorance about the history of Jacquard Looms, I felt I had to help untangle the truth.

Fashion and code share a history so closely that even if you don’t personally care about what you wear, their relationship cannot be ignored. How those articles of clothing are made, and why they look and feel the way they do, is precisely why one might even say fashion is a form of output, written in a programming language used by designers around the world.

One more snippet. Programming owes a debt to the fashion industry. We shouldn’t forget it.

Advocacy CUI

#MSBuild 2021: Teams Table Talk

I saw this tweet 👆🏾 and thought, I should send a topic.

Since I’ve recently been building bots and extensions in Teams, I focused my topic on just that – extending Teams. I hadn’t heard about table talks before, but Microsoft had made them possible at a few conferences ahead of Build.

My topic was accepted, and along with Erik Kleefeldt, I’ll be hosting a table talk on “Extending the Microsoft Teams Experience”.

Erik and I have met a few times and we’re excited to share the experience. Table Talks are meant to be like those hallway conversations you might have on your way to a session about topics you dig. They should be welcoming, open and good-natured, really.

This should be fun!

Extending the Microsoft Teams Experience – May 26, 2021 9:30 AM AST (6:30 AM PT).
“Build the next generation of productivity experiences for hybrid work”
RSVP early to join.



When I posted that video on IG, I knew I wanted to come back to it. My mechanics for capturing that story were a bit precarious: I propped my phone up on a rock and hoped that, in the video being captured, a bird or two would feature before I ran out of space.

So then I saw Alex Ellis’ tweet about using an RPi to track plant growth and I remembered I had a Raspberry Pi just lying there, waiting to get used.

Thus, my mind went into overdrive. I started to focus on the hardest part of the mini-project: bird detection using Python or TensorFlow on the Raspberry Pi. I hadn’t even turned the thing on yet. No OS installed. I didn’t even know if those super cheap sports cameras I had lying around would work.

I just mentally swam around in the deep setup, maybe even going to get some OpenCV involved.

Eventually, I calmed down. And began the pedestrian work of setting up the Pi, finding a working camera and getting the networking right.

When I had everything put together, I cracked my knuckles to dive into deep learning. Before I did, though, I thought I’d explain to my wife what I was going to do:

  1. Point the RPi at the birds
  2. Write a script to stream the camera’s output
  3. Find a machine learning model to take the video and detect the birds
  4. Send detected birds somewhere

“Why not use a motion sensor?”, my wife queried.

With maybe literally the first result on Google, I found this article that walks you through, in very clear steps, how to set up a motion sensor using the camera on your Raspberry Pi.

I was getting emails and videos of birds in half an hour.
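The core trick behind that kind of camera motion sensor is frame differencing: compare consecutive frames and fire when enough pixels change. A dependency-free sketch of the idea (the real setup reads frames off the Pi camera, not little lists of numbers):

```python
def motion_detected(prev_frame, curr_frame, pixel_threshold=25, changed_ratio=0.01):
    """Flag motion when enough pixels differ noticeably between two
    grayscale frames, given as flat lists of 0-255 values."""
    changed = sum(
        1 for p, c in zip(prev_frame, curr_frame) if abs(p - c) > pixel_threshold
    )
    return changed / len(prev_frame) >= changed_ratio

still = [128] * 100               # a calm scene: nothing moves
bird = [128] * 95 + [250] * 5     # a bright blob enters 5% of the frame
print(motion_detected(still, still))  # False
print(motion_detected(still, bird))   # True
```

No model, no training, no OpenCV. Which is exactly why my wife’s question cut my four-step plan down to half an hour of work.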


In the forest, the poui falls different

I leaned on a tree after my seventh rope, it felt like. I was good and proper bun.
Then the neighbourhood watch appeared. A beetle here, some ants that clearly grew up on cassava and two wasps. I had to get up.
I put my hand on a tree to stabilize and a member of the bush welcoming committee bit me. So now, I’m cross, angry and bun out.
I didn’t even see the offending animal afterwards. It’s like he was saying, “Get a move on, interloper”.
To the understanding, this thing was the hardest hard. The thing is, I was already 5.7 miles in. You can’t go back, and you have zero inclination to go forward. If I saw some teak, I might have built a house in the forest.
But one foot in front the other, one hand over the next and eventually I made it to a beautiful flower.

While doing what one does when it’s no longer a race and more of a series of questions about your life choices, an old man deftly, wordlessly ran past me.
In the normal world, a runner such as me would treat that as an invitation to share some linx out. But in the forest, the poui falls different. I just kept taking my photos and returned to more rope.
(Yes, Irwin, you do need those gloves)
I made it out. Barely.