In media res

Taking a break from an experiment about streams in Teams

I’ve been playing with an idea in Microsoft Teams for a few months now. It hasn’t yet borne fruit, but I’ve decided it’s time to move on, and maybe revisit it in a few more months with fresh eyes.

This sort of breaks with how I’ve approached writing posts on here. I like each post about a creative activity to represent a single finished work. I can refine things and come back, but each should say, “I did this and here’s the result”. This time, though, I’m writing from the middle of things, largely as an exercise in discipline, but also as a clear record for myself of what I’ve done.

So what was it?

Scott Hanselman had this post on Twitter about making credits appear at or near the end of a Teams meeting. From the moment he put it up, I was pretty sure of how he might have done it.

The genesis of my idea for a Microsoft Teams End Credits bot came from this.

I think, apart from the fact that we all became video presenters last year, we also became much more familiar with OBS. And the approach he used could be done with OBS, which he described here.

In fact, I think I saw an earlier tweet from him about a dude doing essentially a transparent board overlay with OBS, too. OBS, to me, feels easy to use, but when you put everything together, it can feel like a bit of work. You have to set up your scenes, fill them with the right layers, and hang everything together just so.

So, not hard, but somewhat involved. Since I’d been experimenting with the audio and video streams of Teams calls, I could see how a similar thing could possibly be done in Teams directly. Which would let me yell, “Look ma! No OBS!”, while achieving the same functionality.

Quite a few of these experiments begin with messing around with a sample here and there. Call it SDD – Sample Driven Development. I picked up where the HueBot Teams sample left off. It’s one that lets you create a bot that grabs a speaker’s video stream in a call and overlays it with a given hue – red, green or blue. I’d gotten that to work. And the last time I played with that sample, I was able to send music down into a Teams meeting using a set of requests to a chatbot.
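For context, the heart of that sample is a handler hanging off the bot’s video socket that fires for every frame a participant sends. Here’s a minimal sketch of that shape, written from memory of the sample – treat the type and event names (IVideoSocket, VideoMediaReceived, the buffer properties) as approximate rather than gospel:

using System.Runtime.InteropServices;
using Microsoft.Skype.Bots.Media;

public class FrameGrabber
{
    public void Subscribe(IVideoSocket videoSocket)
    {
        // Fires once for every video frame the subscribed participant sends.
        videoSocket.VideoMediaReceived += this.OnVideoMediaReceived;
    }

    private void OnVideoMediaReceived(object sender, VideoMediaReceivedEventArgs e)
    {
        var format = e.Buffer.VideoFormat; // width, height and frame rate of this frame

        // Copy the raw NV12 bytes out of unmanaged memory:
        // the Y plane first, then the interleaved UV plane.
        var nv12 = new byte[e.Buffer.Length];
        Marshal.Copy(e.Buffer.Data, nv12, 0, (int)e.Buffer.Length);

        // ... tint it, draw on it, push it back down a send socket ...

        // The SDK expects each buffer to be released once you're done with it.
        e.Buffer.Dispose();
    }
}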

Now, I wanted to overlay onto a given video stream the same credits info that I saw in Scott’s OBS trick.

I am currently still in that rabbit hole.

Yes, I was able to access the video stream based on the sample. I even got to the point of overlaying text. But pushing that back down to the call for everyone? Various shades of failure.
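Overlaying the text was actually the easy part. Once a frame is decoded to a Bitmap, GDI+ will happily draw credits over it – a minimal sketch, with the font and layout choices purely illustrative:

using System.Drawing;

// Draw one line of credits over a decoded frame. 'offsetY' would shrink
// a little every frame to make the text crawl upward.
public static void DrawCredits(Bitmap frame, string text, float offsetY)
{
    using (var g = Graphics.FromImage(frame))
    using (var font = new Font("Segoe UI", 28, FontStyle.Bold))
    using (var brush = new SolidBrush(Color.White))
    {
        var size = g.MeasureString(text, font);
        var x = (frame.Width - size.Width) / 2; // centered horizontally
        g.DrawString(text, font, brush, x, offsetY);
    }
}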

The first image comes essentially straight out of the sample, which offered guidance on extracting a bitmap image from the video stream (otherwise formatted as NV12). The other images in the carousel are what appeared in Teams, with various degrees of resizing but always with a blue hue.

/// <summary>
/// Transform NV12 to a bmp image so we can view what it looks like. Note it's not a general-purpose NV12-to-RGB conversion.
/// </summary>
/// <param name="data">NV12 sample data.</param>
/// <param name="width">Image width.</param>
/// <param name="height">Image height.</param>
/// <param name="logger">Log instance.</param>
/// <returns>The <see cref="Bitmap"/>.</returns>
/// <remarks>Needs System.Diagnostics, System.Drawing, System.Drawing.Imaging and System.Runtime.InteropServices; Clamp is an int extension from the sample's utilities.</remarks>
public static Bitmap TransformNv12ToBmpFaster(byte[] data, int width, int height, IGraphLogger logger)
{
    Stopwatch watch = new Stopwatch();
    watch.Start();
    var bmp = new Bitmap(width, height, PixelFormat.Format32bppPArgb);
    var bmpData = bmp.LockBits(
        new Rectangle(0, 0, bmp.Width, bmp.Height),
        ImageLockMode.ReadWrite,
        PixelFormat.Format32bppRgb);
    var uvStart = width * height; // the interleaved UV plane starts right after the Y plane
    for (var y = 0; y < height; y++)
    {
        var pos = y * width;
        var posInBmp = y * bmpData.Stride;
        for (var x = 0; x < width; x++)
        {
            // Each UV pair covers a 2x2 block of Y samples.
            var vIndex = uvStart + ((y >> 1) * width) + (x & ~1);
            //// https://msdn.microsoft.com/en-us/library/windows/desktop/dd206750(v=vs.85).aspx
            //// https://en.wikipedia.org/wiki/YUV
            var c = data[pos] - 16;
            var d = data[vIndex] - 128;
            var e = data[vIndex + 1] - 128;
            c = c < 0 ? 0 : c;
            var r = ((298 * c) + (409 * e) + 128) >> 8;
            var g = ((298 * c) - (100 * d) - (208 * e) + 128) >> 8;
            var b = ((298 * c) + (516 * d) + 128) >> 8;
            r = r.Clamp(0, 255);
            g = g.Clamp(0, 255);
            b = b.Clamp(0, 255);
            Marshal.WriteInt32(bmpData.Scan0, posInBmp + (x << 2), (b << 0) | (g << 8) | (r << 16) | (0xFF << 24));
            pos++;
        }
    }
    bmp.UnlockBits(bmpData);
    watch.Stop();
    logger.Info($"Took {watch.ElapsedMilliseconds} ms to lock and unlock");
    return bmp;
}
This code essentially does the transformation.
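To make the buffer arithmetic concrete (and to show what uvStart is doing), here is the sizing for a 1080p frame:

// NV12 sizing for a 1920x1080 frame.
int width = 1920, height = 1080;
int ySize = width * height;  // 2,073,600 bytes of luma
int uvSize = ySize / 2;      // 1,036,800 bytes of interleaved chroma
int total = ySize + uvSize;  // 3,110,400 bytes, i.e. width * height * 3 / 2
// 'uvStart' in the method above is exactly ySize.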

I’m currently stuck with that blue hue. 😕.

So, since the sample only had a one-way transformation of NV12 to bitmap, and I had no experience with that, I spelunked around the web for a solution. Normally that means some drive-by StackOverflowing for a whole method, but that only got me as far as those blue hues.

Literally, the method I got from Stack Overflow let me convert a BMP to some kind of NV12, but not something that Teams quite liked.

private byte[] getYV12(int inputWidth, int inputHeight, Bitmap scaled) {
    int[] argb = new int[inputWidth * inputHeight];
    scaled.getPixels(argb, 0, inputWidth, 0, 0, inputWidth, inputHeight);
    byte[] yuv = new byte[inputWidth * inputHeight * 3 / 2];
    encodeYV12(yuv, argb, inputWidth, inputHeight);
    scaled.recycle();
    return yuv;
}

private void encodeYV12(byte[] yuv420sp, int[] argb, int width, int height) {
    final int frameSize = width * height;
    int yIndex = 0;
    int uIndex = frameSize;
    int vIndex = frameSize + (frameSize / 4);
    int a, R, G, B, Y, U, V;
    int index = 0;
    for (int j = 0; j < height; j++) {
        for (int i = 0; i < width; i++) {
            a = (argb[index] & 0xff000000) >> 24; // a is not used obviously
            R = (argb[index] & 0xff0000) >> 16;
            G = (argb[index] & 0xff00) >> 8;
            B = (argb[index] & 0xff) >> 0;
            // well known RGB to YUV algorithm
            Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
            U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
            V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;
            // YV12 has a plane of Y and two chroma (U, V) planes, each subsampled
            // by a factor of 2, meaning for every 4 Y pixels there are 1 V and 1 U.
            // Note the sampling is every other pixel AND every other scanline.
            yuv420sp[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
            if (j % 2 == 0 && index % 2 == 0) {
                // In YV12 the V plane comes before the U plane, which is why V
                // lands at 'uIndex' (the first chroma plane) and U at 'vIndex'.
                yuv420sp[uIndex++] = (byte) ((V < 0) ? 0 : ((V > 255) ? 255 : V));
                yuv420sp[vIndex++] = (byte) ((U < 0) ? 0 : ((U > 255) ? 255 : U));
            }
            index++;
        }
    }
}
I converted this Java method to C#.

Part of the conversion meant reading up on YUV. The Java method produced YV12, while Teams needed the stream to be NV12. Here’s how NV12 is laid out:

NV12

Related to I420, NV12 has one luma “luminance” plane Y and one plane with U and V values interleaved.

In NV12, chroma planes (blue and red) are subsampled in both the horizontal and vertical dimensions by a factor of 2.

For a 2×2 group of pixels, you have 4 Y samples and 1 U and 1 V sample.

It can be helpful to think of NV12 as I420 with the U and V planes interleaved.

Here is a graphical representation of NV12. Each letter represents one bit:

For 1 NV12 pixel: YYYYYYYY UVUV

For a 2-pixel NV12 frame: YYYYYYYYYYYYYYYY UVUVUVUV

For a 50-pixel NV12 frame: Y×8×50 (UV)×2×50

For a n-pixel NV12 frame: Y×8×n (UV)×2×n

FROM: the VideoLAN wiki, YUV#NV12
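Seen from the code’s side, the practical difference is where the chroma bytes land: YV12 stores a full V plane followed by a full U plane after the luma, while NV12 interleaves U and V byte by byte. A minimal sketch of repacking one into the other (my own illustration, not from the sample):

using System;

// Repack YV12 (Y plane, V plane, U plane) into NV12 (Y plane, then
// interleaved U/V pairs). Same byte count, different chroma layout.
public static byte[] Yv12ToNv12(byte[] yv12, int width, int height)
{
    int ySize = width * height;
    int chromaSize = ySize / 4; // each chroma plane is quarter resolution
    var nv12 = new byte[ySize + (2 * chromaSize)];

    Array.Copy(yv12, nv12, ySize); // luma is identical in both layouts

    int vStart = ySize;              // YV12 puts the V plane first...
    int uStart = ySize + chromaSize; // ...then the U plane
    for (int i = 0; i < chromaSize; i++)
    {
        nv12[ySize + (2 * i)] = yv12[uStart + i];     // NV12 pairs are U first,
        nv12[ySize + (2 * i) + 1] = yv12[vStart + i]; // then V
    }

    return nv12;
}

With that layout in mind, I reworked the Java method to write the interleaved UV plane directly: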
public void BMPtoNV12(byte[] yuv420sp, byte[] argb, int width, int height)
{
    int frameSize = width * height;
    int yIndex = 0;
    int uvIndex = frameSize;
    int R, G, B, Y, U, V;
    int index = 0;
    for (int j = 0; j < height; j++)
    {
        for (int i = 0; i < width; i++)
        {
            // Unlike the Java original, the input here is a byte array, so each
            // pixel is four bytes in GDI+ 32bpp order: B, G, R, A (this assumes
            // tightly packed rows, i.e. stride == width * 4).
            int p = index * 4;
            B = argb[p];
            G = argb[p + 1];
            R = argb[p + 2];
            // argb[p + 3] is alpha, which NV12 has no use for.

            // well known RGB to YUV algorithm
            Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
            U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
            V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;

            // NV12: full-resolution Y plane, then a single plane of interleaved
            // U/V pairs subsampled 2x2 (see the VideoLAN description above).
            yuv420sp[yIndex++] = (byte)((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
            if (j % 2 == 0 && i % 2 == 0)
            {
                yuv420sp[uvIndex++] = (byte)((U < 0) ? 0 : ((U > 255) ? 255 : U));
                yuv420sp[uvIndex++] = (byte)((V < 0) ? 0 : ((V > 255) ? 255 : V));
            }
            index++;
        }
    }
}
Converted the YV12 approach to NV12.

Even though I modified the method to produce NV12 from a BMP array, no joy. And this after much tinkering.

Eventually, I even tried using the OpenCV project, but that just led to green splotches all over.
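For the record, that attempt went roughly along these lines, sketched here with OpenCvSharp. As far as I could tell, OpenCV has no direct BGRA-to-NV12 conversion code, so this goes BGRA to planar I420 and then interleaves the chroma by hand – plenty of room for exactly the sort of splotches I was seeing:

using System;
using System.Runtime.InteropServices;
using OpenCvSharp;

public static class Nv12Converter
{
    // Sketch: 32bpp BGRA frame bytes -> NV12, via OpenCvSharp.
    public static byte[] BgraToNv12(byte[] bgra, int width, int height)
    {
        using (var src = new Mat(height, width, MatType.CV_8UC4))
        using (var i420 = new Mat())
        {
            Marshal.Copy(bgra, 0, src.Data, bgra.Length);
            Cv2.CvtColor(src, i420, ColorConversionCodes.BGRA2YUV_I420);

            // I420 layout: Y plane, then U plane, then V plane.
            int ySize = width * height;
            int chromaSize = ySize / 4;
            var planar = new byte[ySize + (2 * chromaSize)];
            Marshal.Copy(i420.Data, planar, 0, planar.Length);

            // NV12 keeps the Y plane and interleaves the two chroma planes.
            var nv12 = new byte[planar.Length];
            Array.Copy(planar, nv12, ySize);
            for (int i = 0; i < chromaSize; i++)
            {
                nv12[ySize + (2 * i)] = planar[ySize + i];                  // U
                nv12[ySize + (2 * i) + 1] = planar[ySize + chromaSize + i]; // V
            }

            return nv12;
        }
    }
}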

Thus, I’m stuck. I still love the idea, but I’ve poured way too many hours into the experiment at this stage. I’m looking forward to Microsoft’s Build this week. Maybe I’ll find some helpful soul to set me on the straight and narrow.
