Hey folks, welcome to the digest!
Since I started doing interview issues last year, it has not been lost on me that a newsletter is a low-fidelity medium for these conversations, so I'm pleased to say that the Browsertech Podcast is now live! Going forward, interviews that appear here will be published in extended form on the podcast.
Episode 1 is a conversation I had with Luke McGartland of Sequence.film. Luke and team are building cinema-quality film editing in the browser. If you haven't seen their trailer video, do check it out.
The whole conversation is available on the podcast. Here are some excerpts, which have been formatted to fit your screen.
Paul: I'm curious about the pixel streaming side. To kind of start with, is everything in the browser window pixel streamed, or are you doing it on a component basis?
Luke: What I didn't want to do was make the whole UI pixel streamed because, 1: then you have less bandwidth available for video quality. And 2: you want a really, really low-latency user interface. Most of the user interface for a video editor is just a bunch of rectangles and text anyway. That's really very easy to draw locally.
I joke that this is the hardest way to build a video editor, just because you have to build a high-performance application in the browser, a high-performance rendering engine on the server, marry those two things together, make sure they don't get out of sync, and add multiplayer on top of all that.
It's a lot of different moving pieces that all have to line up to get the experience working properly. But it was very important to me to figure out how to use pixel streaming as efficiently as possible. That's why the preview monitor is the only bit that's being rendered server-side.
Paul: I find it interesting that you took the WebGL escape hatch for the timeline where you could have done that with the DOM.
Luke: Yeah, it’s interesting. But even with that, one of the challenges we had with the timeline component was that we actually can't think of it as components.
If you think about the timeline, normally in React or Svelte or whatever, that would be a component. You'd have a little Clip component, and you’d pass them all your props. And then it would put in the thumbnails and the text and all the little audio waveforms.
One of the things we learned when we were figuring out how Pixi got the efficiency that it did for high performance drawing, is that Pixi relies on batching things. Basically, using the same shaders or the same textures over and over again, and just repeating those.
So you have to be really, really conscious of your draw order. This was interesting coming from component-driven development. We had to flip from thinking about things as components, to figuring out the draw order.
“First we're going to draw all the backgrounds, then we're going to do the text. And then we're going to do the sprite images.”
That was not initially obvious, coming from a world where I can encapsulate all this functionality in the view layer inside that little component and just reuse it everywhere.
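The shift Luke describes can be sketched in miniature. This isn't Sequence's actual code or the Pixi API, just an assumed toy model: each clip emits "draw commands" for its background, text, and sprite, and a stable sort by render pass groups identical shader/texture work together so a batching renderer like Pixi's can merge it into far fewer state switches.

```typescript
// Toy model of batching-aware draw ordering (hypothetical, not Pixi's API).
type Pass = "background" | "text" | "sprite";
const PASS_ORDER: Pass[] = ["background", "text", "sprite"];

interface DrawCommand {
  pass: Pass;    // which shader/texture family this draw belongs to
  clipId: number; // which timeline clip emitted it
}

// Component-style order: each clip draws its background, text, and sprite
// in turn, so the renderer switches state between nearly every command.
function componentOrder(clipCount: number): DrawCommand[] {
  const cmds: DrawCommand[] = [];
  for (let id = 0; id < clipCount; id++) {
    for (const pass of PASS_ORDER) cmds.push({ pass, clipId: id });
  }
  return cmds;
}

// Batched order: stable-sort by pass so all backgrounds draw first,
// then all text, then all sprites — the draw order Luke describes.
function batchedOrder(cmds: DrawCommand[]): DrawCommand[] {
  return [...cmds].sort(
    (a, b) => PASS_ORDER.indexOf(a.pass) - PASS_ORDER.indexOf(b.pass)
  );
}

// Count how often the renderer must break a batch to change state.
function stateSwitches(cmds: DrawCommand[]): number {
  let switches = 0;
  for (let i = 1; i < cmds.length; i++) {
    if (cmds[i].pass !== cmds[i - 1].pass) switches++;
  }
  return switches;
}
```

With three clips, the component order forces a switch between every one of its nine commands, while the batched order needs only two switches total, regardless of clip count.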