Welcome to the Browsertech Digest, issue #7.
Last week we had the pleasure of hosting Yining Shi at our second in-person browsertech event in NYC.
Yining is a founding engineer at Runway, maker of one of the original AI-based creativity tools, founded back in 2018. Runway is also known for collaborating with Stability AI to develop Stable Diffusion. Yining also teaches creative ML at NYU.
Yining first talked about some of the creative ML projects she and her students have built, then demoed some of the latest ML-backed features Runway has been building.
Here’s a recording of Yining’s talk.
One thing that stood out to me about Runway is that they have innovated in two directions: developing ML-backed content generation models, and making them work in the browser. These are not unrelated!
Building for the browser allows Runway to deliver ML models that require expensive GPUs and high-memory machines, without putting the burden of acquiring an expensive workstation PC on its users.
Today, a big driver pushing desktop applications into the browser is the low friction of collaboration (think: Google Sheets, Figma). We’re starting to see a new generation of apps like Runway target the browser, not just for the low friction of collaboration, but for the low friction of integrating specialized hardware, especially GPUs.
There’s a powerful economic force at play here. A desktop workstation that runs Final Cut Pro well will set you back an order of magnitude more than the software license of FCP itself. But each user only needs the hardware for short bursts of time. By amortizing the cost of that hardware across all of their users, browser-based software companies like Runway can compete on total price in a way that desktop software can’t.
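To make the amortization argument concrete, here’s a back-of-envelope sketch. All of the numbers are hypothetical placeholders (not figures from Runway or anyone else); the point is just the shape of the math: bursty per-user GPU demand makes renting shared hardware much cheaper than every user buying their own.

```python
# Back-of-envelope comparison: per-user workstation vs. shared, rented GPU time.
# All figures below are assumptions for illustration, not real prices.

WORKSTATION_COST = 6_000          # hypothetical price of a capable GPU workstation ($)
CLOUD_GPU_HOURLY = 2.00           # hypothetical hourly cost of a rented GPU ($/hr)
ACTIVE_GPU_HOURS_PER_MONTH = 10   # hypothetical burst usage per user per month
MONTHS = 36                       # amortization window (roughly a workstation's useful life)

# Option A: every user buys their own hardware up front.
per_user_desktop = WORKSTATION_COST

# Option B: a browser-based service rents GPUs and only pays for active use,
# spreading the hardware cost across its whole user base.
per_user_cloud = CLOUD_GPU_HOURLY * ACTIVE_GPU_HOURS_PER_MONTH * MONTHS

print(f"own hardware, per user:        ${per_user_desktop:,.0f}")
print(f"shared GPU time, per user:     ${per_user_cloud:,.0f} over {MONTHS} months")
```

With these made-up numbers the shared-GPU path comes out roughly an order of magnitude cheaper per user, which is the gap the argument above is pointing at.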
The new hybrid/remote reality of work is also a driving factor here: users can get the benefits of serious hardware without having to haul it around with them everywhere they want to work.
Until next time,
-- Paul