Welcome to issue #15 of the Browsertech Digest.
In a few months, WebGPU will be enabled by default in Chrome. This issue is about why that matters for games and machine learning.
Background: GPU compute on the web
To do any sort of high-performance graphics, you need to be able to send instructions directly to the computer’s graphics processing unit (GPU).
For over a decade, it’s been possible for web apps to do this, via WebGL. Although WebGL is mostly associated with 3D graphics, it’s also what enables 2D applications like Figma and most GPU-backed UIs.
GPU architecture has evolved a lot over the last decade, and the interface that WebGL provides is increasingly diverging from the way modern GPUs operate.
WebGPU has emerged as the successor to WebGL, but rather than being an incremental improvement (i.e. a “WebGL 3”), it’s a brand-new, modern graphics API.
WebGPU is slated to be enabled by default in Chrome 113, although it might slip to Chrome 114. That puts the stable release date at either April 26th or May 24th.
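Until WebGPU is on by default everywhere, pages will need to feature-detect it and fall back to WebGL. A minimal sketch of what that could look like (the pickGraphicsBackend helper is a hypothetical name of my own):

```javascript
// Hedged sketch: detect WebGPU support and fall back to WebGL.
// `nav` is any navigator-like object; in a page you would pass `navigator`.
function pickGraphicsBackend(nav) {
  return nav && "gpu" in nav ? "webgpu" : "webgl";
}

// In a browser, the WebGPU path would then start with:
//   const adapter = await navigator.gpu.requestAdapter();
//   const device = await adapter.requestDevice();
```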
Steaming up game distribution
Most games these days are distributed on Steam, which takes a 20-30% cut on all games sold.
Google Chrome alone has an order of magnitude more active users than Steam. I’m very interested in seeing whether game developers seize on the opportunity to sidestep Steam and deliver games directly to users in the browser.
Before that happens, games will first have to be written for WebGPU. There’s a case even for native game engines to target WebGPU: if you want to build on a modern graphics API without locking yourself into a specific platform’s graphics stack, WebGPU is a compelling way to do it.
The browser is also becoming a more capable game platform thanks to the nascent Origin Private File System API for caching large assets, and WebTransport (built on QUIC), which brings UDP-style datagram networking to the browser. (UDP is the network transport of choice for multiplayer games.)
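As a sketch of what asset caching with the Origin Private File System could look like (the OPFS calls are browser-only, so they appear as comments; cacheKeyFor and ASSET_URL are hypothetical names for illustration):

```javascript
// Hedged sketch: cache a large game asset in the Origin Private File System.
// cacheKeyFor and ASSET_URL are hypothetical names, not a real API.
function cacheKeyFor(url) {
  // Derive a filename-safe key from the asset URL.
  return url.replace(/[^a-zA-Z0-9.-]/g, "_");
}

// In a browser:
//   const root = await navigator.storage.getDirectory();
//   const file = await root.getFileHandle(cacheKeyFor(ASSET_URL), { create: true });
//   const writable = await file.createWritable();
//   await (await fetch(ASSET_URL)).body.pipeTo(writable);
```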
A new framework called Ambient was recently announced, which combines WebGPU, WebAssembly, and QUIC to create games that run both natively and (eventually) on the web.
Machine learning in the browser
Rendering graphics tends to involve highly parallel workloads. So does neural network inference.
It’s possible to squeeze general-purpose compute out of WebGL by shoehorning it into a graphics computation, but it’s not ideal.
WebGPU provides a way to use the GPU for compute directly. It’s already a supported backend for TensorFlow.js, so applications that use tfjs will be able to start taking advantage of WebGPU when running on a browser that has it.
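For a flavor of what direct compute access looks like, here is a minimal sketch: a WGSL compute shader that doubles every element of a storage buffer. The dispatch side is browser-only, so it’s shown in comments, and the workgroup size of 64 is an arbitrary choice.

```javascript
// Hedged sketch: a WGSL compute shader that doubles each element of a
// storage buffer. The workgroup size of 64 is an arbitrary choice.
const doubleShader = `
  @group(0) @binding(0) var<storage, read_write> data: array<f32>;

  @compute @workgroup_size(64)
  fn main(@builtin(global_invocation_id) id: vec3<u32>) {
    data[id.x] = data[id.x] * 2.0;
  }
`;

// In a browser with WebGPU enabled, dispatching it looks roughly like:
//   const adapter = await navigator.gpu.requestAdapter();
//   const device = await adapter.requestDevice();
//   const module = device.createShaderModule({ code: doubleShader });
//   ...then create a storage buffer, bind group, and compute pipeline,
//   and call pass.dispatchWorkgroups(Math.ceil(n / 64)).
```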
For comparison, running Chrome Canary on my M1 MacBook, WebGPU benchmarks are about 2-3x faster than their WebGL counterparts.
(This benchmark is based on Google’s person segmentation model, which you can run in your browser.)
WebGPU doesn’t automatically mean faster graphics. If an application is already bottlenecked by GPU compute itself, there’s not much WebGPU can do about that: its shader language, WGSL, is mostly 1:1 equivalent to WebGL’s GLSL, and it ultimately runs on the same chip.
In particular, shader-heavy graphics like Shadertoy won’t benefit much. Likewise, scenes with a large number of vertices but not many state changes will not benefit from WebGPU.
By contrast, games and 3D applications that rely on frequent GPU state transitions (e.g. switching textures between draw calls) do stand to benefit from WebGPU.
My co-founder Taylor implemented our LiDAR point cloud visualization for both WebGL and WebGPU. Here’s what he had to say about the differences in that context:
I’ll be giving a talk called “Multiplayer doesn’t have to be hard” at Real World React in NYC in a few weeks. If you can’t make it in person, it will be livestreamed (details TBA).
Speaking of 3D, Flux has opened up to the public. Here’s a link that opens an example PCB directly in their three.js-based editor.
Until next time,