Welcome to issue #12 of the Browsertech Digest. Today’s issue is about the trend of applications rendering their entire UI directly on the GPU.
Traditionally, if you wanted to write desktop software, you would use the operating system’s APIs to create a user interface. That’s why Windows apps looked like Windows apps, and Mac apps looked like Mac apps.
This also meant that turning a Windows app into a Mac app was an expensive and time-consuming process. As browser rendering engines became more powerful, developers realized they could serve as the foundation of a cross-platform UI framework, bypassing much (though not all) of the work of maintaining a cross-platform app.
The modern way to build a desktop application is to build a web application, bundle it together with a specific version of the Chromium browser, and ship it. I’m simplifying a bit, but that’s essentially the technique used by apps like Spotify, Visual Studio Code, and Figma.
This is why (for example) Slack doesn’t look like a native app on either Mac or Windows.
Not everybody loves this. Chromium is big. It feels pretty wasteful to ship an entire browser with every desktop application. Browsers are notorious memory hogs. It seems like a lot of overhead just to have a cross-platform way to, essentially, draw boxes with text in them.
One alternative to shipping a web browser with your application is to hook into a micro browser provided by the platform. Modern operating systems generally provide a way to embed a stripped-down WebKit view into a native application. This is what Tauri does, for example, to produce programs that are much smaller than their Electron equivalents.
Another approach is to render the UI directly on a cross-platform GPU API, usually OpenGL.
Since OpenGL is cross-platform, if you write the UI of your app to target OpenGL, you can run it on any system that supports OpenGL. And since WebGL is just a flavor of OpenGL, the browser is one of those systems.
The downside is that OpenGL is basically just an API for drawing triangles very fast. Unlike the UI toolkits provided by the operating system, which let you say “draw a button with the label ‘delete’”, OpenGL requires you to describe how to build that button out of triangles.
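To make that concrete, here is a minimal sketch (in Python, with hypothetical coordinates and a made-up helper name) of the very first step a GPU-rendered UI has to do before OpenGL can draw anything button-shaped: decompose the button’s rectangle into two triangles and flatten them into a vertex buffer.

```python
def rect_to_triangles(x, y, w, h):
    """Decompose an axis-aligned rectangle into the two triangles
    a GPU API like OpenGL actually draws.

    Returns a flat vertex list: 6 vertices x 2 coordinates each,
    in the order a GL_TRIANGLES draw call would consume them.
    """
    x0, y0, x1, y1 = x, y, x + w, y + h
    return [
        x0, y0,  x1, y0,  x1, y1,   # triangle 1: bottom-left, bottom-right, top-right
        x0, y0,  x1, y1,  x0, y1,   # triangle 2: bottom-left, top-right, top-left
    ]

# A hypothetical 80x24 "delete" button at (10, 10): the GPU never sees
# a "button", only these 12 numbers -- the border, hover state, and the
# glyphs of the label each need their own geometry on top of this.
vertices = rect_to_triangles(10, 10, 80, 24)
print(len(vertices))  # 12 floats = 6 vertices = 2 triangles
```

And that is only the geometry: a real framework still has to rasterize the label’s glyphs, hit-test clicks against the rectangle, and re-emit all of it every frame, which is the “lot of work” referred to below.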
It takes a lot of work to make a usable UI out of triangles, so most applications did not take this path. One big exception was games, which were already heavy GPU users and usually had minimal UIs. The 3D creation tool Blender is another example.
Lately, I’ve seen more apps choosing to target the GPU directly, even for not-particularly-graphical applications:
- The Zed editor team, on why they abandoned their original plan:

  > We originally planned to use Electron as a convenient means of delivering a cross-platform GUI for Zed, while building the core of the application in Rust. But at every turn, we found that web technology was the bottleneck in achieving amazing performance. Finally, we decided to take full control and simply build a GPU-powered UI framework that met our needs.

- Rerun.io is building visualization tools for robotics. One of its founding members is the author of egui, the Rust GPU-rendered GUI framework they use.

- Raph Levien, in his retrospective on the Xi editor, wrote:

  > These days, I consider GPU acceleration to be essentially required for good GUI performance.

  Raph is now working on the GPU-backed Xilem GUI framework.

- Warp is building a GPU-rendered terminal:

  > Picking GPU acceleration has allowed us to be at way over 60fps on a 4K screen, with text and glyphs and rectangles.

  An early Warp prototype did use Electron, but the team decided it would not be able to provide the level of performance that users should expect from their terminal.
One drawback of GPU-rendered UIs today is accessibility. When you hook into an operating system’s UI library, or use a browser to render a UI, your users get the benefit of accessibility tools built into those platforms.
When you draw the UI on the GPU, the host just hands you a framebuffer to draw into, and has no awareness of the text and click surfaces you’re rendering into it. If the user has set a larger system text size, your app won’t get it. If the user uses a screen reader, they won’t be able to navigate your app.
These are not technically insurmountable problems, but they do mean that as GPU-rendered apps seek a larger audience, developers will have to spend time on a11y functionality that native and web apps get for free.
Until next time,
(PS. Thanks to Mish at Step CI on twitter for the tip on Warp.dev being GPU-backed)