Comment by varun_ch

9 hours ago

This feels like a lot of work for low reward; the technical/business infrastructure would be wild. And if anyone wants to offload their prompts to users' browsers, they might as well just use the Chrome API correctly. How many server-side prompts would realistically be useful to offload to a low-end model like this?

Plus, even if you really wanted to do that, WebGPU exists and has for a while, right?
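For context, "the Chrome API" here is the built-in Prompt API. A rough sketch of what offloading a prompt to the on-device model looks like (hedged: the global `LanguageModel` name and method shapes have shifted across Chrome versions, and the API is only present in Chromium builds with the built-in AI features enabled):

```javascript
// Hedged sketch of Chrome's built-in Prompt API; the exact surface may differ
// by Chrome version. Returns null when no on-device model is available, so the
// caller can fall back to a normal server round-trip.
async function summarizeLocally(text) {
  if (typeof LanguageModel === "undefined") {
    return null; // No built-in model here; fall back to the server.
  }
  const session = await LanguageModel.create();
  try {
    // Runs entirely on the user's machine against the bundled small model.
    return await session.prompt(`Summarize in one sentence: ${text}`);
  } finally {
    session.destroy(); // Free the model session when done.
  }
}
```

The feature-check-plus-fallback shape is the point: a site can't count on the model being there, which is part of why offloading server workloads to it is so awkward.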

> How many server-side prompts would realistically be useful to offload to a low-end model like this?

There are a lot of ways this API could go, e.g. more powerful models eventually, or perhaps integration with cloud models. For example, I could see Google trying to make Gemini the default model for users signed into Chrome.

  • I think we’ll get more powerful models when they become reasonable to run on regular people’s computers, in which case the compute costs would hopefully fall enough that people don’t need to resort to this kind of weird stuff.

    As for cloud models, that would be interesting, although I guess the fraud would then shift to spoofing whatever parameters (IP address? domain name? some Chrome install identifier?) are used for rate limiting, rather than actually using people's computers.

    Anyway, I'm sure that if it ends up being abused, they can throw a permissions dialog in front of it. They just need to figure out a way to make normal people understand it.

    •   > Just need to figure out a way to make normal people understand.

        Has that strategy ever actually worked?

  > This feels like a lot of work for low reward

Low per-device reward multiplied across a high user count (whether from large legitimate players or from botnets) has been the monetisation strategy of many online enterprises.