Comment by mattnewton

7 days ago

Lots of good answers mention the big things (money, scale, and expertise), but one thing I haven’t seen mentioned yet is that the transformer math is probably working against your use case. Batched compute on beefy hardware is currently much more efficient than serving one user’s small sequence at a time, because these models tend to be memory-bandwidth bound rather than compute bound: each decoding step has to stream the full set of weights from memory, and a batch of requests amortizes that cost across many users. The big providers have enough people querying at roughly the same time to make batching on that hardware worthwhile.
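
To put rough numbers on it, here’s a back-of-envelope sketch. All figures are illustrative assumptions, not measurements: a hypothetical 70B-parameter fp16 model on an accelerator with ~1 TB/s of memory bandwidth and ~300 TFLOP/s of compute, with KV-cache traffic ignored for simplicity:

```python
# Back-of-envelope: why batching helps memory-bound transformer decoding.
# All hardware/model numbers below are assumptions for illustration.

PARAMS = 70e9            # hypothetical model size (parameters)
BYTES_PER_PARAM = 2      # fp16 weights
MEM_BW = 1.0e12          # assumed HBM bandwidth, bytes/s (~1 TB/s)
PEAK_FLOPS = 3.0e14      # assumed dense fp16 throughput, FLOP/s

def tokens_per_second(batch_size: int) -> float:
    """Each decoding step streams the full weights from memory once,
    and every request in the batch reuses that same read. Compute is
    roughly 2 FLOPs per parameter per request (KV cache ignored)."""
    step_time_mem = (PARAMS * BYTES_PER_PARAM) / MEM_BW       # weight reads
    step_time_flops = (2 * PARAMS * batch_size) / PEAK_FLOPS  # matmuls
    step_time = max(step_time_mem, step_time_flops)           # the bottleneck
    return batch_size / step_time

for b in (1, 8, 64):
    print(f"batch {b:>3}: ~{tokens_per_second(b):,.0f} tokens/s")
```

Under these assumptions the weight-streaming time (~0.14 s per step) dominates until the batch gets quite large, so going from batch 1 to batch 64 raises throughput nearly 64x for roughly the same memory traffic. A single user at batch 1 leaves almost all of the compute idle, which is exactly the economics the comment is describing.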