stavros 17 days ago
OpenAI offers that, or at least used to. You can batch all your inference and get much lower prices.
airspresso 16 days ago
Still do. Great for workloads where it's okay to bundle a bunch of requests and wait some hours (up to 24h, usually done faster) for all of them to complete.
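For anyone curious what this looks like in practice, here's a minimal sketch of assembling a batch for OpenAI's Batch API. It uses the JSONL input format (one request per line with `custom_id`, `method`, `url`, `body` fields); the model name and prompts are illustrative assumptions, and the upload/submit calls are commented out since they need an API key and a network round trip.

```python
import json

# Illustrative prompts; in a real workload this would be thousands of requests.
prompts = [
    "Summarize quantum computing in one sentence.",
    "Translate 'good morning' to French.",
]

# Each line of the batch input file is one standalone request,
# per the Batch API's JSONL format.
requests = [
    {
        "custom_id": f"req-{i}",          # your own ID, echoed back in the results
        "method": "POST",
        "url": "/v1/chat/completions",
        "body": {
            "model": "gpt-4o-mini",       # assumed model name, for illustration
            "messages": [{"role": "user", "content": p}],
        },
    }
    for i, p in enumerate(prompts)
]

with open("batch_input.jsonl", "w") as f:
    for r in requests:
        f.write(json.dumps(r) + "\n")

# Uploading and submitting (sketched, not run here -- needs an API key):
# from openai import OpenAI
# client = OpenAI()
# batch_file = client.files.create(file=open("batch_input.jsonl", "rb"),
#                                  purpose="batch")
# batch = client.batches.create(input_file_id=batch_file.id,
#                               endpoint="/v1/chat/completions",
#                               completion_window="24h")  # the 24h window above
```

You then poll the batch status and download the output file once it completes, matching results back to your requests by `custom_id`.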