stavros 19 days ago
OpenAI offers that, or at least used to. You can batch all your inference and get much lower prices.
airspresso 18 days ago
Still do. Great for workloads where it's okay to bundle a bunch of requests and wait some hours (up to 24h, usually done faster) for all of them to complete.
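For anyone curious what this looks like in practice, here's a minimal sketch of preparing input for OpenAI's Batch API, which works the way described above: you bundle many requests into one JSONL file, submit it with a 24h completion window, and collect results later. The prompts and model name are hypothetical; the submission calls are shown as comments since they need an API key.

```python
import json

# Each line of the batch input file is one request in OpenAI's
# documented batch JSONL shape: custom_id, method, url, body.
requests = [
    {
        "custom_id": f"req-{i}",      # your own ID, echoed back in the results
        "method": "POST",
        "url": "/v1/chat/completions",
        "body": {
            "model": "gpt-4o-mini",   # hypothetical model choice
            "messages": [{"role": "user", "content": prompt}],
        },
    }
    for i, prompt in enumerate(["Summarize document A.", "Summarize document B."])
]

with open("batch_input.jsonl", "w") as f:
    for r in requests:
        f.write(json.dumps(r) + "\n")

# Submitting the batch (requires the `openai` package and an API key):
#   client = openai.OpenAI()
#   file = client.files.create(file=open("batch_input.jsonl", "rb"), purpose="batch")
#   batch = client.batches.create(
#       input_file_id=file.id,
#       endpoint="/v1/chat/completions",
#       completion_window="24h",  # the "up to 24h" window mentioned above
#   )
print(len(requests))
```

You then poll the batch status and download the output file once it completes, matching results back to your requests via `custom_id`.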