Comment by GaggiX

7 days ago

Huge batches to find the right balance between compute and memory bandwidth, quantized models, speculative decoding or similar techniques, MoE models, routing requests to smaller models when possible, and batch processing to fill the GPUs when demand is lower (or electricity is cheaper).
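
Of those, speculative decoding is maybe the least obvious. The idea is that a cheap draft model proposes a short run of tokens and the big target model verifies them all at once, so you pay the big model's cost once per run instead of once per token. A minimal toy sketch of the accept/reject loop (the "models" here are hypothetical stand-in functions, not real LLM forward passes, and `draft_len` is an illustrative parameter):

```python
# Toy stand-ins for a cheap "draft" model and an expensive "target" model.
# In a real system both would be LLM forward passes; here they are simple
# deterministic next-token functions over a tiny vocabulary.
VOCAB = ["a", "b", "c", "d"]

def target_model(context):
    # "Expensive" model: the output we want to reproduce exactly.
    return VOCAB[(len(context) * 2) % len(VOCAB)]

def draft_model(context):
    # "Cheap" model: agrees with the target most of the time.
    if len(context) % 5 == 0:
        return VOCAB[0]  # occasionally diverges from the target
    return VOCAB[(len(context) * 2) % len(VOCAB)]

def speculative_decode(context, num_tokens, draft_len=4):
    """Generate num_tokens tokens; the draft model proposes draft_len
    tokens at a time and the target model verifies them."""
    out = list(context)
    while len(out) - len(context) < num_tokens:
        # 1. Draft model cheaply proposes a short run of tokens.
        proposal = []
        for _ in range(draft_len):
            proposal.append(draft_model(out + proposal))
        # 2. Target model checks each proposed token; in a real system
        #    this is a single batched forward pass over all positions.
        accepted = 0
        for i, tok in enumerate(proposal):
            if target_model(out + proposal[:i]) == tok:
                accepted += 1
            else:
                break
        out.extend(proposal[:accepted])
        if accepted < draft_len:
            # 3. On the first mismatch, fall back to the target's token.
            out.append(target_model(out))
    return out[len(context):][:num_tokens]
```

The key property is that the output is token-for-token identical to greedy decoding with the target model alone; the draft model only changes how many target-model calls you need, not what comes out.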