BoorishBears (7 months ago):
I pay 78 cents an hour to host Llama.
beastman82 (7 months ago):
Vast? Specs?
BoorishBears (7 months ago):
Runpod, 2xA40. Not sure why you think buying an entire inference server is a necessity to run these models.
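For context, here is a minimal sketch of what serving a Llama model across two rented GPUs like this can look like. The thread doesn't say which serving stack or checkpoint was used; this assumes vLLM, and the model ID and sampling parameters are illustrative (an AWQ 4-bit 70B checkpoint is one plausible fit for 2x48 GB of A40 memory):

```python
# Illustrative sketch only -- the commenter's actual stack/model is not stated.
# tensor_parallel_size=2 shards the model weights across both A40s.
from vllm import LLM, SamplingParams

llm = LLM(
    model="hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4",  # hypothetical checkpoint
    quantization="awq",        # 4-bit weights so a 70B model fits in 2x48 GB
    tensor_parallel_size=2,    # split across the two rented GPUs
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Explain tensor parallelism in one paragraph."], params)
print(outputs[0].outputs[0].text)
```

The point of the sketch is that nothing here requires owning hardware: the same script runs unchanged on any rented multi-GPU instance.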