BoorishBears, 5 months ago:
I pay 78 cents an hour to host Llama.

beastman82, 5 months ago:
Vast? Specs?

BoorishBears, 5 months ago:
Runpod, 2xA40. Not sure why you think buying an entire inference server is a necessity to run these models.
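For anyone curious what "hosting Llama on 2xA40" might look like in practice, here is a minimal sketch using vLLM with tensor parallelism across both GPUs. The commenter doesn't say which inference stack they run; vLLM, the specific model checkpoint, and the AWQ quantization are assumptions chosen so a 70B-class model fits in 2x48 GB of A40 VRAM.

    from vllm import LLM, SamplingParams

    # Sketch only: the thread doesn't specify a serving stack.
    # The checkpoint and 4-bit AWQ quantization are assumptions so the
    # weights fit across 2 x 48 GB A40s.
    llm = LLM(
        model="TheBloke/Llama-2-70B-Chat-AWQ",  # hypothetical quantized checkpoint
        quantization="awq",
        tensor_parallel_size=2,  # shard the model across both A40s
    )

    params = SamplingParams(temperature=0.7, max_tokens=128)
    outputs = llm.generate(["Why rent GPUs instead of buying hardware?"], params)
    print(outputs[0].outputs[0].text)

On a rented instance like Runpod's, this runs as an ordinary Python process; no dedicated inference server hardware is required, which is the commenter's point.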