BoorishBears 16 days ago
I pay 78 cents an hour to host Llama.

beastman82 14 days ago
Vast? Specs?

BoorishBears 14 days ago
Runpod, 2xA40.
Not sure why you think buying an entire inference server is a necessity to run these models.
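For context, a minimal sketch of what serving a Llama model on a rented 2xA40 instance (2x48 GB) could look like, assuming vLLM as the inference engine; the thread doesn't say which serving stack BoorishBears uses, and the quantized checkpoint name is illustrative, picked only so the weights fit in 96 GB across both GPUs:

```python
# Sketch: serving Llama on a rented 2xA40 box (e.g. a Runpod pod).
# Assumptions not confirmed by the thread: vLLM as the engine, and an
# AWQ-quantized 70B checkpoint so the weights fit in 2x48 GB.
from vllm import LLM, SamplingParams

llm = LLM(
    model="hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4",  # illustrative
    tensor_parallel_size=2,        # shard the model across both A40s
    gpu_memory_utilization=0.90,   # leave headroom for the KV cache
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Why rent GPUs instead of buying a server?"], params)
print(outputs[0].outputs[0].text)
```

The point of the thread holds either way: at on-demand GPU rental prices, an hourly bill in the tens of cents replaces buying a dedicated inference server outright.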