BoorishBears · 1 year ago
I pay 78 cents an hour to host Llama.

beastman82 · 1 year ago
Vast? Specs?

BoorishBears · 1 year ago
Runpod, 2xA40.

Not sure why you think buying an entire inference server is a necessity to run these models.
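For context on the numbers: Runpod's A40s rented for roughly $0.39/GPU-hr around that time, so 2 × $0.39 ≈ $0.78/hr lines up with the quoted price. Below is a minimal sketch of what serving a Llama model across 2xA40 (2 × 48 GB VRAM) could look like. The commenter doesn't name a serving stack, so vLLM, the AWQ-quantized checkpoint, and every parameter here are assumptions, not their actual setup.

# Minimal sketch, assuming vLLM as the serving stack (not stated in the thread).
# 2xA40 gives 96 GB VRAM total: a 4-bit AWQ-quantized 70B model (~35-40 GB of
# weights) fits comfortably, while unquantized fp16 70B (~140 GB) would not.
from vllm import LLM, SamplingParams

llm = LLM(
    model="TheBloke/Llama-2-70B-chat-AWQ",  # assumed checkpoint, not the commenter's
    quantization="awq",                      # 4-bit weights so the model fits in 96 GB
    tensor_parallel_size=2,                  # shard the model across both A40s
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Why rent GPUs instead of buying a server?"], params)
print(outputs[0].outputs[0].text)

Tensor parallelism is the relevant design choice here: it splits each layer's weights across both GPUs so a model too large for one 48 GB card can still be served, at the cost of some inter-GPU communication per token.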