
Comment by sarthaksaxena

1 month ago

https://llmpm.co

I have built an npm for LLMs, which lets you install and run 10,000+ open-source large language models within seconds. The idea is to make models installable like packages in your code:

llmpm install llama3

llmpm run llama3

You can also package large language models together with your code so projects can reproduce the same setup easily.

Github: https://github.com/llmpm/llmpm-dev

Also, is there a way I can invoke the models, or an API that your tool exposes?

  • Yes indeed there is: run `llmpm serve <model_name>`, which exposes an API endpoint at http://localhost:8080/v1/chat/completions and also hosts a chat UI at https://localhost:8080/chat where you can interact with the locally running model.

    Follow the docs here: https://www.llmpm.co/docs

    Pro tip for your use case: check out the `llmpm serve` section
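
    A minimal client sketch for the served endpoint, assuming it speaks the usual OpenAI-style chat completions schema (the request/response shape below is an assumption, not confirmed by the docs quoted here):

    ```python
    import json
    import urllib.request

    # Hypothetical local endpoint started with `llmpm serve llama3`.
    ENDPOINT = "http://localhost:8080/v1/chat/completions"

    def build_payload(model: str, prompt: str) -> dict:
        """Build an OpenAI-style chat completion request body (assumed schema)."""
        return {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }

    def ask(model: str, prompt: str) -> str:
        """POST a prompt to the local server and return the reply text."""
        req = urllib.request.Request(
            ENDPOINT,
            data=json.dumps(build_payload(model, prompt)).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            body = json.load(resp)
        # Assumed OpenAI-compatible response shape.
        return body["choices"][0]["message"]["content"]
    ```

    With the server running you would call e.g. `ask("llama3", "Hello!")`; if the tool's actual schema differs, adjust `build_payload` to match the docs.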