
Comment by elliotec

8 days ago

Why does it need an Apple M-series chip? Any hope of running it on an Intel chip under Linux?

It uses MLX (https://github.com/ml-explore/mlx), Apple’s ML framework, for running LLMs.
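
For context, here is a minimal sketch of what MLX-based inference looks like through the mlx-lm package; the model name is just an example from the mlx-community Hugging Face organization, and the exact API may differ slightly between versions:

    # Minimal sketch of LLM inference with mlx-lm (pip install mlx-lm).
    # Requires an Apple Silicon Mac; the model name is an example.
    from mlx_lm import load, generate

    # Download (or load from cache) a quantized model
    model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")

    # Generate a completion on the Apple GPU via MLX
    text = generate(
        model,
        tokenizer,
        prompt="Explain MLX in one sentence.",
        max_tokens=100,
    )
    print(text)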

  • Why do people tend to hardcode things into their products?

    We have been talking about the AI revolution for several years already, and yet there is no IDE or VS Code plugin that supports multiple OpenAI-compatible endpoints. Some, like Cody, do not even support "private" LLMs other than an ollama endpoint on localhost. Cursor supports only one endpoint for OpenAI-compatible models.

    I made a custom version of ChatGPT.nvim so I could use the models I like (mostly by removing the hardcoded gpt-3), but I dropped it: maintaining and improving my own fork took time away from my actual job.

    I'd like to run several specialized models with a vLLM engine, serve them at different endpoints, and have an IDE route different tasks to these specialized LLMs (see the sketch at the end of this comment). Does anyone know a vim/neovim/vscode plugin that supports several OPENAI_API_HOST endpoints?

    For now, this is only possible with agent frameworks, but that's not really what I need.
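
    To make the setup concrete, here is a rough client-side sketch, assuming two vLLM OpenAI-compatible servers have already been started on ports 8001 and 8002; the ports, model names, and routing table are all illustrative assumptions, not a real plugin:

        # Sketch: routing tasks to two specialized models behind vLLM's
        # OpenAI-compatible servers. Ports, model names, and the routing
        # table are illustrative assumptions.
        from openai import OpenAI

        # One client per endpoint; vLLM accepts any API key unless one
        # is configured on the server.
        ENDPOINTS = {
            "code": OpenAI(base_url="http://localhost:8001/v1", api_key="unused"),
            "chat": OpenAI(base_url="http://localhost:8002/v1", api_key="unused"),
        }
        MODELS = {
            "code": "deepseek-coder-6.7b-instruct",  # hypothetical deployment names
            "chat": "mistral-7b-instruct",
        }

        def complete(task: str, prompt: str) -> str:
            """Send the prompt to the endpoint specialized for this task."""
            resp = ENDPOINTS[task].chat.completions.create(
                model=MODELS[task],
                messages=[{"role": "user", "content": prompt}],
            )
            return resp.choices[0].message.content

        print(complete("code", "Write a function that reverses a linked list."))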

Not OP, but it presumably uses an open LLM that won't run at a usable speed without faster hardware.