jedisct1 (17 days ago): Really cool. But how do I use it instead of Copilot in VSCode?
flanked-evergl (17 days ago): Would love to know myself. I recall there was a VSCode plugin that did next edits and accepted a custom model, but I don't recall which one now.
replete (17 days ago): Run a server with ollama, then use the Continue extension configured for ollama.
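For anyone trying replete's setup, a minimal sketch of a Continue config pointing both chat and tab autocomplete at an ollama-served model. This assumes Continue's config.json format and an illustrative model name (qwen2.5-coder:7b is an assumption; substitute whatever you pulled with ollama):

    {
      "models": [
        { "title": "Local coder", "provider": "ollama", "model": "qwen2.5-coder:7b" }
      ],
      "tabAutocompleteModel": {
        "title": "Local coder", "provider": "ollama", "model": "qwen2.5-coder:7b"
      }
    }

With ollama serving on its default port (11434), Continue's chat and completions then hit the local model instead of Copilot.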
BoredomIsFun (17 days ago): I'd stay away from ollama; just use llama.cpp. It's more up to date, better performing, and more flexible.
mika6996 (16 days ago): But you can't just switch between installed models like in ollama, can you?
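For context on the llama.cpp route: a minimal sketch, assuming you already have a GGUF model file on disk (the filename here is illustrative). llama.cpp's llama-server exposes an OpenAI-compatible HTTP endpoint that Continue can also be pointed at:

    # serve one model on an OpenAI-compatible endpoint
    llama-server -m ./qwen2.5-coder-7b-q4_k_m.gguf --port 8080

This is also where mika6996's point applies: llama-server has traditionally loaded the single model you pass it, so switching models means restarting the server with a different file, whereas ollama keeps a local catalog and loads models on demand (ollama run <name>) with a single command.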