jedisct1 4 hours ago
Really cool. But how do you use it instead of Copilot in VSCode?
replete 2 hours ago
Run a server with ollama and use the Continue extension configured for ollama.
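A minimal sketch of that setup: first the shell commands, then a fragment of Continue's JSON config (typically ~/.continue/config.json). The model name is only a placeholder, not necessarily the model from the post, and the config keys assume the JSON format Continue has used with an "ollama" provider; newer Continue releases may expect a YAML config instead, so check the extension's docs.

    # pull a local code model and make sure the ollama server is running
    ollama pull qwen2.5-coder:7b
    ollama serve

    {
      "models": [
        { "title": "Local coder", "provider": "ollama", "model": "qwen2.5-coder:7b" }
      ],
      "tabAutocompleteModel": {
        "title": "Local coder autocomplete",
        "provider": "ollama",
        "model": "qwen2.5-coder:7b"
      }
    }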
BoredomIsFun 1 hour ago
I'd stay away from ollama; just use llama.cpp. It is more up to date, better performing, and more flexible.
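A rough sketch of the llama.cpp route along the same lines. The GGUF path, port, and context size are placeholders, and the assumption is that Continue can reach llama.cpp's built-in server through its OpenAI-compatible endpoint via an "openai" provider entry with an "apiBase"; that entry would sit in the same "models" array as above.

    # llama.cpp's llama-server exposes an OpenAI-compatible API under /v1
    llama-server -m ./models/your-model.gguf --port 8080 -c 4096

    {
      "title": "Local coder (llama.cpp)",
      "provider": "openai",
      "model": "local",
      "apiBase": "http://localhost:8080/v1"
    }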
flanked-evergl 3 hours ago
Would love to know myself. I recall there was some VSCode plugin that did next edits and accepted a custom model, but I don't recall what it was now.