Comment by rcarmo
3 days ago
This looks really cool, but not being able to use my own LLM endpoints for the Copilot is an instant turn-off.
You actually can. If you self-host, there are environment variables to control which models are available to the copilot, but it's tuned to Azure for the time being. We can work on generalizing it further and documenting it better.
Azure is just fine, as long as it's documented someplace. I'll take a look, although I also couldn't find prebuilt Docker images referenced in the compose.local file (I will look into what is being built into ghcr.io)
The prebuilt images are referenced only in the compose.prod file, not in compose.local.
Since the copilot is a managed service, you'd set those Azure credentials in your .env, and the copilot would call into your Azure OpenAI deployment.
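For what it's worth, a self-hosted setup along these lines would typically look something like the sketch below. The exact variable names here are assumptions based on common Azure OpenAI conventions, not the project's documented settings; check the project's own docs or source for the real ones.

```shell
# .env — hypothetical example; variable names are illustrative, not official
# Your Azure OpenAI resource endpoint and API key
AZURE_OPENAI_ENDPOINT=https://my-resource.openai.azure.com/
AZURE_OPENAI_API_KEY=<your-key>
# The deployment name you created in Azure OpenAI Studio
AZURE_OPENAI_DEPLOYMENT=gpt-4o
```

With something like this in place, the copilot container would read the credentials at startup and route its completions through your own Azure OpenAI deployment rather than a hosted one.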