Comment by NSUserDefaults 1 day ago
Curious to see how quickly each LLM picks up the new codecs/options.

3 comments

stevejb 1 day ago
I use the Warp terminal and I can ask it to run --help and it figures it out.

baq 1 day ago
The canonical (if that's the right word for a 2-year-old technique) solution is to paste the whole manual into the context before asking questions.

xnx 1 day ago
Gemini can now load context from a URL in the API (https://ai.google.dev/gemini-api/docs/url-context), but I'm not sure if that has made it to the web interfaces yet.