Show HN: I taught GPT-OSS-120B to see using Google Lens and OpenCV

I built an MCP server that gives any local LLM real Google search and now vision capabilities - no API keys needed.

  The latest feature: google_lens_detect uses OpenCV to find objects in an image, crops each one, and sends the crops to Google Lens for identification. GPT-OSS-120B, a text-only model with zero vision support, correctly identified an NVIDIA DGX Spark and a SanDisk USB drive from a desk photo.
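
  To give a rough idea of the detect-and-crop step, here is a simplified OpenCV sketch of the general approach, not the actual implementation: Canny edges plus contour bounding boxes, with arbitrary thresholds.

      # Simplified sketch of a detect-and-crop step (illustrative only).
      # Find candidate objects via edges/contours, then crop each bounding box.
      import cv2

      def crop_objects(image_path, min_area=5000):
          img = cv2.imread(image_path)
          gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
          edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
          contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
          crops = []
          for c in contours:
              x, y, w, h = cv2.boundingRect(c)
              if w * h >= min_area:  # skip tiny detections
                  crops.append(img[y:y+h, x:x+w])
          return crops  # each crop is then sent to Google Lens for identification

  Each crop then goes to Google Lens through the Playwright-driven Chromium browser, which is how it works without API keys.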

  Also includes Google Search, News, Shopping, Scholar, Maps, Finance, Weather, Flights, Hotels, Translate, Images, Trends, and more. 17 tools total.

  Two commands: pip install noapi-google-search-mcp && playwright install chromium

  GitHub: https://github.com/VincentKaufmann/noapi-google-search-mcp
  PyPI: https://pypi.org/project/noapi-google-search-mcp/

Booyah!

I don't get this. Isn't this the same as saying "I taught my 5-year-old to calculate integrals" by typing them into Wolfram Alpha? The actually relevant cognitive task (integrals in my example, "seeing" in yours) is outsourced to an external API.

Why do I need gpt-oss-120B at all in this scenario? Couldn't I just call, e.g., the gemini-3-pro API directly from the Python script?

  • 'Calculating' an integral is usually done by applying a series of somewhat abstract mathematical tricks. There is usually no deeper meaning attached to the solving. If you have profound intuition you can guess the solution to an integral 'by inspection'.

    What part here is the knowing or understanding? Does solving an integral symbolically provide more knowledge than solving it numerically or otherwise?

    Understanding the underlying functions themselves and the areas they sweep: has substitution or integration by parts actually provided you with that?

    • [I teach first-year math at a university in Argentina. We have a few calculus courses, at different levels according to the degree.]

      In 1D, substitution by linear functions like "t=3x+1" is very insightful. It's a pity that sometimes we don't have time to analyze it more deeply. Other substitutions may be insightful or not. A trick like "t=sin(x)" has a nice geometrical interpretation, but it's never explained, and we don't teach it anymore anyway.

      Integration by parts is not very insightful until you get to the 3rd or 4th year and learn Sobolev spaces or advanced electrodynamics. I'd like to drop it, but other courses require it and I'd be fired.

      In some cases, parity and other symmetries are interesting, but those tricks are mostly taught in physics rather than in math.

      Also, in the second year we get 2D and 3D integrals, which allow a lot of interesting changes of variables, and things like Gauss's theorem and its relation to conservation laws.

    • Parent says “I taught my 5yo how to” — this means their 5yo learned a process.

      OP says “I taught the LLM how to see”, and that should mean the LLM (which is capable of being taught/learning) internalized how to see. It did not; it was given a tool that does the seeing and tells it what things are.

      People are very interested in getting good local LLMs with vision integrated, and so they want to read about it. Next to nobody would click on the honest “I enabled an LLM to use a Google service to identify objects in images”, which is what OP actually did.

Confused as to why you wouldn’t integrate a local VLM if you want a local LLM as the backbone. Plenty of visually competent 8B-30B VLMs out there.

  • It's meant to be super lightweight for people who run 1B, 3B, 8B, or 20B models on skinny devices: one pip install with high impact :D

> GPT-OSS-120B, a text-only model with zero vision support, correctly identified an NVIDIA DGX Spark and a SanDisk USB drive from a desk photo.

But wasn't it Google Lens that actually identified them?

Booyah yourself. This is like calling two APIs and calling it learning? I thought you did some VLM stuff with a projection.

Looks like a TOS violation to me to scrape Google directly like that. While the concept of giving a text-only model 'pseudo-vision' is clever, I think the solution in its current form is a bit fragile. SerpAPI, the Google Custom Search API, etc. exist for a reason; for anything beyond personal tinkering, this is a liability.

Have you tried Llama? In my experience it has been strictly better than GPT-OSS, but it might depend on exactly how it is used.

  • Have you tried GPT-OSS-120B MXFP4 with reasoning effort set to high? Out of all the models I can run within 96 GB, it seems to consistently give the best results. What exact Llama model (+ quant, I suppose) have you had better results with, and which did you compare it against, the 120B or the 20B variant?

    • How are you running this? I've had issues with Opencode formulating bad messages when the model runs on llama.cpp: Jinja threw a bunch of errors and GPT-OSS couldn't make tool calls. There's an issue for this on Opencode's repo, but it seems like it's been sitting for a couple of weeks.

      > What exact llama model (+ quant I suppose) is it that you've had better results against

      Not Llama, but Qwen3-coder-next is at the top of my list right now. Q8_K_XL. It's incredible (not just for coding).
