
Comment by garyfirestorm

6 hours ago

Everyone seems to be missing an important piece here. Ollama is/was a one-click solution for a non-technical person to launch a local model. It doesn't need a lot of configuration: it detects an Nvidia GPU and starts model inference with a single command. The core principle is that your grandmother should be able to launch a local AI model without needing to install 100 dependencies.

Exactly.

I can be on a non-technical team and put the LLM code inside Docker.

The local dev instructions are to install Ollama, use it to pull the models, and set some env vars.

The same code can point at Bedrock when deployed there.

At the time I wrote that, using straight llama.cpp wasn't as straightforward.
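
Roughly what that looks like, as a minimal sketch rather than my actual code: the env-var names and model IDs are placeholders, and it assumes Ollama is serving its OpenAI-compatible endpoint on the default port.

```python
import os

# Hypothetical env vars -- the real names would come from your own config.
BACKEND = os.environ.get("LLM_BACKEND", "ollama")
OLLAMA_URL = os.environ.get("OLLAMA_URL", "http://localhost:11434/v1")
MODEL_ID = os.environ.get("LLM_MODEL", "llama3")

def ask(prompt: str) -> str:
    if BACKEND == "ollama":
        # Ollama exposes an OpenAI-compatible endpoint, so the local path
        # can reuse the standard openai client with a dummy API key.
        from openai import OpenAI
        client = OpenAI(base_url=OLLAMA_URL, api_key="ollama")
        resp = client.chat.completions.create(
            model=MODEL_ID,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content
    else:
        # When deployed, the same call points at Bedrock via boto3's Converse API.
        import boto3
        client = boto3.client("bedrock-runtime")
        resp = client.converse(
            modelId=MODEL_ID,
            messages=[{"role": "user", "content": [{"text": prompt}]}],
        )
        return resp["output"]["message"]["content"][0]["text"]

if __name__ == "__main__":
    print(ask("Say hello in one sentence."))
```

Locally you'd `ollama pull llama3` once and leave `LLM_BACKEND=ollama`; in the deployed container you'd set `LLM_BACKEND=bedrock` and a Bedrock model ID, and let IAM handle the credentials.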

  • For fun, this is how an actual "non-technical" individual would hear/read your comment:

    > Exactly. I can be in a non-technical team, and put the blah inside blah. The blah is to install blah and use it to blah and blah. The same blah can point at blah when blah there. Using blah at the time I wrote that it wasn't as straightforward.

    I think when people say "non-technical", they're really talking about "people who work in tech startups but aren't developers", rather than people who aren't technical at all: the ones who don't know the difference between a "desktop" and a "browser", for example, or who, when you tell them to press any key, reply with "What key is that?".

> Ollama is/was a one-click solution for a non-technical person to launch a local model

Maybe it is today, but initially Ollama was CLI-only, so obviously not for "non-technical people" who would have no idea how to even use a terminal. If you hang out in the Ollama Discord (unlikely, as the mods are very ban-happy), you'd constantly see people asking for very trivial help, like how to enter commands in the terminal, and the community stringing them along instead of just directing them to LM Desktop or something else that would be much better for that type of user.