Comment by thewebguyd

2 days ago

> For me, LLMs are just a computer interface you can program using natural language.

Sort of. You still can't get reliable output for the same input. For example, I was toying with using ChatGPT from some Siri Shortcuts on my iPhone. I do photography on the side, and finding good lighting for photoshoots is a use case I hit a lot, so I made a shortcut that sends my location to the API along with a prompt asking for today's sunset time, the total amount of daylight, and the golden hour times.

Sometimes it works; sometimes it says "I don't have specific golden hour times, but you can find those on the web," or gives a useless generic "Golden hour is typically 1 hour before sunset but can vary with location and season."

Doesn't feel like programming to me, as I can't get reproducible output.

I could just have the LLM write an API-calling script against some service that has that data, but then why bother with the middleman step?
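
For reference, the kind of script I mean. This is only a rough sketch, assuming the free sunrise-sunset.org API (keyless) and approximating golden hour as the hour before sunset:

```python
# Rough sketch: fetch sun times from the free sunrise-sunset.org API
# (no key needed) and approximate golden hour as the hour before sunset.
from datetime import datetime, timedelta
import requests

def sun_times(lat: float, lng: float, day: str = "today") -> dict:
    r = requests.get(
        "https://api.sunrise-sunset.org/json",
        params={"lat": lat, "lng": lng, "date": day, "formatted": 0},
        timeout=10,
    )
    r.raise_for_status()
    return r.json()["results"]

res = sun_times(40.71, -74.01)                  # e.g. New York
sunset = datetime.fromisoformat(res["sunset"])  # ISO 8601, UTC
print("Sunset (UTC):     ", sunset.strftime("%H:%M"))
print("Daylight:         ", timedelta(seconds=res["day_length"]))
print("Golden hour (est):",
      (sunset - timedelta(hours=1)).strftime("%H:%M"),
      "to", sunset.strftime("%H:%M"))
```

Same inputs, same numbers every run, which is exactly what the prompt version doesn't give you.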

I like LLMs, I think they are useful, and I use them every day, but what I want is a way to get consistent, reproducible output for any given input/prompt.

For things where I don't want creativity, I tell it to write a script.

For example: "write a comprehensive spec for a script that takes in a date and a location and computes when golden hour is," followed by "implement this spec."
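
Done that way, the variability lives in the spec-writing step and the script itself is deterministic. A minimal sketch of what the implementation might look like, assuming the third-party `astral` library (pip install astral) and its golden_hour() helper:

```python
# Minimal sketch of the implemented spec, assuming the `astral` library:
# compute the evening golden hour locally, no API call needed.
from datetime import date
from astral import Observer, SunDirection
from astral.sun import sun, golden_hour

def golden_hour_report(day: date, lat: float, lon: float) -> str:
    obs = Observer(latitude=lat, longitude=lon)
    s = sun(obs, date=day)  # dawn/sunrise/sunset/dusk, UTC by default
    start, end = golden_hour(obs, day, direction=SunDirection.SETTING)
    return (f"Sunset:      {s['sunset']:%H:%M} UTC\n"
            f"Daylight:    {s['sunset'] - s['sunrise']}\n"
            f"Golden hour: {start:%H:%M}-{end:%H:%M} UTC")

print(golden_hour_report(date.today(), 40.71, -74.01))  # e.g. New York
```

Same date and location in, same times out; no roll of the dice.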

That variability is nice when you want some creativity, e.g. "write a beautiful, interactive boids simulation as a single file in HTML, CSS, and JavaScript."

Words like "beautiful" and "interactive" are open to interpretation, and I've been happy with the different ways they are interpreted.