Comment by theshackleford
3 days ago
> I get that. But then if that option doesn't help, what I've seen is that the next followup is inevitably "have you tried doing/prompting x instead of y"
Maybe I’m misunderstanding, but it sounds like you’re framing a completely normal process (try, fail, adjust) as if it’s unreasonable?
In reality, when something doesn’t work, the obvious next step is to adapt and try again. That’s not a radical approach; it’s largely just how problem solving works.
For example, when I was a kid trying to push start my motorcycle, it wouldn’t fire no matter what I did. Someone suggested a simple tweak, try a different gear. I did, and instantly the bike roared to life. What I was doing wasn’t wrong, it just needed a slight adjustment to get the result I was after.
I get trying and improving until you get it right. But I just can't bridge the gap in my head between:
1. this is magic and will one-shot your questions, and 2. if it goes wrong, keep trying until it works.
Plus, knowing it's all probabilistic, how do you know the result is correct without already knowing the answer ahead of time? Is that not the classic halting problem?
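To make the point concrete, here's a toy sketch (the `generate_answer` function is entirely hypothetical, a stand-in for a probabilistic model, not any real LLM API): the retry loop only terminates usefully because there's a checkable success condition known up front.

```python
import random

# Hypothetical stand-in for an LLM call: probabilistic, sometimes wrong.
def generate_answer(rng):
    # Returns a "square this number" function only some of the time,
    # mimicking a stochastic model that may or may not get it right.
    return (lambda x: x * x) if rng.random() < 0.3 else (lambda x: x + x)

def retry_until_correct(rng, max_tries=50):
    # The loop is only meaningful because we can verify candidates
    # against an answer we already know (7 squared is 49).
    for attempt in range(1, max_tries + 1):
        candidate = generate_answer(rng)
        if candidate(7) == 49:  # the verifier: requires knowing the answer
            return candidate, attempt
    return None, max_tries

rng = random.Random(0)  # fixed seed so the run is reproducible
fn, tries = retry_until_correct(rng)
print(fn(9), tries)
```

Without that verifier line, "keep trying until it works" has no stopping rule, which is exactly the objection above.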
> I get trying and improving until you get it right. But I just can't bridge the gap in my head between:
> 1. this is magic and will one-shot your questions, and 2. if it goes wrong, keep trying until it works.
Ah that makes sense. I forgot the "magic" part, and was looking at it more practically.
To clarify the “learn and improve” part: I get it in the context of a human doing it. When a person learns, the lesson sticks, so errors and retries are valuable.
For LLMs, none of it sticks. You keep “teaching” it, and the next time it forgets everything.
So again, you keep trying until you get the result you want, which means you need to know that result ahead of time.