Comment by skybrian
6 months ago
Yes, arguing with an LLM in this way is a waste of time. It’s not a person. If it does anything weird, start a new conversation.
> arguing with an LLM in this way is a waste of time
I wasn't arguing. I was asking it what it thought it was doing because I was amused. The waste of time was everything leading up to this point. I could have given up at 30 minutes, or an hour, but these darn LLMs are always so close, and maybe just one more prompt...
LLMs can't describe what they are doing; the tech doesn't allow it. An LLM can generate text that resembles what it would say if that were hypothetically possible. Anthropic published a good example recently: they prompted an LLM to add two integers, and it output the correct answer. Then they asked it to write out the steps it took to do that addition. The LLM of course starts generating the primary-school algorithm: add one pair of digits, carry the 1 if needed, add the next pair of digits, combine the result, then the next digits, etc. But in reality it calculates the addition using probabilities, like any other generated token. Anthropic even admitted in that same article that the LLM was bullshitting them.
Same with your query: it just generated the most likely text based on its training data. It is unable to output what it actually did.
Then look up how an LLM generates its answers :)
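(For reference, the grade-school procedure the model narrates is easy to write down. Here is a minimal Python sketch of the described steps, not a claim about anything the model actually computes internally.)

    # Grade-school addition as the model *describes* it: add digit pairs
    # right to left, carrying the 1 when needed. This is the narrated
    # procedure, not the model's internal computation.
    def grade_school_add(a: int, b: int) -> int:
        xs, ys = str(a)[::-1], str(b)[::-1]    # least-significant digit first
        carry, digits = 0, []
        for i in range(max(len(xs), len(ys))):
            dx = int(xs[i]) if i < len(xs) else 0
            dy = int(ys[i]) if i < len(ys) else 0
            total = dx + dy + carry
            digits.append(str(total % 10))     # digit for this column
            carry = total // 10                # carry over to the next column
        if carry:
            digits.append(str(carry))
        return int("".join(reversed(digits)))

    assert grade_school_add(1234, 876) == 2110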
Next time just rephrase your problem.
> Next time just rephrase your problem.
Don't you need to know whether the LLM is wrong in order to rephrase your problem? How are people asking the LLM to do something they don't know how to do themselves, and then knowing the answer is incorrect?
I usually just modify the message before it goes off the rails, taking into consideration how it failed.
> It's not a person
and yet we as a species are spending trillions of dollars to trick people into thinking it is very, very close to a person. What do you think they're going to do?
No. It can emulate a person to an extent because it was trained on people.
Trillions of dollars are not spent on convincing humanity LLMs are humans.
I'd argue that zero dollars are spent convincing anyone that LLMs are people since:
A. I've seen no evidence of it, and I say that as not exactly a fan of techbros
B. People tend to anthropomorphize everything, which is why we have constellations in the night sky or pets that supposedly experience emotion the way we do.
Collectively, we're pretty awful at understanding different intelligences and avoiding the trap of seeing the world through our own experience of it. That is part of being human, which makes us easy to manipulate, sure, but the major devs in Gen AI are not really doing that. You might get the odd girlfriend app marketed to incels or whatever, but those are small potatoes comparatively.
The problem I see when people try to point out how LLMs get this or that wrong is that the user, the human, is bad at asking the question... which comes as no surprise, since we can barely communicate properly with each other across barriers such as culture, reasoning informed by different experiences, etc.
We're just bad at prompt engineering and need to get better in order to make full use of this tool that is Gen AI. The genie is out of the bottle. Time to adapt.
and it was trained on people because...
because it was meant to statistically resemble...
You're so close!