Comment by teekert

12 hours ago

Idk, I try talk like cavemen to claude. Claude seems answer less good. We have more misunderstandings. Feel like sometimes need more words in total to explain previous instructions. Also less context means more damage if typo. Who agrees? Could be just feeling I have. I often add fluff. Feels like better result from LLM. Me think LLM also get less thinking and less info from own previous replies if talk like caveman.

In the regular-people forums (Twitter, Reddit), you see endless complaints about LLMs being stupid and useless.

But you also catch a glimpse of how the author of the complaint communicates in general...

"im trying to get the ai to help with the work i am doing to give me good advice for a nice path to heloing out and anytim i askin it for help with doing this it's total trash i dunt kno what to do anymore with this dum ai is so stupid"

  • The realization is that LLMs are computer programs. You orchestrate them like any other program and you get results (see the sketch after this list).

    Everyone's interfaces, concepts, and desires are different, so the performance varies wildly.

    This is similar to frameworks: they were godsends or curses depending on how you thought and what you were doing.
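
A minimal sketch of that orchestration idea in Python. This is an assumption about what "orchestrating like a program" means in practice, and `call_llm` is a hypothetical stand-in for whatever client you actually use:

    import json

    def call_llm(prompt: str) -> str:
        # Hypothetical stub: wire up your actual provider's client here.
        raise NotImplementedError

    def ask_for_json(prompt: str, retries: int = 3) -> dict:
        # Treat the model as an unreliable subroutine:
        # validate its output and retry with feedback on failure.
        for _ in range(retries):
            reply = call_llm(prompt + "\nRespond with valid JSON only.")
            try:
                return json.loads(reply)
            except json.JSONDecodeError:
                prompt += f"\nYour previous reply was not valid JSON: {reply!r}"
        raise RuntimeError("model never produced valid JSON")

The point is the loop around the call, not the call itself: the model is just another fallible component you check and retry.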

I once (when ChatGPT first came out) launched into a conversation with ChatGPT using nothing but s-expressions. Didn't bother with a preamble or an explanation; I just structured my prompt into a tree, forced said tree into an s-expression, and hit enter.

I was very surprised to see that the response was in s-expressions too. It was incoherent, but the parens balanced at least.

Just tried it now and it doesn't seem to do that anymore.
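
For the curious, a small Python sketch of what "forcing a prompt tree into an s-expression" could look like; the tree contents here are made up for illustration:

    def to_sexpr(node):
        # Serialize a nested (head, *children) tuple into an s-expression string.
        if isinstance(node, tuple):
            return "(" + " ".join(to_sexpr(child) for child in node) + ")"
        return str(node)

    prompt_tree = (
        "request",
        ("task", "summarize"),
        ("subject", ("article", "linked-above")),
        ("constraints", ("length", "three-sentences"), ("tone", "neutral")),
    )

    print(to_sexpr(prompt_tree))
    # (request (task summarize) (subject (article linked-above))
    #  (constraints (length three-sentences) (tone neutral)))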

Yes, because in most contexts where it has seen "caveman" talk, the conversations haven't been about rigorously explained maths/science/computing/etc., so it is less likely to predict that kind of output.

Fluff adds probable likeness. Probable likeness brings in more stuff. More stuff can be good. More stuff can poison.