Comment by croes

7 days ago

It's a rant against the wrong usage of a tool, not the tool as such.

It's a tool that promotes incorrect usage though, and that is an inherent problem. All of these companies are selling AI as a tool to do work for you, and the AI _sounds confident_ no matter what it spits out.

  • My personal pet peeve is how a great majority of people--and too many developers--are misled into believing that a fictional character coincidentally named "Assistant", inside a story-document half-created by an LLM, is the author-LLM.

    If a human generates a story containing Count Dracula, that doesn't mean vampires are real, or that capabilities like "turning into a cloud of bats" are real, or that the algorithm "thirsts for the blood of the innocent."

    The same holds when the story comes from an algorithm, and it continues to hold when the story is about a differently named character, "AI Assistant", who is "helpful".

    Getting people to fall for this illusion is great news for the companies though, because they can get investor-dollars and make sales with the promise of "our system is intelligent", which is true in the same sense as "our system converts blood into immortality."

  • That's the real danger of AI.

    The false promises of the AI companies and the false expectations of the management and users.

    I had this just recently with a data migration: the users asked whether they still needed to enter metadata for documents, since they could just use AI to query the data that was usually derived from that metadata.

    They trust AI before it's even there, and don't even consider a transition period in which they check whether the results are correct.

    Like with security, convenience prevails.
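
    And a transition check wouldn't even have to be elaborate. A hypothetical sketch in Python (the query_ai callable and the metadata field names are made up for illustration, not from any real system):

      # Hypothetical migration spot check: compare AI answers against
      # values derived from the hand-entered metadata (the ground truth).
      def spot_check(documents, query_ai):
          mismatches = []
          for doc in documents:
              expected = doc["metadata"]["author"]  # hypothetical metadata field
              answer = query_ai(doc, "Who is the author of this document?")
              if answer.strip().lower() != expected.strip().lower():
                  mismatches.append((doc["id"], expected, answer))
          rate = len(mismatches) / max(len(documents), 1)
          print(f"mismatch rate: {rate:.1%} ({len(mismatches)}/{len(documents)})")
          return mismatches

    Even a crude mismatch rate like that would tell you whether the AI is ready to replace the metadata, but nobody wants to spend the time.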

    • But isn’t this just par for the course with every new technological revolution?

      “It’ll change everything!” they said, as they continued to put money in their pockets while people were distracted by the shiny object.

  • > All of these companies are selling AI as a tool to do work for you, and the AI _sounds confident_ no matter what it spits out.

    If your LLM + pre-prompt setup sounds confident with every response, something is probably wrong; it doesn't have to be that way. It isn't for me. I haven't collected statistics, but I often get decent nuance back from Claude.

    Think more about what you're doing and experiment. Try different pre-prompts. Try different conversation styles.

    This is not dismissing the tendency for overconfidence, sycophancy, and more. I'm just sharing some mitigations.
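
    For concreteness, here's a minimal sketch of the pre-prompt idea using the Anthropic Python SDK. The model alias, the system-prompt wording, and the example question are just illustrative, not a recommendation:

      # pip install anthropic; expects ANTHROPIC_API_KEY in the environment
      import anthropic

      client = anthropic.Anthropic()

      # A pre-prompt that asks for calibrated uncertainty instead of confident prose.
      system = (
          "When you are unsure, say so explicitly. "
          "Rate your confidence (low/medium/high) for each claim, "
          "and note what you would need to verify it."
      )

      reply = client.messages.create(
          model="claude-3-5-sonnet-latest",  # illustrative model alias
          max_tokens=512,
          system=system,
          messages=[{"role": "user", "content": "Will this regex match all valid emails?"}],
      )
      print(reply.content[0].text)

    Run the same question with and without the system prompt and compare how much hedging comes back; that's the kind of experiment I mean.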

    • > Think more about what you're doing and experiment. Try different pre-prompts. Try different conversation styles.

      Ask on a Wednesday. During a full moon. While in a shipping container. Standing up. Keep a black box on your desk as the sacred GenAI avatar and pray to it. Ask while hopping on one leg.

    • Here's the root of the problem, though: how do you know that the AI is actually "thinking" more carefully, as opposed to just pretending to?

      The short answer is: you can know for a fact that it _isn't_ thinking more carefully, because LLMs don't actually think at all; they just parrot language. An LLM performs well when it puts out what you want to hear, which is not necessarily a well-thought-out answer but rather an answer that LOOKS well thought out.

Well, it's actually a rant about AI making what the author perceives as mistakes. Honestly, it reads like the author is attempting to show off or brag by listing imaginary mistakes an AI might have made, but they are all the sort of mistakes a human could make too. The fact that they are not real incidents significantly weakens his argument. He is a consultant who sells training services, so obviously if people come to rely on AI more for this kind of thing he will be out of work.

It does not help that his examples of things an imaginary LLM might miss are all very subjective and partisan too.