Comment by freetime2

3 hours ago

The scary thing is I have seen high level directors and executives say “I asked ChatGPT and it agreed with me” as a way to try to settle a debate. People seem all too willing to delegate even matters of judgement to AI.

On the other hand I have been in debates where someone asks ChatGPT to draft a list of possible approaches and pros and cons - and after reading through the list we were all in alignment on the best approach.

The latter I think is a constructive use of AI to elevate thinking, while the former has me thinking it may be time for a career change.

To make an exhaustive list of possible options, you need to find the key questions that partition the solution space. That requires logic, which LLMs lack.

  • > This requires logic, which LLMs lack.

    What? I've heard many takes on what AI lacks, but never this one. We had ChatGPT being able to solve an Erdős problem on its own yesterday [0]; how could you explain that if it cannot do logic?

    [0] https://news.ycombinator.com/item?id=47903126

• The LLM didn't solve an Erdős problem; it generated text that a human looked at, cleaned up, corrected, and used as the basis for a solution.

  WRT logic, there are multiple documented cases of LLMs answering trivial logic puzzles incorrectly. Of course, as each case becomes public it gets added to the training data and overfitted on, but if you embed the same puzzle in a more subtle way, LLMs will fail again.

      3 replies →