
Comment by cyberrock

9 days ago

Does this mean we're back in favor of using weird riddles to judge programming skill now? Do we owe Google an apology for the "invert a binary tree" incident?

I don't know what binary tree incident you're referring to, but the fundamental premise here is that LLMs can't really think logically the way humans do, and they are a long way from replacing humans in software, let alone senior software engineers.

  • Because humans always say 'bread' if you ask them what you put in a toaster.

    And humans will always deduce that you should switch doors when you're on a hypothetical game show and the host shows you a horse behind one of the doors. (There's a quick simulation of the standard puzzle at the end of this comment.)

    (All I mean is: an example of an LLM answering illogically is not proof that LLMs can't think logically, since you can just as easily find examples of humans answering illogically, and examples of novel questions that LLMs get right.)
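
    For what it's worth, in the standard version of the puzzle (goat, not horse), switching really does win about two thirds of the time. A minimal Python simulation of the classic setup (my own sketch, not anything from the thread) bears this out:

        import random

        def monty_hall(switch, trials=100_000):
            """Simulate the classic Monty Hall game; return the win rate."""
            wins = 0
            for _ in range(trials):
                car = random.randrange(3)   # door hiding the prize
                pick = random.randrange(3)  # contestant's first choice
                # Host opens a door that is neither the pick nor the prize
                opened = next(d for d in range(3) if d != pick and d != car)
                if switch:
                    # Move to the one remaining unopened door
                    pick = next(d for d in range(3) if d != pick and d != opened)
                wins += (pick == car)
            return wins / trials

        print(f"stay:   {monty_hall(switch=False):.3f}")  # ~0.333
        print(f"switch: {monty_hall(switch=True):.3f}")   # ~0.667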