Comment by VincentEvans

1 year ago

Indeed, I appreciate the explanation; it is certainly both interesting and informative to me. But to somewhat echo the person you are replying to: if I want a boat, and you offer me a boat, and it doesn't float, then the reasons for the failure may be full of interesting details, but the most important thing to focus on first is to make the boat float, or to stop offering it to people who need a boat.

To paraphrase how this thread started: someone tested different boats to see whether they could simply float, and they couldn't. The reply questioned the validity of testing whether boats can simply float.

At least this is how it sounds to me when I am told that our AI overlords can’t figure out how many Rs are in the word “strawberry”.

At some point you need to just accept the details and limitations of things. We do this all the time. Why is your calculator giving only an approximate result? Why can't your car go backwards as fast as forwards? Etc. It sucks that everyone gets exposed to the relatively low-level implementation of LLMs (almost the raw model), but that's the reality today.

  • People do get similarly hung up on surprising floating point results: why can't you just make it work properly? And a full answer is a whole book on how floating point math works.
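
    A classic illustration, runnable in any JavaScript console (not from the original thread, just the standard example):

    ```javascript
    // Binary floating point cannot represent 0.1 or 0.2 exactly,
    // so the sum is very slightly off from 0.3:
    console.log(0.1 + 0.2);          // 0.30000000000000004
    console.log(0.1 + 0.2 === 0.3);  // false
    ```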

The test problem is emblematic of a type of synthetic query that can fail but is of limited import in actual usage.

For instance, you could ask it for a JavaScript function that counts any letter in any word, pass it "r" and "strawberry", and it would be far more useful.
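
A minimal sketch of that kind of function (hypothetical; one obvious way to write it, not the output of any particular model):

```javascript
// Count case-insensitive occurrences of a single letter in a word.
function countLetter(letter, word) {
  let count = 0;
  for (const ch of word.toLowerCase()) {
    if (ch === letter.toLowerCase()) count++;
  }
  return count;
}

console.log(countLetter("r", "strawberry")); // 3
```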

Having edge cases doesn't mean it's not useful; it is neither a free assistant nor a coder who doesn't expect a paycheck. At this stage it's a tool that you can build on.

To engage with the analogy: a propeller is very useful, but it doesn't replace the boat or the captain.

  • Does not seem to work universally. I just tested a few with this prompt:

    "create a javascript function to count any letter in any word. Run this function for the letter "r" and the word "strawberry" and print the count"

    ChatGPT-4o => Output is 3. Passed

    Claude3.5 => Output is 2. Failed. I told it the count was wrong; it apologised and then fixed the issue in the code, and the output is now 3. Useless if the human does not spot the error.

    llama3.1-70b(local) => Output is 2. Failed.

    llama3.1-70b(Groq) => Output is 2. Failed.

    Gemma2-9b-lt(local) => Output is 2. Failed.

    Curiously, all the ones that failed produced this code (or some near-identical version of it):

    ```javascript
    function countLetter(letter, word) {
      // Convert both letter and word to lowercase to make the search case-insensitive
      const lowerCaseWord = word.toLowerCase();
      const lowerCaseLetter = letter.toLowerCase();

      // Use the split() method with the letter as the separator to get an array of substrings separated by the letter
      const substrings = lowerCaseWord.split(lowerCaseLetter);

      // The count of the letter is the number of splits minus one (because there are n-1 spaces between n items)
      return substrings.length - 1;
    }

    // Test the function with "r" and "strawberry"
    console.log(countLetter("r", "strawberry")); // Output: 2
    ```

    • It's not the job of the LLM to run the code... if you ask it to run the code, it will just produce its best approximation of a result similar to what the code seems to be doing. It's not actually running it.

      Just like Dall-E is not layering coats of paint to make a watercolor... it just makes something that looks like one.

      Your LLM (or you) should run the code in a code interpreter, which ChatGPT did because it has access to tools. Your local ones don't.
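
      For what it's worth (my own quick check, not part of the original comment): actually executing that split-based snippet in a real interpreter such as Node.js gives 3, so the code the failing models wrote was fine; only the output they "predicted" was wrong.

      ```javascript
      // The split-based counting from the snippet above, executed for real:
      "strawberry".toLowerCase().split("r");            // [ 'st', 'awbe', '', 'y' ] -> 4 pieces
      "strawberry".toLowerCase().split("r").length - 1; // 3, not the 2 the models printed
      ```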