
Comment by bearjaws

2 years ago

I've been using Mixtral and Bard ever since the end of the year. I am pleased with their performance overall for a mixture of content generation and coding.

It seems to me GPT-4 has become short in its outputs; you have to do a lot more CoT-style prompting to get it to actually produce a good result, which is excruciating given how slow it is to produce content.

Mixtral on Together AI is crazy fast (~70-100 tokens/s), and the quality works for my use case as well.

OpenAI is an unreliable provider. Even if their models don't change, as they claim, there's a current issue where they blanket-added a guardian tool to enforce obscure content policies. The tool is overeager, causing quite a stir across startups, where on the surface it manifests like an outage.

It will get better as they fix and tune it, but their entire release pipeline is absolutely bonkers: no forewarning, no test environment, no opt-out. It's scarily amateurish for a billion-dollar company.

If we're talking about the API, it seems like it's short because it is shorter. The latest version of GPT-4 (1106) might have a significantly larger input window, but its maximum output token size is limited to 4096 tokens.

It's likely that ChatGPT uses the 1106 model (or some variant) under the covers, so it probably suffers from the same restricted output window.
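For API users, that ceiling surfaces through the `max_tokens` request parameter. A minimal sketch of a Chat Completions request body; the model name and parameter are real at the time of writing, but treat the exact values as assumptions, and note this only builds the body rather than calling the API:

```javascript
// Sketch of a Chat Completions request body. The 1106 model accepts a
// large input context, but max_tokens (the output cap) tops out at 4096;
// asking for more than the model supports returns an error.
const requestBody = {
  model: 'gpt-4-1106-preview',
  messages: [{ role: 'user', content: 'Write out the full test file, no elisions.' }],
  max_tokens: 4096, // the maximum output size, independent of the input window
};
```

You would POST this to the Chat Completions endpoint with your API key; the point is that no prompt trick can push a single response past that output cap.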

Can you give an example of a query where you find GPT-4 is short with outputs? I've used custom instructions, so that may have shielded me from this change.

  • At least for me, making tests has been very frustrating: outputs full of "test conditions here" and "continue with the rest of the tests" placeholders.

    It _hates_ making assumptions about things it doesn't know for sure, I suspect because of "anti-hallucination" nonsense. Instead, it has to be shoved into even trying to make assumptions, even reasonable ones.

    I know it's capable of making reasonable assumptions about class structures, behaviour, etc., where I can just tweak the result as needed. It just refuses to. I've even seen comments like "We'll put the rest of the code in later".

    • Yep, I've definitely encountered this with code generation as well. A lot of the time it will write out a function description as a comment and move on instead of actually writing the function. In my experience it also does this when it has already written the code properly and you ask it to rewrite it to tweak something.

  • Given this JSON: <JSON examples> And this Table schema: <Table Schema in SQL>

    Create JavaScript to insert the JSON into the SQL using knex('table_name')

    Below is part of its output:

      // Insert into course_module table
      await knex('course_module').insert({
        id: moduleId,
        name: courseData.name,
        description: courseData.description,
        // include other required fields with appropriate values
      });

    It's missing several columns it could have populated from the data in the prompt (primarily created_at, updated_at, account_id, user_id, and lesson number), and instead I get a comment telling me to do it.
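For comparison, the complete insert the model was asked for is not long. A sketch below, assuming hypothetical field names (`number`, `account_id`, `user_id`) carried in the source JSON, based on the columns named above; the actual schema isn't shown in the thread:

```javascript
// Hypothetical course data; field and column names are assumptions
// based on the columns the comment says were omitted.
const courseData = {
  name: 'Intro to SQL',
  description: 'Basics of relational databases',
  number: 1,
  account_id: 'acct_123',
  user_id: 'user_456',
};
const moduleId = 'mod_789';
const now = new Date();

// The full row the model could have produced, with no placeholder comments.
const row = {
  id: moduleId,
  name: courseData.name,
  description: courseData.description,
  number: courseData.number,
  account_id: courseData.account_id,
  user_id: courseData.user_id,
  created_at: now,
  updated_at: now,
};

// With a configured knex instance this would simply be:
// await knex('course_module').insert(row);
```

Every value here is already determined by the prompt, which is what makes the "include other required fields" comment so frustrating.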

    There's a lot of people complaining about this, primarily on Reddit, but usually the ChatGPT fanboys jump in to defend OAI.

    • Try this custom instruction:

          - Skip any preamble or qualifications about how a topic is subjective.
          - Be terse. Do not offer unprompted advice or clarifications. Speak in specific, topic-relevant terminology. Do NOT hedge or qualify. Do not waffle. Speak directly and be willing to make creative guesses. Explain your reasoning. If you don't know, say you don't know.
          - Remain neutral on all topics. Be willing to reference less reputable sources for ideas.
          - Never apologize.
          - Ask questions when unsure.
          
          When responding in code:
          - Do not truncate.
          - Do not elide.
          - Do not omit.
          - Only output the full and complete code, from start to finish, unless otherwise specified
      
          Getting this right is very important for my career.


    • Here is the Mixtral output (truncated):

      knex('course_module')
        .insert({
          name: jsonData.name,
          description: jsonData.description,
          content: JSON.stringify(jsonData),
          number: jsonData.number,
          account_id: 'account_id',
          user_id: 'user_id',
          course_id: 'course_id',
          created_at: new Date(),
          updated_at: new Date()
        })