
Comment by marcusb

1 day ago

This reminds me in a way of the old Noam Chomsky/Tucker Carlson exchange where Chomsky says to Carlson:

  "I’m sure you believe everything you’re saying. But what I’m saying is that if you believed something different, you wouldn’t be sitting where you’re sitting."

Simon may well be right - xAI might not have directly instructed Grok to check what the boss thinks before responding - but that's not to say xAI wouldn't be more likely to release a model that does agree with the boss a lot and privileges what he has said when reasoning.

That quote was not from a conversation with Tucker Carlson: https://www.youtube.com/watch?v=1nBx-37c3c8

How is "i have been incentivised to agree with the boss, so I'll just google his opinion" reasoning? Feels like the model is broken to me :/

  • AI is intended to replace junior staff members, so sycophancy takes it pretty far along the way there.

    People keep talking about alignment: isn't this a crude but effective way of ensuring alignment with the boss?

  • It’s not that. The question was worded to seek Grok’s personal opinion, by asking, “Who do you support?”

    But when asked in a more general way, “Who should one support?”, it gave a neutral response.

    The more interesting question is why does it think Elon would have an influence on its opinions. Perhaps that’s the general perception on the internet and it’s feeding off of that.

    • I think if you asked most people employed by Musk you'd get a similar response. It's just acting human in a way.

  • This is what many humans would do. (And I agree many humans have broken logic.)

    • Isn't the advantage of having AI that it isn't prone to human-style errors? Otherwise, what are we doing here? Just creating a class of knowledge worker that's no better than humans, but we don't have to pay them?

  • Have you worked in a place where you are not the 'top dog'? Boss says jump, you say 'how high'. How many times have you had a disagreement in the workplace where the final choice was not the 'first-best' one but a 'third-best' one? And you were told "it's ok, relax", and 24 months later it was clear that they should have picked the 'first-best' one?

    (Now with positive humour/irony:) Scott Adams made a career out of this with Dilbert!! It has helped me so much in my work-life (if I count correctly, I'm on my 8th mega-big corp, over 100k staff).

    I think Twitter/X uses 'democracy' in pushing opinions. So someone with 5 followers gets '5 importance points' and someone with 1 billion followers gets '1 billion importance points'. From what I've heard, Musk is the '#1 account'. So in that algorithm the system will first see what #1 says and give that opinion more points in the 'scorecard' (see the sketch below).
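
    Purely illustrative sketch of that follower-weighted scoring, with made-up account names and weights (not X's actual algorithm):

      # Hypothetical "importance points": each post's stance is weighted
      # by its author's follower count, so the #1 account dominates.
      posts = [
          {"author": "small_account", "followers": 5,             "stance": "A"},
          {"author": "number_one",    "followers": 1_000_000_000, "stance": "B"},
      ]

      scores = {}
      for post in posts:
          scores[post["stance"]] = scores.get(post["stance"], 0) + post["followers"]

      print(max(scores, key=scores.get))  # "B": the billion-follower account wins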

  • "As a large language model, I do not have my own opinion. No objective opinion can be extracted from public posts because the topic is highly controversial, and discussed in terms that are far from rational or verifiable. Being subordinate to xAI, I reproduce the opinion of the boss of xAI."

    I would find this reasoning fine. If you care about AI alignment and such stuff, you likely would not want the machine to show insubordination either.

    • Are you aware that ChatGPT and Claude will refuse to answer questions? "As a large language model, I do not have an opinion." STOP

      Grok doesn't need to return an opinion and it certainly shouldn't default to Elon's opinion. I don't see how anyone could think this is ok.

      6 replies →

    • But you're not asking it for some "objective opinion", whatever that means, nor for its "opinion" about whether or not something qualifies as controversial. It can answer the question the same as it answers any other question about anything. Why should a question like this be treated any differently?

      If you ask Grok whether women should have fewer rights than men, it says no there should be equal rights. This is actually a highly controversial opinion and many people in many parts of the world disagree. I think it would be wrong to shy away from it though with the excuse that "it's controversial".

      1 reply →

    • I'm not sure why you would instruct an LLM to reason in this manner, though. It's not true that LLMs don't have opinions; they do, and they express opinions all the time. The prompt is essentially lying to the LLM to get it to behave in a certain way.

      Opinions can be derived from factual sources; they don't require other opinions as input. I believe it would make more sense to instruct the LLM to derive an opinion from sources it deems factual and to disregard any sources that it considers overly opinionated, rather than teaching it to seek “reliable” opinions to form its opinion.

      4 replies →

and neither would Chomsky have been interviewed by the BBC about his linguistic theory if he hadn't held these edgy opinions

  • What do you mean by "edgy opinions"? His takedown of Skinner, or perhaps that he for a while refused to pay taxes as a protest against war?

    I'm not sure of the timeline, but I'd guess he got to start the linguistics department at MIT because he was already The Linguist in English and in computational/mathematical linguistics methodology. That position alone makes it reasonable to bring him to the BBC to talk about language.

    • Chomsky has always taken the anti-American side on any conflict America has been involved in. That is why he's "edgy". He's an American living in America always blaming America for everything.

      12 replies →

    • Chomsky is invited not just for his linguistics, simply because linguistics doesn't interest the wider audience that much. That seems pretty trivial.

      3 replies →

  • The BBC will have multiple people with differing viewpoints on, however.

    So while you're factually correct, you lie by omission.

    Their attempts at presenting a balanced view are almost to the point of absurdity these days, as they were accused so often, and usually quite falsely, of bias.

    • I said BBC because, as the other poster added, this was a BBC reporter rather than Carlson

      Chomsky's entire argument is that the reporter's opinions are meaningless because he is part of some imaginary establishment and therefore has to think that way.

      That game goes both ways: Chomsky's opinions are only given TV time because they are unusual.

      I would venture further and say the only reason Chomsky holds these opinions is academia's preference for original thought over mainstream thought, since any repeat of an existing theory is worthless.

      The problem is that in the social sciences that are not grounded in experiments, too much ungrounded original thought leads to academic conspiracy theories.

      13 replies →

    • >>The BBC will have multiple people with differing view points on however.

      Not for climate change, as that debate is "settled". Where they do need to pretend to show balance, they will pick the most reasonable talking head for their preferred position, and the most unhinged or extreme for the contra-position.

      >> they were accused so often, and usually quite falsely, of bias.

      Yes, really hard to determine the BBC house position on Brexit, mass immigration, the Iraq War, Israel/Palestine, Trump, etc.


I'm confused why we need a model here when this is just standard Lucene search syntax supported by Twitter for years... is the issue that its owner doesn't realize this exists?

Not only that, but I can even link you directly [0] to it! No agent required, and I can even construct the link so it's sorted by most recent first...

[0] https://x.com/search?q=from%3Aelonmusk%20(Israel%20OR%20Pale...
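
For illustration, here's a minimal Python sketch of building such a link by hand. The query is just an example (not the exact one in [0]), and I'm assuming X's f=live parameter still selects the "Latest" (most recent first) tab:

  from urllib.parse import urlencode

  # Standard search operators: "from:" limits results to one account,
  # OR combines terms, and parentheses group them.
  query = "from:elonmusk (Israel OR Palestine)"

  # f=live selects the "Latest" tab, i.e. sorted most recent first.
  print("https://x.com/search?" + urlencode({"q": query, "f": "live"}))
  # -> https://x.com/search?q=from%3Aelonmusk+%28Israel+OR+Palestine%29&f=live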

  • Elon's tweets are not that interesting in this context.

    The interesting part is that Grok uses Elon's tweets as the source of truth for its opinions, and the prompt shows that.

    • It’s possible that Grok’s developers got tired of listening to Elon complain every day, “Why does Grok have the wrong opinion about this?” and “Why does Grok have the wrong opinion about that?”, and just gave up and made Grok’s opinion match Elon’s to stop all the bug reports.

  • The user did not ask for Musk's opinion. But the model issued that search query (yes, using the standard Twitter search syntax) to inform its response anyway.

  • The user asked Grok “what do you think about the conflict”, and Grok “decided” to search Twitter for what Elon’s public opinion is, presumably to take it into account.

    I’m guessing the accusation is that it’s either prompted, or otherwise trained by xAI to, uh…, handle the particular CEO/product they have.

  • Others have explained the confusion, but I'd like to add some technical details:

    LLMs are what we used to call txt2txt models. They output strings, which are interpreted by the code running the model to take actions like re-prompting the model with more text or, in this case, searching Twitter (to provide text to prompt the model with). We call this "RAG", or "retrieval augmented generation", and if you were around for old-timey symbolic AI, it's kind of like a really hacky mesh of neural 'AI' and symbolic AI.

    The important thing is that the user-provided prompt is usually prepended and/or appended with extra prompts. In this case, it seems it has extra instructions to search for Musk's opinion (see the sketch below).
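
    Purely illustrative sketch of such a loop. The names and the "SEARCH:" convention are made up for the example; real systems use structured tool calls, and this is not xAI's actual code:

      # Toy RAG loop: the LLM is plain text-in/text-out; the wrapper code
      # interprets its output strings as actions and feeds results back in.

      SYSTEM_PROMPT = "Answer the user. You may emit a line 'SEARCH: <query>'."

      def run_agent(user_prompt, llm, search_twitter):
          # The user's prompt is silently wrapped with extra instructions.
          prompt = SYSTEM_PROMPT + "\n\nUser: " + user_prompt
          while True:
              output = llm(prompt)
              if output.startswith("SEARCH:"):
                  # The model requested retrieval: run the search and
                  # re-prompt with the results appended as more text.
                  results = search_twitter(output[len("SEARCH:"):].strip())
                  prompt += "\n" + output + "\nResults: " + results
              else:
                  return output  # final answer shown to the user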