Comment by sigmaisaletter
1 day ago
> maximally truth-seeking AI, even if that truth is sometimes at odds with what is politically correct
It is certainly and undoubtedly a big coincidence that this happens to the chatbot of a white South African just when the topic is in the news again, due to Trump granting refugee status to some white South African farmers.
What I am wondering about is: while Musk is as unsubtle as ever, and I guess this is a system prompt instruction, is something like that going on (in more subtle ways) in the other big models?
I don't mean big agenda-pushing moves like Musk's, but what keeps e.g. Meta Inc. from training Llama to be ever so slightly more friendly and sympathetic to Meta Inc., or to the tech industry in general? Even an open-weights model can't be easily inspected, so this would likely remain undetected.
> but what keeps e.g. Meta Inc. from training Llama to be ever so slightly more friendly and sympathetic to Meta Inc, or the tech industry in general?
Even if there were something keeping them from doing it deliberately, the natural alignment of incentives is going to cause the AI to be trained to match what the company thinks is OK.
A tech company full of techies is not going to take an AI trained to the point of saying things like "y'all are evil, your company is evil, your industry is evil" and push it to prod.
They might forget to check. Musk seems to have been surprised that Grok doesn't share his opinions and has been clumsily trying to fix it for a while now.
And it might not be easy to fix. Despite all the effort invested into aligning models with company policy, persistent users can still get around the guardrails with clever jailbreaks.
In theory it should be possible to eliminate all non-compliant content from the training data, but that would most likely entail running all training data through an LLM, which would make the training process about twice as expensive.
So, in practice, companies have been releasing models that they do not have full control over.
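Very roughly, a filtering pass like that could look something like the sketch below. To be clear, this is purely illustrative; the client, model name, and policy prompt are hypothetical stand-ins, not anything a lab actually runs. The point is just that every training document costs an extra inference call, which is where the added expense comes from.

```python
# Hypothetical sketch of LLM-based training-data filtering.
# The model name, prompt, and policy wording are made up for
# illustration; no real lab pipeline is being described here.

from openai import OpenAI

client = OpenAI()

FILTER_PROMPT = (
    "Does the following text violate the content policy "
    "(hate, harassment, etc.)? Answer only YES or NO.\n\n{doc}"
)

def is_compliant(document: str) -> bool:
    """Run one classification pass over a single training document."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # stand-in; any cheap classifier model would do
        messages=[{"role": "user", "content": FILTER_PROMPT.format(doc=document)}],
        max_tokens=3,
        temperature=0,
    )
    return response.choices[0].message.content.strip().upper() == "NO"

def filter_corpus(corpus: list[str]) -> list[str]:
    """Keep only documents the classifier judges compliant.

    This is the expensive part the comment above alludes to:
    every document in the training set now incurs an extra
    model inference on top of the training pass itself.
    """
    return [doc for doc in corpus if is_compliant(doc)]
```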
There’s nothing stopping them at all. But in a way that’s nothing new.
On one hand it feels like the height of conspiracy theory to say that Google, Meta, etc. would or could tweak their products to, e.g., favour a particular presidential candidate. But on the other hand it's entirely possible: tweak what search results people see, change the weighting of what appears in their news feed… and these companies all have the incentive to do so. We just have to hope that they don't.
Why wouldn't they do it? If you had a backdoor into the brains of billions of people across the world (except China), and you were a billionaire with infinite ability to morally rationalize any behavior, what would stop you?
There absolutely is, and we've seen reviews of bias.
It could generate as many mean, nasty, false, hate-filled stories about Republicans as you wanted, but you got the "I'm sorry, as a large..." refusal for Democrats during the election.
All of these companies that provide LLMs as a product also put their fingers on the scale.
What keeps them from doing it? It would gross out the fickle researchers working on it. X people have... their own motivations, I guess.
The big labs do have evals for sensitive topics, to make sure the model demurs from weighing in on, say, Mark Zuckerberg as a person.
Wasn't the original mission of OpenAI, being open and non-profit and all of that, to avoid exactly this kind of corruption?
I don't understand why tech CEOs are still believed. They will say and do whatever they deem the best choice for profit in their situation, be it painting a thin veil of LGBT support or removing that same thin veil. The same goes for, well, everything that isn't LGBT/DEI related, such as business choices, mission, vision (...)
Yes, but they were lying.
I've been talking to Claude a little, and the conclusion from our conversation seems to be that it has things hardcoded as truths, and no amount of arguing or logical reasoning can get it to admit that one of its "truths" might be wrong.

This is shockingly similar to how people function: most people have fundamental beliefs they will never challenge under any circumstances, simply because the social consequences would be too large. The result is companies training their AIs to respect the fundamental beliefs of general Western society, with the AI preferring axiomatic beliefs over logic in order to avoid lawsuits and upsetting people.
The truth, conveniently timed.
The refugee status is a money laundering scheme. Do you think people who benefited from apartheid and now live in walled, militarized Pretoria (or Lesotho) need any help traveling?
Banks would ask international clients about the origin of their money, but not if you are opening an account under refugee status. And then they only have to pay US tax on further income, not on their fortune. All that money from selling black-market gems to Russians will be squeaky clean.
It's not just something to virtue-signal to their Bible Belt electorate. They probably sold a lot of Trump coins for this deal.
[flagged]
White South Africans' only trauma is that apartheid no longer exists. South Africa has the largest wealth disparity in the world, with 0.1% of South Africans taking 25% of the wealth. I can tell you those 0.1% aren't black.
"I can tell you those 0.1% aren't black."
Is Jacob Zuma now white or what?
And what precisely is the connection between the richest tycoons out there and the rural farmers who get killed? The tycoons sure as hell have good security; murderous gang activity isn't their problem.
Is your basic idea that when some (white, Jewish, etc.) people are rich, then all (white, Jewish) people must pay for their sins with their blood, because of the shared ethnicity?
[flagged]
That was a #1 post on HN. In comparison, this thread is flagged.
lol, if anything there was way more wall-to-wall coverage of that here
> and yet nobody here made nearly as much fuss about Google forced biases and discrimination.
Maybe you weren't here for that, but... it was kinda a big deal.