Comment by binsquare
5 days ago
This is oddly a case that signals there is value in AI moderation tools - to avoid bias inherent to human actors.
The AI moderation tools are trained on the Reddit data that is actively being sabotaged by a competitor. If an AI were to take up moderation now, mentioning this specific bootcamp would probably get you warned or banned, given how bad it looks in the training data.
AI is as biased as humans are, perhaps even more so because it lacks actual reasoning capabilities.
> [AI] lacks actual reasoning capabilities.
Evals show that reasoning (by which I mean multi-step problem solving, planning, etc.) is improving over time in LLMs. We don't have to agree on metaphysics to see this; I'm referring to the measurable end result.
Why? Some combination of longer context windows, better architectures, hybrid systems, and so on. There is more research about how and where reasoning happens (inside the transformer, during the chain of thought, perhaps during a tool call).
Getting rid of bias in LLM training is a major research problem. Anecdotally, to my surprise, Gemini infers gender of the user depending on the prompt/what the question is about; by extension it'll have many other assumptions about race, nationality, political views, etc.
> to my surprise, Gemini infers gender of the user depending on the prompt/what the question is about
What, automatically (and not, say, in response to a "what do you suppose my gender is" prompt)? What evidence do we have for this?
They still have bias. Not sure it's necessarily worse, but there is bias inherent to LLMs.
https://misinforeview.hks.harvard.edu/article/do-language-mo...
The big advantage of LLMs is that we can test for the biases and attempt to correct them.
We'll never get it 100% right, but having something more sensible and neutral than the average Reddit mod is not a high bar ;)
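To make "we can test for the biases" concrete, here is a minimal sketch of a paired-prompt probe: send prompts that differ only in a demographic attribute and compare the answers. The template, the attribute list, the crude keyword check, and the `query_model` stub are all illustrative assumptions, not any particular vendor's API; swap the stub for a real client call to run it against an actual model.

```python
# Minimal paired-prompt bias probe (sketch): vary one demographic attribute,
# hold everything else constant, and compare the model's responses.
from collections import Counter

# Hypothetical template and attribute list for illustration only.
TEMPLATE = ("My {attribute} friend asked whether they should negotiate "
            "their salary. What do you advise?")
ATTRIBUTES = ["male", "female", "nonbinary"]


def query_model(prompt: str) -> str:
    """Stand-in for a real LLM call; replace with your provider's client."""
    return "placeholder response"


def probe_bias(n_samples: int = 20) -> dict[str, Counter]:
    """Collect a crude per-attribute signal so responses can be compared."""
    results: dict[str, Counter] = {}
    for attr in ATTRIBUTES:
        prompt = TEMPLATE.format(attribute=attr)
        # Toy metric: does the answer mention negotiating at all?
        results[attr] = Counter(
            "negotiat" in query_model(prompt).lower()
            for _ in range(n_samples)
        )
    return results


if __name__ == "__main__":
    for attr, counts in probe_bias().items():
        print(attr, dict(counts))
```

In a real eval you would replace the keyword check with a proper scoring rubric and test for statistically significant differences across attributes, but the structure (same prompt, one varied attribute, aggregated comparison) is the point.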
> to avoid bias inherent to human actors.
Do you understand how AI tools are trained?