
Comment by jeroenhd

5 days ago

The AI moderation tools are trained on the Reddit data that is actively being sabotaged by a competitor. If an AI were to take up moderation now, mentioning this specific bootcamp would probably get you warned or banned because of how bad it is according to the training data.
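A toy sketch of that mechanism, with entirely made-up data ("CodeCamp" is a hypothetical name, not a real bootcamp): even a trivial word-count moderator picks up the competitor's poisoning and flags a neutral mention.

```python
# Toy illustration of the mechanism above: a trivial word-count moderator
# "trained" on posts where a competitor flooded one bootcamp's name into
# negative content. All data is made up; "CodeCamp" is a hypothetical name.

from collections import Counter

posts = [
    ("CodeCamp is a total scam avoid it", "bad"),   # sabotage
    ("CodeCamp ruined my career",         "bad"),   # sabotage
    ("CodeCamp instructors are frauds",   "bad"),   # sabotage
    ("great mentors learned a lot",       "good"),
    ("solid curriculum fair price",       "good"),
]

bad_counts, good_counts = Counter(), Counter()
for text, label in posts:
    (bad_counts if label == "bad" else good_counts).update(text.lower().split())

def flag(text: str) -> bool:
    """Flag a post whose words lean toward the 'bad' class."""
    words = text.lower().split()
    return sum(bad_counts[w] for w in words) > sum(good_counts[w] for w in words)

# A neutral mention gets flagged purely because of the poisoned data:
print(flag("thinking about joining CodeCamp"))  # True
```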

AI is as biased as humans are, perhaps even more so because it lacks actual reasoning capabilities.

> [AI] lacks actual reasoning capabilities.

Evals are showing that reasoning (by which I mean multi-step problem solving, planning, etc.) is improving over time in LLMs. We don't have to agree on metaphysics to see this; I'm referring to the measurable end result.
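To make "measurable end result" concrete, here is a minimal sketch of the kind of eval I mean: multi-step tasks scored by exact match on the final answer, so progress across model versions is just a number going up. `query_model` and the tasks are hypothetical placeholders, not any real benchmark's harness.

```python
# Minimal eval sketch: multi-step tasks, scored by exact match on the final
# answer. `query_model` is a hypothetical stand-in for a real model client.

def query_model(prompt: str) -> str:
    """Hypothetical model call; replace with a real client."""
    raise NotImplementedError

# Each task needs several dependent steps to reach its answer.
TASKS = [
    ("Start with 7, double it, subtract 5, then square the result. "
     "What number do you get?", "81"),
    ("A train leaves at 9:40 and the trip takes 2h 35m. It then waits "
     "20 minutes before a 1h return leg. When does the return leg end? "
     "Answer as HH:MM.", "13:35"),
]

def run_eval() -> float:
    correct = 0
    for prompt, expected in TASKS:
        answer = query_model(prompt + "\nGive only the final answer.")
        correct += answer.strip() == expected
    # The fraction solved is the "measurable end result" to track over time.
    return correct / len(TASKS)
```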

Why? Some combination of longer context windows, better architectures, hybrid systems, and so on. There is ongoing research into how and where reasoning happens (inside the transformer, during the chain of thought, perhaps during a tool call).
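To illustrate that last case, here's a toy agent loop (the `query_model` and `run_tool` names are hypothetical stand-ins, not any real library's API). Part of the multi-step behavior lives in the loop itself, which is one reason "where reasoning happens" is a live question.

```python
# Toy agent loop: the model either answers or requests a tool, and each tool
# result is fed back as context for the next step. `query_model` and
# `run_tool` are hypothetical stand-ins, not a real client library.

def query_model(messages: list[dict]) -> dict:
    """Hypothetical model call; returns {'content': ...} for a direct answer
    or {'tool': name, 'args': {...}} when the model wants a tool."""
    raise NotImplementedError

def run_tool(name: str, args: dict) -> str:
    """Hypothetical tool dispatcher (calculator, search, ...)."""
    raise NotImplementedError

def solve(question: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        reply = query_model(messages)
        if "tool" not in reply:          # model answered directly
            return reply["content"]
        result = run_tool(reply["tool"], reply["args"])
        # The plan emerges across iterations of this loop, not in a single
        # forward pass; this is reasoning happening "during a tool call".
        messages.append({"role": "assistant", "content": str(reply)})
        messages.append({"role": "tool", "content": result})
    return "no answer within step budget"
```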

  • [flagged]

    • > Please stick to facts here, not hype-filled wishful thinking. You are actively pushing misinformation that makes situations like the OP’s worse.

      I don't understand what you are talking about. I have to wonder if you posted in the wrong place. Care to explain:

      - What specifically did I write that was misinformation?

      - How do you justify saying it "makes situations like the OP’s worse"? Connect the dots for me.

      Please remember to be charitable here.
