Comment by ForceBru

11 hours ago

> pushing back against preconceived notions and challenging users to reflect and evaluate

Who decides what needs to be "pushed back"? Also, I imagine it's not easy to train a model to notice these "preconceived notions" and react "appropriately": machine learning automatically extracts patterns from data, so if enough texts contain a "preconceived notion" you don't like, the model will learn it anyway. You'd then have to manually clean the data (which seems like extremely hard work and lowkey censorship) or do extensive "post-training".

It's not clear what it means to "challenge users to reflect and evaluate". Making the model analyze different points of view and add a "but you should think for yourself!" after each answer won't work because everyone will just skip this last part and be mildly annoyed. It's obvious that I should think for myself, but here's why I'm asking the LLM: I _don't_ want to think for myself right now, or I want to kickstart my thinking. Either way, I need some useful input from the LLM.

If the model refuses to answer and always tells me to reflect, I'll just go back to Google search and not use this model at all. In that case, someone just wasted money training the model.

> It's not clear what it means to "challenge users to reflect and evaluate"

In childhood education, you're developing complex thinking pathways in children's brains. (Or not, depending on the quality of the education.)

The idea here isn't to corral their thinking along specific truths, as it sounds like you're interpreting it, but rather to foster in them skills to explore and evaluate multiple truths.

That's doable with current technology because the goal is truth-agnostic. As a sibling comment suggests, simply asking LLMs to also come up with counterfactuals produces results -- but that isn't their default behavior / system prompt.
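
As a rough illustration (my own sketch, not something from the article), that kind of nudge is mostly a prompt-level change. The snippet below assumes the OpenAI Python SDK; the model name and prompt wording are placeholders:

```python
# Sketch: nudge an off-the-shelf LLM to surface counterfactuals alongside its answer.
# Assumes the OpenAI Python SDK; any chat-style API would work the same way.
from openai import OpenAI

client = OpenAI()

COUNTERFACTUAL_SYSTEM_PROMPT = (
    "Answer the user's question, then list 2-3 counterfactuals or opposing "
    "considerations: plausible reasons the answer could be wrong, incomplete, "
    "or context-dependent. Label that section 'Counterpoints'."
)

def ask_with_counterfactuals(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": COUNTERFACTUAL_SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_with_counterfactuals("Why did the Roman Empire fall?"))
```

The point isn't this particular wording; it's that "also offer counterpoints" is a prompt-level adjustment, not a retraining project.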

I'd describe the Brookings and GP recommendations as adjusting teenage/educational LLMs to lessen their assumption of user correctness/primacy.

Even when a user in that cohort asks an LLM about something true, it would still help their development for the LLM to offer counterfactuals as part of its answer.

> Who decides what needs to be "pushed back"?

Millions of teachers make these kinds of decisions every minute of every school day.

  • So would your recommendation be that each individual teacher puts in their own guardrails, or that you try to get millions of teachers to agree?

  • True, but teachers don't train LLMs. Good LLMs can only be trained by massive corporations, so training an "LLM for schools" must be centralized. That would of course have to be supervised by the government, so the government ends up deciding what needs pushback and what kind of pushback. This alone is not easy: someone has to enumerate the things that need pushback, provide examples of such "bad things", provide "correct" alternatives, and so on. All of that then feeds into data curation.

    Teachers are also "local". The resulting LLM would have to be approved nationwide, which is a whole can of worms. Or do we need multiple LLMs of this kind? How would they differ from each other?

    Moreover, people will hate this because they'll be aware of it. There will be a government-approved sanitized "LLM for schools" that exhibits particular "correct" and "approved" behavior. Everyone will understand that "pushing back" is one of the purposes of the LLM and that it was made specifically for (indoctrination of) children. What is this, "1984" or whatever other dystopian novel?

    Many of the things that may "need" pushback are currently controversial. Can a man be pregnant? "Did the government just explicitly allow my CHILD to talk to this LLM that says such vile things?!" (whatever the "things" actually are). I guarantee parents from all political backgrounds are going to be extremely mad.

    • I think you're interpreting the commenter's/article's point in a way that they didn't intend. At all.

      Assume the LLM has the answer a student wants. Instead of just blurting it out to the student, the LLM can:

      * Ask the student questions that encourage them to think about the overall topic.

      * Ask the student what they think the right answer is, and then drill down on the student's incorrect assumptions so that they arrive at the right answer.

      * Ask the student to come up with two opposing positions and explain why each would _and_ wouldn't work.

      Etc.

      None of this has to get anywhere near politics or whatever else conjured your dystopia. If the student asked about politics in the first place, this type of pushback doesn't have to be any different than current LLM behavior.

      In fact, I'd love this type of LLM -- I want to actually learn. Maybe I can instruct one to behave this way and actually try it...
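
      Concretely (purely my own sketch, with hypothetical wording), the three behaviors above could be packed into a system prompt along these lines:

      ```python
      # Hypothetical Socratic-tutor system prompt encoding the three behaviors above.
      SOCRATIC_TUTOR_PROMPT = """
      You are a tutor. When a student asks a question, do not state the answer outright.
      Instead:
      1. Ask one or two questions that encourage the student to think about the overall topic.
      2. Ask what they think the right answer is, then probe the incorrect assumptions
         in their reasoning until they can correct it themselves.
      3. Where it fits, ask them to state two opposing positions and explain why each
         would and wouldn't work.
      Only confirm the final answer once the student has reasoned their way to it.
      """
      ```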

> Also, I imagine it's not easy to train a model to notice these "preconceived notions" and react "appropriately"

Then don't. It's easy enough to pay a teacher a salary.