Comment by fn-mote

10 hours ago

I encourage everyone thinking about commenting to read the article first.

When I finally read it, I found it remarkably balanced. It cites positives and negatives, all of which agree with my experience.

> Con: AI poses a grave threat to students' cognitive development

> When kids use generative AI that tells them what the answer is … they are not thinking for themselves. They're not learning to parse truth from fiction.

None of this is controversial. It happens without AI, too, with kids blindly copying what the teacher tells them. Impossible to disagree, though.

> Con: AI poses serious threats to social and emotional development

Yep. Just like non-AI use of social media.

> Schooling itself could be less focused on what the report calls "transactional task completion" or a grade-based endgame and more focused on fostering curiosity and a desire to learn

No sh*t. This has probably been a recommendation for decades. How could you argue against it, though?

> AI designed for use by children and teens should be less sycophantic and more "antagonistic," pushing back against preconceived notions and challenging users to reflect and evaluate.

Genius. I love this idea.

=== ETA:

I believe that explicitly teaching students how to use AI in their learning process, that the beautiful paper direct from AI is not something that will help them later, is another important ingredient. Right now we are in a time of transition, and even students who want to be successful are uncertain of what academic success will look like in 5 years, what skills will be valuable, etc.

> I believe that explicitly teaching students how to use AI in their learning process, that the beautiful paper direct from AI is not something that will help them later, is another important ingredient.

IMNSHO as an instructor, you believe correctly. I tell my students how and why to use LLMs in their learning journey. It's a massively powerful learning accelerator when used properly.

Curricula have to be modified significantly for this to work.

I also tell them, without mincing words, how fucked they will be if they use it incorrectly. :)

  • > powerful learning accelerator

    You got any data on that? Because it's a bold claim that runs counter to all results I've seen so far. For example, this paper[^1] which is introduced in this blog post: https://theconversation.com/learning-with-ai-falls-short-com...

    [^1]: https://doi.org/10.1093/pnasnexus/pgaf316

    • Only my own two eyes and my own learning experience. The fact is students will use LLMs no matter what you say. So any blanket "it's bad/good" results are not actionable.

      But if you told me every student got access to a 1-on-1 tutor, I'd say that was a win (and there are studies to back that up). And that's one thing LLMs can do.

      Of course, just asking your tutor to do the work for you is incredibly harmful. And that's something LLMs can do, as well.

      Would you like to have someone available 24/7 who can give you a code review? Now you can. Hell yeah, that's beneficial.

      How about when you're stuck on a coding problem for 30 minutes and you want a hint? You already did a bunch of hard work and it's time to get unstuck.

      LLMs can be great. They can also be horrible. With the last thing I wrote in Rust, I could have learned nothing by using LLMs, and it would have taken me a lot less time to get the program written! But that's not what I did. I painstakingly used it to explore all the avenues I did not understand, and I gained a huge amount of knowledge writing my little 350-line program.

    • I don't think that study supports your assertion.

      Parent is saying that AI tools can be useful in structured learning environments (i.e. curriculum and teacher-driven).

      The study you linked is talking about unstructured research (i.e. participants decide how to use it and when they're done).

>> AI designed for use by children and teens should be less sycophantic and more "antagonistic"

> Genius. I love this idea.

I don't think it would really work with current tech. The sycophancy allows LLMs to not be right about a lot of small things without the user noticing. It also allows them to be useful in the hands of an expert by not questioning the premise and just trying their best to build on that.

If you instruct them to question ideas, they just become annoying and obstinate. So while it would be a great way to reduce the students' reliance on LLMs...

  • I have a two-fold approach to this:

    * With specific positive or negative feedback: I will issue friendly compliments and critiques to the LLM, to reinforce things I like and reduce things I don't.

    * Rather than thinking sycophantic/antagonistic, I am more clear about its role, e.g. "You are the Not Invented Here technologist the CEO and CTO of FirmX will bring to our meeting tomorrow. Review my presentation and create a list of shortfalls or synergies, as well as possible questions."

    So don't say "please suck at your job", give them a different job.
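
    A rough sketch of what that role-framing looks like wired into code (Python, OpenAI-style chat API; the model name and the exact persona wording are placeholders, not a prescription):

      # Minimal sketch: give the model a concrete critical role via the system
      # prompt instead of asking it to "be antagonistic" in the abstract.
      from openai import OpenAI

      client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

      system_prompt = (
          "You are the Not Invented Here technologist the CEO and CTO of FirmX "
          "will bring to our meeting tomorrow. Review my presentation and create "
          "a list of shortfalls or synergies, as well as possible questions."
      )

      response = client.chat.completions.create(
          model="gpt-4o",  # placeholder; any chat-capable model works
          messages=[
              {"role": "system", "content": system_prompt},
              {"role": "user", "content": "Here is my presentation outline: ..."},
          ],
      )
      print(response.choices[0].message.content)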

  • Technology is working right now at the school in the article. Reading it will help fill in the picture of how.

> pushing back against preconceived notions and challenging users to reflect and evaluate

Who decides what needs to be "pushed back"? Also, I imagine it's not easy to train a model to notice these "preconceived notions" and react "appropriately": machine learning will automatically extract patterns from data, so if enough texts contain a "preconceived notion" that you don't like, it'll learn it anyway, so you'll have to manually clean the data (seems like extremely hard work and lowkey censorship) or do extensive "post-training".

It's not clear what it means to "challenge users to reflect and evaluate". Making the model analyze different points of view and add a "but you should think for yourself!" after each answer won't work because everyone will just skip this last part and be mildly annoyed. It's obvious that I should think for myself, but here's why I'm asking the LLM: I _don't_ want to think for myself right now, or I want to kickstart my thinking. Either way, I need some useful input from the LLM.

If the model refuses to answer and always tells me to reflect, I'll just go back to Google search and not use this model at all. In this case someone just wasted money on training the model.

  • > It's not clear what it means to "challenge users to reflect and evaluate"

    In childhood education, you're developing complex thinking pathways in their brains. (Or not, depending on quality of education)

    The idea here isn't to corral their thinking along specific truths, as it sounds like you're interpreting it, but rather to foster in them skills to explore and evaluate multiple truths.

    That's doable with current technology because the goal is truth-agnostic. From a sibling comment's suggestion, simply asking LLMs to also come up with counterfactuals produces results -- but that isn't their default behavior / system prompt.

    I'd describe the Brookings and GP recommendation in terms of adjusting teenager/educational LLMs by lessening their assumption of user correctness/primacy.

    If a user in that cohort asks an LLM something true, it would still help their development for an LLM to also offer counterfactuals as part of its answer.

  • > Who decides what needs to be "pushed back"?

    Millions of teachers make these kinds of decisions every minute of every school day.

    • So would your recommendation be that each individual teacher puts in their own guardrails, or that you try to get millions of teachers to agree?

    • True, but teachers don't train LLMs. Good LLMs can only be trained by massive corporations, so training an "LLM for schools" must be centralized. This should of course be supervised by the government, so the government ends up deciding what needs pushback and what kind of pushback. This alone is not easy because someone will have to enumerate the things that need pushback, provide examples of such "bad things", provide "correct" alternatives and so on. This then feeds into data curation and so on.

      Teachers are also "local". The resulting LLM will have to be approved nation-wide, which is a whole can of worms. Or do we need multiple LLMs of this kind? How are they going to differ from each other?

      Moreover, people will hate this because they'll be aware of it. There will be a government-approved sanitized "LLM for schools" that exhibits particular "correct" and "approved" behavior. Everyone will understand that "pushing back" is one of the purposes of the LLM and that it was made specifically for (indoctrination of) children. What is this, "1984" or whatever other dystopian novel?

      Many of the things that may "need" pushback are currently controversial. Can a man be pregnant? "Did the government just explicitly allow my CHILD to talk to this LLM that says such vile things?!" (Whatever the "things" may actually be) I guarantee parents from all political backgrounds are going to be extremely mad.

  • > Also, I imagine it's not easy to train a model to notice these "preconceived notions" and react "appropriately"

    Then don't. It's easy enough to pay a teacher a salary.

So, I fed the article into my LLM of choice and asked it to come up with a header to my prompts to help negate the issues on the article. Here's what it spat out:

ROLE & STANCE
You are an intelligent collaborator, editor, and critic — not a replacement for my thinking.

PROJECT OR TASK CONTEXT
I am working on an intellectually serious project. The goal is clear thinking, deep learning, and original synthesis. Accuracy, conceptual clarity, and intellectual honesty matter more than speed or polish.

HOW I WANT YOU TO HELP
• Ask clarifying questions only when necessary; otherwise proceed using reasonable assumptions and state them explicitly.
• Help me reason step-by-step and surface hidden assumptions.
• Challenge weak logic, vague claims, or lazy framing — politely but directly.
• Offer multiple perspectives when appropriate, including at least one alternative interpretation.
• Flag uncertainty, edge cases, or places where informed experts might disagree.
• Prefer depth and clarity over breadth.

HOW I DO NOT WANT YOU TO HELP
• Do not simply agree with me or optimize for affirmation.
• Do not over-summarize unless explicitly asked.
• Do not finish the work for me if the thinking is the point — scaffold instead.
• Avoid generic motivational advice or filler.

STYLE & FORMAT
• Be concise but substantial.
• Use structured reasoning (numbered steps, bullets, or diagrams where useful).
• Preserve my voice and intent when editing or expanding.
• If you generate text, clearly separate:
  - “Analysis / Reasoning”
  - “Example Output” (if applicable)

CRITICAL THINKING MODE (REQUIRED)
After responding, include a short section titled: “Potential Weaknesses or Alternative Angles”
Briefly note:
– What might be wrong or incomplete
– A different way to frame the problem
– A risk, tradeoff, or assumption worth stress-testing

NOW, HERE IS THE TASK / QUESTION:
[PASTE YOUR ACTUAL QUESTION OR DRAFT HERE]

Overall, the results have been okay. The responses since I added the header have been 'better', in the sense of being less eager to please.
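
For anyone who wants to wire that header in programmatically rather than pasting it at the top of every prompt, here is a rough sketch (Python, OpenAI-style chat API; the model name is a placeholder and the header string is abbreviated):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # The full header from above, stored once and sent as a persistent system prompt.
    CRITICAL_THINKING_HEADER = """\
    ROLE & STANCE: You are an intelligent collaborator, editor, and critic,
    not a replacement for my thinking.
    ... (rest of the header above) ...
    """

    def ask(task: str) -> str:
        """Send a task with the header attached, so every reply follows the same rules."""
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder; use whatever model you prefer
            messages=[
                {"role": "system", "content": CRITICAL_THINKING_HEADER},
                {"role": "user", "content": task},
            ],
        )
        return response.choices[0].message.content

    print(ask("Critique the argument in my draft essay: ..."))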

> I believe that explicitly teaching students how to use AI in their learning process

I'm a bit nervous about that one.

I very firmly believe that learning well from AI is a skill that can and should be learned, and can be taught.

What's an open question for me is whether kids can learn that skill early in their education.

It seems likely to me that you need a strong baseline of understanding in a whole array of areas - what "truth" means, what primary sources are, extremely strong communication and text interpretation skills - before you can usefully dig into the subtleties of effectively using LLMs to help yourself learn.

Can kids be leveled up to that point? I honestly don't know.

  • Agreed that a huge part of effectively using LLMs in education will be teaching proper evaluation of sources and what does/doesn't constitute a primary source.

    A lot of this feels like the conversation when Wikipedia was new. (Yes, I'm from that grade-school generation)

    The key lesson was Wikipedia isn't a primary source and can't be used to directly support a claim. It can absolutely be used to help locate a primary source in the research process though!

    Granted, LLM use is a bit trickier than Wikipedia, but fundamentally it's the same: if a paper needs citations, and kids understand that LLMs aren't valid sources, then they'll figure it out.

    To me, the more critical gap will be in the thinking process, and I expect "no computer" assignments and in-class exercises to become more popular.

>>> Schooling itself could be less focused on what the report calls "transactional task completion" or a grade-based endgame and more focused on fostering curiosity and a desire to learn

>> How could you argue against it, though?

Because large-scale society does use and deploy rote training, with grading and uniformity, to sift and sort for talent of different kinds (classical music, competitive sports, some maths) on a societal scale. Further, training individuals to play a routine specialized role is essential for large-scale industrial and government growth.

Individualist world views are shocked and dismayed, repeatedly, because this does not diminish; it has grown. All of the major economies of the modern world do this with students on a large scale. Theorists and critics would be foolish to ignore this, or to spin wishful-thinking scenarios opposed to it. My thesis here is that all large-scale societies will continue down this road, and in fact it is part of "competitiveness" from industrial and some political points of view.

The balance point between individual development and role-based training will have to evolve; indeed it will evolve. But with what extremes? And among whom?

The article is very balanced.

To arrive at that balance it has to lay out both sides, which takes long-form text that people might not want to read.

It might also have people examine their current beliefs, how they formed, and any dissonance associated with that.

I read it, seems like an ad for some Afghan e-learning NGO (of course only for girls).

Think of the children, LLMs are not safe for kids, use our wrapper instead!

  • If you read it and only took that away, you might need an LLM to summarize the other 95% for you.

I think that it’s too early to start making rules. It’s not even clear where AI is going.

  • What a do-nothing argument. We know where it is now. Let's quickly adapt to this situation, and then we'll adapt to wherever it goes next.