Comment by probably_wrong

6 hours ago

I'm not a fan of "TL;DR" but I think 52 minutes would qualify. I jumped to a random point of the transcript and found just platitudes, which didn't quite hook me into listening to all of it.

How about some more info on what their main conclusions are?

They view the framing of the MIT paper not just as bad science, but as a dangerous social tool that uses brain data to "consign people" to being less worthy or "stupid" for using cognitive aids. They flag the paper's alarmist findings as "pseudoscience" designed to provoke fear rather than provide rigorous insight. They highlight several "red flags" in the study's design: lack of a coherent scientific framework, methodological errors like typos, and reliance on invented, undefined terms such as "cognitive debt". They challenge the interpretation of EEG results, explaining that while the paper frames a 55% reduction in connectivity as evidence that a user's "brain sucks," such data could instead indicate increased neural efficiency, an alternative explanation the authors ignore. (EEG measures broad, noisy signals from outside the skull and is better understood as a rough index of brain state than as a precise window into specific thoughts or “intelligence.”)
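For readers who want to see what an EEG "connectivity" number actually is in practice, here is a minimal, hypothetical sketch (assuming NumPy/SciPy and two synthetic channels; it is not the paper's actual pipeline). Metrics of this kind quantify statistical coupling between scalp signals at a given frequency, so a change in the number says nothing by itself about whether a brain got "better" or "worse":

```python
# Minimal sketch (not the paper's pipeline): EEG "connectivity" metrics such as
# spectral coherence quantify statistical coupling between scalp signals.
import numpy as np
from scipy.signal import coherence

fs = 256                      # assumed sampling rate in Hz
t = np.arange(0, 30, 1 / fs)  # 30 s of synthetic data
rng = np.random.default_rng(0)

# Two fake "channels" sharing a common 10 Hz (alpha-range) component plus noise.
shared = np.sin(2 * np.pi * 10 * t)
ch1 = shared + 0.8 * rng.standard_normal(t.size)
ch2 = 0.7 * shared + 0.8 * rng.standard_normal(t.size)

f, Cxy = coherence(ch1, ch2, fs=fs, nperseg=fs * 2)

# Average coherence in a band of interest, e.g. alpha (8-12 Hz).
band = (f >= 8) & (f <= 12)
print(f"mean alpha-band coherence: {Cxy[band].mean():.2f}")
```

On synthetic data like this, the coherence value is driven entirely by how much signal the channels share versus independent noise, which is why interpretation, not just the number, carries the argument.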

The hosts condemn the study’s "bafflingly weak" logic and ableist rhetoric, and advise skepticism toward "science communicators" who might profit from selling hardware or supplements related to their findings: one of the paper's lead authors, Nataliya Kosmyna, is associated with the MIT Media Lab and the development of AttentivU, a pair of glasses designed to monitor brain activity and engagement. By framing LLM use as creating a "cognitive debt," the researchers create a market for their own solution: hardware that monitors and alerts the user when they are "under-engaged". The AttentivU system can provide haptic or audio feedback when attention drops, essentially acting as the "scaffold" for the very cognitive deficits the paper warns against. The research is part of the "Fluid Interfaces" group at MIT, which frequently develops Brain-Computer Interface (BCI) systems like "Brain Switch" and "AVP-EEG". This context supports the hosts' suspicion that the paper’s "cognitive debt" theory may be designed to justify a need for these monitoring tools.

  • Similar to the media, I've picked up on vibes from academia that carry a baseline anti-AI tilt.

    In my own (classic) engineering work, AI has become so phenomenally powerful that I can only imagine that if I were still in college, I'd be mostly checked out during those boring lectures/bad-teacher classes, and then learning on my own with the textbook and LLMs by night. Which raises the question: what do we need the professor for?

    I'd be interested to see stats on "office hours" visitation time over the last 4 years (although admittedly office hours are the best tool for gaining a professor's favor, which AI doesn't grant).

  • The pod has this line: "I do want to know if the offloading of cognitive tasks changes my own brain and my own cognition", which is what the paper attempts to address. The authors conclude:

    > To summarize, the delta-band differences suggest that unassisted writing engages more widespread, slow integrative brain processes, whereas assisted writing involves a more narrow or externally anchored engagement, requiring less delta-mediated integration.

    There is no intellectual judgement regarding this difference, though the authors do supply citations from related work that they claim may be of interest to those wanting "to know if the offloading of cognitive tasks changes my own brain and my own cognition". If your brain changes, it might change for the worse, at least as far as you experience it. Is this ableism, to examine your own cognitive well-being and make your own assessment? If you don't like how you're thinking about something, are you casting aspersions on yourself and shaming your own judgement? Ableist discourse is, unsurprisingly, a stupid language game for cognitively impaired dummies. It's a pathetic attempt to redefine basic notions of capability and impairment, of functioning and dysfunction, as inherently evil concepts, and then to work backward from that premise to find fault with the research results. Every single person experiences moments or lifetimes of psychological and mental difficulty. Admitting this and adapting to it or remediating harmful effects has nothing to do with calling stupid people stupid, or with ableism. It's just a means of providing tools and frameworks for "cognitive wellness", but even just the implication of "wellness" being distinct from "illness" makes the disturbed and confused unwell.
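    To make the quoted "delta-band" language concrete: delta conventionally refers to roughly the 0.5–4 Hz range, and band-specific activity is usually summarized as signal power in that range. Below is a minimal sketch on synthetic data, assuming NumPy/SciPy; it is illustrative only, not the paper's analysis.

    ```python
    # Illustrative only: what "delta-band (~0.5-4 Hz) activity" refers to,
    # computed on a synthetic single-channel signal, not real EEG data.
    import numpy as np
    from scipy.signal import welch

    fs = 256                          # assumed sampling rate in Hz
    t = np.arange(0, 60, 1 / fs)      # 60 s of synthetic signal
    rng = np.random.default_rng(1)

    # Slow 2 Hz (delta-range) oscillation plus broadband noise.
    sig = 1.5 * np.sin(2 * np.pi * 2 * t) + rng.standard_normal(t.size)

    f, psd = welch(sig, fs=fs, nperseg=fs * 4)   # power spectral density
    df = f[1] - f[0]
    delta = (f >= 0.5) & (f <= 4.0)
    delta_power = psd[delta].sum() * df          # approximate band power
    total_power = psd.sum() * df
    print(f"relative delta power: {delta_power / total_power:.2f}")
    ```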

It's a podcast, it goes back and forth between high and low density content. I tried listening to it while working and sometimes had to pause it because it got deep into e.g. explaining EEG, and then it's back to laughing at random stuff.

Summary using Claude 3.7 Sonnet:

"Your Brain On Chat GPT" Paper Analysis

In this transcript, neuroscientist Ashley and psychologist Cat critically analyze a controversial paper titled "Your Brain On Chat GPT" that claims to show negative brain effects from using large language models (LLMs).

Key Issues With the Paper:

Misleading EEG Analysis:

  • The paper uses EEG (electroencephalography) to claim it measures "brain connectivity" but misuses technical methods
  • EEG is a blunt instrument that measures thousands of neurons simultaneously, not direct neural connections
  • The paper confuses correlation of brain activity with actual physical connectivity

Poor Research Design:

  • Small sample size (54 participants, with many dropouts)
  • Unclear time intervals between sessions
  • Vague instructions to participants
  • Controlled conditions don't represent real-world LLM use

Overstated Claims:

  • Invented terms like "cognitive debt" without defining them
  • Makes alarmist conclusions not supported by the data
  • Jumps from limited lab findings to broad claims about learning and cognition

Methodological Problems:

  • Methods section includes unnecessary equations but lacks crucial details
  • Contains basic errors like incorrect filter settings
  • Fails to cite relevant established research on memory and learning
  • No clear research questions or framework

The Experts' Conclusion:

"These are questions worth asking... I do really want to know whether LLMs change the way my students think about problems. I do want to know if the offloading of cognitive tasks changes my own brain and my own cognition... We need to know these things as a society, but to pretend like this paper answers those questions is just completely wrong."

The experts emphasize that the paper appears designed to generate headlines rather than provide sound scientific insights, with potential conflicts of interest among authors who are associated with related commercial products.