I believe that training a system to understand the electrical signals that define a movement is significantly different from a system that understands thought.
I work in neurotech, and I don't believe that the electrical signals of the brain define thought or memory.
When humans understood hydrodynamics, we applied that understanding to the body and thought we had it all figured out. The heart pumped blood, which brought nutrients to the organs, and so on.
When humans discovered electricity, we slapped ourselves on the forehead and exclaimed "of course!! it's electric" and we have now applied that understanding on top of our previous understanding.
But we still don't know what consciousness or thought is, and the idea that it is a bunch of electrical impulses is not quite proven.
There is electrical firing of neurons, absolutely, but does it directly define thought?
I'm happy to say we don't know, and that "mind-reading" devices are yet un-proven.
A few start-ups are doing things like showing people images while reading brain activity and then trying to understand what areas of the brain "light-up" on certain images, but I think this path will prove to be fruitless in understanding thought and how the mind works.
Agree completely. The brain is so incredibly complex that we've barely scratched the surface. It's not just neurons, which are themselves very complex and vary wildly genetically from one to another - it's hundreds of other helper cell types all interacting with each other in sometimes bizarre ways.
To try to boil it all down to any simple signal is just never going to work. If we want to map consciousness, it's going to be as complex as simulating it ourselves, creating something as dense and detailed as a real brain.
I don't think it's anything other than electrical activity, but it's clearly not "some electrical signal". It's the totality of them. They are many, and complicated. And they seem to be required for consciousness. I doubt there's any proven conscious state in a human lacking electrical activity in the brain.
We know that the brain is a structure that works through electrochemical reactions. Synapses transmit signals sent by axons to neurons. We can test this. We can measure it. There's nothing else going on that we can describe using known science.
Ah, we might say, maybe there is an unknown science - there was so much we did not know about before, like electricity, like X-rays, like quantum physics, and then we did, and the world changed.
The difference is that we observed something that science could not explain, and then we found the explanation, and a new science was born.
It's pretty clear to me - but you may know more - that we can explain all brain activity through known science. It might be hard to think of us as nothing more than a bunch of electrochemical reactions in a real-world reinforcement learning system, but that's what we are: there's no gap that needs new science, is there?
Scalp-recorded EEG does not measure action potentials; it can only measure the graded potentials of basically one type of neuron (pyramidal cells) in the cortex, which is a really tiny percentage of both the neurons and the electrical activity in the brain. Additionally, there are the various roles neurotransmitters play in the brain, and glial cells seem to play an important role as well. So it’s definitely not the case that there aren’t any gaps that need new science, and even if there weren’t, it’s a pretty big stretch from there to decoding all brain activity solely through the electrical component.
It seems neatly organized to say "that we can explain all brain activity" while not bounding exactly what counts as "brain activity." I think prior to recent research [1] people would have concluded that memory was solely the domain of the brain. But the fact that sense/setting/environment allowed Clive Wearing to circumvent amnesia and access skills otherwise unavailable to his conscious mind [2] should raise questions about that understanding.
> We know that the brain is a structure that works through electrochemical reactions. Synapses transmit signals sent by axons to neurons. We can test this. We can measure it. There's nothing else going on that we can describe using known science
But what we can describe using known science doesn't describe the system. That doesn't mean the vacuum is voodoo. It's just a strong hint something more is going on. (Like the photoelectric effect.)
We know more about dark energy and matter than the dark essence that separates our leading electrochemical models from consciousness.
Can we? We can only see whatever we can measure with the tools we currently have, which are based on the knowledge we currently have. Who's to say there isn't something out there we haven't discovered yet? There's more than enough we still don't understand in many domains of science
> There is electrical firing of neurons, absolutely, but does it directly define thought?
Well, surgeons and researchers have shown that electrical stimulation of certain brain regions can induce "perception" during procedures. They can make a patient have the conscious experience of certain smells, for instance.
It's not conclusive proof of anything, but I wouldn't bet against us getting closer to the mark than we were when we only considered hydrodynamics as the model.
> surgeons and researchers have shown that electrical stimulation of certain brain regions can induce "perception" during procedures
I can carefully drop liquid reactants on a storage medium and induce nontrivial and reproducible changes in any computer reading it. That doesn't tell me how digital storage works, it just says I'm proximate to the process.
> I don't believe that the electrical signals of the brain define thought or memory.
Yes and no. It'll be something like a JPEG file. You can have a JPEG file that contains an image of a cat. But give that file to someone who has no clue about JPEG encoding and the file looks like random noise. They'll take 100 years to figure out it's an image of a cat.
Actually it's like if you take an electron beam prober to one of the NVidia AI GPU chips while it's figuring out whether it likes Wordsworth poetry.
You say you don’t believe something is true and then say you don’t know, but I’ll disagree with the claim that electrical signals don’t “define” (encode) thoughts.
To be clear, of course it’s true that our thoughts are more than just electrical activity. The brain is a system. However, it seems clear that thoughts are at least partially encoded in electrical activity.
What you say those startups will find fruitless has already been done for years in research settings. It may not be a successful business model, but it’s already been demonstrated.
There are fMRI studies and electrical-measurement studies. You could argue that fMRI decoding of images is not electrical activity, which is true, but a bunch of work shows they are strongly correlated.
From electrical activity alone we’re already decoding information like words, so it’s hard to claim electrical activity doesn’t define thoughts.
Maybe you mean to say, doesn’t define all the content of our thoughts which is a much different claim.
Well, if you are making the assertion, which you implicitly seem to be, you must first define thought. Is a word == a thought? And as for correlations, we all know the adage about correlation and causation. Not that I would make the counterargument that thought is not encoded by electrical signals, but I would bet you aren’t totally correct. Do you think there will be no future paradigm shifts?
Another analogy I heard is that measuring EEG is like standing outside a stadium during a match and listening to the roar of the crowd.
Reading thoughts through EEG is like standing outside the stadium, listening for the roar of the crowd, and based on what you hear, knowing what the umpire's mother-in-law had for breakfast.
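To make the stadium analogy quantitative, here's a toy numpy sketch (made-up numbers, independent Gaussian "neurons", nothing like real EEG physics): a trace that is the sum of thousands of sources retains almost no information about any individual one.

```python
import numpy as np

rng = np.random.default_rng(0)

n_neurons = 10_000   # the "crowd": independent toy sources
n_samples = 5_000    # time points

# Each toy neuron emits an independent random signal. (Real neural
# activity is correlated and structured; this is the simplest case.)
sources = rng.standard_normal((n_neurons, n_samples))

# A scalp electrode sees roughly the sum of everything underneath it.
eeg = sources.sum(axis=0)

# How much does the summed trace tell us about any single neuron?
r = np.corrcoef(eeg, sources[0])[0, 1]
print(f"correlation between the 'roar' and one source: {r:.4f}")
```

In this toy setup the correlation between the sum and any single source scales like 1/sqrt(N), so with ten thousand sources it is already lost in the noise.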
One thing is probably true: you have to train on the individual person, and the model isn’t transferable to a different person. It’s similar to taking an LLM and training a probe on its internal activations to “read its thoughts”: the results won’t transfer to interpreting the semantic contents of a different LLM’s network activity.
So you probably can’t build a universal mind-reading device.
> Train a decoder on rich neural recordings, then test it on entirely new thoughts chosen under blinded conditions.
There have been enough studies about this, and the result is mostly the same: it's difficult to nearly impossible to reliably decode neural recordings that differ from the distribution of neural recordings the decoder was trained on. There are a lot of reasons why this happens; electrical activity being insufficient is not one of them.
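A minimal sketch of that distribution-shift failure, using synthetic two-class "neural features" and a nearest-centroid decoder (all numbers invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def make_session(mean_a, mean_b, n=200):
    """Two-class synthetic 'neural features' for one recording session."""
    x = np.vstack([rng.normal(mean_a, 1.0, size=(n, 2)),
                   rng.normal(mean_b, 1.0, size=(n, 2))])
    y = np.array([0] * n + [1] * n)
    return x, y

# Session 1: the distribution the decoder is trained on.
x_train, y_train = make_session([+3.0, 0.0], [-3.0, 0.0])

# Session 2: same task, but the signal-to-feature mapping has drifted
# (a different person, a shifted electrode, a different day...).
x_new, y_new = make_session([0.0, +3.0], [0.0, -3.0])

# A minimal nearest-centroid decoder.
centroids = np.array([x_train[y_train == c].mean(axis=0) for c in (0, 1)])

def decode(x):
    dists = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

acc_in = (decode(x_train) == y_train).mean()
acc_out = (decode(x_new) == y_new).mean()
print(f"in-distribution accuracy:     {acc_in:.2f}")   # near perfect
print(f"out-of-distribution accuracy: {acc_out:.2f}")  # near chance
```

Nothing about the signal is "insufficient" here; the decoder simply has no way to know that the feature mapping changed between sessions.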
I thought it was pretty established by now that it is likely that other parts of the body participate in both memory and thought, a fully distributed system?
This is silly. It's the sum of electrical and chemical network activity in the brain. There's nothing else it can be. We've got a good enough handle on physics to know that it's not some weird quantum thing, it's not picking up radio signals from some other dimension, and it's not some sort of spirit or mystical phlogiston.
Your mind is the state of your brain as it processes information. It's a computer, in the sense that anything that processes information is a computer. It's not much like silicon chips or the synthetic computers we build, as far as specific implementation details go.
There's no scientific evidence that anything more is needed to explain everything the mind and brain does. Electrical and chemical signaling activity is sufficient. We can induce emotions, sights, sounds, smells, memories, moods, pleasure, pain, and anything you can experience through targeted stimulation of neurons in the brain. The scale of our experiments has been gross, only able to read and write from large numbers of neurons, but all the evidence is consistent.
There's not a single rigorously documented phenomenon, experiment, or any data in existence that suggests anything more than electrical and chemical signaling is needed to explain the full and wonderful and awe-inspiring phenomenon of the human mind.
It's the brain. We are self constructing software running on 2lb chunks of fancy electric meat stored in a bone vat with a sophisticated network of sensors and actuators in a wonderful biomechanical mobility platform that empowers us to interact with the world.
It explains consciousness, intelligence, qualia, and every other facet and nuance of the phenomena of mind - there's no need to tack on other explanations. It'd be like insisting that gasoline also requires the rage of fire spirits in order to ignite and power combustion engines - once you get to the point of understanding chemical combustion and expansion of gases and transfer of force, you don't need the fire spirits. They don't bring anything to the table. The scientific explanation is sufficient.
Neocortical networks, with thalamic and hippocampal system integrations, are sufficient to explain the entirety of human experience, in principle. We don't need fire spirits animating cortical stacks, or phlogiston or ether or spirit.
Could spirit exist as a distinct, separate phenomenon? Sure. It's not intrinsic to subjective experience, consciousness, and biological intelligence, though, and we should use tools of rational thinking when approaching these subjects, because a whole lot of pseudo-scientific BS gets passed as legitimate scientific and philosophical discourse without having any firm grounding in reality.
We are brains in bone vats - nothing says otherwise. Unless or until there's evidence to the contrary, let that be enough.
I think you misunderstood the person you're responding to. They did not say there was some higher force beyond the physical pieces.
What they're saying is that the brain is really really complicated and our understanding of biology is far too rudimentary right now to be saying "yes, absolutely, 100% sure that we know the nature of consciousness from this one measurement of one type of signal".
* Neurons are very complex and all have unique mutations from one another
* Hundreds of other types of cells in the brain interact with them and each other in ways we don't understand
* The various other parts of the body chemically interact with the brain in ways we don't understand yet, like the gut microbiome
Trying to flatten all of consciousness to one measurement is just not sufficient. It's like trying to simulate the entire planet as a perfect sphere of uniform density. That works OK for some things but falls apart for more complex questions.
There’s nothing in known physics that explains consciousness. I agree about the rest, but consciousness not only defies explanation by known physics, it’s so far beyond what’s known that there isn’t even any concept of what it could be. We barely have the ability to describe it, let alone explain it.
> Neocortical networks, with thalamic and hippocampal system integrations, are sufficient to explain the entirety of human experience, in principle.
Where did you get that? That's not an established scientific result; it's a philosophical stance (strong physicalist functionalism) expressed as if it were empirical fact.
We cannot simulate a full human brain at the correct level of detail, we cannot record every spike and synaptic change in a living human brain, and we do not have a theory that predicts which neural organizations are conscious just from first principles of physics and network topology.
> We can induce emotions, sights, sounds, smells, memories, moods, pleasure, pain, and anything you can experience through targeted stimulation of neurons in the brain
That shows dependence of experience on brain activity but dependence is not the same thing as reduction or explanation.
We know certain neural patterns correlate with pain, color vision, memories, etc., and we can causally influence experience by interacting with the brain.
But why is any of this electrical/chemical stuff accompanied by subjective experience, instead of just running as a complex zombie machine? The ability to toggle experiences by toggling neurons shows connection and that's it; it doesn't explain anything.
> We've got a good enough handle on physics to know that it's not some weird quantum thing, it's not picking up radio signals from some other dimension, and it's not some sort of spirit or mystical phlogiston.
We do have a good handle on how non-conscious physical systems behave (engines, circuits, planets, whatever). But we don't have any widely accepted physical theory that derives subjective experience from physical laws. We don't know which physical/computational structures (if any) are sufficient and necessary for consciousness.
You are assuming, without any evidence, that current physics plus "it's all computation" already gives a complete ontology of mind. So what is consciousness? Define it with physics, show me the equations; you can't.
> It's a computer, in the sense that anything that processes information is a computer. It's not much like silicon chips or the synthetic computers we build, as far as specific implementation details go.
We design transformer architectures, we set the training objectives, and we can inspect every weight and activation of an LLM. Yet even with all that access, tens of thousands of ML PhDs, and years of work, we still don't fully understand why these models generalize the way they do, why they develop certain internal representations, and how exactly particular concepts are encoded and combined.
If we struggle to interpret a ~10^11-parameter transformer whose every bit we can log and replay, it's real hubris to act like we've basically figured out a 10^14-10^15-synapse, constantly rewiring, developmentally shaped biological network to the point of confidently saying "we know there's nothing more to mind than this, case closed lol".
Our ability to observe and manipulate the brain is currently far weaker than our ability to inspect artificial nets, and even those are not truly understood in a deep, mechanistic, explanatory sense.
> Your mind is the state of your brain as it processes information.
Ok, but then you have a problem: if anything that processes information is a computer, and mind is "just computation", then which computations are conscious?
Is my laptop conscious when it runs a big simulation?
Is a weather model conscious?
Are all supercomputers conscious by default just because they flip bits at scale?
If you say yes, you've gone to an extreme pancomputationalism that most people (including most physicalists) find extremely implausible.
If you say no, then you owe a non-hand-wavy criterion: what's the principled difference, in purely physical/computational terms, between a conscious system (a human brain) and a non-conscious but still massively computational system (a weather simulation, a supercomputer cluster)? That criterion is exactly the kind of thing we don't have yet.
So saying "it’s just computation" without specifying which computations and why they give rise to a first person point of view leaves the fundamental question unanswered.
And one more thing: your gasoline analogy is misleading. Combustion never presented a "hard problem of combustion" in the sense of a first-person, irreducible qualitative aspect. People had wrong physical theories, but once chemistry was in place, everything was observable from the outside.
Consciousness is different, you can know all the physical facts about a brain state and still not obviously see why it should feel like anything at all from the inside.
That's why even hardcore physicalist philosophers talk about the "explanatory gap". Whether or not you think it's ultimately bridgeable, it's not honest to say the gap is already closed and the scientific explanation is "sufficient".
[alert] Pre-thought match blacklist: 7f314541-abad-4df0-b22b-daa6003bdd43
[debug] Perceived injustice, from authority, in-person
[info] Resolution path: eaa6a1ea-a9aa-42dd-b9c6-2ec40aa6b943
[debug] Generate positive vague memory of past encounter
Not a reason to stop trying to help people with spinal damage, obviously, but a danger to avoid. It's easy to imagine a creepy machine argues with you or reminds you of things, but consider how much worse it'd be if it derails your chain of thought before you're even aware you have one.
When it infers illicit intent, it "corrects" you by biasing the output: a misclick here, a poisoned verb there... phantom intention drift™ injected into your parietal lobe milliseconds before your consciousness even boots.
Split brain experiments show that a person rationalizes and accommodates their own behavior even when "they" didn't choose to perform an action[1]. I wonder if ML-based implants which extrapolate behavior from CNS signals may actually drive behavior that a person wouldn't intrinsically choose, yet the person accommodates that behavior as coming from their own free will.
> The patients could accurately indicate whether an object was present in the left visual field and pinpoint its location, even when they responded with the right hand or verbally. This despite the fact that their cerebral hemispheres can hardly communicate with each other and do so at perhaps 1 bit per second
1 bit per second and we are passing complex information about location in 3d space?
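A quick back-of-envelope check on that incredulity (the candidate-location counts here are made up for illustration): even a coarse grid of possible locations needs several bits to specify, which at the quoted 1 bit/s would take seconds per response.

```python
import math

# Bits needed to single out one location from a grid of candidates,
# and the time that would take at the quoted 1 bit per second.
for n_locations in (4, 16, 64):
    bits = math.log2(n_locations)
    print(f"{n_locations:>2} candidate locations -> {bits:.0f} bits "
          f"-> ~{bits:.0f} s at 1 bit/s")
```

If responses come faster than that, either the effective channel is wider than 1 bit/s or the information is taking some other route.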
That's a great paper, but I don't think it calls into question anything about post-hoc rationalizations, and it might actually put that idea on more solid ground.
The AI here is following the Libet ([0]1983) result about preconscious activity apparently preceding 'voluntary' acts (which really elevated the question of what 'free will' means).
The prima facie case for free will* is that it feels free. If you can predict the action before the feeling it removes that argument (unless you want to invoke time travel as an option)
*one of the predominant characterisations of free will, anyway. I'm a compatibilist, so I have no issue with caused feelings of decision making being in conflict with free will. I also have a variation of Tourette's, so I have a different perception of doing things wilfully compared to most people. It's really hard to describe how sometimes you can't tell if something was a tic or not.
I don't see why having some latency in the path of free will makes it no longer free. Before my arm moves up, there is a motor neuron that fires that is always correlated with my arm moving up; doesn't that just mean the free will occurs earlier in the process than the motor neuron firing?
There are a lot of things I feel that end up not being "real," like embarrassment, failure, and anxiety. Why should free will not be like any of those?
There is no single definition shared by all scientists. However, if you define free will as choices that are completely free of deterministic or even statistically deterministic causes that science could in principle predict, then most scientists would say: no, that kind of free will probably doesn’t exist.
I think the real danger lies in how many will accept that output as the unadulterated unmistakable truth for actions, for judgment. Talk about a sinister device.
For me, it immediately made me think of Psycho-Pass.
It’s a cyberpunk anime where society uses a system called the Sibyl System to constantly scan people’s mental states and “crime potential” (their Psycho-Pass).
People can be arrested before they’ve done anything - just because the system picked up certain signals from them.
Rather than the Karpathy thing about in-class essays for everything, maybe random selections of students will be asked to head to the school fMRI machine and recall the details of writing their essay homework away from school.
If, one day, someone can make a small, cheap device that does the job of an fMRI, it would be more world-changing than you can imagine. If you had easy access to realtime data about what is going on in your brain, there is evidence to suggest that you can learn to influence the data and literally change your own mind.
That's actually happening. Commercially you can buy a 0.55T system such as the Siemens Free Max for around $500k.
There are also developments in ultra-low-field fMRI (<0.1T) which use permanent magnets and are estimated to retail in the five-figure range; however, it's more for structural usage (it can identify a tumour or stroke progression).
What you are describing sounds like being able to control your own heart rate if you see it on a monitor. Maybe combining low-resolution fMRI with models trained on higher-resolution data could give you enough visibility to learn how to activate areas of the brain that you wouldn't normally use for tasks.
It's interesting that the path from 'decide to do something' to performing the action is hundreds of ms long. It's also interesting that grabbing the data early in the process and acting on it can perform the action before the conscious 'self' understands fully that the action will take place. It's just another reminder that the 'you' that you consider to be running the show is really just a thin translation layer on top of an ocean of instinct, emotion, and hormones that is the real 'you'.
I rather prefer the holistic take that we are our whole selves and not just the part that reflects on what we do or the part that reacts to external and internal material stimuli. We know we can change the instincts, emotions, and hormones when they conflict with what we know by reflection to be just and good. To put it another way, we know that we can do things "without thinking" that are either just or unjust and by reflection can achieve some level of mastery over the direction of our impetuses.
You can take that position, but there's very little evidence that the translation layer that is accepting what I'm writing right now has any access to the vast expanse of mind underneath. The evidence is growing that our subconscious decides and 'we', as the conscious element of the system, rationalize that decision after the fact. There is a Children of Time novel about sentient octopuses that have a very pronounced disconnect between the 'crown' making the big-picture calls and the 'reach' doing the implementation. When I first read it, that seemed so alien, but maybe it isn't so alien after all.
> That learning enabled the system to detect her intention to play hundreds of milliseconds before she consciously attempted to do so, says trial leader Richard Andersen, a neuroscientist at the California Institute of Technology in Pasadena.
Sounds like Libet's Delay and all that. Conscious awareness is just a documentary covering something that has been decided some half a second ago.
Unlike the vast sea of the subconscious, we can try to take direct control of technology. But we don’t. So we are left to fret about what technology will do to us (meaning: what people with power will use it for).
The Amish take control of technology like that. If it doesn't pass review it's forbidden. It's a pretty good idea.
Consider that heroin used to be legal. We might have a dozen such technologies that we know are hazardous but have yet to kill enough people to force the issue.
I wonder how much this experience is similar to the Alien Hand Syndrome, where people experience that part of their body, usually a hand, act on their own.
Of course, for shallow people in need of validation, doing exactly that is the point of sycophancy. "I have a great idea, I will ask the AI.." followed immediately by "What an insightful question, this gets to the heart of the matter"
I find the take that a quirk in how state-of-the-art assistive technology works is reason for privacy fearmongering to be tired, unimaginative, and typical of today's journalism, which cares more for clicks than reporting facts.
It's a very interesting quirk of an immensely useful device for those that need it, but it's not an ethical dilemma.
I for one am sick and tired of these so-called ethicists whose only work appears to be stirring up outrage over nothing and holding back medical progress.
Similar disingenuous articles appeared when stem-cell research was new, and still do from time to time. Saving lives and improving life for the least fortunate is not an ethical dilemma, it's an unequivocally good thing.
Quit the concern trolling, nature.com; you're supposed to be better than that.
The miracle that is humans and humanity came about through millennia of unrestrained fertility and lots of sex℠ producing many babies, most of whom didn't survive to adulthood. Insects and fish still do this on a grand scale. It's where we came from and who we are, and that's Not Bad™, and I think it's unequivocally a bad thing that we keep thinking we know better and interfering with it. We are failing miserably to propel our species forward for the future. "Survival of the fittest" is a mistake; it's destruction of the no-longer-adequate that works the magic.
(pretty proud of myself for realizing after I put in the tm that I could go back and put in a service mark too)
>and I think it's unequivocally a bad thing that we keep thinking we know better and interfering with it.
I'll believe you actually hold this opinion after you die from an entirely curable disease. Until then I'll assume you actually think we do know better and that our interfering with, for example, bacterial infections, is actually a good thing.
Not to tell you what you think, but this is one of those opinions people seem to think they hold until it's phrased in a way that demonstrates the cost of the idea to them.
Just the fact that you survived long enough to type this comment into HN tells me it's highly unlikely medicine hasn't saved your life at least once already. And if it hasn't, it will.
I can count 5 times off the top of my head that I would've died if not for modern medicine (usually antibiotics, but also one life-saving surgery). I would've been rendered a cripple a few times over too!
Trying to carry out a good thing (neural assistive technology) can open the door for the expansion of oppression (literal thought policing, in ?? years) in the same way that trying to stop a bad thing (terrorism, CSAM) can. It's not an immediate threat, it's a foot in the door.
This is quite the spicy take for something that could have far more than one purpose.
The problem with humanity is some people pick up the hammer and build a house while others will crack your head open with it and eat the pink gooey insides. The discussion of technology should be able to withstand the good and bad points of its conception.
>The discussion of technology should be able to withstand the good and bad points of its conception
True, but that's a poor excuse to make up hypothetical problems that don't even exist. Tying it into the current craze about data privacy is a bit too transparent imo.
The journalists should do better; the article makes a downright dishonest interpretation of the issue by shoehorning it into the lens of a long-running controversy about data privacy that is in no way applicable to the technology they're discussing.
The privacy issue is not only hypothetical, but far enough off in the future that it'd be a better fit for a sci-fi novel than nature.com reporting.
The problem is not discussing the downsides, the problem is doing it dishonestly, as the article does.
It’s so much more convincing with a device though. Backed by scientific consensus. Where do I leave my brain, offload my judgment, and let it tell me what I need to know about anyone?
20-year old me would’ve been amazed at how much closer technology is getting to Ghost in the Shell. 30-year old me is firmly in the “nope, no thanks” camp.
I love seeing the advancements still, don’t get me wrong, but in the current data, advertising, and attention economies under Capitalism? No fucking way that shit is ending up in my head.
Seems like they are really jumping to conclusions here.
> Smith’s BCI system, implanted as part of a clinical trial, trained on her brain signals as she imagined playing the keyboard. That learning enabled the system to detect her intention to play hundreds of milliseconds before she consciously attempted to do so
There are some serious problems lurking in the narrative here.
Let's look at it this way: they trained a statistical model on all of the brain patterns that happen when the patient performs a specific task. Next, the model was presented with the same brain pattern. When would you expect the model to complete the pattern? As soon as it recognizes the pattern, of course!
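Here's a toy sketch of that pattern-completion effect (synthetic signal, invented "conscious attempt" annotation, no real BCI data): a matched filter that learned a stereotyped motif fires early in the motif, ahead of any marker placed later inside it.

```python
import numpy as np

rng = np.random.default_rng(2)

# A stereotyped "preparatory" motif the model saw over and over
# during training (a smooth 50-sample ramp, chosen arbitrarily).
motif = np.sin(np.linspace(0.0, np.pi, 50))

# A test trial: noise, with the motif starting at t = 200.
trial = rng.normal(0.0, 0.2, size=500)
onset = 200
trial[onset:onset + 50] += motif

# Suppose the "conscious attempt" is annotated at the motif's peak.
conscious_mark = onset + 25

# A matched filter built from just the motif's leading edge: it
# responds once the early part of the pattern is recognizable.
template = motif[:20]
scores = np.correlate(trial, template, mode="valid")
detected = int(scores.argmax())

print(f"detector fires at t={detected}, annotation at t={conscious_mark}")
print(f"apparent 'prediction' lead: {conscious_mark - detected} samples")
```

The "prediction" lead is an artifact of where the annotation sits inside a pattern the model learned as a whole.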
> That learning enabled the system to detect her intention to play hundreds of milliseconds before she consciously attempted to do so
There are two overconfident assumptions at play here:
1. Researchers can accurately measure the moment she "consciously attempted" to perform the pretrained task.
2. Whatever brain patterns that happened before this arbitrary moment are relevant to the patient's intention.
Notice the supposed contradiction here: the narrative treats the first assumption as correct and the second as correct too, so the second somehow doesn't invalidate the first. How? Because the gap between them gets a special name, "precognition"... Tautological nonsense.
Not only do these assumptions blatantly contradict each other, they are totally irrelevant to the model itself. The BCI system was trained on her brain signals during the entirety of her performance. It did not model "her intention" as anything distinct from the rest of the session. It modeled the performance. How can we know that when the patient begins a totally different task, the model won't just "play the piano" like it was trained to? Oh wait, we do know:
> But there was a twist. For Smith, it seemed as if the piano played itself. “It felt like the keys just automatically hit themselves without me thinking about it,” she said at the time. “It just seemed like it knew the tune, and it just did it on its own.”
So the model is not responding to her intention. That's supposed to support your hypothesis how?
---
These are exactly the kind of narrative problems I expect to find buried in any "AI" research. How did we get here? I'll give you a hint:
> Along the way, he says, AI will continue to improve decoding capabilities and change how these systems serve their users.
This is the fundamental miscommunication. Statistical models are not decoders. Decoding is a symbolic task. The entire point of a statistical model is to overcome the limitations of symbolic logic by not doing symbolic logic.
By failing to recognize this distinction, the narrative leads us right to all the familiar tropes:
LLMs are able to perform logical deduction. They solve riddles, math problems, and find bugs in your code. Until they don't, that is. When an LLM performs any of these tasks wrong, that's simply a case of "hallucination". The more practice it gets, the fewer instances of hallucination, right? We are just hitting the current "limitation".
This entire story is predicated on the premise that statistical models somehow perform symbolic logic. They don't. The only thing a statistical model does is hallucinate. So how can it finish your math homework? It's seen enough examples to statistically stumble into the right answer. That's it. No logic, just weighted chance.
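To make the "weighted chance" point concrete, here's a deliberately crude sketch (pure frequency counting over toy data; not a claim about how any real LLM is implemented):

```python
from collections import Counter
import random

random.seed(0)

# A purely statistical "calculator": it never computes anything, it only
# counts which answer followed each question in its training corpus.
corpus = [(f"{a}+{b}=", str(a + b))
          for a in range(10) for b in range(10) for _ in range(5)]
corpus += [("7+7=", "15")]  # a little corruption, as any real corpus has

counts = {}
for q, ans in corpus:
    counts.setdefault(q, Counter())[ans] += 1

def answer(q):
    if q not in counts:
        # No symbolic fallback: unseen questions get a random seen answer.
        return random.choice([a for c in counts.values() for a in c])
    # "Weighted chance": sample proportionally to training frequency.
    c = counts[q]
    return random.choices(list(c), weights=list(c.values()))[0]

print(answer("3+4="))    # "7" -- statistics, not arithmetic
print(answer("7+7="))    # usually "14", occasionally the corrupted "15"
print(answer("23+45="))  # out of distribution: confident nonsense
```

Right answers in-distribution, confident nonsense outside it, and no logic anywhere in between.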
Correlation is not causation. Statistical relevance is not symbolic logic. If we fail to recognize the latter distinction, we are doomed to be ignorant of the former.
I believe that training a system to understand the electrical signals that define a movement is significantly different from a system that understands thought.
I work in neurotech, I don't believe that the electrical signals of the brain define thought or memory.
When humans understood hydrodynamics, we applied that understanding to the body and thought we had it all figured out. The heart pumped blood, which brought nutrients to the organs, etc. etc.
When humans discovered electricity, we slapped ourselves on the forehead and exclaimed "of course!! it's electric" and we have now applied that understanding on top of our previous understanding.
But we still don't know what consciousness or thought is, and the idea that it is a bunch of electrical impulses is not quite proven.
There is electrical firing of neurons, absolutely, but does it directly define thought?
I'm happy to say we don't know, and that "mind-reading" devices are as yet unproven.
A few start-ups are doing things like showing people images while reading brain activity and then trying to understand what areas of the brain "light-up" on certain images, but I think this path will prove to be fruitless in understanding thought and how the mind works.
Agree completely. The brain is so incredibly complex that we've barely scratched the surface. It's not just neurons, which are very complex and vary wildly in genetics from one to the next - it's hundreds of other helper cells all interacting with each other in sometimes bizarre ways.
To try to boil it all down to any simple signal is just never going to work. If we want to map consciousness it's going to be as complex as simulating it ourselves, creating something as dense and detailed as a real brain.
I don't think it's anything other than electrical activity, but it's clearly not "some electrical signal". It's the totality of them. They are many, and complicated. And they seem to be required for consciousness. I doubt there's any proven conscious state in a human lacking electrical activity in the brain.
We know that the brain is a structure that works through electrochemical reactions. Synapses transmit signals sent by axons to neurons. We can test this. We can measure it. There's nothing else going on that we can describe using known science.
Ah, we might say, maybe there is an unknown science - we did not know about so much before, like electricity, like X-Rays, like quantum physics, and then we did, and the World changed.
The difference is that we observed something that science could not explain, and then we found the new science that explained it, and a new science was born.
It's pretty clear to me - but you may know more - that we can explain all brain activity through known science. It might be hard to think of us as nothing more than a bunch of electrochemical reactions in a real-world reinforcement learning system, but that's what we are: there's no gap that needs new science, is there?
Scalp-recorded EEG does not measure action potentials; it can only measure the graded potentials of basically one type of neuron (pyramidal cells) in the cortex, which is a really tiny percentage of both neurons and electrical activity in the brain. Additionally, there are the various roles neurotransmitters play in the brain, and glial cells seem to play an important role as well. So, it's definitely not the case that there aren't any gaps that need new science, and even if there weren't, it's a pretty big stretch from there to decoding all brain activity solely through the electrical component.
It seems neatly organized to say "that we can explain all brain activity" without ever bounding exactly what counts as "brain activity." I think prior to recent research [1], people would have concluded that memory was solely the domain of the brain. But the fact that sense/setting/environment allowed Clive Wearing to circumvent amnesia and access skills otherwise unavailable to his conscious mind [2] should raise questions about that understanding.
[1] https://www.nyu.edu/about/news-publications/news/2024/novemb...
[2] https://en.wikipedia.org/wiki/Clive_Wearing
No, none of this is settled. We cannot adequately explain brain function with current science.
There have been studies this year implying that some brain functions rely on quantum interactions.
> We know that the brain is a structure that works through electrochemical reactions. Synapses transmit signals sent by axons to neurons. We can test this. We can measure it. There's nothing else going on that we can describe using known science
But what we can describe using known science doesn't describe the system. That doesn't mean the vacuum is voodoo. It's just a strong hint something more is going on. (Like the photoelectric effect.)
We know more about dark energy and matter than the dark essence that separates our leading electrochemical models from consciousness.
Can we? We can only see whatever we can measure with the tools we currently have, which are based on the knowledge we currently have. Who's to say there isn't something out there we haven't discovered yet? There's more than enough we still don't understand in many domains of science
I think there is new science we need first. The brain very likely uses quantum processes. We don't understand quantum mechanics yet.
> There is electrical firing of neurons, absolutely, but does it directly define thought?
Well, surgeons and researchers have shown that electrical stimulation of certain brain regions can induce "perception" during procedures. They can make a patient have the conscious experience of certain smells, for instance.
It's not conclusive proof of anything, but I wouldn't bet against us getting closer to the mark, than we were when we only considered hydro-dynamics as the model.
> surgeons and researchers have shown that electrical stimulation of certain brain regions can induce "perception" during procedures
I can carefully drop liquid reactants on a storage medium and induce nontrivial and reproducible changes in any computer reading it. That doesn't tell me how digital storage works, it just says I'm proximate to the process.
It goes far beyond smells, in ways I find deeply unsettling
We can induce religious experience, see "The God Helmet"
https://en.wikipedia.org/wiki/God_helmet
or deep depression & suicidal thoughts
https://www.nejm.org/doi/full/10.1056/NEJM199905133401905
> I don't believe that the electrical signals of the brain define thought or memory.
Yes and no. It'll be something like a JPEG file. You can have a JPEG file that contains an image of a cat. But give that file to someone who has no clue about JPEG encoding and the file looks like random noise. They'll take 100 years to figure out it's an image of a cat.
Actually, it's more like taking an electron-beam prober to one of the Nvidia AI GPU chips while it's figuring out whether it likes Wordsworth's poetry.
You say you don’t believe something is true and then say you don’t know, but I’ll disagree with “electrical signals don’t “define” (encode) thoughts.
To be clear, of course it’s true that our thoughts are more than just electrical activity. The brain is a system. However, it seems clear that thoughts are at least partially encoded in electrical activity.
What you say those startups will find fruitless has already been done for years in a research setting. It may not be a successful business model, but it's already been demonstrated.
There are fMRI studies and electrical-measurement studies. You could argue that fMRI decoding of images is not electrical activity, which is true, but a bunch of work shows they are strongly correlated.
From electrical activity alone we're already decoding information like words, so it's hard to claim electrical activity doesn't define thoughts.
Maybe you mean to say it doesn't define all the content of our thoughts, which is a much different claim.
Well, if you are making the assertion, which you implicitly seem to be, you must first define thought. Is a word == thought? As for correlations, we all know the adage about correlation and causation. Not that I would make the counter-argument, that thought is not encoded by electrical signals, but I would bet you aren't totally correct. Do you think there will be no future paradigm shifts?
For me it's like attaching wires to a CPU and trying to decipher what YouTube video is playing right now.
Absolutely not possible.
That is such a great analogy!
Another I heard is that measuring EEG is like standing outside a stadium during a match and listening to the roar of the crowd.
Reading thoughts through EEG is like standing outside the stadium, listening for the roar of the crowd, and based on what you hear, knowing what the umpire's mother-in-law had for breakfast.
One thing is probably true: You have to train on the individual person, and it’s not transferable to a different person. Similar to how when taking an LLM and training on the fluctuations of its neural network to “read its thoughts”, the training results won’t transfer to interpreting the semantic contents of the network activity of a different LLM.
So you probably can’t build a universal mind-reading device.
You can't build a universal mind-reading device that doesn't require calibration.
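A toy numpy sketch of why that per-subject calibration is unavoidable, assuming nothing more than a subject-specific linear mixing between shared "intents" and what the electrodes see (the mixing matrix is a made-up stand-in for differing anatomy and electrode placement):

```python
import numpy as np

rng = np.random.default_rng(1)

# All subjects share the same four latent "intents", but each subject's
# electrodes observe them through a different, unknown linear mixing.
def make_subject(n_trials=400, n_latent=4, n_channels=32):
    mixing = rng.normal(size=(n_latent, n_channels))  # subject-specific
    intents = rng.integers(0, n_latent, n_trials)     # which "thought"
    channels = np.eye(n_latent)[intents] @ mixing
    channels += rng.normal(0, 0.3, channels.shape)    # measurement noise
    return channels, intents

Xa, ya = make_subject()

# Calibrate a one-vs-all least-squares decoder on subject A only.
W, *_ = np.linalg.lstsq(Xa, np.eye(4)[ya], rcond=None)

def accuracy(X, y):
    return float(((X @ W).argmax(axis=1) == y).mean())

print(accuracy(Xa, ya))  # near 1.0 on the subject it was calibrated for

# On fresh subjects the same decoder averages chance level (~0.25):
others = [accuracy(*make_subject()) for _ in range(20)]
print(np.mean(others))
```

Same latent "thoughts", same decoder, different wiring: without recalibrating to the new subject's mixing, the signal is there but unreadable.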
This sounds logical and convincing.
At the same time, it should also be easy to falsify.
Has an experimental setup like this been tested? If I'm not mistaken, it should be able to falsify your claim.
Train a decoder on rich neural recordings, then test it on entirely new thoughts chosen under blinded conditions.
If it can still recover the precise unseen content from signals alone, the claim that electrical activity is insufficient is overturned.
> Train a decoder on rich neural recordings, then test it on entirely new thoughts chosen under blinded conditions.
There have been enough studies about this, and the result is mostly the same: it's difficult to nearly impossible to reliably decode neural recordings that differ from the distribution of neural recordings the decoder was trained on. There are a lot of reasons why this happens; electrical activity being insufficient is not one of them.
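That out-of-distribution failure mode is easy to reproduce with a toy linear decoder (numpy sketch; the synthetic `shift` stands in for a new task, session drift, or anything else that moves recordings away from the training distribution):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two "thoughts" as feature clusters a linear decoder separates easily.
def session(shift=0.0):
    X = np.concatenate([rng.normal(-1, 0.5, (200, 8)),
                        rng.normal(+1, 0.5, (200, 8))]) + shift
    y = np.array([0] * 200 + [1] * 200)
    return X, y

Xtr, ytr = session()
# Least-squares linear decoder with a bias term, targets mapped to +/-1.
w, *_ = np.linalg.lstsq(np.c_[Xtr, np.ones(len(Xtr))], 2 * ytr - 1,
                        rcond=None)

def acc(X, y):
    pred = (np.c_[X, np.ones(len(X))] @ w > 0).astype(int)
    return float((pred == y).mean())

print(acc(*session()))           # near 1.0: new data, same distribution
print(acc(*session(shift=3.0)))  # collapses toward chance: same cluster
                                 # structure, shifted statistics
```

The class structure is still fully present in the shifted data; the decoder fails anyway, which is why OOD failure says nothing about whether the electrical signal is "sufficient".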
It seems like trying to take a single-pixel signal (so to speak) and interpolate an entire image out of it.
I thought it was pretty established by now that it is likely that other parts of the body participate in both memory and thought, a fully distributed system?
Does it make sense to think of thoughts, consciousness etc. as an emergent property of the neuronal activity in our brains?
This is silly. It's the sum of electrical and chemical network activity in the brain. There's nothing else it can be. We've got a good enough handle on physics to know that it's not some weird quantum thing, it's not picking up radio signals from some other dimension, and it's not some sort of spirit or mystical phlogiston.
Your mind is the state of your brain as it processes information. It's a computer, in the sense that anything that processes information is a computer. It's not much like silicon chips or the synthetic computers we build, as far as specific implementation details go.
There's no scientific evidence that anything more is needed to explain everything the mind and brain does. Electrical and chemical signaling activity is sufficient. We can induce emotions, sights, sounds, smells, memories, moods, pleasure, pain, and anything you can experience through targeted stimulation of neurons in the brain. The scale of our experiments has been gross, only able to read and write from large numbers of neurons, but all the evidence is consistent.
There's not a single rigorously documented phenomenon, experiment, or any data in existence that suggests anything more than electrical and chemical signaling is needed to explain the full and wonderful and awe-inspiring phenomenon of the human mind.
It's the brain. We are self constructing software running on 2lb chunks of fancy electric meat stored in a bone vat with a sophisticated network of sensors and actuators in a wonderful biomechanical mobility platform that empowers us to interact with the world.
It explains consciousness, intelligence, qualia, and every other facet and nuance of the phenomena of mind - there's no need to tack on other explanations. It'd be like insisting that gasoline also requires the rage of fire spirits in order to ignite and power combustion engines - once you get to the point of understanding chemical combustion and expansion of gases and transfer of force, you don't need the fire spirits. They don't bring anything to the table. The scientific explanation is sufficient.
Neocortical networks, with thalamic and hippocampal system integrations, are sufficient to explain the entirety of human experience, in principle. We don't need fire spirits animating cortical stacks, or phlogiston or ether or spirit.
Could spirit exist as a distinct, separate phenomenon? Sure. It's not intrinsic to subjective experience, consciousness, and biological intelligence, though, and we should use tools of rational thinking when approaching these subjects, because a whole lot of pseudo-scientific BS gets passed as legitimate scientific and philosophical discourse without having any firm grounding in reality.
We are brains in bone vats - nothing says otherwise. Unless or until there's evidence to the contrary, let that be enough.
I think you misunderstood the person you're responding to. They did not say there was some higher force beyond the physical pieces.
What they're saying is that the brain is really really complicated and our understanding of biology is far too rudimentary right now to be saying "yes, absolutely, 100% sure that we know the nature of consciousness from this one measurement of one type of signal".
* Neurons are very complex and all have unique mutations from one another
* Hundreds of other types of cells in the brain interact with them and each other in ways we don't understand
* The various other parts of the body chemically interact with the brain in ways we don't understand yet, like the gut microbiome
Trying to flatten all of consciousness to one measurement is just not sufficient. It's like trying to simulate the entire planet as a perfect sphere of uniform density. That works OK for some things but falls apart for more complex questions.
There’s nothing in known physics that explains consciousness. I agree about the rest, but consciousness not only defies explanation by known physics, it’s so far beyond what’s known that there isn’t even any concept of what it could be. We barely have the ability to describe it, let alone explain it.
> Neocortical networks, with thalamic and hippocampal system integrations, are sufficient to explain the entirety of human experience, in principle.
Where did you get that? That's not an established scientific finding; it's a philosophical stance (strong physicalist functionalism) expressed as if it were empirical fact. We cannot simulate a full human brain at the correct level of detail, we cannot record every spike and synaptic change in a living human brain, and we do not have a theory that predicts which neural organizations are conscious just from first principles of physics and network topology.
> We can induce emotions, sights, sounds, smells, memories, moods, pleasure, pain, and anything you can experience through targeted stimulation of neurons in the brain
That shows dependence of experience on brain activity, but dependence is not the same thing as reduction or explanation. We know certain neural patterns correlate with pain, color vision, memories, etc., and we can causally influence experience by interacting with the brain.
But why is any of this electrical/chemical stuff accompanied by subjective experience instead of just being a complex zombie machine? The ability to toggle experiences by toggling neurons shows connection, and that's it; it doesn't explain anything.
> We've got a good enough handle on physics to know that it's not some weird quantum thing, it's not picking up radio signals from some other dimension, and it's not some sort of spirit or mystical phlogiston.
We do have a good handle on how non-conscious physical systems behave (engines, circuits, planets, whatever). But we don't have any widely accepted physical theory that derives subjective experience from physical laws. We don't know which physical/computational structures (if any) are sufficient and necessary for consciousness.
You are assuming, without any evidence, that current physics plus "it's all computation" already gives a complete ontology of mind. So what is consciousness? Define it with physics, show me the equations; you can't.
> It's a computer, in the sense that anything that processes information is a computer. It's not much like silicon chips or the synthetic computers we build, as far as specific implementation details go.
We design transformer architectures, we set the training objectives, we can inspect every weight and activation of an LLM. Yet even with all that access, tens of thousands of ML PhDs, and years of work, we still don't fully understand why these models generalize the way they do, why they develop certain internal representations, and how exactly particular concepts are encoded and combined.
If we struggle to interpret a ~10^11-parameter transformer whose every bit we can log and replay, it's real hubris to act like we've got a constantly rewiring, developmentally shaped biological network of 10^14-10^15 synapses figured out to the point of confidently saying "we know there's nothing more to mind than this, case closed lol".
Our ability to observe and manipulate the brain is currently far weaker than our ability to inspect artificial nets and even those are not truly understood at a deep mechanistic concept level explanatory sense.
> Your mind is the state of your brain as it processes information.
Ok, but then you have a problem: if anything that processes information is a computer, and mind is "just computation", then which computations are conscious?
Is my laptop conscious when it runs a big simulation? Is a weather model conscious? Are all supercomputers conscious by default just because they flip bits at scale?
If you say yes, you've gone to an extreme pancomputationalism that most people (including most physicalists) find extremely implausible.
If you say no, then you owe a non hand wavy criterion, what's the principled difference, in purely physical/computational terms between a conscious system (human brain) and a non conscious but still massively computational system (weather simulation, supercomputer cluster)? That criterion is exactly the kind of thing we don't have yet.
So saying "it’s just computation" without specifying which computations and why they give rise to a first person point of view leaves the fundamental question unanswered.
And one more thing: your gasoline analogy is misleading. Combustion never presented a "hard problem of combustion" in the sense of a first-person, irreducible qualitative aspect. People had wrong physical theories, but once chemistry was in place, everything was observable from the outside.
Consciousness is different, you can know all the physical facts about a brain state and still not obviously see why it should feel like anything at all from the inside.
That's why even hardcore physicalist philosophers talk about the "explanatory gap". Whether or not you think it's ultimately bridgeable, it's not honest to say the gap is already closed and the scientific explanation is "sufficient".
From some dystopic device log:
Not a reason to stop trying to help people with spinal damage, obviously, but a danger to avoid. It's easy to imagine a creepy machine that argues with you or reminds you of things, but consider how much worse it'd be if it derails your chain of thought before you're even aware you have one.
This reminds me of "Upgrade" – a sci-fi movie about a paralyzed man who gets an AI brain implant, which can move his body for him. It's pretty decent.
https://www.imdb.com/title/tt6499752
Also "Common People," first episode in season 7 of Black Mirror. One word: ads [1]
[1] https://en.wikipedia.org/wiki/Common_People_(Black_Mirror)
Can you imagine having chatgpt in your brain to constantly police wrongthink? Would save the British media a job.
It might be able to react fast enough to prevent the horror of the wrongthink reaching twitter.
When it infers illicit intent, it "corrects" you by biasing the output: a misclick here, a poisoned verb there... phantom intention drift™ injected into your parietal lobe milliseconds before your conscious even boots.
You should make a text-based game.
Split brain experiments show that a person rationalizes and accommodates their own behavior even when "they" didn't choose to perform an action[1]. I wonder if ML-based implants which extrapolate behavior from CNS signals may actually drive behavior that a person wouldn't intrinsically choose, yet the person accommodates that behavior as coming from their own free will.
[1]: "The interpreter" https://en.wikipedia.org/wiki/Left-brain_interpreter
Split brain experiments have been called into question.[0]
[0]: https://www.sciencedaily.com/releases/2017/01/170125093823.h...
> The patients could accurately indicate whether an object was present in the left visual field and pinpoint its location, even when they responded with the right hand or verbally. This despite the fact that their cerebral hemispheres can hardly communicate with each other and do so at perhaps 1 bit per second
1 bit per second and we are passing complex information about location in 3d space?
That's a great paper, but I don't think it calls into question anything about post-hoc rationalizations, and it might actually put that idea on more solid ground.
Wow this is fascinating, and gets rid of one of my eldritch memetic horrors. Thanks for sharing, I’m going to submit it as its own post as well!
AI following the Libet (1983) paper [0] about preconscious thought apparently preceding 'voluntary' acts (which really elevated the question of what 'free will' means).
* [0] https://pubmed.ncbi.nlm.nih.gov/6640273/
The prima facie case for free will* is that it feels free. If you can predict the action before the feeling, it removes that argument (unless you want to invoke time travel as an option).
*one of the predominant characterisations of free will, anyway. I'm a compatibilist, so I have no issue with caused feelings of decision making being in conflict with free will. I also have a variation of Tourette's, so I have a different perception of doing things wilfully when compared to most people. It's really hard to describe how sometimes you can't tell if something was a tic or not.
I don't see why having some latency in the path of free will makes it no longer free. Before my arm moves up, there is a motor neuron that fires that is always correlated with my arm moving up; doesn't that just mean the free will occurs earlier in the process than the motor neuron firing?
There are a lot of things I feel that end up not being "real," like embarrassment, failure, and anxiety. Why should free will not be like any of those?
Hm, but maybe you can predict the feeling before you can predict the action. Checkmate atheists :)
(for the record I am also a compatibilist)
That it precedes voluntary acts tells us that most of what we do is not conscious. Which has been known for over a century, maybe millennia.
(opinion stolen from some Chomsky video)
Well, what does free will mean to scientists?
There is no single definition for all scientists. However if you define free will as choices that are completely free of deterministic or even statistically deterministic causes that science could in principle predict, then most scientists would say: no, that kind of free will probably doesn’t exist.
> is it time to worry?
Shouldn't the device be the judge of that?
I think the real danger lies in how many will accept that output as the unadulterated unmistakable truth for actions, for judgment. Talk about a sinister device.
You don’t need a sinister device. This is essentially how propaganda works.
A handsome, well-dressed alpha speaking with confidence and certainty. That's truth right there.
Propaganda is mostly without Science. This is with.
Ok, does anyone else's mind immediately go to "The Minority Report" soon being no longer just a sci-fi dystopia?
For me, it immediately made me think of Psycho-Pass.
It’s a cyberpunk anime where society uses a system called the Sibyl System to constantly scan people’s mental states and “crime potential” (their Psycho-Pass).
People can be arrested before they’ve done anything - just because the system picked up certain signals from them.
Very, very interesting idea
Oh that sounds cool. Thanks for sharing. I’m definitely going to check it out.
yes, first thing i thought of. although i'm quite confident it's still outside the scope of our lifetimes, i do worry for future generations
Rather than the Karpathy thing about in-class essays for everything, maybe random selections of students will be asked to head to the school fMRI machine and be asked to remember the details of writing their essay homework away from school.
fMRI machines are not cheap, nor plentiful.
If, one day, someone can make a small, cheap device that does the job of an fMRI, it would be more world-changing than you can imagine. If you had easy access to realtime data about what is going on in your brain, there is evidence to suggest that you can learn to influence the data and literally change your own mind.
That's actually happening. Commercially you can buy a 0.55T system such as the Siemens Free Max for around $500k.
There are also developments in ultra-low-field fMRI (<0.1T) that use permanent magnets, estimated to retail in the five-figure range; however, those are more for structural use (they can identify a tumour or stroke progression).
What you are saying sounds like being able to control your own heart rate if you see it on a monitor. Maybe combining low resolution fMRI with models trained on higher resolution data, could give you enough visibility that you could learn how to activate other areas of the brain that you wouldn't normally use for tasks.
It's interesting that the path from 'decide to do something' to performing the action is hundreds of ms long. It's also interesting that grabbing the data early in the process and acting on it can perform the action before the conscious 'self' understands fully that the action will take place. It's just another reminder that the 'you' that you consider to be running the show is really just a thin translation layer on top of an ocean of instinct, emotion, and hormones that is the real 'you'.
I rather prefer the holistic take that we are our whole selves and not just the part that reflects on what we do or the part that reacts to external and internal material stimuli. We know we can change the instincts, emotions, and hormones when they conflict with what we know by reflection to be just and good. To put it another way, we know that we can do things "without thinking" that are either just or unjust and by reflection can achieve some level of mastery over the direction of our impetuses.
You can take that position, but there's very little evidence that the translation layer that is accepting what I'm writing right now has any access to the vast expanse of mind underneath. The evidence is growing that our subconscious decides and 'we', as the conscious element of the system, rationalize that decision after the fact. There is a Children of Time novel about sentient octopuses that have a very pronounced disconnect between the 'crown' making the big-picture calls and the 'reach' doing the implementation. When I first read that, it was fascinating because it seemed so alien, but maybe it isn't so alien after all.
I’ve been saying “There is a real you, unfortunately, you’re not it.”
> That learning enabled the system to detect her intention to play hundreds of milliseconds before she consciously attempted to do so, says trial leader Richard Andersen, a neuroscientist at the California Institute of Technology in Pasadena.
Sounds like Libet's Delay and all that. Conscious awareness is just a documentary covering something that has been decided some half a second ago.
Unlike the vast sea of the subconscious, we can try to take direct control of technology. But we don't. So we are left to fret about what technology will do to us (meaning: what people in power will use it for).
The Amish take control of technology like that. If it doesn't pass review it's forbidden. It's a pretty good idea.
Consider that heroin used to be legal. We might have a dozen such technologies that we know are hazardous but have yet to kill enough people to force the issue.
I wonder how similar this experience is to Alien Hand Syndrome, where people experience part of their body, usually a hand, acting on its own.
Install one of these on every citizen!
I guess I will start paying attention when it can predict word choice in my internal monologue.
Of course, for shallow people in need of validation, doing exactly that is the point of sycophancy. "I have a great idea, I will ask the AI.." followed immediately by "What an insightful question, this gets to the heart of the matter"
I find the take that a quirk in how state-of-the-art assistive technology works is grounds for privacy fear-mongering to be tired, unimaginative, and typical of today's journalism, which cares more for clicks than for reporting facts.
It's a very interesting quirk of an immensely useful device for those that need it, but it's not an ethical dilemma.
I for one am sick and tired of these so-called ethicists whose only work appears to be stirring up outrage over nothing and holding back medical progress.
Similar disingenuous articles appeared when stem-cell research was new, and still do from time to time. Saving lives and improving life for the least fortunate is not an ethical dilemma, it's an unequivocally good thing.
Quit the concern trolling, nature.com; you're supposed to be better than that.
>it's an unequivocally good thing.
the miracle that is humans and humanity came about through millennia of unrestrained fertility and lots of sex℠ producing many babies, most of whom didn't survive to adulthood. Insects and fish still do this on a grand scale. It's where we came from and who we are, and that's Not Bad™ and I think it's unequivocally a bad thing that we keep thinking we know better and interfering with it. we are failing miserably to propel our species forward for the future. "survival of the fittest" is a mistake, it's destruction of the no-longer-adequate that works the magic.
(pretty proud of myself for realizing after I put in the tm that I could go back and put in a service mark too)
>and I think it's unequivocally a bad thing that we keep thinking we know better and interfering with it.
I'll believe you actually hold this opinion after you die from an entirely curable disease. Until then, I'll assume you actually think we do know better, and that our interfering with, for example, bacterial infections, is actually a good thing.
Not to tell you what you think, but this is one of those opinions people seem to think they hold until it's phrased in a way that demonstrates the cost of the idea to them.
Just the fact that you survived long enough to type this comment into HN tells me it's highly unlikely medicine hasn't saved your life at least once already. And if it hasn't, it will.
I can count 5 times off the top of my head that I would've died if not for modern medicine (usually antibiotics, but also one life-saving surgery). I would've been rendered a cripple a few times over too!
Trying to carry out a good thing (neural assistive technology) can open the door for the expansion of oppression (literal thought policing, in ?? years) in the same way that trying to stop a bad thing (terrorism, CSAM) can. It's not an immediate threat, it's a foot in the door.
This is quite the spicy take for something that could have far more than one purpose.
The problem with humanity is that some people pick up the hammer and build a house, while others will crack your head open with it and eat the pink gooey insides. The discussion of a technology should be able to withstand the good and bad points of its conception.
>The discussion of technology should be able to withstand the good and bad points of its conception
True, but that's a poor excuse to make up hypothetical problems that don't even exist. Tying it into the current craze about data privacy is a bit too transparent imo.
The journalists should do better; the article makes a downright dishonest interpretation of the issue by shoehorning it into the lens of a long-running controversy about data privacy that is in no way applicable to the technology they're discussing.
The privacy issue is not only hypothetical, but far enough off in the future that it'd be a better fit for a sci-fi novel than nature.com reporting.
The problem is not discussing the downsides, the problem is doing it dishonestly, as the article does.
You don't need a device to do this.
It’s so much more convincing with a device, though. Backed by scientific consensus. Where do I leave my brain, offload my judgment effort, and let it tell me what I need to know about anyone?
> It’s so much more convincing with a device though. Backed by scientific consensus.
Hold on, I dropped my Nobel Prize for lobotomy on the ground somewhere... going to need some help looking for it...
On second thought, maybe it's better off not being found.
Maybe skulls will need a faraday cage.
We already have tinfoil hats for that.
They make better parabolic reflectors than Faraday cages.
20-year old me would’ve been amazed at how much closer technology is getting to Ghost in the Shell. 30-year old me is firmly in the “nope, no thanks” camp.
I love seeing the advancements still, don’t get me wrong, but in the current data, advertising, and attention economies under Capitalism? No fucking way that shit is ending up in my head.
Seems like they are really jumping to conclusions here.
> Smith’s BCI system, implanted as part of a clinical trial, trained on her brain signals as she imagined playing the keyboard. That learning enabled the system to detect her intention to play hundreds of milliseconds before she consciously attempted to do so
There are some serious problems lurking in the narrative here.
Let's look at it this way: they trained a statistical model on all of the brain patterns that happen when the patient performs a specific task. Next, the model was presented with the same brain pattern. When would you expect the model to complete the pattern? As soon as it recognizes the pattern, of course!
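A minimal pure-Python sketch of that point (all names and numbers are invented for illustration): a "decoder" trained on a signal whose buildup starts before the labeled onset will fire before the label, because early pattern recognition looks exactly like prediction.

```python
# Toy model of "detecting intention early": the labeled "conscious attempt"
# happens at t = 50, but the trained signal ramps up 10 steps earlier
# (motor preparation), so pattern recognition fires before the label.
# No precognition required -- just pattern completion.

def make_trial(onset=50, length=100):
    """Synthetic 'brain signal': flat baseline, then a linear ramp that
    begins 10 steps BEFORE the labeled onset."""
    return [max(0.0, (t - (onset - 10)) / 10.0) for t in range(length)]

class TemplateDecoder:
    """Fires as soon as the incoming signal resembles the trained pattern."""
    def __init__(self, threshold=0.5):
        self.threshold = threshold

    def detect(self, signal):
        for t, x in enumerate(signal):
            if x >= self.threshold:  # pattern recognized
                return t
        return None

trial = make_trial(onset=50)
detected_at = TemplateDecoder().detect(trial)
print(detected_at)  # fires at t = 45, before the labeled onset at t = 50
```

The "hundreds of milliseconds before she consciously attempted" headline is the gap between `detected_at` and `onset` here: an artifact of where the label was placed relative to the buildup the model was trained on.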
> That learning enabled the system to detect her intention to play hundreds of milliseconds before she consciously attempted to do so
There are two overconfident assumptions at play here:
1. Researchers can accurately measure the moment she "consciously attempted" to perform the pretrained task.
2. Whatever brain patterns that happened before this arbitrary moment are relevant to the patient's intention.
There's a contradiction lurking here, but the narrative insists both assumptions hold at once: the first assumption is correct, and the second assumption is also correct, therefore the second does not invalidate the first. How? Because the circumstances of the second assumption get a special name, "precognition"... Tautological nonsense.
Not only do these assumptions blatantly contradict each other, they are totally irrelevant to the model itself. The BCI system was trained on her brain signals during the entirety of her performance. It did not model "her intention" as anything distinct from the rest of the session. It modeled the performance. How can we know that when the patient begins a totally different task, the model won't just "play the piano" like it was trained to? Oh wait, we do know:
> But there was a twist. For Smith, it seemed as if the piano played itself. “It felt like the keys just automatically hit themselves without me thinking about it,” she said at the time. “It just seemed like it knew the tune, and it just did it on its own.”
So the model is not responding to her intention. That's supposed to support your hypothesis how?
---
These are exactly the kind of narrative problems I expect to find buried in any "AI" research. How did we get here? I'll give you a hint:
> Along the way, he says, AI will continue to improve decoding capabilities and change how these systems serve their users.
This is the fundamental miscommunication. Statistical models are not decoders. Decoding is a symbolic task. The entire point of a statistical model is to overcome the limitations of symbolic logic by not doing symbolic logic.
By failing to recognize this distinction, the narrative leads us right to all the familiar tropes:
LLMs are able to perform logical deduction. They solve riddles, math problems, and find bugs in your code. Until they don't, that is. When an LLM performs any of these tasks wrong, that's simply a case of "hallucination". The more practice it gets, the fewer instances of hallucination, right? We are just hitting the current "limitation".
This entire story is predicated on the premise that statistical models somehow perform symbolic logic. They don't. The only thing a statistical model does is hallucinate. So how can it finish your math homework? It's seen enough examples to statistically stumble into the right answer. That's it. No logic, just weighted chance.
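Here's a toy sketch of "weighted chance, not logic" (entirely hypothetical, not how any real LLM is built): a purely statistical "solver" that answers arithmetic by frequency lookup rather than by computing anything. It looks correct exactly as long as the question resembles its training data.

```python
# A frequency-table "model": it answers by returning the most common
# answer it has seen for a prompt. It never does arithmetic.
from collections import Counter, defaultdict

class FrequencyModel:
    def __init__(self):
        self.table = defaultdict(Counter)

    def train(self, prompt, answer):
        self.table[prompt][answer] += 1

    def answer(self, prompt):
        seen = self.table.get(prompt)
        if not seen:
            return None  # unseen prompt: no symbolic rule to fall back on
        return seen.most_common(1)[0][0]  # most frequent answer wins

m = FrequencyModel()
for _ in range(9):
    m.train("2+2", "4")
m.train("2+2", "5")      # one noisy training example

print(m.answer("2+2"))   # "4" -- statistically right, never computed
print(m.answer("3+3"))   # None -- no generalization, no logic
```

The point of the caricature: when this model is right, it is right for exactly the same reason it is wrong elsewhere, i.e. weighted chance over training examples. Real models interpolate far more impressively, but the mechanism is statistical all the way down.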
Correlation is not causation. Statistical relevance is not symbolic logic. If we fail to recognize the latter distinction, we are doomed to be ignorant of the former.