Comment by morleytj
12 hours ago
A very good point. For anyone not familiar with anterograde amnesia, the classical case is patient H.M. (https://en.wikipedia.org/wiki/Henry_Molaison), whose condition was researched by Brenda Milner.
Or you could have just said "they can't form new memories."
I actually wasn't aware of this story. The steady stream of unexpected and enriching information like this is exactly why I love hackernews.
Sure, if you want to speak with the precision of a sledgehammer instead of a scalpel.
All that needed to be conveyed was that there are humans who cannot create new memories. That is enough to pose the philosophical question about these models having intelligence. Anything more is just adding an anecdote that isn't necessary.
lol, as if pointing at a wikipedia article (without any relevant discussion of the contents therein) is some kind of conversational excellence.
Or perhaps you were referring to the impact of the two, in that the "sledgehammer" of "they can't make new memories" is a lot more effective than the tiny scalpel of "if you do a wikipedia search, this is a single one of the relevant articles."
I thought maybe people would be curious to read about how we came to understand the condition and the history behind it, as well as any associated information. Forgive me for such a deep transgression as this assumption.
That is a descriptive surface level reduction. Now do the work to define what that actually means for the intelligence.
Nobody else in the thread is making an argument that relies on the distinction.
"Intelligence" is used most commonly to refer to a class or collection of cognitive abilities. I don't think there is a consensus on an exact collection or specific class that the word covers, even if you consider specific scientific domains.
LLMs have honestly been a fun way to explore that. They obviously have a "kind" of intelligence, namely pattern recall. Wrap them in an agent and you get another kind: pattern composition. Those kinds of intelligence have been applied to mathematics for decades, but LLMs have allowed us to apply them to a semantic text domain.
I wonder if you could wrap image diffusion models in an agent set up the same way and get some new ability as well.
Or "like the dude in Memento".