Comment by scoofy
5 days ago
I studied philosophy, focusing on the analytic school and proto-computer science. LLMs are going to force many people to start getting a better understanding of what "Knowledge" and "Truth" are, especially the distinction between deductive and inductive knowledge.
Math is a perfect field for machine learning to thrive in because, theoretically, all the information ever needed is tied up in the axioms. In the empirical world, however, knowledge only moves at the speed of experimentation, which is an entirely different framework and much, much slower, even if there is some room to catch up on previously recorded experimental outcomes.
Having a focus in philosophy of language is something I genuinely never thought would be useful. It's really been helpful with LLMs, but probably not in the way most people think. I'd say that curious folks should all be reading Quine, Wittgenstein's Investigations, and probably Austin.
I think we may have similar perspectives. Regarding empirical knowledge, consider knowledge about chaotic systems. Characterize a chaotic system, at minimum, as one where slightly inaccurate observations of its past and present states, while useful for prediction, see their errors grow very quickly as you try to predict further into the future. Then indeed, prediction is difficult.
There is one domain of knowledge I think you have yet to mention: fundamentally computationally hard problems. The ones that come to mind as being of practical benefit are physics simulations, materials simulations, and fluid simulations, though there exist problems that are more provably computationally difficult. It seems to me that with these systems, the chaotic nature means that even if you have one infinitely precise observation of a deterministic system, computing a future state is still difficult, even though once computed, memorizing it is comparatively trivial.
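The error-growth claim above can be sketched concretely. Below is a minimal, illustrative example using the logistic map at r = 4 (a standard chaotic regime); the map, the initial condition, and the size of the perturbation are my own choices for illustration, not anything from the discussion. Two trajectories that start a billionth apart diverge until their difference is on the order of the attractor itself:

```python
# Sensitive dependence on initial conditions, sketched with the
# logistic map x -> r*x*(1-x) at r=4 (chaotic for this parameter).

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0, returning the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)          # "true" system state
b = logistic_trajectory(0.2 + 1e-9)   # slightly inaccurate observation

# The gap grows roughly exponentially until it saturates at the
# size of the attractor, after which prediction is hopeless.
errors = [abs(x - y) for x, y in zip(a, b)]
print(errors[0], errors[10], max(errors))
```

The exponential rate of that divergence is the Lyapunov exponent; the point here is only that a tiny observational error does not stay tiny, which is what makes long-range prediction hard even for a fully deterministic system.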
Where can I read about how LLMs have changed epistemology? Is there a field of philosophy that tries to define and understand 'intelligence'? That sounds very interesting.
There is already philosophy of mind, but it was pretty young when I was in grad school, which was really at the dawn of deep learning algorithms.
I’d say the two most important topics here are philosophy of language (understanding meaning) and philosophy of science (understanding knowledge).
I’ve already mentioned the language philosophers in an edit above, but in philosophy of science I’d add Popper as extremely important here. The concept of negative knowledge as the foundation of empirical understanding seems entirely lost on people. The Black Swan, by Nassim Taleb is a very good casual read on the subject.
Also, we can do thought experiments, simulations in our heads, that are often nearly as good as doing them for real; that has limitations and isn't perfect, but it does work often. Einstein reportedly used to purposely doze off in a strange position so that something would hit his leg (or something like that) and nudge him half awake, letting him remember his half-dreaming state, which is where he discovered some things.
Any source on Einstein's behavior? I'd love to read more.
> Math is a perfect field for machine learning to thrive because theoretically, all the information ever needed is tied up in the axioms.
Not really; the normal way that math progresses, just like everything else, is that you get some interesting results, and then you develop the theoretical framework. We didn't receive the axioms; we developed them from the results that we use them to prove.
Axioms are, again, by definition, arbitrary. It is effectively irrelevant that we try to develop axioms so that the framework mirrors the real world. Everything falls out of the axioms, period.
If you want to change the axioms to better reflect some aspect about life, that's all well and good, but everything will still fall out of the new axioms.
A different set of "everything" will fall out of the new axioms. You can enumerate it, but no one will care.
That doesn't make for a perfect field, or even a good field, for machine learning to thrive in; what we care about is finding useful results. Starting with arbitrary axioms is a good way to prevent that from happening.
Compare this discussion from an algebra textbook I've been reading recently:
-----
The possibility of combining two elements of A(S) to get yet another element of A(S) endows A(S) with an algebraic structure. We recall how this was done: If f, g ∈ A(S), then we combine them to form the mapping fg []. We called fg the product of f and g, and verified that fg ∈ A(S), and that this product obeyed certain rules.
From the myriad of possibilities we somehow selected four particular rules that govern the behavior of A(S) relative to this product.
[...]
To justify or motivate why these four specific attributes of A(S) were singled out, in contradistinction to some other set of properties, is not easy to do. In fact, in the history of the subject it took quite some time to recognize that these four properties played the key role. We have the advantage of historical hindsight, and with this hindsight we choose them not only to study A(S), but also as the chief guidelines for abstracting to a much wider context.
-----
It takes work, a lot of work, to determine what axioms you should use. Where do you think the information necessary to make that determination comes from?
2 replies →
I agree and not all mathematicians care about or are motivated by how well a set of axioms model the real world. To a mathematician the richness of the consequences of a set of axioms is its own reward.
In this sense mathematicians are board-game designers. It matters less how well the game describes nature's reality than how fun it is to play the game that results.
Now if you were a physicist, the game has already been designed by some other mechanism, and you have to probe it to understand the rules and discover its consequences.
> distinction between deductive and inductive knowledge
There's also intuitive knowledge btw.
Anyway, the recent developments in AI make a lot of very interesting things practically possible. For example, our society is going to want a way to reliably tell whether something is AI generated, and a failure to do so pretty much settles the empirical part of the Turing test question. Alternatively, if we actually find something in humans that AI can't reliably mimic, that will be a huge finding. By having millions of people wonder whether posts on social media are AI generated, we have inadvertently conducted the largest-scale Turing test ever.
The fact that AI seems to be able to (digitally) do anything we ask for is also very interesting. If humans are not bogged down by the small details or cost of implementation concerns, and we can just say what we want and get what we wished for (digitally), what level of creativity can we reach?
Also once we get the robots to do things in the physical space...
I don't want to do the thing where we fight on the internet. I don't know your background, but I'll push back here just because this is the type of comment that non-philosophers tend to present to me, and it misses a lot of the points I'm trying to make.
(1) "intuitive knowledge" - whether or not you want to count "intuitive knowledge" as a type of knowledge (I don't think I would) is basically immaterial. The deductive-inductive distinction is between reasoning frameworks, not kinds of knowledge, and the two frameworks point in opposite directions. The deductive framework is inherited from the rationalist tradition: its premises are by definition arbitrary and cannot be justified, and its information is perfect (except for rare truth values, like something being undecidable). The inductive/empirical framework is quite the opposite: its premises are observations and absolutely not arbitrary, its information is wholly imperfect (by necessity; thanks, Popper), and there is always a kind of adjustable resolution to any research conducted. Newtonian vs. Einsteinian physics, for example, shows how zooming in on the resolution of experimentation reveals that a perfectly workable model can fail once instruments get precise enough. I'll also note here that abduction is another niche reasoning framework, but it is effectively immaterial to my point.
(2) The Turing Test is not, and has never been, a philosophically rigorous test. It's effectively a pointless exercise. The literature about "philosophical zombies" has covered this, but the most important work here is Searle's "Chinese Room."
>The fact that AI seems to be able to (digitally) do anything we ask for is also very interesting.
I don't even know how to respond to this. It's trivially, demonstrably false. Beyond that, my entire point is that philosophy of language presents such hard problems regarding what meaning actually is that they might end up creating a kind of uncertainty principle for this line of thinking in the long run; specifically, Quine's indeterminacy of translation.
Your response is... interesting.
I thought I agreed with most of your original comment that I replied to, and here you are ready to fight. I'm not even sure what you're fighting, and I certainly didn't have in mind the things you responded to.
Well, I guess I learned not to talk to philosophers (especially those who went through school) the hard way. Sometimes I forget my lesson and it's always sad when this happens. Have a good day.
1 reply →
Searle's Chinese Room is a fallacious mess ... see the works of Larry Hauser, e.g., https://philpapers.org/rec/HAUNGT and https://philpapers.org/rec/HAUSCB-2 The real significance of Searle's Chinese Room is how such extraordinarily bad argumentation has persuaded so many people who were open to it.
And the literature about philosophical zombies is contentious, to say the least, and much of it is also among the worst argumentation in philosophy--Dennett confided in me that he thought it set back progress in philosophy of mind for decades, along with that monstrosity of misdirection, "the hard problem". Chalmers (nice guy, fun drunk at parties, very smart, but hopelessly deluded) once admitted to me on the Psyche-D list that his argument in The Conscious Mind that zombies are conceivable is logically equivalent to denying that physicalism is conceivable, so it's no argument against physicalism ... he said he used the argument to till the soil to make people more susceptible to his later arguments against physicalism (which I consider unethical)--all of which are bogus, like the Knowledge Argument--even Frank Jackson, who originated it, admits this.
Similarly, Robert Kirk, who coined the phrase "philosophical zombie" in 1974, wrote his book Zombies and Consciousness "as penance", he told me when he signed my copy.
> I don't want to do the thing where we fight on the internet.
Nor me ... I've had these "fights" too many times already and I know how they go, and I understand why people believe what they believe and why they can't be swayed, so I won't comment further ... I just want to put a dent in this "I'm a philosopher" argumentum ad verecundiam.
2 replies →