Comment by therobots927
2 days ago
This is pure sophistry and the use of formal mathematical notation just adds insult to injury here:
“Think about it: we’ve built a special kind of function F' that for all we know can now accept anything — compose poetry, translate messages, even debug code! — and we expect it to always reply with something reasonable.”
This forms the axiom from which the rest of this article builds its case. At each step further fuzzy reasoning is used. Take this for example:
“Can we solve hallucination? Well, we could train perfect systems to always try to reply correctly, but some questions simply don't have "correct" answers. What even is the "correct" when the question is "should I leave him?".”
Yes of course relationship questions don’t have a “correct” answer. But physics questions do. Code vulnerability questions do. Math questions do. I mean seriously?
The most disturbing part of my tech career has been witnessing how many highly intelligent and accomplished people manage to fool themselves with faulty yet complex reasoning. The fact that this article is written in defense of chatbots that ALSO have complex and flawed reasoning just drives home my point. We’re throwing away determinism just like that? I’m not saying future computing won’t be probabilistic, but the leap from “LLMs are probabilistic” to “LLMs are the future of computing” can only be made by someone with an incredibly strong prior on LLMs.
I’d recommend Baudrillard’s work on hyperreality. This AI conversation could not be a better example of the loss of meaning. I hope this dark age doesn’t last as long as the last one. I mean, just read this conclusion:
“It's ontologically different. We're moving away from deterministic mechanicism, a world of perfect information and perfect knowledge, and walking into one made of emergent unknown behaviors, where instead of planning and engineering we observe and hypothesize.”
I don’t actually think the above paragraph makes any sense, does anyone disagree with me? “Instead of planning we observe and hypothesize”?
That’s called the scientific method. Which is a PRECURSOR to planning and engineering. That’s how we built the technology we have today. I’ll stop now because I need to keep my blood pressure low.
You seem to be having strong emotions about this stuff, so I'm a little nervous that I'm going to get flamed in response, but my best take at a well-intentioned response:
I don't think the author is arguing that all computing is going to become probabilistic. I don't get that message at all - in fact they point out many times that LLMs can't be trusted for problems with definite answers ("if you need to add 1+1 use a calculator"). Their opening paragraph was literally about not blindly trusting LLM output.
> I don’t actually think the above paragraph makes any sense, does anyone disagree with me?
Yes - it makes perfect sense to me. Working with LLMs requires a shift in perspective. There isn't a formal semantics you can use to understand what they are likely to do (unlike programming languages). You really do need to resort to observation and hypothesis testing, which yes, the scientific method is a good philosophy for! Two things can be true.
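To make "observation and hypothesis testing" concrete, here's a minimal sketch of the kind of experiment I mean. Everything here is made up for illustration: `query_model` stands in for whatever API you're calling, and `predicate` is whatever property you're hypothesizing about.

```python
import math

def estimate_behavior_rate(query_model, prompt, predicate, n_samples=100):
    """Sample a model repeatedly and estimate how often its output
    satisfies some hypothesized property, with a rough error bar."""
    hits = sum(predicate(query_model(prompt)) for _ in range(n_samples))
    p_hat = hits / n_samples
    # Normal-approximation 95% confidence interval on the observed rate
    half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n_samples)
    return p_hat, (max(0.0, p_hat - half_width), min(1.0, p_hat + half_width))
```

You can't prove the model always behaves a certain way the way you'd prove a sorting routine correct; the best you can do is bound how often it doesn't, which is exactly the experimental posture the article describes.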
> the use of formal mathematical notation just adds insult to injury here
I don't get your issue with the use of a function symbol and an arrow. I'm a published mathematician - it seems fine to me? There's clearly no serious mathematics here, it's just an analogy.
> This AI conversation could not be a better example of the loss of meaning.
The "meaningless" sentence you quote after this is perfectly fine to me. It's heavy on philosophy jargon, but that's more a taste thing no? Words like "ontology" aren't that complicated or nonsensical - in this case it just refers to a set of concepts being used for some purpose (like understanding the behaviour of some code).
> I’d recommend Baudrillard’s work on hyperreality.
Any specific piece of writing you can recommend? I tried reading Simulacra and Simulation (English translation) a while ago and I found it difficult to follow.
I would actually recommend the YouTube channel Plastic Pills. This is a great video to start with: https://youtu.be/S96e6TdJlNE?si=gSVzXyyBq7t_q0Xp
Name-dropping Baudrillard based on YouTube videos is real rich... in irony.
> I’m not saying future computing won’t be probabilistic
Current and past computing has always been probabilistic in part; that doesn't mean it will become 100% so. Almost all of the implementation of LLMs is deterministic except the part that is randomized, and its output is used in the same way. Humans combine the two approaches as well. Even reality is a combination of quantum uncertainty at a low level and very deterministic physics everywhere else.
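To illustrate where the randomness actually lives, here's a toy decoding step (a sketch of the general shape, not any real model's code). The forward pass that produces `logits` is fully deterministic; chance enters only at the final sampling step, and you can remove it entirely with greedy decoding:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random.Random()):
    """Toy sketch of LLM decoding: `logits` came out of a fully
    deterministic forward pass; randomness enters only here."""
    if temperature == 0:
        # Greedy decoding: drop the randomness and the whole
        # pipeline is deterministic end to end.
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(l - m) for l in scaled]
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]
```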
> We're moving away from deterministic mechanicism, a world of perfect information and perfect knowledge, and walking into one made of emergent unknown behaviors, where instead of planning and engineering we observe and hypothesize.
The hype machine always involves pseudo-scientific babble and this is a particularly cringey example. The idea that seems to be promoted, that AI will be god like and therein we'll find all truth and knowledge is beyond delusional.
It's a tool, like all other tools. Just as we see faces in everything, we're also very susceptible to language (especially our own, consumed and regurgitated back to us) coming from a very neat chatbot.
AI hype is borderline mass hysteria at this point.
“The hype machine always involves pseudo-scientific babble and this is a particularly cringey example.”
Thanks for confirming. As crazy as the chatbot fanatics are, hearing them talk makes ME feel crazy.
There's another couple of principles underlying most uses of science: consistency and smoothness. That is, extrapolation and interpolation make sense, and if an experiment works now, it will work forever. Critically, the physical world is knowable.
It's already wrong at the first step. A probabilistic system is by definition not a function (it is a relation). This is such a basic mistake I don't know how anyone can take this seriously. Many existing systems are also not strictly functions (internal state can make them return different outputs for a given input). People love to abuse mathematics and employ its concepts hastily and irresponsibly.
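The distinction is trivial to see in code (a throwaway sketch):

```python
import random

def square(x):      # a function: the same input gives the same output, always
    return x * x

def noisy(x):       # not a function of x: the same input can yield
    return x + random.random()  # different outputs, i.e. a relation

class Counter:
    """Not a function of its input either: hidden state shifts the output."""
    def __init__(self):
        self.calls = 0
    def step(self, x):
        self.calls += 1
        return x + self.calls
```

(Pedantically, you can recover a function by mapping each input to a distribution over outputs, but that is a different object from the one the article writes down, which is rather the point.)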
The fact that the author is a data scientist at Anthropic should start ringing alarm bells for anyone paying attention. Isn’t Claude supposed to be at the front of the pack? To be honest, I have a suspicion that Claude wrote the lion’s share of this essay. It’s that incomprehensible: soaked in jargon, with formulas used completely out of context and incorrectly.
Their job, in this case, is probably more of a signal than a clear indicator
Plenty of front-running companies have hired plenty of…not-solid or excessively imaginative data scientists
From what the other comments say, this one seems to lack a grounding in science itself, which frankly is par for the course depending on their background
I read the full article (really resonated with it, fwiw), and I'm struggling to understand the issues you're describing.
> Yes of course relationship questions don’t have a “correct” answer. But physics questions do. Code vulnerability questions do. Math questions do. I mean seriously?
Can you say more? It seems to me the article says the same thing you are.
> I don’t actually think the above paragraph makes any sense, does anyone disagree with me? “Instead of planning we observe and hypothesize”?
I think the author is drawing a connection to the world of science, specifically quantum mechanics, where the best way to make progress has been to describe and test theories (as opposed to math where we have proofs). Though it's not a great analog since LLMs are not probabilistic in the same way quantum mechanics is.
In any case, I appreciated the article because it talks through a shift from deterministic to probabilistic systems that I've been seeing in my work.
Sure, but it's overblown. People have been reasoning about and building probabilistic systems formally since the birth of information theory back in the 1940s. Many systems we already rely on today are highly stochastic in their own ways.
Yes, LLMs are a bit of a new beast in terms of the use of stochastic processes as producers—but we do know how to deal with these systems. Half the "novelty" is just people either forgetting past work or being ignorant of it in the first place.
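For one canonical example from that older tradition: Miller-Rabin primality testing is randomized and can be wrong, but each round errs with probability at most 1/4, so independent rounds let you buy as much confidence as you need. A rough sketch of the standard algorithm:

```python
import random

def is_probably_prime(n, rounds=40):
    """Miller-Rabin: a stochastic decision procedure we've trusted for
    decades. Each round errs with probability <= 1/4, so `rounds`
    independent rounds push the error below 4**-rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):   # quick trial division
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:                # write n - 1 as 2**r * d with d odd
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False             # a witnesses that n is composite
    return True                      # no witness found: probably prime
```

The engineering posture, quantify the randomness and then bound it, transfers; the hard part with LLMs is that their output distributions are much harder to characterize.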
> Half the "novelty" is just people either forgetting past work or being ignorant of it in the first place.
We also see this in cryptocurrencies. The path to forgetting is greased by money and fame, and eventually they are forced to "discover" the same ancient problems they insisted couldn't possibly apply.
Truly appreciate the perspective. Any pointers to prior work on dealing with stochastic systems? Part of my work is securing AI workloads, and it seems like losing determinism throws out a lot of assumptions baked into previously accepted approaches.
> Yes of course relationship questions don’t have a “correct” answer. But physics questions do. Code vulnerability questions do. Math questions do. I mean seriously?
But as per Gödel's incompleteness theorem and the Halting Problem, math questions (and consequently physics and CS questions) don't always have an answer.
Providing examples of questions without correct answers does not prove that no questions have correct answers, or that hallucinations aren’t problematic when the model provides explicitly incorrect answers. The author is simply avoiding the hallucination problem altogether by saying “well, sometimes there is no correct answer.”
There is a truth of the matter regarding whether a program will eventually halt or not, even when there is no computable proof for either case. Similar for the incompleteness theorems. The correct response in such cases is “I don’t know”.
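The Collatz conjecture is a concrete instance of that gap. The loop below either terminates for every positive integer or it doesn't, which is a perfectly definite mathematical fact, yet nobody can currently prove which way it goes:

```python
def collatz_halts(n):
    """Iterate the Collatz map starting from a positive integer n.
    Whether this returns for every such n is a famous open problem:
    the question has a definite yes/no answer, we just can't produce it."""
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
    return True
```

An honest system, asked whether this halts for all n, should answer "I don't know".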
You know something I don’t hear a lot from ChatGPT? “I don’t know.”
Exactly right. LLMs are a natural product of our post-truth society. I’ve given up hope that things will get better, but maybe they will once the decline becomes more tangible. I just hope it involves less famine than previous systemic collapses.