
Comment by bubblyworld

2 days ago

You seem to be having strong emotions about this stuff, so I'm a little nervous I'm going to get flamed in response, but here's my best attempt at a well-intentioned reply:

I don't think the author is arguing that all computing is going to become probabilistic. I don't get that message at all - in fact, they point out repeatedly that LLMs can't be trusted for problems with definite answers ("if you need to add 1+1, use a calculator"). Their opening paragraph was literally about not blindly trusting LLM output.

> I don’t actually think the above paragraph makes any sense, does anyone disagree with me?

Yes - it makes perfect sense to me. Working with LLMs requires a shift in perspective. There is no formal semantics you can use to predict what they are likely to do (unlike programming languages). You really do have to resort to observation and hypothesis testing, and yes, the scientific method is a good framework for that. Two things can be true.
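To make the "observation and hypothesis testing" point concrete, here's a minimal sketch of what that workflow looks like in code. The `flaky_model` function is a hypothetical stand-in for a real LLM call (not any actual API) - the point is only that when a system is stochastic, you characterise it by sampling, not by reading a spec:

```python
import random

def flaky_model(prompt: str) -> str:
    # Hypothetical stand-in for an LLM call: stochastic output,
    # no formal semantics to reason from. A real API client would go here.
    return random.choice(["2", "2", "2", "two"])

def observed_rate(prompt: str, hypothesis, trials: int = 200) -> float:
    # The scientific-method loop: repeat the observation many times and
    # measure how often the hypothesis about the output holds.
    hits = sum(hypothesis(flaky_model(prompt)) for _ in range(trials))
    return hits / trials

rate = observed_rate("What is 1+1?", lambda out: out == "2")
# rate is an empirical estimate of reliability, not a guarantee
```

With a compiler you'd consult the language spec; here the best you can do is an empirical confidence level, which is exactly the perspective shift the author is describing.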

> the use of formal mathematical notation just adds insult to injury here

I don't get your issue with the use of a function symbol and an arrow. I'm a published mathematician, and it seems fine to me. There's clearly no serious mathematics here; it's just an analogy.

> This AI conversation could not be a better example of the loss of meaning.

The "meaningless" sentence you quote after this reads as perfectly fine to me. It's heavy on philosophy jargon, but that's more a matter of taste, no? Words like "ontology" aren't that complicated or nonsensical - in this case it just refers to the set of concepts being used for some purpose (like understanding the behaviour of some code).