Comment by thethirdone

2 days ago

You did not actually address the core of my points at all.

> It isn't a case of ratio, it is a fundamentally different method of working, hence my point of not needing all human literary output to do the equivalent of an LLM.

You can make ratios of anything. I agree that human cognition is different from LLM cognition, though I would think of it more like a phase difference than a fundamentally different phenomenon. Think liquid water vs. steam: the density (a ratio) is vastly different, and they have different, harder-to-describe properties (surface tension, filling the available volume, incompressible vs. compressible).

> Humans provide the connections, the reasoning, the thought, the insights and the subsequent correlations, THEN we humans try to make a good pattern matcher/guesser (the LLM) to match those.

Yes, humans provide the training data and the benchmarks for measuring LLM improvement. Somehow, meaning about the world has to make it into the training data for there to be any understanding. However, humans talking about patterns in numbers is not how the LLMs learned this. It is very much from just seeing lots of examples and deducing the pattern (during training, not inference). The fact that a general pattern is embedded in the weights implies that some general understanding of many things is baked into the model.

> This common retort: most humans also make mistakes, or most humans also do x, y, z, means nothing.

It is not a retort, but an argument about what "understanding" means. From what you have said, my guess is that your definition makes "understanding" something humans do and computers are incapable of (by definition). If LLMs could outcompete humans at all professional tasks, I think it would be hard to say they understand nothing. Humans are a worthwhile point of comparison, and human exceptionalism can only really hold up until it is surpassed.

I would also point out that some humans DO understand the properties of numbers I was referring to. In fact, I figured them out in second grade while doing lots of extra multiplication problems as punishment for being a brat.

> My digital thermometer uses an algorithm to determine the temperature. ... The paper will not be thinking if that is done.

I did not say "All algorithms are thinking". The stronger version of what I was saying is "Some algorithms can think." You have simply asserted the opposite with no reasoning.

> In fact, at the extreme end, this anthropomorphising has led to exacerbating mental health conditions and unfortunately has even led to humans killing themselves.

I do concede that anthropomorphizing can be problematic, especially if you do not have a background in CS and ML to understand what is going on under the hood. However, you completely skipped past my rather specific explanation of how it can be useful. On HN in particular, I do expect people to bring enough technical understanding to the table to not just treat LLMs as people.