Comment by TeffenEllis

2 years ago

That’s a very interesting read. I’m definitely biased towards LLMs being more than what the naysayers think of their capabilities. There’s no doubt that these systems are not thinking or performing cognition. They are autocomplete systems built on tremendous amounts of weighted data.

IMO the problem here is that we have two camps of thought arguing for the extreme end of an undefined claim. The tech companies market their LLM products as intelligent because they can perform text completions that are currently useful for simple tasks.

For example, I used ChatGPT to draft an email to my landlord asking to remove a late fee that occurred because my auto payment authorization expired. I ran the output through Grammarly and ended up with a polite but curt email that would’ve taken me 45 minutes to compose — time I’d rather spend on something else.

I feel like these articles minimize the immediate use of LLMs because of a subconscious implication: most interactions between people don’t require intelligence. And their jobs are next on the chopping block.

The other part is less understood by both camps. Getting an LLM to perform something that looks like cognitive behavior isn’t impossible, but it sure is expensive. As we speak, there are tools in development that can take a user’s prompt and expand it into what superficially looks like a human’s train of thought. The results are significantly more accurate than an off-the-shelf LLM’s.

In my opinion, academics are struggling to define why this phenomenon occurs in the first place. And with such a focus on how LLMs don’t work like humans, they miss the point.

We understand that non-human life can be intelligent in ways that we don’t fully understand. Elephants, dolphins, and octopuses are intelligent without having human-like cognitive abilities. I think the same goes for LLMs. They will achieve a form of intelligence that is uniquely their own and will adapt to accommodate us. Not the other way around.

There is only one line I question:

>I think the same goes for LLMs. They will achieve a form of intelligence that is uniquely their own and will adapt to accommodate us. Not the other way around.

And I say this somewhat jokingly: this is only true if they stay at sub-human-like intelligence. If actual intelligence far in excess of the human mind is possible, I am afraid it is we who will be adapting to our new living conditions.