Comment by cstoner

2 days ago

> all i know about these LLMs is that even if they understand language or can create it, they know nothing of the subjects they speak of.

As a recent example of this, I was curious about how oxygen-depleted blood gets back to the heart. Pumping blood out made sense to me, but the return path was less obvious.

So I asked ChatGPT whether the heart sucks blood in from the veins.

It told me that the heart does not suck in the blood; instead, it creates a negative pressure zone that causes the blood to flow into it ... :facepalm:

Sure, my language was non-technical and imprecise, but I bet if I asked a cardiologist about this they would have said something like "That's not the language I would have used, but basically, yes."

I don't know why, but lately I've been getting a lot of cases where these models contradict themselves even within the same response. I'm working out a lot (debating a triathlon), and it told me to swim and do upper-body weight lifting on the same day to "avoid working out the same muscle group in the same day" (swimming is largely an upper-body workout). Similarly, it told me to run and do leg workouts on the same day.

> i do like it as a bit of a glorified google, but looking at what code it outputs my confidence it its findings lessens every prompt

I'm having the exact same reaction. I'm finding they are still more useful than Google, even with an error rate that feels close to 70%, but I'm quickly learning that you can't trust anything they output and should double-check everything.

AI is impressive for a subject you know nothing about. If you ask it about something you already know, it becomes far less impressive.