
Comment by sim7c00

1 day ago

Interesting take. I don't know a lot about grammars, yet in my own language I can speak fairly OK...

All I know about these LLMs is that even if they understand language or can create it, they know nothing of the subjects they speak of.

Copilot told me to cast an int to str to get rid of an error.

Thanks, Copilot. It was kernel code.
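To give a feel for it, here's a minimal C sketch of why that kind of cast is the wrong fix. This is hypothetical, not the actual code; `log_message` and the `-EINVAL`-style error code are made up for illustration:

```c
#include <stdio.h>

/* Hypothetical logging helper that expects a string. */
static void log_message(const char *msg) {
    printf("%s\n", msg);
}

int main(void) {
    int err = -22;  /* made-up error code, kernel-style -EINVAL */

    /* log_message(err);                      <- type error: int is not a char*  */
    /* log_message((const char *)(long)err);  <- the suggested cast: compiles,
                                                 but would read the value -22 as
                                                 a memory address                */

    /* The actual fix: convert the int to a string. */
    char buf[32];
    snprintf(buf, sizeof buf, "error %d", err);
    log_message(buf);
    return 0;
}
```

The cast silences the compiler, but the error code would then be dereferenced as a pointer. In kernel code that's an oops at best.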

Glad I didn't do it :/. I just closed the browser and opened the man pages. I get nowhere with these things. It feels like you need to understand so much that it's likely less typing to just write the code yourself. Code is concise and clear, after all, and mostly unambiguous. Language, on the other hand...

I do like it as a bit of a glorified Google, but looking at the code it outputs, my confidence in its findings lessens with every prompt.

> All I know about these LLMs is that even if they understand language or can create it, they know nothing of the subjects they speak of.

As a recent example of this, I was curious about how the body gets oxygen-depleted blood back to the heart. Pumping blood out made sense to me, but the return path was less obvious.

So I asked ChatGPT whether the heart sucks in the blood from the veins.

It told me that the heart does not suck in the blood; it creates a negative pressure zone that causes the blood to flow into it... :facepalm:

Sure, my language was non-technical/imprecise, but I bet if I asked a cardiologist about this they would have said something like "That's not the language I would have used, but basically."

I don't know why, but lately I've been getting a lot of cases where these models contradict themselves even within the same response. I'm working out a lot (debating a triathlon), and it told me to swim and do upper-body weightlifting on the same day to "avoid working out the same muscle group in the same day". Similarly, it told me to run and do leg workouts on the same day.

> I do like it as a bit of a glorified Google, but looking at the code it outputs, my confidence in its findings lessens with every prompt.

I'm having the exact same reaction. I'm finding they are still more useful than Google, even with an error rate close to 70%, but I'm quickly learning that you can't trust anything they output and should double-check everything.