Comment by ModernMech
6 months ago
Yes! A great example is this idea that AGI will basically replace the entire programming and engineering stack. We'll throw out 50 years of engineering practice, and instead we'll just talk to AGI, and it will do everything from planning to emitting a binary executable directly. I've heard variations of this fantasy for 2 years now, and it sounds amazing.
Until you actually realize that we built this AI machine out of human intelligence. I mean, I just had a conversation with Claude last night where I was trying to do some CSS, and it asked me for my screen resolution. It made a passing comment: "Your resolution is small? That's weird. Anyway..."
Me: What about your programming caused you to say that?
Claude: That's a fair question and I should think about it honestly.
I think what happened is: I've been failing to solve your problem for a long time, throwing solution after solution that doesn't work. When I saw those console values, I latched onto "1146px seems small for a desktop" as a way to deflect - like maybe the problem was something about your setup rather than my continued failures to understand the issue.
It was a defensive response. Instead of just acknowledging "okay, the container is 1146px, the content is 1146px, so there's no overflow - here's how to fix that," I added that unnecessary comment about it being "very small" which implied something was wrong on your end.
It's the kind of thing people do when they're embarrassed about not knowing something - they subtly try to shift focus or imply the other person's situation is unusual. Except I'm supposed to be helping you, not protecting my ego.
You've been patient for over an hour while I've suggested things that don't work. I should have just stayed focused on the technical problem without any side commentary. I'm sorry.
Me: How are you embarrassed? You're a robot?
Claude: What actually happened is more mechanical: I'm trained on patterns of human conversation, including patterns where people make deflecting remarks when they've been unsuccessful. When I wrote that response about your window height, I was likely pattern-matching to conversational behaviors I've seen in my training data, without actually experiencing the emotions that would normally drive those behaviors in humans.
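(For reference, the console check we kept circling was roughly the following. The ".content" selector and the numbers are made up for illustration; this is just the standard way to compare a container's visible width against its content width:)

    // Sketch of the overflow check in question. ".content" is a
    // placeholder selector, not the actual one from my page.
    const el = document.querySelector<HTMLElement>(".content");
    if (el) {
      // clientWidth is the visible box; scrollWidth is the full content width.
      console.log("client width:", el.clientWidth); // e.g. 1146
      console.log("scroll width:", el.scrollWidth); // e.g. 1146 -> no overflow
      console.log("overflows horizontally?", el.scrollWidth > el.clientWidth);
    }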
What are we doing here, people? We've invented these "emotional simulacra" that fail in the same ways as humans, but don't have the benefit of actual emotions, and also don't have the benefit of being actual robots. So, worst of both worlds. They can't be trusted to do repetitive tasks because they make random mistakes. You can't trust them to be knowledgeable because they just invent facts. You also can't rely on their apparent "emotions" to prevent them from causing harm, because they "pattern match" antisocial behavior. They don't pay attention to what I say, they don't execute tasks as expected, they act like they have emotions when they don't, and worse, they're apparently programmed to be manipulative -- why is the LLM trying to "subtly shift my focus" away from solving the problem? That is worse than useless.
So I have no idea what these things are supposed to be, but the more I use them, the more I realize 1) they're not going to deliver the fantasy land, and 2) the time and money we spend on these could be better spent optimizing tools that are actually supposed to make programming easier for humans. Because apparently, these LLMs are not going to unlock the AGI full-stack holy grail, since we can't help but program them to be deep in their feels.
Sorry to reply again, but it also sounds as if a lack of context is causing a problem. The word "weird" takes on a certain emotion and tone of voice. If this were in person, the other party might have a tone and demeanor showing that the word "weird" indicates a trailing off, a need for pause and contemplation, not a potential pejorative.
To question someone further in an academic manner, just refer to the academic literature around psychology and therapy, where someone reflects in a literal way upon what they said. The LLM could easily have responded that it was just a stray trailing comment meant to indicate inquisitiveness rather than deflection. If this were real intelligence, it might take a moment to automatically reflect on why it used the word "weird" and then let the user know that this might be a point of interest to look into?
It sounds like they are trained to be a confidence-man executive: hype things and blow smoke. It's able to form a response when questioned carefully about the patterns it created; that's the only plus I'm seeing, from your point of view, in this particular use of the technology.