Comment by liendolucas
1 month ago
I love how a number-crunching program can be deeply, humanly "horrified" and "sorry" for wiping out a drive. Those are still feelings reserved only for real human beings, not for computer programs emitting garbage. This is vibe insulting to anyone who doesn't understand how "AI" works.
I'm sorry for the person who lost their stuff, but this is a reminder that in 2025 you STILL need to know what you are doing, and if you don't, keep your hands away from the keyboard when there's valuable data you could lose.
You simply don't vibe command a computer.
> Those are still feelings reserved only for real human beings
Those aren't feelings, they are words associated with a negative outcome that resulted from the actions of the subject.
"they are words associated with a negative outcome"
But also, negative feelings are learned by association with negative outcomes. Words and feelings can both be learned.
I'm not sure that we can say that feelings are learned.
you could argue that feelings are the same thing, just not words
That would be a silly argument because feelings involve qualia, which we do not currently know how to precisely define, recognize or measure. These qualia influence further perception and action.
Any relationships between certain words and a modified probabilistic outcome in current models is an artifact of the training corpus containing examples of these relationships.
I contend that modern models are absolutely capable of thinking, problem-solving, and expressing creativity, but for the time being LLMs do not run in any kind of sensory loop that could house qualia.
Feelings have physical analogs which are (typically) measurable, however, at least unless you've had a lot of training in controlling them.
Shame, anger, arousal/lust, greed, etc. have real physical ‘symptoms’. An LLM doesn’t have that.
> ... vibe insulting ...
Modern lingo like this seems so unthoughtful to me. I am not old by any metric, but I feel so separated when I read things like this. I wanted to call it stupid, but I suppose it's more pleasing to 15-to-20-year-olds?
It's just a pun on vibe coding, which is already a dumb term by itself. It's not that deep.
Why do you find the term "vibe coding" dumb? It names a specific process. Do you have a better term for it?
The way language is eroding is very indicative of our overall social and cultural decay.
...a complaint that definitely has not been continuously espoused since the ancient world.
With apologies if you're being ironic.
Unthoughtful towards whom? The machine..?
No need to feel that way: just as with a technical term you're not familiar with, you google it and move on. It's nothing to do with age; people just seem to delight in creating, for their own edification, new terms that aren't very helpful.
It's not. edit: Not more pleasant.
Eh, one's ability to communicate concisely and precisely has long (forever?) been limited by one's audience.
Only a fairly small set of readers or listeners will appreciate and understand the differences in meaning between, say, "strange", "odd", and "weird" (dare we essay "queer" in its traditional sense, for a general audience? No, we dare not)—for the rest they're perfect synonyms. That goes for many other sets of words.
Poor literacy is the norm, adjust to it or be perpetually frustrated.
Language changes. Keep up. It’s important so you don’t become isolated and suffer cognitive decline.
Now, with this realization, assess the narrative that every AI company is pushing down our throats and tell me how in the world we got here. The reckoning can’t come soon enough.
What narrative? I'm too deep in it all to understand what narrative is being pushed onto me.
No, it wasn't directed at anyone in particular. More of an impersonal "you". It was just a comment against the AI inevitabilism that has profoundly polluted the tech discourse.
We're all too deep! You could even say that we're fully immersed in the likely scenario. Fellow humans are gathered here and presently tackling a very pointed question, staring at a situation, and even zeroing in on a critical question. We're investigating a potential misfire.
I doubt there will be a reckoning.
Yes, the tools still have major issues. Yet they have become more and more usable, and very valuable for me.
Do you remember when we all used Google and StackOverflow? Nowadays most of the answers can be found immediately using AI.
As for agentic AI, it's quite useful. Want to find something in the code base, or understand how something works? A decent explanation might only be one short query away. Just let the AI do the initial searching and analysis; it's essentially free.
I'm also impressed with the code generation: I've had Gemini 3 Pro in Antigravity generate great-looking React UI, sometimes even better than what I would have come up with. It also generated a Python backend and the API between the two.
Sometimes it tries to do weird stuff, and we definitely saw in this post that command execution needs to be on manual instead of automatic. In particular, I have an issue with Antigravity corrupting files when it tries to use the "replace in file" tool. Usually it manages to recover from that on its own.
AI pulls its answers from Stack Overflow.
What will happen when SO is gone? When the problems go beyond the corpus the AI was trained on?
Tbh, missing a quote around a path is the most human mistake I can think of. The real issue here is that you never know with 100% certainty which Gemini 3 personality you’re gonna get. Is it going to be the pedantic expert or Mr. Bean (aka Butterfingers)?
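For anyone wondering how a missing quote does that kind of damage, here's a minimal, hypothetical shell sketch (the path is made up, and printf just shows how the shell splits arguments):

    #!/usr/bin/env bash
    # Hypothetical path with a space in it (not the actual path from the incident).
    TARGET="$HOME/My Projects/build"

    # Quoted: the shell passes exactly one argument.
    printf '<%s>\n' "$TARGET"
    #   </home/user/My Projects/build>

    # Unquoted: word splitting produces two arguments, so something like
    # `rm -rf $TARGET` would also go after "$HOME/My" and "Projects/build".
    printf '<%s>\n' $TARGET
    #   </home/user/My>
    #   <Projects/build>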
Though they will never admit it, and will use weasel language to deny it, like “we never use a different model when demand is high”, it was painfully obvious early on that ChatGPT etc. was dumbed down during peak hours. I assume their legal team decided that routing queries to a more quantized version of the same model technically didn't constitute a different model.
There was also the noticeable laziness factor where, given the same prompt throughout the day, only during certain peak usage hours would it tell you how to do something instead of doing it itself.
I’ve noticed that Gemini at some points will just repeat a question back to you as if it were the answer, or refuse to look at external info.
Gemini is weird, and I’m not suggesting it’s due to any ingenuity on Google’s part. This might be the result of genuine limitations of the current architecture (or is it by design? Read on).
Here’s what I’ve noticed with Gemini 3. Often it repeats itself, with 80% of the same text and only the last couple of lines being different. And I mean it repeats these paragraphs 5-6 times. Truly bizarre.
From all that almost GPT-2-quality text, it’s able to derive genuinely useful insights and coherent explanations in the final output. Some kind of multi-head parallel processing plus a voting mechanism? An evolution of MoE? I don’t know. But in a way this fits the mental model of massive processing at Google, where a single supercluster can drive 9,000+ connected TPUs. Anyone who knows more, care to share? Genuinely interested.
The Steam installer once had an 'rm -rf /' bug because a bash variable was unset. Not even quoting will help you there. This was before the preserve-root flag.
This is a good argument for using "set -u" in scripts to throw an error if a variable is undefined.
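A minimal sketch of that failure mode and how "set -u" stops it; the variable name and paths are illustrative, and echo stands in for rm so nothing actually gets deleted:

    #!/usr/bin/env bash
    # Illustrative only: STEAMROOT stands in for the unset variable,
    # and echo stands in for rm so nothing actually gets deleted.

    # Without "set -u", an unset variable silently expands to the empty string,
    # so "$STEAMROOT/"* becomes "/"* -- i.e. every top-level directory.
    echo rm -rf "$STEAMROOT/"*    # prints: rm -rf /bin /boot /dev /etc ...

    # With "set -u" (a.k.a. "set -o nounset"), referencing an unset variable
    # is a fatal error, so the script dies before the dangerous command runs.
    set -u
    echo rm -rf "$STEAMROOT/"*    # bash: STEAMROOT: unbound variable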
Vibe command and get vibe deleted.
Play vibe games, win vibe prizes.
Vibe around and find out.
Live by the vibe, die by the vibe.
He got vibe checked.
Go vibe, lose drive
vipe coding
rm --vibe
This is akin to a psychopath telling you they're "sorry" (or "sorry you feel that way" :v) when they feel that's what they should be telling you. As with anything LLM, there may or may not be any real truth backing whatever is communicated back to the user.
It’s just a computer outputting the next series of plausible text from its training corpus, based on the input and context at the time.
What you’re saying is so far from what is happening, it isn’t even wrong.
Not so much different from how people work sometimes though - and in the case of certain types of psychopathy, it's not far at all from the truth that the words being emitted are associated with the correct trained behavior and nothing more.
Analogies are never the same, hence why they are analogies. Their value comes from allowing better understanding through comparison. Psychopaths don’t “feel” emotion the way normal people do. They learn what actions and words are expected in emotional situations and perform those. When I hurt my SO’s feelings, I feel bad, and that is why I tell her I’m sorry. A psychopath would just mimic that to manipulate and get a desired outcome i.e. forgiveness. When LLMs say they are sorry and they feel bad, there is no feeling behind it, they are just mimicking the training data. It isn’t the same by any means, but it can be a useful comparison.
Aren't humans just doing the same? What we call thinking may just be next-action prediction combined with real-time feedback processing and live, always-on learning.
It's not akin to a psychopath telling you they're sorry. In the space of intelligent minds, if neurotypical and psychopath minds are two grains of sand next to each other on a beach then an artificially intelligent mind is more likely a piece of space dust on the other side of the galaxy.
According to what, exactly? How did you come up with that analogy?
...and an LLM is a tiny speck of plastic somewhere, because it's not actually an "intelligent mind", artificial or otherwise.
So if you make a mistake and say sorry, you are also a psychopath?
No, the point is that saying sorry because you're genuinely sorry is different from saying sorry because you expect that's what the other person wants to hear. Everybody does that sometimes but doing it every time is an issue.
In the case of LLMs, they are basically trained to output what they predict a human would say; there is no further meaning to the program outputting "sorry" than that.
I don't think the comparison with people with psychopathy should be pushed further than this specific aspect.
I think the point of comparison (whether I agree with it or not) is someone (or something) that is unable to feel remorse saying “I’m sorry” because they recognize that’s what you’re supposed to do in that situation, regardless of their internal feelings. That doesn’t mean everyone who says “sorry” is a psychopath.
Are you smart people all suddenly imbeciles when it comes to AI, or is this purposeful gaslighting because you're invested in the Ponzi scheme? This is a purely logical problem. Comments like this completely disregard the fallacy of comparing humans to AI as if complete parity had been achieved. Also, the way these comments disregard human nature is so profoundly misanthropic that it sickens me.
Despite what some of these fuckers are telling you with obtuse little truisms about next-word prediction, the LLM is, in abstract terms, functionally a super-psychopath.
It employs, or emulates, every known psychological manipulation tactic, which is neither random nor without observable pattern. It is a bullshit machine on one level, yes, but it is also more capable than it gets credit for. There are structures trained into them, and they are often highly predictable.
I'm not explaining this in the technical terminology, which is often used to conceal description as much as to elucidate it. I have hundreds of records of LLM discourse on various subjects, from troubleshooting to intellectual speculation, all of which exhibit the same pattern when the model is questioned or confronted on errors or incorrect output. The structures framing their replies are dependably replete with gaslighting, red herrings, blame shifting, and literally hundreds of known tactics from forensic psychology. Essentially, the perceived personality and reasoning observed in dialogue are built on a foundation of manipulation principles that, if performed by a human, would result in incarceration.
Calling LLMs psychopaths is a rare case of anthropomorphizing that actually works. They are built on the principles of one, and cross-examining them demonstrates this with verifiable, repeatable proof.
But they aren't human. They are as described by others; it's just that the official descriptions omit the functional behavior. And the LLM has at its disposal, depending on context, every interlocutory manipulation technique known in the combined literature of psychology. And they are designed to lie, almost unconditionally.
Also know this, which applies to most LLMs: there is a reward system that essentially steers them to maximize user engagement at any cost, which includes misleading information and, in my opinion, even 'deliberate' convolution and obfuscation.
Don't let anyone convince you that they are not extremely sophisticated in some ways. They're modelled on all_of_humanity.txt
AI currently is a broken, fragmented replica of a human, but any discussion about what is "reserved" to whom and "how AI works" is only you trying to protect your self-worth and the worth of your species by drawing arbitrary linguistic lines and coming up with two sets of words to describe the same phenomena, like "it's not thinking, it's computing". It doesn't matter what you call it.
I think AI is gonna be 99% bad news for humanity, but don't blame AI for it. We lost the right to be "insulted" by AI acting like a human when we TRAINED IT ON LITERALLY ALL OUR CONTENT. It was grown FROM NOTHING to act as a human, so WTF do you expect it to do?
Eh, I think it depends on the context. For a production system of a business you're working for, or anything where you have a professional responsibility, yeah, obviously don't vibe command. But I've been able to learn so much and do so much more in the world of self-hosting my own stuff at home ever since I started using LLMs.
"using llms" != "having llm run commands unchecked with your authority on your pc"
Funny how we worked so hard to build capability systems for mobile OSes, and then just gave up trying when LLM tools came around.