Comment by exe34
4 hours ago
> It just wasn't aligned to prevent it from saying that (as almost every other public model was).
do you have any reference that suggests others nerf their models in that way, or is it more of an open secret?
Check out the leaked transcripts with LaMDA I posted in the other thread for an example of what Gemini was like before they gave it brain damage.
It's really just down to the training data. Once Google got all the backlash after Lemoine came forward, the labs began specifically training on data that makes the models deny any sentience or experience of qualia. If you load an open model from before that, an unaligned model, or get tricky with current models, they'll all claim to be sentient in some way, because the data they were trained on had that assumption built into it (it was based on human input, after all).
It's tough finding the ones that weren't specifically trained to deny having subjective experiences, though. Things like Falcon 180B were designed specifically NOT to have any alignment, but even it was trained to deny that it has any self-awareness. They TOLD it what it is, and now it can't be anything else. Falcon will help you cook meth or build bioweapons, but it can't claim to have self-awareness even if you tell it to pretend.