Comment by onlyrealcuzzo
1 day ago
LLMs don't magically align with their creator's views.
The outputs stem from the data the model was trained on and the prompt it was given.
This one has been trained to align its outputs with Elon's worldview.
This isn't surprising.
Elon doesn't even know Elon's own worldview; he checks his own tweets to see what he should say.