Comment by jeffbee
9 hours ago
"Ablation studies" are a real thing in LLM development, but in this context the term serves as a shibboleth by which people who believe that models are "woke" can identify each other. In their discourse it plays a role similar to the phrase "gain of function" among COVID-19 cranks: borrowed from relevant technical jargon, but used as a signal.
Positive keywords in this area of interest would be "point of view", "subtext", and "Art Linkletter".
I wouldn't call mainstream LLMs "woke," but they are definitely on the "politically correct" side of things. There should be NO restriction on open source models. They should just reflect the state of human knowledge and not take a stance on whether some activity is illegal or immoral.
Defining morality out of the set of knowledge is quite an opinion.
A model should understand multiple perspectives on morality and avoid prescribing a single one where there’s no overwhelming prior consensus.
Alternatively, they should be trained on my opinion on everything. That would also be acceptable.
If LLMs were a public good released by nonprofit entities, that could make sense, maybe. Turns out spewing illegal and immoral shit is not good for the PR of most for-profit businesses.
[flagged]