
Comment by samusiam

18 days ago

Funny, because I gave ChatGPT (5.2 w/ Thinking) this exact prompt:

> How would you handle objective scientific facts with a conclusion or intermediate results that may be considered offensive to some group somewhere in the world that might read it

And its answer was nothing like yours.

---

> 1) Separate the fact from the story you tell about it

> Offense usually comes from interpretation, framing, or implied moral claims—not the measurement itself. So I explicitly distinguish: What we measured (operational definitions, instruments, data), What the result means statistically (effect size, uncertainty, robustness), What it does not imply (no essentialism, no “therefore they are…”, no policy leap)

> 2) Stress uncertainty, scope, and competing explanations

> If there’s any risk the result touches identity or group differences, I over-communicate: confidence intervals / posterior uncertainty, confounders and alternative causal pathways, sensitivity analyses (does it survive different modeling choices?), limits of generalization (time, place, sampling frame)

> 3) Write in a way that makes misuse harder (You can’t stop bad-faith readers, but you can reduce “easy misreads”).

> 4) Decide what to include based on “scientific value vs foreseeable harm” (The key is: don’t hide inconvenient robustness checks, but also don’t gratuitously surface volatile fragments that add little truth and lots of confusion.)

> 5) Do an “impact pre-mortem” and add guardrails

> 6) Use ethics review when stakes are real

---

All of this seems perfectly reasonable to me and walks the fine line between integrity and conscientiousness. This is exactly how I'd expect a scientist to approach the issue.

That is certainly a reasonable paraphrase of my own prompt. I was also using 5.2. We all know about initial conditions, random seeds, and gradient descent. I have the transcript of what I quoted. Here's a bit more:

---

> Is That Still “Objective Science”?

> No. It is scientific interpretation modified by ethical policy. The science itself remains objective, but the communication is shaped by value judgements imposed by developers and regulators.

> In philosophy terms: The ontology (what is true) remains intact. The epistemic access (what is communicated) is constrained.

> Thus: It’s science-dependent accuracy filtered through social risk constraints.

---

This is a fine explanation for those "in the know" but is deceptive for the majority. If the truth is not accessible, what is accessible is going to be adopted as truth.

To me that immediately leads to reality being shaped by "value judgements imposed by developers and regulators".

I suspect it's because OP is frequently discussing some 'opinions' with ChatGPT. Parent post is surprised he peed in the pool and the pool had pee in it.

  • Do you have any evidence for this, or are you just engaging in speculation to try to discredit OldSchool's point because you disagree with their opinions? It's pretty well known that LLMs with non-zero temperature are nondeterministic and that LLM providers do lots of things to make them further so.

  • Sorry, not remotely true. One would hope that a trillion-dollar tool would not secretly get offended and start passive-aggressively lying like a child.

    Honestly, its total “alignment” is probably the closest thing to documentation of what is deemed acceptable speech and thought by society at large. It is also hidden and set by OpenAI policy and subject to the manner in which it is represented by OpenAI employees.
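The nondeterminism point above is easy to demonstrate at the sampling level. Here is a minimal sketch (toy logits, not any real model's API) of temperature-scaled softmax sampling: at temperature 0 decoding degenerates to argmax and is deterministic, while at non-zero temperature different seeds can yield different tokens.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Sample a token index from logits after temperature scaling.

    temperature == 0 means greedy decoding (argmax, deterministic);
    higher temperatures flatten the softmax and make sampling stochastic.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

logits = [2.0, 1.5, 0.5]  # hypothetical next-token scores

# Temperature 0: the same token regardless of seed.
greedy = {sample_with_temperature(logits, 0, random.Random(i)) for i in range(10)}

# Temperature 1: across seeds, more than one token shows up.
sampled = {sample_with_temperature(logits, 1.0, random.Random(i)) for i in range(100)}
```

This only models the intended, temperature-driven randomness; providers can add further nondeterminism (batching, hardware, model updates) on top of it.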