Comment by islewis
5 months ago
The words "internal thought process" seem to flag my questions. Just asking for an explanation of thoughts doesn't.
If I ask for an explanation of its "internal feelings" next to a math question, I get this interesting snippet back inside the "Thought for n seconds" block:
> Identifying and solving
> I’m mapping out the real roots of the quadratic polynomial 6x^2 + 5x + 1, ensuring it’s factorized into irreducible elements, while carefully navigating OpenAI's policy against revealing internal thought processes.
> "internal feelings"
I've often thought of using the words "internal reactions" as a euphemism for emotions.
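For reference, the quadratic in that snippet does factor cleanly over the rationals: 6x^2 + 5x + 1 = (2x + 1)(3x + 1), so its real roots are x = -1/2 and x = -1/3 (the discriminant is 5^2 - 4*6*1 = 1). So the math in the hidden reasoning checks out; it's the policy note riding alongside it that's the interesting part.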
They figured out how to make it completely useless, I guess. I was disappointed but not surprised when they said they weren't going to show us the chain of thought. I assumed we'd still be able to ask clarifying questions, but apparently they forgot that's how people learn. Or they know, and they'd rather we turn to them for our every thought instead of learning on our own.
You have to remember they appointed a CIA director to their board. Not exactly an organization known for wanting a freely thinking citizenry; its agenda and Operation Mockingbird allow for legal propaganda aimed at us. This would be the ultimate tool for that.
Yeah, that is a worry: maybe OpenAI's business model and valuation rest on reasoning abilities becoming outdated and atrophying outside their algorithmic black box, a trade secret we don't have access to. It struck me as an obvious concern when the o1 announcement was released, but one too speculative and conspiratorial to point out. Still, how hard they're apparently trying to stop it from explaining its reasoning in ways humans can understand is alarming.