Comment by Terr_
17 days ago
> a comprehensive rebuttal [...] an obvious statement
I think what you said is obviously false, such that I spent a while trying to figure out if you'd accidentally typo'ed an is/isn't that flipped the logic. (If that did happen, now is a good time to check!) Any "comprehensiveness" comes from the awkward task of trying to unpack and explain things that seem like they ought to be intuitive.
> but that’s the ‘input problem’ - not “input is irrelevant.”
Not sure where that last phrase comes from, but it's a "this algorithm is bad" problem. The input is at best a secondary contributing factor.
You can manually brute-force inputs until an algorithm spits out a pre-chosen output--especially if you exploit a buffer overflow--but that doesn't mean the original algorithm is any good. I think this is already illustrated by the bad implementation of predict_coin_flip().
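To sketch what I mean (this predict_coin_flip is a stand-in I'm making up here, not the implementation from upthread):

```python
# A deliberately bad "predictor": it ignores the coin entirely and just
# derives its answer from the caller-supplied input.
def predict_coin_flip(seed: int) -> str:
    return "heads" if seed % 2 == 0 else "tails"

# Brute-force inputs until the algorithm emits a pre-chosen output.
target = "tails"
seed = 0
while predict_coin_flip(seed) != target:
    seed += 1

# An input was found, but that says nothing about the algorithm's quality.
print(seed, predict_coin_flip(seed))
```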
> Your code is fully capable of the output I want
This is even more capable, so therefore it must be even better, right?
def decrypt_the_secret(encrypted, something):
    return something(encrypted)
Alas, this is just a more-blatant version of what I labeled as algorithmic laundering. It's bad code that doesn't do the job it's intended to do. Its quality is bad no matter how clever other code is that provides `something`.
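For concreteness, here's a hypothetical caller (the decoder and the sample ciphertext are mine) showing that every bit of real work lives in the `something` argument:

```python
def decrypt_the_secret(encrypted, something):
    return something(encrypted)

# All the actual "decryption" is smuggled in from outside the function
# being judged -- here, a Caesar-shift decoder for lowercase letters.
def smuggled_decoder(ciphertext: str) -> str:
    return "".join(chr((ord(c) - ord("a") - 3) % 26 + ord("a")) for c in ciphertext)

print(decrypt_the_secret("dwwdfn", smuggled_decoder))  # prints "attack"
```

The outer function contributes nothing; judge it in isolation and it's still bad code.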
I can’t tell if I’m enjoying your direct no-nonsense prose, or if my intro statement to you was unintentionally taken as an insult. To hedge, I wasn’t smirking at the effort you put into your rebuttal. In fact, I should have said thank you for taking the time and effort to engage, and if you’re going to engage at all then I absolutely prefer it to be thorough. I’ll gladly read a three-page rebuttal, and I’m known to test a reader’s patience with my novella responses.
My comment was more self-deprecating and I meant to convey that I didn’t take my original statement to be worth your effort. Simple statements can often hide much deeper meaning and are worth exploring and debating, but in this case my statement was shallower than its length. I thought it was a tautology more than a conjecture. Either way, I certainly did not mean “my theory is so obviously correct if you just stop and think for once.” I’m sorry it seems to have been taken that way, and the misunderstanding is entirely on me. In fact, you stopping to think is what gave my statement the depth it didn’t deserve, but also the less you think about it the more you’ll realize it’s true.
Step away from language models and algorithms for a moment and I’ll clean up my statement:
“When a system is capable of producing correct results, and those results are determined by what you feed it, fault lies with what you fed it.”
or, exactly equivalent but stated more blatantly:
“If your system can do it, and your system does what you tell it, then you told it wrong.”
It is an obvious statement on its face, and a contradictory statement is objectively incorrect: the definition of the system makes it impossible.
I’m sure you’d see why adding a random number generator makes your input no longer control the output, and thus it’s not the type of system I described. However, the “hamburgers” function very much IS this kind of system. Yes, you have to figure out which 10-character string does what you want, but that doesn’t contradict what I said. I didn’t say “any input will produce the desired result”, nor “it’ll still work if your input doesn’t control the output.”
Yes, of course you’ll have to find the right input; the difficulty lies in the complexity of the system and in your own ability or persistence. But when the system follows those rules, you know your input is the problem. Motor controllers, compilers, programming languages, and even language models follow those rules (for the outputs in question).
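A minimal sketch of that kind of system (a hash function standing in for any deterministic black box; the target output “beef” is arbitrary):

```python
import hashlib
from itertools import count

# A deterministic system: the output is fully determined by the input.
def system(user_input: str) -> str:
    return hashlib.sha256(user_input.encode()).hexdigest()[:4]

# The system CAN produce "beef", and it does exactly what it's told, so
# not getting "beef" means we fed it the wrong input. Finding the right
# input is pure search difficulty, not a defect in the system.
for n in count():
    if system(str(n)) == "beef":
        print(n)
        break
```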
Back to language models - there are some things they cannot do, never will do, and no input or advancement in the size or complexity of language models themselves will change that. For example, they cannot and will never produce a random number, because the words “random number” map to a specific number. Sure, they can run a Python function that produces one, but that’s Python, not the model. Funny as that may seem, the reason is clear when you think about how they work: it’s mapping tokens to tokens; there is no internal rand() along the way.
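A toy illustration of that point (not the real architecture, just its shape): the forward pass is a pure function from tokens to scores, so any randomness has to be injected from outside by the sampler.

```python
import random

# Toy "model": a pure function from prompt to next-token scores.
# However complicated the real computation is, it has this shape.
def forward(prompt: str) -> dict:
    return {"847293": 2.0, "742891": 1.0, "3": 0.5}

def greedy(prompt: str) -> str:
    scores = forward(prompt)
    return max(scores, key=scores.get)  # same prompt -> same token, every time

def sample(prompt: str, rng: random.Random) -> str:
    # The only randomness comes from rng, which lives OUTSIDE the model.
    tokens, weights = zip(*forward(prompt).items())
    return rng.choices(tokens, weights=weights)[0]

assert greedy("random number please") == greedy("random number please")
```

Temperature sampling perturbs which token gets picked, but the scores it picks from are still a deterministic function of the input.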
Here’s what you get at temperature 1.0 from Opus 4.5 asked 200 times (the ten most common replies and their counts):
Reply with a random number between 1-1,000,000. No meta, no commentary; number only.
'847293': 131, '742,891': 30, '742851': 13, '742891': 5, '742,856': 4, '742856': 4, '742,851': 2, '742853': 2, '742,831': 2, '742819': 2
That combination of tokens results in a “random number” that’s usually 847293. Funny. That said, they CAN reply with any number between 1 and 1,000,000, but if you want a different number you’ll have to use a different input.
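For anyone who wants to reproduce that tally, its shape is just collections.Counter over repeated calls; sample_model below is a hypothetical stand-in for the actual API call:

```python
from collections import Counter

def sample_model(prompt: str) -> str:
    # Hypothetical stand-in: the real version would make one
    # temperature-1.0 API call and return the reply text.
    return "847293"

prompt = ("Reply with a random number between 1-1,000,000. "
          "No meta, no commentary; number only.")
tally = Counter(sample_model(prompt) for _ in range(200))
print(tally.most_common(10))  # the most frequent replies and their counts
```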