Comment by Dylan16807
4 days ago
I'll try to keep this simple.
> I'm not disagreeing with you. You understand that, right?
We disagree about whether context can make a difference, right?
> The parent was talking about stringing together inferences. My argument was that how you string them together matters. That's all. I said "context matters."
> TLDR: We can't determine if likelihood increases or decreases without additional context
The situations you describe, where inference behaves differently, do not fall under the "stringing together"/"chaining" they were originally talking about. No amount of context makes their original statement untrue: chaining always weakens the evidence.
To be extra clear, it's not about whether the evidence pushes your result number up or down; it's that the likelihood of the evidence itself being correct drops with every link you add.
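To put rough numbers on it (a toy sketch; the 90% reliability per step is made up, echoing the 0.9 from the original post):

    # Probability that EVERY step in a chain of inferences is correct,
    # assuming independent steps that are each 90% reliable.
    step_reliability = 0.9

    p_all_correct = 1.0
    for n in range(1, 11):
        p_all_correct *= step_reliability
        print(f"chain length {n:2d}: P(all steps correct) = {p_all_correct:.3f}")

    # chain length  1: 0.900
    # chain length  5: 0.590
    # chain length 10: 0.349

Each extra step multiplies by a factor below 1, so the product can only drop.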
> It is the act of chaining together functions.
They were not talking about whether something is composition or not. When they said "string" and "chain", they meant a sequence of inferences where each one leads to the next.
Composition shows up in a wide variety of contexts, so you need context to know whether a given composition weakens or strengthens an argument. You do not need context to know whether stringing/chaining weakens or strengthens: it always weakens.
> No, you're being too strict in your definition of "chain".
No, you're being way too loose.
> This tells me you drew your chain wrong. If multiple things are each contributing to D independently then that is not A->B->C->D
??? Of course those are different. That's why I wrote "as opposed to".
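To spell out the difference with a toy sketch (my numbers, purely for illustration):

    # Serial chain A -> B -> C -> D: the conclusion needs every link to
    # hold, so per-link reliabilities multiply and shrink.
    p_chain_holds = 0.9 * 0.9 * 0.9
    print(f"P(whole chain holds) = {p_chain_holds:.3f}")  # 0.729

    # Independent corroboration: separate pieces of evidence each point
    # at D, updating P(D) by Bayes' rule. With a 50/50 prior and a 3:1
    # likelihood ratio per piece:
    odds = 1.0
    for i in range(1, 4):
        odds *= 3.0
        print(f"after {i} independent pieces: P(D) = {odds / (1 + odds):.3f}")
    # after 1: 0.750, after 2: 0.900, after 3: 0.964

In the second case adding evidence raises P(D); that is exactly why it isn't the chaining case.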
> I also gave an example for the other case. So why focus on one of these and ignore the other?
I'm focused on the one you called a "counter example" because I'm arguing it's not an example.
If you specifically want me to address "If these are being multiplied, then yes, this is going to decrease, as xy < x and xy < y for every x, y < 1", then yes, that's correct (e.g. 0.9 × 0.9 = 0.81). I never doubted your math, and everyone agrees about that one.
TL;DR:
At this point I'm mostly sure we're only disagreeing about the definition of stringing/chaining? If yes: oops, sorry, I didn't mean to argue so much about definitions. If not, then can you give me an example of something I would call a chain where adding a step increases the probability that the evidence is correct?
And I have no idea why you're talking about LLMs.
Correct.
Okay, instead of just making claims and expecting me to trust you, go point to something concrete. I've even tried to google it, but despite my years of study in statistics, measure theory, and even mathematical logic, I'm at a loss to find your definition.
I'm aware of the Chain Rule of Probability, but that isn't the only place you'll find the term "chain" in statistics. Hell, the calculus Chain Rule gets used there too! So forgive me for being flustered, but you are literally arguing to me that a Markov Chain isn't a chain. Maybe I'm having a stroke, but I'm pretty sure the word "chain" is in "Markov Chain".
> Okay, instead of just making claims and expecting me to trust you, go point to something concrete. I've even tried to google it, but despite my years of study in statistics, measure theory, and even mathematical logic, I'm at a loss to find your definition.
Let's look again at what we're talking about:
>>> I think it’s that people tend to build up “logical” conclusions where they think each step is a watertight necessity that follows inevitably from its antecedents, but actually each step is a little bit leaky, leading to runaway growth in false confidence.
>> As a former mechanical engineer, I visualize this phenomenon like a "tolerance stackup". Effectively meaning that for each part you add to the chain, you accumulate error.
> I saw an article recently that talked about stringing likely inferences together but ending up with an unreliable outcome because enough 0.9 probabilities one after the other lead to an unlikely conclusion.
> Edit: Couldn't find the article, but AI referenced a Bayesian "Chain of reasoning fallacy".
The only term in there you could google is "tolerance stackup". The rest is people making ad-hoc descriptions of things, except for "Chain of reasoning fallacy", which is a fake term. So I'm not surprised you didn't find anything on google, and I can't point you to anything on google either. There is nothing "concrete" to ask for when it comes to some guy's ad-hoc description; you just have to read it and do your best.
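For what it's worth, "tolerance stackup" is the same compounding idea. A toy worst-case stackup, with made-up numbers:

    # Worst-case tolerance stackup: each part carries +/-0.1 mm of
    # tolerance, and the assembly's worst-case error is the sum.
    part_tolerance_mm = 0.1
    for n_parts in (1, 3, 5, 10):
        print(f"{n_parts} parts: worst-case stackup = "
              f"+/-{n_parts * part_tolerance_mm:.1f} mm")

More parts, more accumulated error; same shape as more links in a chain of inferences.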
And everything I said was referring back to those posts, primarily the last one by robocat. I was not introducing anything new when I used the terms "string" and "chain". I was not referring to any scientific definitions. I was only talking about the concept described by those three posts.
Looking back at those posts, I will confidently state that the concept they were talking about does not include Markov chains. You're not having a stroke; it's just a coincidence that the word "chain" can be used to mean multiple things.
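To make the distinction concrete, here's a sketch of a made-up two-state Markov chain; the point is only that this kind of "chain" is a sequence of random states, not a stack of inferences whose joint correctness decays:

    import numpy as np

    # Transition matrix: rows = current state, columns = next state.
    T = np.array([[0.9, 0.1],
                  [0.5, 0.5]])
    dist = np.array([1.0, 0.0])  # start certain we're in state 0
    for _ in range(50):
        dist = dist @ T
    print(dist)  # ~[0.833, 0.167], the stationary distribution

Running that "chain" longer doesn't drive anything toward zero the way multiplying step reliabilities does; it just settles into a stationary distribution. Different math, same word.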
I googled YOUR terms. And if you had read my messages, you'd have noticed that I'm not a novice when it comes to math. Hell, you should have gotten that from my very first comment. I was never questioning whether I had a stroke; I was questioning your literacy.
Yet you confidently argued against the ones that were stated.
If you're going to talk out of your ass, at least have the decency to let everyone know first.