“Are you the one?” is free money

5 days ago (blog.owenlacey.dev)

As a math guy who loves reality tv, I was also drawn to the show and wrote a blog post [0] about how to programmatically calculate the probabilities as the show progresses. It was a lot of fun optimizing it for performance. You can `pip install ayto` to use it to follow along with the show or try out scenarios.
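
For the curious, the core of it is brute force: enumerate the 10! permutations and keep the ones consistent with every ceremony and truth booth so far. A minimal from-scratch sketch of the idea (not the ayto package's actual API):

    from itertools import permutations

    def consistent_matchings(n, ceremonies, booths):
        """Every perfect matching consistent with the evidence so far.
        ceremonies: list of (guess, beams); booths: list of (guy, girl, matched)."""
        for p in permutations(range(n)):
            if not all(sum(g == p[i] for i, g in enumerate(guess)) == beams
                       for guess, beams in ceremonies):
                continue
            if all((p[guy] == girl) == matched for guy, girl, matched in booths):
                yield p

    # Pair probabilities follow by counting, e.g. P(guy 0 matches girl 3):
    # remaining = list(consistent_matchings(10, ceremonies, booths))
    # sum(p[0] == 3 for p in remaining) / len(remaining)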

The linked post is a very thorough treatment of AYTO and a great read. I really like the "guess who" bit on how to maximize the value of guesses. It's a shame the participants aren't allowed to have pen and paper—it makes optimization a lot trickier! I'm impressed they do as well as they do.

[0]: https://danturkel.com/2023/01/25/math-code-are-you-the-one.h...

  • Let's be friends :')

    Loved your post, really enjoyed getting into the meat of it. I wanted to pitch mine at a layman, and kept asking myself "can I explain this to my Dad?"

    I think where the post falls short is the absence of a silver bullet that contestants can use to win the game sooner.

  • And sometimes they just don't do better because it makes a better plot point, staying together an extra week after finding out they are not the one because of the intensity of their love (they met 4 days before).

    • Giving them more credit than they probably deserve but: when you're solving "by hand" like they are in the show, keeping a known non-match couple together may actually be helpful for interpreting the results of a matchup ceremony because you'll know that that couple didn't contribute to the beams.

This was great, but it skipped over the most interesting bit - how you actually choose which matchups and truth booths to attempt. That is, an actual strategy that contestants could use that doesn't require a computer.

  • In addition to Mastermind, Wordle also falls into the same category.

    Optimal play to reduce the search space in both follows the same general pattern: the next check should satisfy all previous feedback, and the included entries should be the most probable ones, whether previously tested or not. If entries are equally probable, include the one that eliminates the largest number of remaining possibilities if it is correct.

    For wordle, «most probable» is mostly determined by letter frequency, while in Mastermind it's pure probability based on previous guesses. For instance, if you play a Mastermind variant with 8 pegs and get a 2/8 on the first test, each of your 8 pegs had a 1/4 chance of being correct. So you select 2 at random to include in the next guess.

    If you then get a 2/8 from the second guess, you would include 4 previous entries in the next one: 2 entries from the first guess that were not used in the second, plus 2 entries from the second, because the chance that you picked the correct entries twice is less than the chance that the two hits come from the 6 you changed.
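
    For concreteness, the "consistent with all previous feedback" filter in Mastermind is tiny (a sketch counting exact-position hits only, leaving out white pegs):

        from itertools import product

        def hits(guess, code):
            return sum(g == c for g, c in zip(guess, code))

        def consistent_codes(history, colors=6, pegs=4):
            """Codes that would have produced every (guess, score) seen so
            far; the next guess is drawn from this shrinking set."""
            return [code for code in product(range(colors), repeat=pegs)
                    if all(hits(guess, code) == score for guess, score in history)]

        # e.g. after guessing (0, 1, 2, 3) and scoring 2 exact hits:
        pool = consistent_codes([((0, 1, 2, 3), 2)])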

    • > In addition to Mastermind, Wordle also falls into the same category.

      > Optimal play to reduce the search space in both follows the same general pattern: the next check should satisfy all previous feedback, and the included entries should be the most probable ones, whether previously tested or not.

      The "next check should satisfy all previous feedback" part is not exactly true. That's hard-mode wordle, but hard mode is provably slower to solve than non-hard-mode (https://www.poirrier.ca/notes/wordle-optimal/) where the next guess can be inconsistent with previous feedback.

    • > For wordle, «most probable» is mostly determined by letter frequency

      I don't think that's a justified assumption. I wouldn't be surprised if wordle puzzles intentionally don't follow common letter frequency to be more interesting to guess. That's certainly true for people casually playing hangman.

    • > Optimal play to reduce the search space in both follows the same general pattern: the next check should satisfy all previous feedback

      Thank you! I might look into this once I break my current streak in the localised Wordle clone I'm playing now.

      I always try to use as many different letters as possible in the first few rounds...

      But then again, maybe I'm not so good at these kinds of games as I think.

  • Thank you! This is consistent with feedback I got from The Pudding, and is ultimately the reason they didn't go ahead with the post. I tried reverse-engineering the information-theory approach to see what sort of decisions it made.

    I noticed that for any match up score of X, the following match up would keep exactly X pairs in common. So if they scored 4/10 one week, they would change 6 couples before the next one. Employing that approach alone performed worse than the contestants did in real life, so I didn't think it was worth mentioning!
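
    Roughly, in code (a quick sketch of just that heuristic):

        import random

        def next_matchup(prev_guess, prev_score):
            """Keep exactly prev_score pairs from the last guess and derange
            the rest, so no moved couple accidentally stays together. (A
            score of n-1 is impossible, so there is never exactly 1 mover.)"""
            n = len(prev_guess)
            keep = set(random.sample(range(n), prev_score))
            movers = [i for i in range(n) if i not in keep]
            vals = [prev_guess[i] for i in movers]
            while vals:  # rejection-sample a derangement of the moved couples
                random.shuffle(vals)
                if all(v != prev_guess[i] for i, v in zip(movers, vals)):
                    break
            guess = list(prev_guess)
            for i, v in zip(movers, vals):
                guess[i] = v
            return guess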

    • It should be easier to understand the optimal truth booth strategy. Since this is a yes/no type of question, the maximum entropy is 1 bit, as noted by yourself and others. As such, you want to pick a pair where the odds are as close to 50/50 as possible.
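
      Given the set of matchings still consistent with all the evidence, picking that pair is only a few lines (brute force over pairs; a sketch, not the post's code):

          def best_truth_booth(remaining, n=10):
              """Pick the (guy, girl) pair whose match probability across the
              remaining consistent matchings is closest to 1/2, i.e. the
              yes/no question with expected information closest to 1 bit."""
              def prob(guy, girl):
                  return sum(p[guy] == girl for p in remaining) / len(remaining)
              return min(((g, w) for g in range(n) for w in range(n)),
                         key=lambda pair: abs(prob(*pair) - 0.5))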

      > Employing that approach alone performed worse than the contestants did in real life, so I didn't think it was worth mentioning!

      Yeah, this alone should not be sufficient. At the extreme of getting a score of 0, you also need the constraint that you're not repeating known-bad pairs. The same applies for pairs ruled out (or in!) from truth booths.

      Further, if your score goes down, you need to use that as a signal that one (or more) of the pairs you swapped out was actually correct, and you need to cycle those back in.

      I don't know what a human approximation of the entropy-minimization approach looks like in full. Good luck!

Most hilarious part of this is that if you've ever watched "The Challenge" then you know that these people, truly, often cannot add 3-digit numbers together, let alone understand information theory.

When my wife and I watched the show, I wrote a solver on the side so we always had the current probabilities and impossible combinations at hand.

I am thinking about making a website for it when the next season starts.

Also: in Germany at least, they have 10 x 10 candidates from the start, but sometimes they add an 11th or even a 12th of one gender so that there are double matches (e.g. one woman has two men as matches and needs to find one of them to succeed). This raises the number of possible combinations quite a bit.
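
If I have the rules right (one man is the match of two of the 11 women), the count works out like this:

    from math import comb, factorial

    # Choose the two women who share a man (C(11, 2) ways), then match the
    # resulting 10 "units" (the pair plus 9 single women) to the 10 men:
    print(comb(11, 2) * factorial(10))  # 199,584,000 vs 3,628,800 for plain 10!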

  • Would love to see this!

    Yes, there's a gender-fluid season and a season where someone had > 1 match, as well as people leaving partway through the season (apparently perfect matches are interchangeable...). All very interesting spins on the core problem to solve; I'd be really interested if anyone tries to tackle those seasons.

Love this article, OP, especially those interactive demos. One small suggestion:

> A truth booth is where a male & female ...

The use of "male" and "female" as nouns sounds very unnatural. "A man and a woman" would be a little less jarring, imo.

  • [flagged]

    • > it gives off incel vibes imo

      One really should be careful making statements like these on the Internet. It's a stronger signal than people saying "male" and "female".

    • I've always found this to be such a pedantic hill to die on.

      I don't think the author is an incel and it's pretty rude to throw out that kind of language for what's pretty clearly just a style choice.

    • I hear this commonly about using the words "male" and "female." I think it's unfair. For one thing, the military uses them frequently, and so would veterans. Another reason is that their meanings are age-agnostic which helps to emphasize the intent of the speaker--to differentiate on sex alone, not sex plus age.

    • Keep in mind that not everyone on the internet uses English as their first language, and they might use words oddly because they resemble what they're used to.

  • One small boon of AI is that making small interactive web components for demonstrations is now a few-minutes diversion instead of an hour or more. I have no idea if that's what OP did, but I've been happy with generating low-stakes code for blog posts.

They have an example that calculates the expected information gained by truth booths, and all of the top ones are giving more than one bit. How can this be? It's a yes/no question; a max of 1 bit should be possible.

  • The author defines one “bit” as ruling out half the remaining options.

    So a yes might rule out 75% of remaining options (for example) which provides 2 bits of information.

    • We have to make a distinction between "expected information gain" vs "maximum information gain". An answer of "yes" generally gains >1 bit, but an answer of "no" generally gains <1 bit, and the average outcome ends up <1. It is impossible for a single yes/no question to have an expected gain of >1; the maximum possible is precisely 1.
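
      Concretely, the expected gain is the binary entropy of the question's yes-probability, which peaks at exactly 1 bit:

          from math import log2

          def expected_bits(p_yes):
              """Binary entropy H(p): expected information from a yes/no
              question that comes up 'yes' with probability p_yes."""
              if p_yes in (0.0, 1.0):
                  return 0.0
              return -p_yes * log2(p_yes) - (1 - p_yes) * log2(1 - p_yes)

          print(expected_bits(0.25))  # 'yes' gains 2 bits, 'no' ~0.415: ~0.811 expected
          print(expected_bits(0.5))   # 1.0, the maximum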

  • Great spot! The max expected information is 1. I've updated this part of the post to only show examples that are < 1. Thank you for raising it!

  • Because when it's true, you also learn about any prior match ups involving those two people.

    • That's not how information works. Learning more from one outcome than the other decreases the probability of that outcome occurring, so the expected information (which is the sum of the outcome probability times the outcome information for each of the two possible outcomes) is always less than or equal to one.

      If all you can get is a "true" or "false" you expect, at most, one bit of information.

  • Do you mean the diagram following the sentence "Suppose we have calculated the expected information gained by potential truth booths like below:"?

    Yes, that looks like a mistake -- a truth booth only has 2 outcomes, so it can produce at most 1 bit of information.

    Regarding the page's other mentions of information levels exceeding 1 bit: those are OK, since they involve match-ups, which for 6 people have 7 possible outcomes and thus can yield up to log2(7) ≈ 2.81 bits.

Fun post. I'd be interested to know: How many consecutive Truth Booths (or: how many consecutive Match Ups) are needed to narrow down the 10! possibilities to a single one?

Discussing "events" (i.e., Truth Booth or Match Up) together muddles the analysis a bit.

I agree with Medea above that a Truth Booth should give at most 1 bit of information.

  • Based on my research, MUs perform better than TBs. In my information-theory simulations, the MUs gained ~2 bits of information on average vs ~1.1 for TBs.

    So with only MUs, we're talking around 10 events - meaning you could get enough information from MUs alone to win the game! By contrast, it would take about 20 events to do the same with just TBs.

    It's not super obvious from the graphs, but you can just about notice that the purple dots drop a bit lower than the pink ones!

    Hope this helps

  • If you can only check pairings one at a time I’m not sure it’s possible to do better than greedily solving one person at a time.

    • So, for 10 pairs, 45 guesses (9 + 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1) in the worst case, and roughly half that on average?

      It's interesting how close 22.5 is to the 21.8 bits of entropy for 10!, and that has me wondering how often you would win if you followed this strategy with 18 truth booths followed by one match up (to maintain the same total number of queries).

      Simulation suggests about 24% chance of winning with that strategy, with 100k samples. (I simplified each run to "shuffle [0..n), find index of 0".)
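
      A sketch of that simulation, for anyone who wants to reproduce the number (failed booths cost a query; the confirming result is free):

          import random

          def win_rate(n=10, budget=18, trials=100_000):
              """Greedy, one person at a time: with k candidates left, the
              number of failed truth booths before the hit is uniform on
              {0, ..., k-1}."""
              wins = 0
              for _ in range(trials):
                  booths = sum(random.randrange(k) for k in range(n, 0, -1))
                  wins += booths <= budget
              return wins / trials

          print(win_rate())  # ~0.24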

    • Agreed. There's an argument elsewhere about how a truth booth can possibly have an expected return of more than 1 bit of information, but in reality most of the time it's going to give you way less than that.

A thing to note - the contestants are not allowed to have even pen and paper, as mentioned in the other blog post. So they need to do these computations in their heads.

I saw an episode of this and felt the contestants didn’t seem that interested in winning the money. Just romance. I was curious how suboptimally they tended to play.

  • Everything is lined up for sub-optimal play.

    For a start, the setting is an emotive one. It's not just a numeric game with arbitrary tokens, it's about "the perfect romantic partner." It would take an unusually self-isolating human to not identify who they feel their perfect match should be and bias towards that, subconsciously or consciously. We (nearly) all seek connection.

    Then, it's reality TV. Contestants will be chosen for emotional volatility, and relentlessly manipulated to generate drama. No-one is going to watch a full season of a handful of math nerds take a few minutes to progress a worksheet towards a solution each week coupled with whatever they do to pass the time otherwise.

    • I'd watch a game show where you put a variety of math nerds on each team and watch them argue about the optimal strategy. Whose strategy will win? The quant analyst or the bioinformatician? Tune in next week!

    • We need a ManningCast version of the show. For those unaware, ManningCast is a show following an NFL game with special guests and nontraditional commentary and analysis. Think of it kind of like having the Mannings in the living room while watching an NFL game.

      In my hypothetical version of "Are you the one?", the math nerds would be giving commentary and explaining the math behind how they'll solve "Are you the one?" while also hilariously explaining how foolish the contestants' theories are.

    • > No-one is going to watch a full season of a handful of math nerds take a few minutes to progress a worksheet towards a solution each week coupled with whatever they do to pass the time otherwise.

      Um, what about those of us who watch Blood on the Clocktower streams?

  • That's because the real game is occurring both before and after the show in modern reality tv competitions. The goal is to be entertaining and get social media followers and potential invites to further reality tv shows.

Love the approach of using information theory to optimize decisions. The concept of "expected information gained" is fascinating - it's basically what good analytics should do: help you ask the right questions to eliminate uncertainty fastest.

The interactive visualizations on this post are fantastic. More technical content should be presented this way. Makes complex probability much more intuitive.

> Season 8: In this season, they introduced gender fluidity. Whilst an interesting problem on its own, this would have wreaked havoc on my model.

Well I guess free money except for that one. In that one, one of the contestants, Danny, did the math for optimizing their remaining Truth Booths and Match Ups to get it down to a 50/50 shot.

> This post is my first foray into content like this. I wanted to scratch the itch of an interesting maths problem, with a light-hearted spin that I hope you enjoyed as much as I did making it.

Really impressive imo. I don't remember the last time I was this engaged reading an article on HN.

I would love to know how the author went about constructing those lovely charts. Some library, or done by hand?

  • Hey! It's all vanilla HTML/JS/CSS, which I'm pretty proud of. I wanted something really minimal and felt most charting libraries were overkill, so I kept my bundle size small. I'm thinking of making the interactive parts open source so people can take a look for themselves, if that would be helpful.

> I also pitched this idea to The Pudding, and had a great experience with them nerding out about this subject. Though they didn't take me up on my idea, I left with really great and actionable feedback, and I'm looking forward to my next rejection.

Would've been a great Pudding post imo, but oh well, happy to find this nice devblog instead.

This brings up an area that's been on the edge of my curiosity for years: how do you combine the knowledge of the experts (contestants) with logic to do better than either strategy individually?

It’s mostly about how to elicit the information from the contestants. Once you have the probabilities from them, it seems relatively straightforward.

  • I think you could do some form of Bayesian analysis with the prior being how likely each contestant thought that their available partners were "The One" for each other. Then the truth booth would be used to update your priors.
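
    Something like this, as a sketch (the hard part, eliciting pair_prior from the contestants, is assumed away here):

        from itertools import permutations

        def posterior(pair_prior, booth_results, n):
            """Weight each full matching by the contestants' pair-level
            beliefs, zero out anything a truth booth contradicts, renormalise.
            pair_prior[g][w] = elicited belief that guy g matches girl w."""
            weights = {}
            for p in permutations(range(n)):
                w = 1.0
                for g in range(n):
                    w *= pair_prior[g][p[g]]
                if any((p[g] == girl) != matched
                       for g, girl, matched in booth_results):
                    w = 0.0
                weights[p] = w
            total = sum(weights.values())
            return {p: w / total for p, w in weights.items() if w}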

If the goal is to find the perfect matching in some maximum number of turns or less, it's possible to do even better by using a full game tree that minimises the maximum height of the tree ( = number of turns required), instead of using information/entropy as done here.

Basically, using the entropy produces a game tree that minimises the number of steps needed in expectation -- but that tree could be quite unbalanced, having one or more low-probability leaves (perfect matchings) many turns away from the root. Such a leaf will randomly occur some small fraction of the time, meaning those games will be lost.

For concreteness, a game requiring 6 bits of information to identify the perfect matching will take 6 steps on average, and may sometimes require many more; a minimax tree of height 7 will always be solved in at most 7 steps. So if you're only allowed 7 steps, it's the safer choice.
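
A toy sketch of the minimax idea for a 4-couple game with match-ups only, memoised on the set of matchings still possible:

    from functools import lru_cache
    from itertools import permutations

    N = 4
    GUESSES = tuple(permutations(range(N)))

    def beams(guess, truth):
        return sum(g == t for g, t in zip(guess, truth))

    @lru_cache(maxsize=None)
    def depth(cands):
        """Match-ups needed to guarantee a winning all-correct guess,
        whatever the truth is among cands."""
        if len(cands) == 1:
            return 1  # guess it: scores N, game won
        best = None
        for guess in GUESSES:
            buckets = {}
            for truth in cands:
                buckets.setdefault(beams(guess, truth), []).append(truth)
            if len(buckets) == 1:
                continue  # splits nothing: useless here
            worst = 1 + max(depth(frozenset(b))
                            for s, b in buckets.items() if s != N)
            best = worst if best is None else min(best, worst)
        return best

    print(depth(frozenset(GUESSES)))  # worst-case match-ups for 4 couples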

  • That's a good point, entropy is only a heuristic for the thing you actually want to optimize, worst-case guesses (though it's probably a very good heuristic).

    > Basically, using the entropy produces a game tree that minimises the number of steps needed in expectation

    It might be even worse than that for problems of this kind in general. You're essentially using a greedy strategy: you optimize early information gain.

    It's clear that this doesn't optimize the worst-case, but it might not optimize the expected number of steps either.

    I don't see why it couldn't be the case that an expected-steps-optimal strategy gains less information early on, and thus produces larger sets of possible solutions, but through some quirk those larger sets are easier to separate later.

  • > For concreteness, a game requiring 6 bits of information to identify the perfect matching will take 6 steps on average, and may sometimes require many more

    I'm not following your logic. Consider the setup we actually have:

    1. You get to ask a series of yes/no questions. (Ignoring the matchups.)

    2. Each question can produce, in expectation, up to one bit of information.

    3. In order to achieve this maximum expectation, it is mathematically necessary to use questions that invariably do produce exactly one bit of information, never more or less. If your questions do not have this property, they will in expectation produce less than the maximum amount of information, and your expected number of steps to the solution will increase, which contradicts your stated goal of minimizing that quantity.

    You get the minimum number of steps needed in expectation by always using questions with maximum entropy. Yes. But those questions never have any variation in the amount of entropy they produce; a maximum entropy strategy can never take more - or fewer - steps than average.¹

    ¹ Unless the number of bits required to solve the problem is not an integer. Identifying one of three options requires 1.585 bits; in practice, this means that you'll get it after one question 1/3 of the time and after two questions the other 2/3 of the time. But identifying one of 128 options requires 7 bits, you'll always get it after 7 questions, and you'll never be able to get it after 6 questions. (Assuming you're using a strategy where the expected number of questions needed is 7.)

    • You're correct but the complexity is in the things you're ignoring for simplicity.

      This game is constructed such that the questions you can ask are not arbitrary, so you cannot choose them to always produce one bit of entropy (you need to frame your questions as ten matchups in parallel, using all the contestants exactly once) and the number of bits you need may indeed not be an integer.

      Because you can't choose your questions to partition the state space arbitrarily, that affects not just the question you ask today, but also previous days: you want to leave yourself with a partitionable space tomorrow no matter what answers you get today.

      In the Guess Who analogy, it's against the rules or at least the spirit to ask "does your character have a name which is alphabetically before Grace?". That would allow a strategy which always divides the state space exactly in two.

    • > Unless the number of bits required to solve the problem is not an integer.

      That is one case where root-to-leaf path lengths can vary, though it's not obvious to me that it exhausts all such cases -- in particular, even if we have "ideal leaves" (numbering a power of 2, and each equally likely), it's not clear that there is always a question we can ask that divides a given node's leaves exactly in half.

> This time, one guy has two matches which means that there will be eleven girls, but only ten boys.

One thing the show runners do is make subtle alterations that render the logic much harder.

The Traitors has to do lots of these tricks when not playing the Celebrity edition because there's a self-selection for the sort of person who has already played Werewolf/Avalon-type games.

What are some other gameshows one could nerd out about? I'll start:

An obvious one is The Traitors, but I dunno if there's much you can do with this one, as the contestants rarely gain much concrete information.

"Deal or no deal" / "let's make a deal" would have interesting game theory approaches - probably has a lot of parallels with Monty Hall?

Countdown (UK) - solving the maths puzzles on here using integer programming would be cool

> Score Probability

> 0 0.3679

> 1 0.3679

> 2 0.1839

> 3 0.0613

> 4 0.0153

> 5 0.0031

For 0, it's a well-known [1] result, 1/e; I remember a puzzle where people leave their hats and then each picks one back at random.

Looking at the table, it looks like the general formula is 1/(e*n!), i.e. a Poisson distribution with λ = 1. Compare with https://en.wikipedia.org/wiki/Poisson_distribution#Examples_...

Anyway, I'm not sure if my observation helps too much to solve the original problem.
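
For what it's worth, the exact version of the table is "choose the k correct couples, derange the rest", which that Poisson(1) pmf approximates closely. A quick check:

    from math import comb, factorial

    def derangements(m):
        # D(0) = 1, D(1) = 0, D(m) = (m - 1) * (D(m - 1) + D(m - 2))
        a, b = 1, 0
        for i in range(2, m + 1):
            a, b = b, (i - 1) * (a + b)
        return 1 if m == 0 else b

    def p_score(n, k):
        """Exact P(a uniform random guess of n couples scores exactly k)."""
        return comb(n, k) * derangements(n - k) / factorial(n)

    for k in range(6):
        print(k, round(p_score(10, k), 4))  # 0.3679, 0.3679, 0.1839, 0.0613, ...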

[1] at least I know it :)

I think the part that stings most about this article is when he says, "In my research I came across a boardgame called Mastermind."

My lived childhood is old enough to be someone's "research."

  • Sorry :')

    My wife and I bought the game - it's a great turn-based game you can play whilst having trashy reality shows on in the background!

    • No no, I was just blindsided by my own feelings when I read that and thought it was kind of funny. Judging from the inexplicable downvoting that comment received (?!), apparently that wasn't clear. I always enjoyed Mastermind, though there is a line of attack on the solution that can render it a little boring. It still maintains a bit of a Wordle-like pleasure.