Tuesday, March 24, 2020

Pig II: Disproving Pigs


Last time, we talked about the Pig problem – a hypothetical cooperative game in which players don’t have all the information, and in which it is rational for a player to withhold information from the other players, even though sharing it is a free action. Pig stands for Purposefully Imperfect Information Game, so we’ll call a game that satisfies these rules a Pig.

So, now we have to think about what needs to happen for a Pig to exist. The main thing we need is for rational player A to decide not to give rational player B some piece of information that only A has. For example, if the game involves hands of cards that are kept secret from the other players, there’s gotta be some scenario where player A doesn’t want to tell player B their cards, despite everyone working towards the same goal.

On the surface, this seems impossible to me. Bad decisions in games come from imperfect information, and the less information you have, the greater the risk of making a bad decision. But that’s not a good enough argument – it implicitly assumes that more knowledge is always better, and that’s exactly what we’re trying to prove.

So I bounced around some ideas with a friend of mine, and we hit upon something critical. A perfect player can simulate not knowing something. If you’re playing poker and one of your opponents accidentally drops a card, you can’t erase the knowledge of what that card was from your mind, but you can play in a way that pretends you didn’t see it. That’s difficult for humans in a lot of cases, because of biases and other human imperfections, but our idealized, perfectly rational player could do it.

So, a perfect player can simulate knowing only some subset of what they actually know. They can calculate the best course of action for a situation with less information than their own, and that’s the key point that proves a Pig cannot exist in this form. The proof is actually pretty simple once this point is clear.
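To make “playing as if you know less” concrete, here’s a minimal sketch in Python. Everything in it – the way knowledge is modelled as a set of facts, the toy payoff numbers, the function names – is invented for illustration, not part of the game definition above.

    def best_action(known_facts, actions, payoff):
        """Pick the action with the highest payoff, judged using only known_facts."""
        return max(actions, key=lambda a: payoff(a, frozenset(known_facts)))

    def play_as_if(actual_knowledge, pretend_subset, actions, payoff):
        """A rational player can act as though they only knew pretend_subset,
        even though they actually know all of actual_knowledge."""
        assert pretend_subset <= actual_knowledge
        return best_action(pretend_subset, actions, payoff)

    # Toy poker moment: the player saw an opponent's dropped card,
    # but chooses their action as if they hadn't.
    actions = ["fold", "call", "raise"]
    full_knowledge = {"my hand: pair of kings", "dropped card: ace of spades"}

    def toy_payoff(action, known):
        # Made-up numbers, just so the example runs.
        if "dropped card: ace of spades" in known:
            return {"fold": 0, "call": 1, "raise": 3}[action]
        return {"fold": 0, "call": 2, "raise": 1}[action]

    print(play_as_if(full_knowledge, {"my hand: pair of kings"}, actions, toy_payoff))
    # prints "call" – the choice the player would make without the extra knowledge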


Proof: Let’s start with a game involving n players.

Case 1: All of the information in the game is known to at least one of the players – so, if they tell each other everything they know, together they know everything about the game. Nothing new will be revealed later in the game.

Let’s define G as the game in which the players share and hide information in exactly the way that produces the best possible outcome. So, if player A should share half their information, and player B should share none of theirs, to get the best possible result, that’s exactly what they do. That’s our definition of G, the “optimal” game.

Now, let’s construct a game G’, in which all players immediately share all information. Because the players can simulate playing as if they only knew a subset of what they know, they now have a myriad of ways to play. For example, if there are two pieces of knowledge each for two players, they can play in sixteen different ways – each player always knows their own information, but each can simulate knowing any of the four subsets of the other’s two pieces, and four times four is sixteen.
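Here’s a quick way to see where the sixteen comes from, as a little Python enumeration. The fact names are placeholders; the only real content is the counting.

    from itertools import combinations, product

    def subsets(facts):
        """All subsets of a collection of facts."""
        facts = list(facts)
        return [frozenset(c) for r in range(len(facts) + 1)
                for c in combinations(facts, r)]

    a_facts = {"A1", "A2"}   # player A's two private pieces of information
    b_facts = {"B1", "B2"}   # player B's two private pieces of information

    # After sharing everything, each player can simulate knowing any subset
    # of the other player's facts (they always know their own).
    profiles = list(product(subsets(b_facts),    # what A pretends to know about B
                            subsets(a_facts)))   # what B pretends to know about A

    print(len(profiles))   # 4 * 4 = 16 ways to play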

If there is information the players shouldn’t have shared, all players know it and agree on it, because they’re all perfectly rational. So, one or more of these ways to play the game – one combination of simulated known and unknown information – is optimal, and everyone agrees on which one. In particular, one of them reproduces exactly the pattern of sharing and hiding that defines G. And suddenly, we’re done. Anything the players can do by selectively sharing and hiding information, they can also do by sharing all information, agreeing on what should count as shared and hidden, and simulating that knowledge. Therefore, the “sharing all information” game, G’, will always turn out at least as well as the players’ best ideas about what to share and what to hide.
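Put more compactly (the value notation here is mine, not something defined in the original setup): write value(X) for how well the players do when they play according to plan X, and let S range over the simulation profiles available once everything has been shared. Then

    value(G’) = max over all profiles S of value(play as if knowing only S) ≥ value(G),

because the profile that mirrors exactly what was shared and hidden in G is one of the options inside that max.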


Case 2: Some information is not known to any of the players, and may or may not be revealed over the course of the game.

This case turns out to be very similar. We can again define G as the optimal game, and G’ as the construction of sharing all knowledge, deciding on the optimal strategy, and simulating knowledge. But in this case, the game isn’t settled by one round of sharing. New information will be revealed to the players, who may then have to choose whether to hide or show it. But we can just repeat our process – share all information, decide on a strategy, simulate knowledge – every time something new is revealed. So at every step, G’ has access to every possible combination of the players’ knowledge and lack of knowledge, and one of those combinations matches how G would be played from that point on.
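As a sketch of how Case 2’s loop looks in code – again, every function and data structure here is hypothetical, just a stand-in for whatever the actual game’s rules are:

    def play_with_full_sharing(initial_info, is_over, best_profile, act, reveal):
        """Pool all information, agree on a simulation profile, play under it,
        and repeat whenever the game reveals something new."""
        shared = set(initial_info)
        while not is_over(shared):
            profile = best_profile(shared)   # all rational players agree on this
            act(profile)                     # everyone plays as if knowing only their slice
            shared |= reveal()               # new information from the game itself

    # Tiny toy run: three rounds, each revealing one new card.
    deck = iter([{"card 1"}, {"card 2"}, {"card 3"}])
    rounds = []

    play_with_full_sharing(
        initial_info={"starting hands"},
        is_over=lambda shared: len(rounds) >= 3,
        best_profile=lambda shared: frozenset(shared),   # placeholder "strategy"
        act=lambda profile: rounds.append(profile),
        reveal=lambda: next(deck),
    )
    print(rounds)   # three rounds, each played with a bit more shared knowledge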


And that’s it. Pigs cannot exist, as we’ve defined them. But that doesn’t mean we’re done here. On the contrary, I was very relieved when I nailed this proof down, because that means I can finally start tinkering! More on that next time!