Tuesday, March 24, 2020

Pig II: Disproving Pigs


Last time, we talked about the Pig problem – a hypothetical cooperative game in which players don’t have all the information, and it is rational not to share all of their information with the other players, even though communication is a free action. Pig stands for Purposefully Imperfect Information Game, so we’ll call a game that satisfies these rules a Pig.

So, now we have to think about what kinds of things need to happen for a Pig to exist. The main thing we need is for rational player A to decide not to give rational player B information that is exclusive to rational player A. For example, if the game involves hands of cards which are kept secret from the other players, there’s gotta be some scenario where player A doesn’t want to tell player B their cards, despite them all working towards the same goal.

On the surface, this seems impossible to me. Bad decisions in games are made with imperfect information, and the more imperfect the information, the greater the risk of making a bad decision. But that’s not a good enough argument – it implicitly assumes that more knowledge is always better, and that’s exactly the question we’re trying to settle.

So I bounced around some ideas with a friend of mine, and we hit upon something critical. A perfect player can simulate not knowing something. If you’re playing poker and one of your opponents accidentally drops a card, you can’t erase the knowledge of what that card was from your mind, but you can play in a way that pretends you didn’t see it. It’s difficult for humans to do in a lot of cases, because of biases and other human imperfections, but our idealized, perfectly rational player could do it.
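To make that concrete, here’s a minimal sketch. The decision rule, the information labels, and the actions are all made up for illustration – the point is only that a strategy computed as a function of an information set can just as easily be run on a smaller set than what the player actually knows.

```python
def best_action(info: frozenset) -> str:
    """Toy decision rule standing in for a full rational strategy:
    the chosen action depends only on the information set passed in."""
    return "fold" if "dropped_card_is_ace" in info else "call"

# What the player actually knows, including the accidentally seen card...
full_knowledge = frozenset({"own_hand", "dropped_card_is_ace"})

# ...and a simulated information set that omits it.
simulated = frozenset({"own_hand"})

print(best_action(full_knowledge))  # decision using everything seen
print(best_action(simulated))       # decision as if the card was never seen
```

Because the strategy is a pure function of the information set, “forgetting” is just calling it with fewer arguments – no willpower required, which is exactly what humans lack.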

So, a perfect player can simulate knowing some subset of what they actually know. They can calculate the best course of action for a different situation than theirs, and that’s the key point that proves that a Pig cannot exist in this form. The proof is actually pretty simple once you have this point clear.


Proof: Let’s start with a game involving n players.

Case 1: All of the information in the game is known to at least one of the players – so, if they all tell each other everything they know, together they know everything about the game. Nothing new will be revealed later in the game.

Let’s define a game G as the game in which the players share and hide information in whatever way produces the best possible outcome. So, if player A thinks they should share half their information, and player B thinks they should share none of their information to get the best possible game result, they do exactly that. That’s our definition of G, the “optimal” game.

Now, let’s construct a game G’, in which all players immediately share all information. Because players have the ability to simulate playing the game as if they only knew a subset of what they know, they now have a myriad of ways to play. For example, if two players have two pieces of private information each, each player can simulate knowing any subset of the other’s two pieces – four options apiece – giving 4 × 4 = 16 distinct ways the game can be played.
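The count of sixteen can be checked by brute force. A quick sketch (the labels `a1`, `b1`, and so on are placeholders for the pieces of private information):

```python
from itertools import chain, combinations

def subsets(items):
    """Every subset of a collection, i.e. its powerset."""
    s = list(items)
    return list(chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)))

# Two pieces of private information per player (placeholder labels).
a_private = ["a1", "a2"]
b_private = ["b1", "b2"]

# One way to play = one choice of what A simulates knowing of B's
# information, paired with what B simulates knowing of A's.
ways_to_play = [(a_knows_of_b, b_knows_of_a)
                for a_knows_of_b in subsets(b_private)
                for b_knows_of_a in subsets(a_private)]

print(len(ways_to_play))  # 4 subsets x 4 subsets = 16
```

In general, with k pieces of information per player and n players, the count is the number of ways each player can simulate a subset of everyone else’s information – it blows up fast, but the perfectly rational players don’t mind.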

If there is information the players shouldn’t have shared, all players know it and agree on it, because they’re all perfectly rational. So, one or more of these ways to play the game – one combination of known and unknown information – is optimal, and everyone agrees on it. By definition, this is G, the optimal game. And suddenly, we’re done. Anything the players can do by sharing and hiding information, they can also do by sharing all information, deciding what should and shouldn’t be known, and simulating that knowledge. Therefore, the “sharing all information” solution, G’, will always be at least as good as the players’ best ideas about what to share and what to hide.
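The heart of that argument is just the fact that a maximum over all options is at least as large as any single option. A toy illustration with made-up payoff numbers (the payoffs are arbitrary – only the max-vs-member comparison matters):

```python
import random

random.seed(42)

# Hypothetical payoff for each of the 16 knowledge configurations;
# the numbers are made up – only the comparison below matters.
payoffs = [random.random() for _ in range(16)]

# In G', the players may pick ANY configuration to simulate, so the best
# they can achieve is the maximum payoff over all configurations.
g_prime_value = max(payoffs)

# G is whichever single configuration selective sharing would have reached;
# no matter which one it is, G' does at least as well.
print(all(g_prime_value >= g for g in payoffs))  # True
```

Whatever configuration the selectively-sharing players land on is one of the entries in the list, and the maximum can’t be smaller than any entry.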


Case 2: Some information is not known to the players, and may or may not be revealed throughout the course of the game.

This case turns out to be very similar. We can again define G as the optimal game, and G’ as the construction of sharing all knowledge, deciding on the optimal strategy, and simulating knowledge. But in this case, the game isn’t solved immediately. New information will be revealed to the players, who may then have to choose whether to hide or show it. But we can just repeat our process of sharing all information, deciding on a strategy, and simulating knowledge each time information is revealed. Therefore, at every step, G’ has access to every possible combination of players’ knowledge and lack of knowledge, so one of those possibilities must be G.


And that’s it. Pigs cannot exist, as we’ve defined them. But that doesn’t mean we’re done here. On the contrary, I was very relieved when I nailed this proof down, because that means I can finally start tinkering! More on that next time!

Thursday, March 5, 2020

Purposefully Imperfect Information (Pig I)

I don't know much about game theory (just like every other branch of mathematics with a name, haha). Game theory was always an afterthought in the classes I took – tacked on to the end of discrete math, or mentioned briefly in a problem set. But I play a lot of games, and I see interesting mathematics arise from them. 

The kinds of games I’m talking about are very idealized. You’ve got players, who are perfectly logical and rational; you’ve got some sort of win condition, which might be different for each player; and you’ve got some set of allowed actions or scheduled events which offer the players a choice. Games can have perfect information, like chess, where every player knows everything possible about the state of the game, or imperfect information, like poker, where you don’t know what cards your opponent has. There are competitive games, where each player is self-interestedly trying to win, and cooperative games, where players have to work in teams or all together to win. Cooperative games lead to “super-rationality” and other interesting concepts, and that’s what got me thinking about this hypothetical and a strange question.

Hypothetical: Let’s say you have a cooperative game between players with some high level of rationality. Perfectly rational, super-rational, theoretical rationalities, I’m not sure yet. This game features imperfect information, so certain players will know things other players will not. Finally, communication between players is a free action. 

Can you picture the game? Here’s a quick example. You and three others divide a deck of cards into four hands of thirteen cards each, and each player can see their hand. Each player takes a turn passing or playing a card onto the table. The goal is to play all cards, in order, into four separate stacks sorted by suit (like the goal of solitaire) with as few passes as possible. And you’re free to talk to the other players about your hand whenever you feel like it.
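A minimal simulation of that example, under one assumed open-information strategy – the greedy rule and the fixed turn order here are illustrative assumptions, not a claimed optimum: deal four hands, and on each turn the player places the next needed card of some suit if they hold one, otherwise passes.

```python
import random

SUITS = "SHDC"           # spades, hearts, diamonds, clubs
RANKS = range(1, 14)     # Ace = 1 ... King = 13

def play_open_hands(seed=0):
    """Greedy open-information play: each turn, the current player places
    the next needed card of some suit if they hold one, else passes.
    (The greedy rule is an illustrative assumption, not a claimed optimum.)"""
    rng = random.Random(seed)
    deck = [(s, r) for s in SUITS for r in RANKS]
    rng.shuffle(deck)
    hands = [set(deck[i::4]) for i in range(4)]   # four 13-card hands
    next_rank = {s: 1 for s in SUITS}             # next card each stack needs
    passes, played, player = 0, 0, 0
    while played < 52:
        playable = [(s, next_rank[s]) for s in SUITS
                    if next_rank[s] <= 13 and (s, next_rank[s]) in hands[player]]
        if playable:
            card = playable[0]
            hands[player].remove(card)
            next_rank[card[0]] += 1
            played += 1
        else:
            passes += 1
        player = (player + 1) % 4
    return passes

print(play_open_hands())  # total passes for this particular shuffle
```

The game always finishes: some player always holds the next card one of the stacks needs, and that player’s turn comes around within four moves, so progress is guaranteed.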

Simple enough, right? You and your friends are rational, you don’t have all the information, talking is free, and you’re working together to sort the deck. This is the kind of game I’m talking about.

Now, the question is: Is there a game that satisfies these rules in which it’s not optimal to give out all the information you know?

In the example above, the best strategy would just be to show everyone your cards and decide together what to do. In that game, more information always means better decisions, and the players are all identically rational, so everyone would agree on a best way to play, and play it. But is there a game where more information, maybe even perfect information, could make a rational player play worse than they would with limited information?

I don’t actually have an answer. I’ve got some ideas, but no answer just yet. Interesting, isn’t it? Interesting enough for a name, I think. Such a game would have players purposefully hiding information about the game from the very people who are supposed to be on their team... so... Omission? Censoring? Purposefully Imperfect Information? Oooh, I like that one, because Purposefully Imperfect Information Game abbreviates to PIIG, which "abbreviates" to PIG, with some artistic license! Henceforth, this shall be known as the Pig problem! Until next time!