space and games

March 19, 2010

When is an honest vote a cabal equilibrium?

Filed under: Voting — Peter de Blanc @ 2:11 pm

When I posted about cabal equilibria to the election-methods mailing list, Jameson Quinn asked me when honest voting is a cabal equilibrium. I have a partial answer.

Assume we are using an election method that satisfies the Condorcet criterion, and there exists a double Condorcet winner. Then an honest vote is a cabal equilibrium.

What is a double Condorcet winner, you ask? It is a candidate C such that, for every other pair of candidates A and B, there exists a majority of voters who each prefer C over both A and B.

Proof: Suppose that an honest vote is not a cabal equilibrium.

Then there must be some set S of voters who can improve the outcome for themselves by changing their votes. Let B be the new winner. Then no member of S prefers C to B (otherwise the change would have made that member worse off).

In the new strategy-profile, C is no longer a Condorcet winner (or else C would be elected). However, B is still ranked below C by a majority of voters. This is because only members of the set S changed their vote, and by assumption, members of S were already ranking B above C.

Thus, there must be some other candidate A who is no longer ranked below C by a majority (i.e. some members of S changed their vote by moving A above C).

Since no member of S prefers C to B, all voters who prefer C to both A and B are still voting honestly, and so still rank C above A. Since fewer than a majority now rank C above A, the set of voters who prefer C to both A and B must not be a majority.

Thus C is not a double Condorcet winner, contradicting our assumption.
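The quantifier in the double-Condorcet definition is easy to get wrong, so here is a brute-force check in Python (a sketch with my own function name and ballot encoding, not from the post):

```python
from itertools import combinations

def is_double_condorcet_winner(c, ballots):
    """Return True if, for every pair of other candidates (a, b),
    a strict majority of ballots rank c above both a and b.

    ballots: list of rankings, each ordered from most to least preferred.
    """
    others = set(ballots[0]) - {c}
    majority = len(ballots) // 2 + 1
    for a, b in combinations(sorted(others), 2):
        support = sum(1 for r in ballots
                      if r.index(c) < r.index(a) and r.index(c) < r.index(b))
        if support < majority:
            return False
    return True

# Five voters; three rank C first, so C beats every pair of rivals jointly.
ballots = [['C', 'A', 'B']] * 3 + [['A', 'B', 'C'], ['B', 'A', 'C']]
print(is_double_condorcet_winner('C', ballots))  # True
```

Note that a double Condorcet winner is automatically a Condorcet winner, but not vice versa.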

March 14, 2010

The War Club vs. the Ant-Men, part 1

Filed under: General — Peter de Blanc @ 11:19 pm

Paron had been at the academy for far too long. He had switched from geometry to biology, and finally to game theory. When at last he finished his thesis, it was a cause for celebration. The party was far from grand; more than twenty people were packed into Paron’s meager, candle-lit apartment. Like organisms, conversations competed with their kin for limited attention and limited air while sleep-deprived gamers competed to dominate virtual markets. I was in my element.

“The strange thing about Ant War,” I mused, “is the player. We’re willing to anthropomorphize ants to the point of substituting our own decisions for theirs in the game, but ant behavior is completely inhuman.”

“It’s not like normal human behavior,” said my opponent, Nik, “but unusual situations can produce unusual behaviors. We’re fighting a virtual war; humans could also fight a real one.”

This comment drew Paron’s attention. “More unusual than you might think. A war requires extreme cooperation. The ants in a colony are all sisters, and they share 3/4 of their genes because their father is basically a glorified sperm cell. To get humans to cooperate like ants, they’d have to be born of incest.”

“You don’t need common end goals to cooperate,” countered Nik, “only common proximate goals. Not to mention the fact that evolution formed our goal systems imperfectly; we may agree on values aside from inclusive fitness.”

“But a war requires two coalitions,” said Paron. “You’d need everyone in your coalition to share proximate goals – and everyone in the opposing coalition to share an opposing proximate goal. And if both coalitions are composed of humans, what could ever cause that, apart from inclusive fitness?”

I moved my ants, then re-entered the conversation. “What about competing protocols? Maybe one coalition follows one set of rules, and the other coalition follows different rules. Economic productivity would be boosted if everyone followed the same rules, but whichever coalition is forced to convert has to pay a cost.”

“But then you’d just bid on it,” said Paron.

“Why don’t ants bid on land?” asked Nik.

Paron said, “Ants don’t engage in inter-colony trade. They’re not smart enough to trade, so they war instead.”

“Let’s go back to end goals,” said Nik. “I think there are goals that all humans share; we all value love, life, and laughter, and not just for ourselves and our kin. One coalition could be human. The opposing coalition could be inhuman.”

“Like what? Giant ants?” I asked.

Nik said, “Or the broad-shouldered people across the sea.”

January 26, 2010

Summary on Cabal Equilibria and Voting

Filed under: Voting — Peter de Blanc @ 11:29 pm

Last year I introduced cabal equilibria. This post is a summary of my research on cabal equilibria and election methods. Since this is just a short summary, I’m going to skip over any caveats about exact ties in elections.

In any game, a cabal equilibrium is a strategy profile in which no set C of players can simultaneously change strategies in such a way that at least one member of C benefits and no member of C is worse off. Every cabal equilibrium is a Nash equilibrium, but the reverse is not true. For example, in the prisoner’s dilemma, the Nash equilibrium is for both players to defect. This is not a cabal equilibrium, because if both players changed their strategy to cooperate, then both players would benefit. Thus, the prisoner’s dilemma has no cabal equilibria.

The cabal equilibria in elections are particularly interesting, and are related to the notion of a Condorcet winner. In a ranked voting method, a Condorcet winner is a candidate A such that, for any other candidate B, more than half of the voters ranked A higher than B. Of course, the existence and identity of the Condorcet winner (as defined above) depends on how the voters actually vote, not on their true preferences, so for our discussion it’s important to define a pure Condorcet winner as any candidate A such that, for any other candidate B, more than half of the voters actually prefer A to B.
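For concreteness, here is one way to compute a Condorcet winner from a ballot profile (a sketch with my own encoding; it returns None when no such candidate exists, which happens in a Condorcet cycle):

```python
def condorcet_winner(ballots):
    """Return the candidate ranked above every rival by more than half
    of the ballots, or None if no such candidate exists.

    ballots: list of rankings, each ordered from most to least preferred.
    """
    candidates = ballots[0]
    n = len(ballots)
    for a in candidates:
        beats_all = all(
            sum(1 for r in ballots if r.index(a) < r.index(b)) > n / 2
            for b in candidates if b != a
        )
        if beats_all:
            return a
    return None

# A Condorcet cycle: each candidate loses one pairwise contest.
cycle = [['A', 'B', 'C'], ['B', 'C', 'A'], ['C', 'A', 'B']]
print(condorcet_winner(cycle))  # None
```

Run on honest preferences, the same function identifies the pure Condorcet winner; run on actual ballots, it identifies the Condorcet winner in the first sense.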

Before I can give my first result, I have to define an election method criterion. An election method is weakly majority-controllable if any majority of voters can cooperate to dictate the outcome of the election, assuming that they already know how the remaining voters are voting. Plurality voting, range voting, all Condorcet voting methods, instant-runoff voting, and the Borda count are all weakly majority-controllable.

My first result is that in any weakly majority-controllable election method, a cabal equilibrium will always elect a pure Condorcet winner.

This is quite easy to see: if an election selects some candidate A who is not a pure Condorcet winner, then there must be some other candidate B whom a majority prefers to A. Since the election is weakly majority-controllable, that majority could change their votes to force B to be elected. That majority has just benefited by changing strategies, so the original election must not have been a cabal equilibrium. Thus any election which does not elect a pure Condorcet winner is not a cabal equilibrium; by contraposition, any cabal equilibrium elects a pure Condorcet winner.

Now let’s define a strongly majority-controllable election method as one in which a majority can control the outcome of the election even when they have to choose their strategies first, and even if some members of the majority betray it afterwards (as long as what remains is still a majority). The Borda count is not strongly majority-controllable, but all of the other election methods I mentioned above are.

My second result is that in any strongly majority-controllable election method, the existence of a pure Condorcet winner guarantees the existence of a cabal equilibrium, so this is a partial converse of the first result.

To see this, suppose A is a pure Condorcet winner. Then there is some way that the electorate could vote that guarantees that A will be elected, even if some minority were to change strategies.

Now suppose that there exists a set C of voters that can change their strategies in a way that is beneficial for them. In order for there to be any benefit, the outcome of the election must be changed. No minority of voters has the power to change the outcome, so C must be a majority, and since they consider the change beneficial, none of them may prefer A to the new election outcome. But then A is not a pure Condorcet winner – contradiction.

The Borda count is not strongly majority-controllable, so the existence of a pure Condorcet winner does not guarantee that there is a cabal equilibrium. Here’s an example:

The preferences of the voters are:
5: A > B > C
4: C > B > A
A is the pure Condorcet winner, but the majority cannot simultaneously guard against B and C. For example, if 3 of the majority vote ABC and 2 vote ACB, then the minority can all vote BCA to elect B. Thus there is no cabal equilibrium.
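The arithmetic in this example is easy to verify with a quick Borda tally (assuming the standard scoring of 2, 1, 0 points for a three-candidate ballot):

```python
def borda_scores(ballots):
    """With k candidates, each ballot awards k-1 points to its first
    choice, k-2 to its second, and so on down to 0 for its last."""
    scores = {}
    for ranking in ballots:
        k = len(ranking)
        for position, candidate in enumerate(ranking):
            scores[candidate] = scores.get(candidate, 0) + (k - 1 - position)
    return scores

# The majority splits its second choices; the minority piles onto B.
majority = [['A', 'B', 'C']] * 3 + [['A', 'C', 'B']] * 2
minority = [['B', 'C', 'A']] * 4
print(borda_scores(majority + minority))  # {'A': 10, 'B': 11, 'C': 6}
```

Whichever way the five majority voters split their second choices, the four minority voters can concentrate their points on the majority's more popular second choice and overtake A.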

In Hay voting, there are no cabal equilibria except in some degenerate cases. Since every cabal equilibrium is a Nash equilibrium, the only thing that might be a cabal equilibrium is the Nash equilibrium in which each voter votes his or her true preferences. If there exist two players whose utility functions are not scalar multiples of each other, then they can cooperate by each transferring some voting mass between a pair of candidates between which they are relatively indifferent.

Similarly in random ballot, the Nash equilibrium is for each voter to vote for his or her favorite candidate. If there’s a pair of voters who most prefer A and B respectively, but who would choose some compromise C over a coin toss between A and B, then they can cooperate by switching their votes to C.
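A tiny numerical illustration of that compromise (the utility numbers are my own invention): with random ballot among just these two voters, voting A and B yields a fair coin toss, which both voters like less than electing C outright.

```python
# Hypothetical utilities for each voter over the three candidates.
u_alf  = {'A': 1.0, 'B': 0.0, 'C': 0.7}   # favorite: A
u_beth = {'A': 0.0, 'B': 1.0, 'C': 0.7}   # favorite: B

def random_ballot_eu(utilities, votes):
    """Expected utility under random ballot: one vote is drawn
    uniformly at random and its candidate wins."""
    return sum(utilities[v] for v in votes) / len(votes)

for u in (u_alf, u_beth):
    print(random_ballot_eu(u, ['A', 'B']),    # coin toss between favorites
          random_ballot_eu(u, ['C', 'C']))    # both switch to the compromise
```

Both voters get expected utility 0.5 from the coin toss but 0.7 from the compromise, so switching together is a strict improvement for each.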

In both of the above examples, two players can cooperate to get a result that is better than the Nash equilibrium, but then each player has an incentive to betray the other and revert to his or her original vote. This mirrors the Prisoner’s Dilemma, which also has no cabal equilibrium.

I’d like to find an election method where cabal equilibria are likely to exist even with a very large number of candidates, but such a method could not be weakly majority-controllable because the probability of a pure Condorcet winner vanishes as the number of candidates increases (assuming random preferences). This is what I’m thinking about now.

January 19, 2010

RPS Equilibrium Conundrum

Filed under: General — Peter de Blanc @ 10:05 pm

Clearly, it’s absurd that paper beats rock, but if rock beat paper then the game would become pointless.

Suppose we changed the rules such that paper only scores 1/2 point against rock. A full victory (rock against scissors or scissors against paper) scores 1 point, and a loss scores -1 point. Draws score 0 points. What mixed strategy is best in this game?

I found this equilibrium: p(rock) = p(paper) = 2/5, and p(scissors) = 1/5. If the opponent plays this strategy, then anything we do has an expected utility of 0. If both players use this strategy, then neither player has an incentive to change, so it’s an equilibrium.
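That claim can be checked exactly with rational arithmetic (a quick sketch; I'm assuming the symmetric convention that rock loses 1/2 point when paper wins):

```python
from fractions import Fraction as F

# Row player's payoffs in the modified game:
# rock beats scissors (+1), scissors beats paper (+1), paper beats rock (+1/2);
# losses are the negatives, draws are 0.
payoff = {
    ('rock', 'rock'): F(0),     ('rock', 'paper'): F(-1, 2), ('rock', 'scissors'): F(1),
    ('paper', 'rock'): F(1, 2), ('paper', 'paper'): F(0),    ('paper', 'scissors'): F(-1),
    ('scissors', 'rock'): F(-1), ('scissors', 'paper'): F(1), ('scissors', 'scissors'): F(0),
}

mix = {'rock': F(2, 5), 'paper': F(2, 5), 'scissors': F(1, 5)}

# Against this mix, every pure reply has expected utility exactly 0,
# so no deviation helps and the mix is a symmetric Nash equilibrium.
for move in mix:
    eu = sum(payoff[(move, reply)] * p for reply, p in mix.items())
    print(move, eu)  # each line ends in 0
```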

This result seems strange to me. The rule change makes paper worse, and yet in the resulting equilibrium, we increase the probability of throwing paper. Who wants to explain this?

December 28, 2009

Germs, Selection, and Disease

Filed under: General — Peter de Blanc @ 10:35 am

The germ that infects you was selected for its ability to spread across hosts, but the germ population in your body is being selected for its ability to spread within your body. The latter is more destructive than the former, so the germ population in your body becomes more destructive over time. Thus for any infectious disease, we should expect the period of maximal transmissibility to precede the period of maximal suffering.

November 4, 2009

Go proverbs: “A rich man should not pick quarrels.”

Filed under: General, Go — Peter de Blanc @ 1:19 pm

Go players have hundreds of proverbs — pithy sentences that convey important heuristics. It is not enough to simply read proverbs; you must study them at length to unfold them into procedural knowledge.

Most proverbs are particular to Go (e.g. six die but eight live), but some generalize to other adversarial situations, and a few proverbs contain important lessons about rationality.

One of my favorite proverbs states that a rich man should not pick quarrels. Go, in its most common formulations, is a game of satisficing. The player with more points wins the game, and winning is enough; there is no extra reward for winning by a large margin. The proverb says that if you are currently winning (i.e. you are a rich man), then you should not do things (such as picking quarrels) that make the outcome more random. By decreasing the variance in the probability distribution for your final score, you increase the probability that you will hold onto enough points to win. Anything that makes the game simpler and more predictable is good for you.

We can see this in Chess (the winning player should seek to trade pieces) and in epee fencing (the winning player should seek double-touches).

If, on the other hand, you are a poor man, then you should pick quarrels. There’s a good example of this in Indiana Jones and the Temple of Doom. In one scene, Indy is in the middle of a rope bridge, and swordsmen are approaching from either side, so Indy cuts the bridge.

If you are winning, simplify. If you are losing, complexify.

October 16, 2009

Shock Levels are Point Estimates

Filed under: General — Peter de Blanc @ 10:50 pm

Eliezer Yudkowsky (writing in 1999) famously categorized beliefs about the future into discrete “shock levels.” Michael Anissimov later wrote a nice introduction to future shock levels. Higher shock levels correspond to belief in more powerful and radical technologies, and are considered more correct than lower shock levels. Careful thinking and exposure to ideas will tend to increase one’s shock level.

If this is really true, and I think it is, shock levels are an example of human insanity. If you ask me to estimate some quantity, and track how my estimates change over time, you should expect it to look like a random walk if I’m being rational. Certainly I can’t expect that my estimate will go up in the future. And yet shock levels mostly go up, not down.

I think this is because people model the future with point estimates rather than probability distributions. If, when we try to picture the future, we actually imagine the single outcome which seems most likely, then our extrapolation will include every technology to which we assign a probability above 50%, and none of those that we assign a probability below 50%. Since most possible ideas will fail, an ignorant futurist should assign probabilities well below 50% to most future technologies. So an ignorant futurist’s point estimate of the future will indeed be much less technologically advanced than that of a more knowledgeable futurist.

For example, suppose we are considering four possible future technologies: molecular manufacturing (MM), faster-than-light travel (FTL), psychic powers (psi), and perpetual motion (PM). If we ask how likely these are to be developed in the next 100 years, the ignorant futurist might assign a 20% probability to each. A more knowledgeable futurist might assign a 70% probability to MM, 8% for FTL, and 1% for psi and PM. If we ask them to imagine a plethora of possible futures, their extrapolations might be, on average, equally radical and shocking. But if they instead generate point estimates, the ignorant futurist would round the 20% probabilities down to 0, and say that no new technologies will be invented. The knowledgeable futurist would say that we’ll have MM, but no FTL, psi, or PM. And then we call the ignorant person “shock level 0” and the knowledgeable person “shock level 3.”
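The contrast between averaging over many imagined futures and thresholding at 50% can be made concrete (a sketch using the made-up probabilities above):

```python
ignorant      = {'MM': 0.20, 'FTL': 0.20, 'psi': 0.20, 'PM': 0.20}
knowledgeable = {'MM': 0.70, 'FTL': 0.08, 'psi': 0.01, 'PM': 0.01}

def expected_count(probs):
    """Average number of realized technologies, across all imagined futures."""
    return sum(probs.values())

def point_estimate(probs):
    """The single most-likely future: keep only technologies assigned
    more than 50% probability, drop the rest."""
    return [tech for tech, p in probs.items() if p > 0.5]

# Both futurists expect the same amount of technology on average...
print(round(expected_count(ignorant), 2), round(expected_count(knowledgeable), 2))
# ...but their point estimates differ wildly.
print(point_estimate(ignorant), point_estimate(knowledgeable))  # [] ['MM']
```

Both distributions have an expected count of 0.8 technologies, yet the thresholded point estimates are an empty future versus a future with molecular manufacturing.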

So future shock levels exist because people imagine a single future instead of a plethora of futures. If futurists imagined a plethora of futures, then ignorant futurists would assign a low probability to many possible technologies, but would also assign a relatively high probability to many impossible technologies. There would be no simple relationship between a futurist’s knowledge level and his or her expectation of the overall amount of future technology, although more knowledgeable futurists would be better able to predict which specific technologies will exist. Shock levels would disappear.

I do think that shock level 4 is an exception. SL4 has to do with the shocking implications of a single powerful technology (superhuman intelligence), rather than a sum of many technologies.

September 22, 2009

Vote matching

Filed under: General — Peter de Blanc @ 6:11 pm

In light of my previous post, I’d like to suggest a vote-matching scheme. Let’s start with an example:


Suppose there’s a presidential election between Kodos, Kang, and Washington. Kodos and Kang seem to be the leading candidates.

Alf and Beth are trying to decide who to vote for. They both like Washington, but they don’t want to waste their votes. Alf thinks Kodos is the “lesser of two evils,” while Beth prefers Kang.

If Alf votes for Kodos and Beth votes for Kang, as they are inclined to do, then their two votes will “cancel out,” at least in the race between Kodos and Kang. This means that if they both agree to switch their votes to Washington, the balance of votes between Kodos and Kang will not change. Washington gets two extra votes!
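In tally terms (a trivial sketch, with made-up counts for the other voters, but it makes the “cancel out” argument explicit):

```python
from collections import Counter

# Everyone else's votes are fixed; only Alf and Beth change theirs.
others = Counter({'Kodos': 40, 'Kang': 40, 'Washington': 10})

before = others + Counter(['Kodos', 'Kang'])             # Alf: Kodos, Beth: Kang
after  = others + Counter(['Washington', 'Washington'])  # both switch

# The Kodos-vs-Kang margin is unchanged; Washington gains two votes.
print(before['Kodos'] - before['Kang'], after['Kodos'] - after['Kang'])  # 0 0
print(after['Washington'] - before['Washington'])  # 2
```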

This sort of vote-matching should be able to benefit some third-party candidates in real life, too. The key requirement is that voters who prefer the third-party candidate disagree about which of the two front-runners is worse. In that case, two voters can promise to vote for the third-party candidate instead of their “lesser of two evils.” If this sort of vote-matching scheme took off, I think we could see a big change in politics.

September 21, 2009

Will conditional commitments change politics?

Filed under: Voting — Peter de Blanc @ 12:44 pm

If you think that Condorcet voting would be a good thing, then you should also be in favor of voter collusion, such as Nader Trading, or political analogues of Facebook groups like “Once we reach 4,096 members, everyone will donate $256 to SingInst.org” or “1 million people, $100 million to defeat aging.” When groups can collectively change strategies to benefit group members, the global situation will start to look like a cabal equilibrium, and cabal equilibria in plurality or approval votes always elect a Condorcet winner. Of course, this only works to the extent that people follow through on their promises.

September 11, 2009

Base Rates: A Cautionary Tale

Filed under: General — Peter de Blanc @ 3:01 pm

The other day, I was reading a Wikipedia article related to a topic we had been discussing in one of my classes. One of the statements in the second section confused me, and after a bit of thought I was convinced that it was indeed a mistake. Looking at the history, I noticed that the mistake was the result of an edit made the day before.

Naturally, I reverted the article to the previous version. Looking at the history again, I noticed that the mistake had come from someone with an IP address very similar to my own. A quick search revealed that this person was in Philadelphia.

I decided that I was about 60% sure that it was someone in my class. Immediately I singled out one particular person with 30% confidence.

There are about 1.5 million people in Philadelphia. There are about 15 people in my class. It would take a likelihood ratio of about 100,000 to pick out my class, and a likelihood ratio of about 1.5 million to pick out one person.

In class the next day, when I asked if anyone had edited Wikipedia recently, they all said no.

And that’s how I lost 1.3 bits from my Bayes score.
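For the record, the arithmetic behind both numbers (log score in bits, plus the likelihood ratio the evidence would have needed to supply):

```python
import math

def log_score_bits(p_assigned, outcome_true):
    """Logarithmic score: log2 of the probability assigned to the
    actual outcome. Negative values are bits lost."""
    p_actual = p_assigned if outcome_true else 1 - p_assigned
    return math.log2(p_actual)

# 60% confidence that the editor was in the class; nobody admitted it.
print(log_score_bits(0.60, outcome_true=False))  # log2(0.4), about -1.32

# Prior odds: ~1.5 million Philadelphians, ~15 classmates.
print(1_500_000 / 15)  # likelihood ratio needed to single out the class
```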

