In light of my previous post, I’d like to suggest a vote-matching scheme. Let’s start with an example:
Suppose there’s a presidential election between Kodos, Kang, and Washington. Kodos and Kang seem to be the leading candidates.
Alf and Beth are trying to decide who to vote for. They both like Washington, but they don’t want to waste their votes. Alf thinks Kodos is the “lesser of two evils,” while Beth prefers Kang.
If Alf votes for Kodos and Beth votes for Kang, as they are inclined to do, then their two votes will “cancel out,” at least in the race between Kodos and Kang. This means that if they both agree to switch their votes to Washington, the balance of votes between Kodos and Kang will not change. Washington gets two extra votes!
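The arithmetic above can be sketched in a few lines. The starting tallies here are made-up numbers for illustration; only the two swapped votes matter:

```python
# Hypothetical vote tallies before Alf and Beth's vote-matching deal.
before = {"Kodos": 100, "Kang": 100, "Washington": 5}

# After the deal: Alf switches away from Kodos, Beth away from Kang,
# and both vote for Washington instead.
after = dict(before)
after["Kodos"] -= 1
after["Kang"] -= 1
after["Washington"] += 2

# The Kodos-Kang margin is unchanged...
print(before["Kodos"] - before["Kang"])  # 0
print(after["Kodos"] - after["Kang"])    # 0
# ...but Washington gains two votes.
print(after["Washington"] - before["Washington"])  # 2
```

The key invariant is that the switch subtracts one vote from each front-runner, so whichever of Kodos and Kang was ahead stays ahead by the same margin.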
This sort of vote-matching should be able to benefit some third-party candidates in real life, too. The key requirement is that voters who prefer the third-party candidate disagree about which of the two front-runners is worse. In that case, two voters can promise to vote for the third-party candidate instead of their “lesser of two evils.” If this sort of vote-matching scheme took off, I think we could see a big change in politics.
If you think that Condorcet voting would be a good thing, then you should also be in favor of voter collusion, such as Nader Trading, or political analogues of Facebook groups like “Once we reach 4,096 members, everyone will donate $256 to SingInst.org” or “1 million people, $100 million to defeat aging.” When groups can collectively change strategies to benefit group members, the global situation will start to look like a cabal equilibrium, and cabal equilibria in plurality or approval votes always elect a Condorcet winner. Of course, this only works to the extent that people follow through on their promises.
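A Condorcet winner is a candidate who beats every rival in head-to-head comparisons. As a minimal sketch (the function and ballot data are my own, not from any voting library), here is how one could check for such a winner given ranked ballots:

```python
def condorcet_winner(ballots):
    """Return the candidate who beats every other head-to-head, or None.

    `ballots` is a list of rankings, each ordered from most to least
    preferred; all ballots must rank the same candidates.
    """
    candidates = ballots[0]
    for a in candidates:
        if all(
            # a beats c head-to-head if a strict majority ranks a above c
            sum(b.index(a) < b.index(c) for b in ballots) > len(ballots) / 2
            for c in candidates
            if c != a
        ):
            return a
    return None


# Alf and Beth both rank Washington first but disagree on Kodos vs. Kang:
ballots = [
    ["Washington", "Kodos", "Kang"],  # Alf
    ["Washington", "Kang", "Kodos"],  # Beth
]
print(condorcet_winner(ballots))  # Washington
```

Note that even though Alf and Beth split on the Kodos-versus-Kang comparison, Washington beats each of them 2-0, which is exactly why their vote-matching deal is safe for both.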
The other day, I was reading a Wikipedia article related to a topic we had been discussing in one of my classes. One of the statements in the second section confused me, and after a bit of thought I was convinced that it was indeed a mistake. Looking at the history, I noticed that this mistake was the result of an edit that had been made the day before.
Naturally, I reverted the article to the previous version. Looking at the history again, I noticed that the mistake had come from someone with an IP address very similar to my own. A quick search revealed that this person was in Philadelphia.
I decided that I was about 60% sure the editor was someone in my class, and I immediately singled out one particular person with 30% confidence.
There are about 1.5 million people in Philadelphia. There are about 15 people in my class. It would take a likelihood ratio of about 100,000 to pick out my class, and a likelihood ratio of about 1.5 million to pick out one person.
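Those ratios can be checked directly; converting them to bits (the base-2 log of the likelihood ratio) is my own framing, but it makes them easier to compare with the Bayes-score bookkeeping later:

```python
import math

population = 1_500_000  # roughly the population of Philadelphia
class_size = 15         # people in my class

# Likelihood ratio needed to narrow Philadelphia down to the class,
# and down to a single person (assuming a uniform prior over residents):
to_class = population / class_size
to_person = population / 1
print(to_class)   # 100000.0
print(to_person)  # 1500000.0

# The same ratios expressed as bits of evidence:
print(round(math.log2(to_class), 1))   # 16.6
print(round(math.log2(to_person), 1))  # 20.5
```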
In class the next day, when I asked if anyone had edited Wikipedia recently, they all said no.
And that’s how I lost 1.3 bits from my Bayes score.
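The 1.3-bit figure follows from the 60% estimate above. Taking the classmates at their word, the outcome that occurred ("not my class") was the one assigned 40%, and the logarithmic (Bayes) score charges the negative log of the probability assigned to what actually happened:

```python
import math

p_class = 0.60          # prior that the editor was someone in my class
p_not_class = 1 - p_class

# Everyone denied editing, so the 40% outcome is the one that happened.
# The Bayes (log) score drops by -log2 of the probability assigned to it:
loss_bits = -math.log2(p_not_class)
print(round(loss_bits, 1))  # 1.3
```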