space and games

April 6, 2010

Breaking the Butterfly Effect: The “Dobbins Selector” Algorithm

Filed under: Useful Statistics — Peter de Blanc @ 3:13 pm

Challenge: Design an algorithm that:

• takes a discrete probability distribution and a source of random bits, and returns a sample from the probability distribution.
• guarantees that if running the algorithm on distribution P1 with random bits S yields X1, and running it on (P2, S) yields X2, with X2 ≠ X1, then P2(X2) / P2(X1) > P1(X2) / P1(X1).

I ran into this problem in the context of a computer game I’m developing. Rather than storing every detail of the game world in the map file, I store a random seed that is used to generate most map features. I found that every time I tweaked certain parameters in my code, old map files became unrecognizable.

Thus, we would like a random sampling algorithm that is unlikely to change its output when we make a small change to the underlying probability distribution (assuming we keep our original random seed). In particular, if we increase the probability of one outcome and renormalize the others, then the only way the outcome should change is if it changes to the outcome whose probability we increased.

If you want to solve the problem yourself, this would be a good time to stop reading.

I shared this problem with Michael Dobbins and he solved it for me.

Mike’s method is to associate each face of a simplex with one of the outcomes. We pick a random point inside the simplex, measure the distance from that point to each face, and divide each distance by the probability of the associated outcome. The lowest number wins.
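Mike’s construction can be sketched in a few lines of Python (a hypothetical helper, assuming the standard simplex and strictly positive probabilities). In the standard simplex, the distance from an interior point to the face opposite vertex i is proportional to the i-th coordinate, and a uniform point in the simplex can be generated by normalizing i.i.d. exponential variates; since the normalizing factor is shared by every coordinate, it cancels out of the comparison and we can work with the raw exponentials directly.

```python
import random

def dobbins_select(probs, seed):
    """Sample an index from probs using a fixed random seed, such that
    a small change to the distribution rarely changes the outcome.

    Sketch of the simplex method: pick a random point in the simplex,
    divide each coordinate (proportional to the distance to the face
    for outcome i) by probs[i], and return the index of the minimum.
    Assumes all probabilities are strictly positive.
    """
    rng = random.Random(seed)
    # Normalized i.i.d. exponentials give a uniform point in the
    # simplex; the common normalizer cancels in the argmin below.
    e = [rng.expovariate(1.0) for _ in probs]
    return min(range(len(probs)), key=lambda i: e[i] / probs[i])
```

This samples correctly because e[i] / probs[i] is exponential with rate probs[i], and the probability that the exponential with rate p_j is the minimum of independent exponentials is p_j divided by the sum of the rates. If we boost one outcome's probability (renormalizing the rest) while keeping the seed, only that outcome's ratio decreases, so the winner either stays put or switches to the boosted outcome, which is exactly the property above.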

March 31, 2007

Laplace’s Rule of Succession

Filed under: Useful Statistics — Peter de Blanc @ 1:08 am

Laplace’s Rule of Succession is really easy.

Say we have a 2-sided coin that comes up heads with some frequency f, and we have a uniform distribution on [0, 1] for the value of f. Then after n coin-flips, k of which are heads, the Rule of Succession gives:

P(heads | k, n) = (k + 1) / (n + 2)

as our probability for a heads on the next flip.
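The rule is a one-liner in code. Here is a minimal sketch (the function name is mine) using exact fractions, so the prior case of no data, k = 0 and n = 0, visibly gives 1/2:

```python
from fractions import Fraction

def rule_of_succession(k, n):
    """Posterior predictive probability of heads after observing
    k heads in n flips, under a uniform prior on the frequency f.
    """
    return Fraction(k + 1, n + 2)
```

For example, rule_of_succession(0, 0) gives 1/2, and rule_of_succession(7, 10) gives 2/3, compared with the Maximum Likelihood estimate of 7/10.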

I decided to bring this up because I see people using Maximum Likelihood for this sort of thing all the time, which is pretty silly given how easy it is to do it the Bayesian way.