8,000-Gon
Blake asked me for an 8,000-gon. Here it is.
Note: I’m not sure if the explanations below are very clear. Feedback would be appreciated.
Let’s define one’s “charisma” as the number of friends one has. If Fred has 6 friends, then his charisma is 6; if Susan has 81 friends, then her charisma is 81.
Now if we select a person at random (let’s call ver Bijecto the Bland), and then randomly select one of vis friends (let’s call ver Functor the Magnificent), then we should expect at least as high a charisma for Functor as for Bijecto, simply as a result of the selection process.
Intuitively, one may reason as follows: Bijecto was selected at random, so we should expect ver to possess an average charisma. Someone with very many friends is more likely to be friends with any particular person, so the higher your charisma, the more likely you are to have Bijecto as a friend. Thus, if we select one of Bijecto’s friends at random (say, by looking at vis cell phone), then we are quite likely to select a charismatic person. We end up selecting Functor, and so we might guess that Functor is charismatic.
Confused? Let’s work an example.
Here’s a graph containing seven vertices. Let’s think of the vertices as representing people, and the edges as representing friendships. Thus, two vertices are “friends” if there is an edge connecting them.
Now let’s consider these two processes:

Process 1: Select a vertex at random, and return its charisma.

Process 2: Select a vertex at random, then select one of its friends at random, and return that friend’s charisma.
The theory is that the expected value of process 2 will be at least as high as that of process 1. Let’s see if it works out.
We can use a table to figure out the expected charisma of a random vertex. The first column lists each vertex. The second column indicates the probability that this vertex will be selected. Since each vertex is equally likely to be selected, this is 1/7 in all cases. The third column lists the charisma of each vertex.
The last column, labeled “Contribution,” lists how much each vertex contributes to the expected value of the process. This value is simply the product of the two previous columns. Thus, summing the “contribution” column will yield the expected value.
Vertex   Chance of Selecting   Charisma   Contribution
1        1/7                   2          2/7
2        1/7                   2          2/7
3        1/7                   2          2/7
4        1/7                   4          4/7
5        1/7                   3          3/7
6        1/7                   3          3/7
7        1/7                   2          2/7
Total                                     18/7
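The bottom line of the table can be double-checked with a short script. This is just a sketch: the vertex and edge lists are transcribed from the graph (the labeled version, G_{0}, is defined near the end of the article), and the names are mine.

```python
from fractions import Fraction

# Vertex and edge lists transcribed from the seven-vertex example graph.
V = [1, 2, 3, 4, 5, 6, 7]
E = [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5), (4, 6), (5, 6), (5, 7), (6, 7)]

def charisma(x):
    """Number of friends of x, i.e. the number of edges containing x."""
    return sum(1 for e in E if x in e)

# Process 1: each vertex is selected with probability 1/7, and contributes
# (1/7) * charisma to the expected value, exactly as in the table above.
ev_p1 = sum(Fraction(1, len(V)) * charisma(x) for x in V)
print(ev_p1)  # 18/7
```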
So the expected value is 18/7. This works out to about 2.57. Now let’s see about process 2.
This is a bit trickier. First we are selecting a random vertex (the “initial vertex”, v_{i}), and then we are selecting one of its friends (the “final vertex”, v_{f}). We are interested in the charisma of the final vertex. So we need to figure out the probability of each vertex being selected as the final vertex.
In other words, we wish to assess the probabilities p(v_{f} = 1) … p(v_{f} = 7). To do this we will need the product and sum rules of probability theory. Given two propositions A and B, the product rule states that p(A^B) = p(A) * p(B|A), where p(A^B) indicates the probability of A and B both being true (called the conjunction of A and B), p(A) is the probability of A, and p(B|A) is the probability of B given A (or B conditional on A).
Thus, for instance, p(v_{i} = x ^ v_{f} = y) = p(v_{i} = x) * p(v_{f} = y | v_{i} = x). Using this rule, we can construct a table indicating the probability of each pair of initial and final vertices.
First, we need to determine the probability of each possible value for v_{i}. These are the values p(v_{i} = 1) … p(v_{i} = 7). This is easy; they’re all 1/7.
Initial Vertex   Chance of Selecting
1                1/7
2                1/7
3                1/7
4                1/7
5                1/7
6                1/7
7                1/7
Now we need to determine the conditional probabilities: those of the form p(v_{f} = y | v_{i} = x). These will be 0 for anything which is not a friend of v_{i}, and all friends of v_{i} are equally likely to be selected as v_{f}.
For instance, vertex 1 has two friends: 2 and 3. If vertex 1 is selected as v_{i}, then vertices 2 and 3 each have a probability of 1/2 of being selected as v_{f}. Here’s the complete table, along with the graph, for reference.
                       v_{f}
v_{i}    1     2     3     4     5     6     7
1        0     1/2   1/2   0     0     0     0
2        1/2   0     0     1/2   0     0     0
3        1/2   0     0     1/2   0     0     0
4        0     1/4   1/4   0     1/4   1/4   0
5        0     0     0     1/3   0     1/3   1/3
6        0     0     0     1/3   1/3   0     1/3
7        0     0     0     0     1/2   1/2   0
It will now be easy to determine the probability of each conjunction, simply by multiplying the conditional probabilities by 1/7. Then, because v_{i} = 1 … v_{i} = 7 are mutually exclusive and exhaustive propositions, we can add up the values in each column to obtain the probability of each vertex being selected as v_{f}.
                       v_{f}
v_{i}    1     2     3     4     5     6     7
1        0     1/14  1/14  0     0     0     0
2        1/14  0     0     1/14  0     0     0
3        1/14  0     0     1/14  0     0     0
4        0     1/28  1/28  0     1/28  1/28  0
5        0     0     0     1/21  0     1/21  1/21
6        0     0     0     1/21  1/21  0     1/21
7        0     0     0     0     1/14  1/14  0
Total    1/7   3/28  3/28  5/21  13/84 13/84 2/21
Now we have a probability distribution for v_{f}. Process 2 returns the charisma of v_{f}, so we can compute the expected value of this process with one more table:
Vertex   Chance of Selecting   Charisma   Contribution
1        1/7                   2          2/7
2        3/28                  2          3/14
3        3/28                  2          3/14
4        5/21                  4          20/21
5        13/84                 3          13/28
6        13/84                 3          13/28
7        2/21                  2          4/21
Total                                     39/14
So the expected value here is 39/14, which is about 2.79. The value for the first process was 2.57. This agrees with our hypothesis.
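The same answer can be reached in code, summing p(v_{i}) * p(v_{f} | v_{i}) * charisma(v_{f}) over every pair, exactly as the tables do. A minimal sketch (the edge list is transcribed from the example graph; the helper name is mine):

```python
from fractions import Fraction

# The same seven-vertex graph, transcribed as an edge list.
V = [1, 2, 3, 4, 5, 6, 7]
E = [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5), (4, 6), (5, 6), (5, 7), (6, 7)]

def friends(x):
    """All vertices sharing an edge with x."""
    return [b if a == x else a for (a, b) in E if x in (a, b)]

# Process 2: p(v_i = x) = 1/#V, p(v_f = y | v_i = x) = 1/#friends(x),
# and the value returned is the charisma of v_f.
ev_p2 = sum(
    Fraction(1, len(V)) * Fraction(1, len(friends(vi))) * len(friends(vf))
    for vi in V
    for vf in friends(vi)
)
print(ev_p2)  # 39/14
```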
If I’m still not making sense, maybe another example will help. This time, consider a star-shaped graph with nine vertices: one central vertex connected by an edge to each of eight outer vertices, and no other edges.
With process 1, we select a random vertex and look at its charisma. We have a probability of 1/9 of selecting the central vertex, in which case the charisma will be 8. We have a probability of 8/9 of selecting one of the outer vertices, which will have a charisma of 1. Thus the expected charisma is (1/9)*8 + (8/9)*1 = 16/9.
With process 2, we have a probability of 1/9 of selecting the central vertex as v_{i}, in which case v_{f} will be one of the outer vertices, which will have a charisma of 1. We have a probability of 8/9 of selecting one of the outer vertices as v_{i}, in which case v_{f} will be the central vertex, with charisma 8. Thus the expected charisma is (1/9)*1 + (8/9)*8 = 65/9, which is much greater than 16/9, the expected value we obtained from process 1.
So with process 1, we’ll usually pick one of the outer vertices, which has a low charisma. With process 2, we’ll usually start at one of the outer vertices, and then move to the central vertex, which has a high charisma. This is why the expected value of process 2 is larger.
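Both processes can be computed for the star graph with a couple of helper functions (ev_p1 and ev_p2 are my names, not the article’s; this is a sketch that works for any graph given as an edge list):

```python
from fractions import Fraction

def friends(E, x):
    """All vertices sharing an edge with x."""
    return [b if a == x else a for (a, b) in E if x in (a, b)]

def ev_p1(V, E):
    # Expected charisma of a uniformly random vertex.
    return sum(Fraction(len(friends(E, x)), len(V)) for x in V)

def ev_p2(V, E):
    # Expected charisma of a random friend of a random vertex.
    return sum(
        Fraction(1, len(V)) * Fraction(1, len(friends(E, vi))) * len(friends(E, vf))
        for vi in V for vf in friends(E, vi)
    )

# The star: vertex 0 in the center, vertices 1..8 around it.
V = list(range(9))
E = [(0, k) for k in range(1, 9)]
print(ev_p1(V, E))  # 16/9
print(ev_p2(V, E))  # 65/9
```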
Maybe you’ve noticed that the expected value of process 1 is always twice the number of edges divided by the number of vertices. This will soon turn out to be important for the general proof.
Let G = (V, E) be a finite graph such that each vertex lies on at least one edge. If x is a vertex, then let friends(x) indicate the set of vertices which share an edge with x. Note that it is necessary to require each vertex to have at least one friend in order to define process 2.
I will be using the pound sign (#) to indicate the cardinality of a set. Thus #V is the number of vertices in the graph, #E is the number of edges, and #friends(x) is the charisma of vertex x.
Let’s look at process 1. The probability of selecting any given vertex is 1/#V, so the expected value of the process is:

EV(P1) = Σ_{x ∈ V} (1/#V) * #friends(x)
We can factor 1/#V out of the sum, obtaining:

EV(P1) = (1/#V) * Σ_{x ∈ V} #friends(x)
Since each edge is counted twice (e.g. if x and y are friends, the friendship adds 1 to the charisma of both x and y), the sum is simply twice the number of edges, so

EV(P1) = 2#E / #V
Which can be written as

EV(P1) = Σ_{e ∈ E} 2/#V
That is, each edge contributes 2/#V to the expected value.
Now let’s look at process 2. If our initial vertex is x, then our final vertex will be some y ∈ friends(x), and its charisma will be #friends(y). Once we select x as our initial vertex, each friend of x has a probability of 1/#friends(x) of being selected as the final vertex. So the expected value of process 2 given that x is the initial vertex is:

EV(P2 | v_{i} = x) = Σ_{y ∈ friends(x)} (1/#friends(x)) * #friends(y)
Each vertex has a probability of 1/#V of being selected as v_{i}, so the expected value becomes:

EV(P2) = Σ_{x ∈ V} (1/#V) * Σ_{y ∈ friends(x)} (1/#friends(x)) * #friends(y)
Now we want to rewrite this as a sum over edges instead of a sum over vertices. Let’s consider the edge e = {x, y}. Process 2 involves crossing an edge from the initial to the final vertex. What is the probability that the edge crossed will be e, and if we do cross e, what value should we expect the process to return?
There are two ways this can happen. First, we could start on vertex x and cross the edge to vertex y. The probability of this is 1 / (#V * #friends(x)). The resulting charisma would be #friends(y), so this possibility contributes (1/#V) * (#friends(y) / #friends(x)) to the EV of the process.
Alternatively, we could start on vertex y and cross the edge to vertex x. This would contribute (1/#V) * (#friends(x) / #friends(y)) to the EV of the process. Summing over all edges, we obtain:

EV(P2) = Σ_{{x, y} ∈ E} (1/#V) * (#friends(y)/#friends(x) + #friends(x)/#friends(y))
Compare this to:

EV(P1) = Σ_{{x, y} ∈ E} 2/#V
Now we wish to show that EV(P1) ≤ EV(P2). That is, we want to show that:

Σ_{{x, y} ∈ E} 2/#V ≤ Σ_{{x, y} ∈ E} (1/#V) * (#friends(y)/#friends(x) + #friends(x)/#friends(y))
Multiplying both sides by #V, we get:

Σ_{{x, y} ∈ E} 2 ≤ Σ_{{x, y} ∈ E} (#friends(y)/#friends(x) + #friends(x)/#friends(y))
Since both sums range over the same set of edges, it is sufficient to show that each term on the left side is less than or equal to the corresponding term on the right side. That is, to show that

2 ≤ #friends(y)/#friends(x) + #friends(x)/#friends(y)
for all x and y. For conciseness, let a = #friends(x) and let b = #friends(y). Then we need to show that

2 ≤ b/a + a/b
Multiplying both sides by ab, we obtain:

2ab ≤ a^2 + b^2

which rearranges to

0 ≤ (a - b)^2
…which we know to be true, because the square of a real number is never negative. Now we’re done.
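As a quick sanity check on the inequality the proof reduces to, here is a small Python loop verifying 2 ≤ b/a + a/b with exact rational arithmetic for a range of possible charisma values:

```python
from fractions import Fraction

# Check the key inequality 2 <= b/a + a/b, which the proof reduces to
# 0 <= (a - b)^2, for all charisma values a, b from 1 to 19.
for a in range(1, 20):
    for b in range(1, 20):
        assert 2 <= Fraction(b, a) + Fraction(a, b)
print("2 <= b/a + a/b holds for all tested pairs")
```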
Now we know that you are less charismatic than your friends, at least if you’re an average guy like me. More precisely, we know that the expected charisma of a randomly-selected person is ≤ that of a randomly-selected friend of a random person.
Did you have fun? Because here are a couple of puzzles for you to think about:
Okay, thanks for tuning in! See you next time.
This is a graph:
This is another graph:
A graph is a collection of vertices and edges. Each edge connects two vertices. Mathematically, we can define a graph as an ordered pair (V, E), where V is the set of vertices and E is a set of unordered pairs of vertices. For example:
This is the same as our first graph, except now the vertices are labeled.
Let’s call this graph G_{0} = (V_{0}, E_{0}), where:
V_{0} = {1, 2, 3, 4, 5, 6, 7}
E_{0} = {{1, 2}, {1, 3}, {2, 4}, {3, 4}, {4, 5}, {4, 6}, {5, 6}, {5, 7}, {6, 7}}
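To make the formal definition concrete, here is a minimal Python sketch of G_{0}: a set for V_{0}, and frozensets for the unordered pairs in E_{0} (the function name is mine):

```python
# G0 transcribed directly from the definition above.  Edges are unordered,
# so each one is stored as a frozenset.
V0 = {1, 2, 3, 4, 5, 6, 7}
E0 = {frozenset(p) for p in
      [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5), (4, 6), (5, 6), (5, 7), (6, 7)]}

def friends(x):
    """The set of vertices sharing an edge with x."""
    return {y for y in V0 if frozenset((x, y)) in E0}

print(friends(4))  # {2, 3, 5, 6}
```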
You can play many fascinating games with graphs, such as Igo (囲碁), called Go in English.
You can play Go on any graph, although it is usually played on this one:
We will not be using this graph when discussing Go in the remainder of this article. For simplicity, we will use this graph:
G_{1} = (V_{1}, E_{1}), where
V_{1} = {1, 2, 3, 4, 5}
E_{1} = {{1, 2}, {2, 3}, {3, 4}, {4, 5}}
In a Go position, each vertex may be occupied by a black or white stone (or it may be empty). For example, position q_{2}:
Within a Go position, two adjacent stones of the same color are connected. Furthermore, connectivity is transitive: if stones a and b are connected, and stones b and c are connected, then stones a and c are connected. A group consists of a stone and all the stones which are connected to it. For example, position q_{2} has two groups, each consisting of one stone. There are also two groups in this position, consisting of two black stones and one black stone:
The liberties of a group are the empty vertices adjacent to it. The left group has one liberty (vertex 3), and the right group also has one liberty (vertex 4). A group is called dead if it has no liberties.
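These definitions translate directly into code. Below is a sketch, not a full Go engine: positions are dicts mapping vertices to 'black', 'white', or None, the function names are mine, and the example position is the one just described (black stones at 1, 2, and 5 on the path graph G_{1}).

```python
# The path graph G1 = 1 - 2 - 3 - 4 - 5.
E1 = [(1, 2), (2, 3), (3, 4), (4, 5)]

def neighbors(x):
    return [b if a == x else a for (a, b) in E1 if x in (a, b)]

def group(pos, x):
    """The stone at x together with every same-colored stone connected to it."""
    color, seen, todo = pos[x], {x}, [x]
    while todo:
        v = todo.pop()
        for n in neighbors(v):
            if pos.get(n) == color and n not in seen:
                seen.add(n)
                todo.append(n)
    return seen

def liberties(pos, grp):
    """Empty vertices adjacent to any stone in the group."""
    return {n for v in grp for n in neighbors(v) if pos.get(n) is None}

# Black stones at 1, 2, and 5; vertices 3 and 4 empty.
pos = {1: 'black', 2: 'black', 3: None, 4: None, 5: 'black'}
print(group(pos, 1))                   # the two-stone group {1, 2}
print(liberties(pos, group(pos, 1)))   # {3}
print(liberties(pos, group(pos, 5)))   # {4}
```

A group is dead exactly when `liberties` returns the empty set.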
A situation consists of two pieces of information: a Go position, and a turn color. A turn color may be either black or white. For example, the situation (q_{2}, white):
White to play.
In Go, there are two players: black and white. On a player’s turn, ve may either place a stone of vis color on an empty vertex, or pass. After a stone is placed, the player removes all dead groups of the opposite color, and then removes all dead groups of vis own color. After any action, the turn color is reversed.
A game is a finite sequence of alternating situations and actions. Each action transforms the current situation into a new situation, which is added onto the game sequence. It is forbidden to place a stone which would result in a situation that has already occurred in the game. If two consecutive actions are both passes, the game ends. The player with more stones wins. Here is a game of Go:
0: Black to play.
1: Black plays at 3. White to play.
2: White plays at 4. Black to play.
If black plays at 5, the white group (consisting of a single stone at 4) will have no liberties, so it will have to be removed from the board.
3: Black plays at 5. White to play.
Black just captured the stone. Note that it would now be illegal for white to play at 4, because that would lead to a repetition of situation 2. It is forbidden to play a stone which would create a situation that has already occurred in the game.
4: White plays at 2. Black to play.
Black can capture this stone too:
5: Black plays at 1. White to play.
6: White plays at 4. Black to play.
Note that it would be illegal for black to play at 5, because that would lead to a repetition of situation 5.
7: Black passes. White to play.
8: White plays at 2. Black to play.
Now black has no legal moves. If ve placed a stone at 1, 3, or 5, that stone would immediately be removed, and this would lead right back to situation 8. It is forbidden to place a stone which would result in a situation that has already occurred in the game, but it is not illegal to pass.
9: Black passes. White to play.
10: White plays at 3. Black to play.
11: Black passes. White to play.
12: White passes. Game over. White wins by 3 stones.
So black lost. What could ve have done differently? How about this?
0: Black to play.
1: Black plays at 2. White to play.
2: White plays at 4. Black to play.
3: Black passes. White to play.
4: White passes. Game over. Draw.
These, then, are the rules of Go:

1. Two players, black and white, take turns; after any action, the turn color is reversed.
2. On vis turn, a player may either place a stone of vis color on an empty vertex, or pass.
3. After a stone is placed, the player removes all dead groups of the opposite color, and then removes all dead groups of vis own color.
4. It is forbidden to place a stone which would result in a situation that has already occurred in the game.
5. If two consecutive actions are both passes, the game ends.
6. The player with more stones wins.
I will later refer to this last rule as the goal of Go.
Now I want to talk about another game. This game is called Life, and it was invented by John Horton Conway in 1970. You can read more about Life here. It may be more correct to call it a simulation instead of a game, but I will call it a game anyway.
Although Life may be played on any graph, it is normally played on this one:
This graph extends infinitely in all directions. The rules of Life are as follows:

1. Each vertex is either active or inactive.
2. Time advances in discrete steps, and all vertices are updated simultaneously.
3. An active vertex with exactly 2 or 3 active neighbors remains active; otherwise it becomes inactive.
4. An inactive vertex with exactly 3 active neighbors becomes active; otherwise it remains inactive.
I’ll give you one example of an interesting pattern in Life. The active vertices are drawn with large circles, the inactive ones with small circles.
Note that the final pattern is the same as the initial pattern, except displaced up and to the left. This is called a glider.
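Life is easy to implement. Here is a minimal Python sketch using my own coordinate convention: cells are (row, column) pairs, and only the set of active cells is stored. In this orientation the glider I use travels down and to the right; the one pictured above traveled up and to the left, but the behavior is the same.

```python
def step(active):
    """One step of Life: count the active neighbors of every cell, then keep
    cells with exactly 3 active neighbors, or 2 if already active."""
    counts = {}
    for (x, y) in active:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if (dx, dy) != (0, 0):
                    cell = (x + dx, y + dy)
                    counts[cell] = counts.get(cell, 0) + 1
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in active)}

# A glider, written as (row, column) pairs:
#   . X .
#   . . X
#   X X X
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}

# After four steps the same five-cell shape reappears, shifted one cell
# diagonally (down and to the right, in this coordinate choice).
state = glider
for _ in range(4):
    state = step(state)
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```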
Okay, we’ve looked at two games that can be played on graphs. I’d like you to imagine a graph as a universe, and the rules of Go and Life as possible laws of physics. Which one is more physics-like? Think about it for a minute, scroll down, and I’ll tell you my opinion.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
I think that Life is much more physics-like than Go. Given any state in Life, the rules uniquely specify which state comes next. The rules of Go do not specify, even stochastically, how to obtain the next situation from the current situation – it all depends on the actions of the two entities called players, which are not specified by the rules of Go.
Life has been proven Turing Complete. That means, among other things, that you can build a mind in it. The thoughts and actions of that mind would be completely determined by the initial state and the rules of the game. No such feat is possible in Go.
Another difference is that Go has a goal and Life does not. Does physics have a goal – an objective morality that exists independently of any beings that live in the universe?
Consider that in Go, the players exist outside of the game. The actions of the players are not specified by the rules of Go, but when the players read the goal of the game, this influences their decisions. So it might make more sense to think of the goal not as a rule, but as an explanation to the players of what they might want to do with the rules.
What if we added a goal to Life?
It is pretty clear that such a goal would not have any effect at all on the outcome of a game of Life. In fact, no matter what goal we came up with, it would not affect what happens within the game, and any minds which exist within Life would have no way of perceiving it. Similarly, if physics possesses an objective morality, minds living within the universe would not be affected by it.
It is a common error, when one begins to think about morality, to suppose that the universe has a goal, and that human moral systems have already been influenced by it. Given this assumption, one might then infer that other minds could reason their way to a morality similar to that which we already possess. This is a mistake rooted in our species’ psychology. In our evolutionary past, we humans only had to deal with other humans, who shared many of our ideas about morality, so we could get away with thinking that these ideas were properties of the universe itself. They’re not.