space and games

January 6, 2007

8,000-Gon

Filed under: Graphs — Peter de Blanc @ 4:39 am

Blake asked me for an 8,000-gon. Here it is.

8,000-gon

August 6, 2006

You Are Less Charismatic Than Your Friends

Filed under: Graphs — Peter de Blanc @ 10:33 pm

Note: I’m not sure if the explanations below are very clear. Feedback would be appreciated.

Let’s define one’s “charisma” as the number of friends one has. If Fred has 6 friends, then his charisma is 6; if Susan has 81 friends, then her charisma is 81.

Now if we select a person at random (let’s call ver Bijecto the Bland), and then randomly select one of vis friends (let’s call ver Functor the Magnificent), then we should expect at least as high a charisma for Functor as for Bijecto, simply as a result of the selection process.

Intuitively, one may reason as follows: Bijecto was selected at random, so we should expect ver to possess an average charisma. Someone with very many friends is more likely to be friends with any particular person, so the higher your charisma, the more likely you are to have Bijecto as a friend. Thus, if we select one of Bijecto’s friends at random (say, by looking at vis cell phone), then we are quite likely to select a charismatic person. We end up selecting Functor, and so we might guess that Functor is charismatic.

Confused? Let’s work an example.

Graph 1

Here’s a graph containing seven vertices. Let’s think of the vertices as representing people, and the edges as representing friendships. Thus, two vertices are “friends” if there is an edge connecting them.

Now let’s consider these two processes.

  • Process 1: select a random vertex, and look at its charisma
  • Process 2: select a random vertex, then select one of its friends at random, then look at its charisma

The theory is that the expected value of process 2 will be at least as high as that of process 1. Let’s see if it works out.

Process 1:

We can use a table to figure out the expected charisma of a random vertex. The first column lists each vertex. The second column indicates the probability that this vertex will be selected. Since each vertex is equally likely to be selected, this is 1/7 in all cases. The third column lists the charisma of each vertex.

The last column, labeled “Contribution,” lists how much each vertex contributes to the expected value of the process. This value is simply the product of the two previous columns. Thus, summing the “contribution” column will yield the expected value.

Vertex		Chance of Selecting	Charisma	Contribution
1		1/7			2		2/7
2		1/7			2		2/7
3		1/7			2		2/7
4		1/7			4		4/7
5		1/7			3		3/7
6		1/7			3		3/7
7		1/7			2		2/7	    
Total							18/7
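
Here’s the same arithmetic in a few lines of Python (just a quick sketch of mine; the names are made up), using exact fractions so nothing gets rounded:

    from fractions import Fraction

    # Charisma (number of friends) of each vertex, read off the table above.
    charisma = {1: 2, 2: 2, 3: 2, 4: 4, 5: 3, 6: 3, 7: 2}

    # Process 1: each vertex is selected with probability 1/7.
    ev_p1 = sum(Fraction(1, 7) * c for c in charisma.values())
    print(ev_p1, float(ev_p1))   # 18/7, about 2.57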

So the expected value is 18/7. This works out to about 2.57. Now let’s see about…

Process 2:

This is a bit trickier. First we are selecting a random vertex (the “initial vertex”, vi), and then we are selecting one of its friends (the “final vertex”, vf). We are interested in the charisma of the final vertex. So we need to figure out the probability of each vertex being selected as the final vertex.

In other words, we wish to assess the probabilities p(vf = 1) … p(vf = 7). To do this we will need the product and sum rules of probability theory. Given two propositions A and B, the product rule states that p(A^B) = p(A) * p(B|A), where p(A^B) indicates the probability of A and B both being true (called the conjunction of A and B), p(A) is the probability of A, and p(B|A) is the probability of B given A (or B conditional on A).

Thus, for instance, p(vi = x ^ vf = y) = p(vi = x) * p(vf = y | vi = x). Using this rule, we can construct a table indicating the probability of each pair of initial and final vertices.

First, we need to determine the probability of each possible value for vi. These are the values p(vi = 1) … p(vi = 7). This is easy; they’re all 1/7.

Initial Vertex		Chance of Selecting
1			1/7
2			1/7
3			1/7
4			1/7
5			1/7
6			1/7
7			1/7

Now we need to determine the conditional probabilities: those of the form p(vf = y | vi = x). These will be 0 for anything which is not a friend of vi, and all friends of vi are equally likely to be selected as vf.

For instance, vertex 1 has two friends: 2 and 3. If vertex 1 is selected as vi, then vertices 2 and 3 each have a probability of 1/2 of being selected as vf. Here’s the complete table, along with the graph, for reference.

Graph 1
					vf
	vi	1	2	3	4	5	6	7  
	1  	0	1/2	1/2	0	0	0	0
	2	1/2	0	0	1/2	0	0	0
	3	1/2	0	0	1/2	0	0	0
	4	0	1/4	1/4	0	1/4	1/4	0
	5	0	0	0	1/3	0	1/3	1/3
	6	0	0	0	1/3	1/3	0	1/3
	7	0	0	0	0	1/2	1/2	0

It will now be easy to determine the probability of each conjunction, simply by multiplying the conditional probabilities by 1/7. Then, because vi = 1 … vi = 7 are mutually exclusive and exhaustive propositions, we can add up the values in each column to obtain the probability of each vertex being selected as vf.

				vf
vi	1	2	3	4	5	6	7   
1  	0	1/14	1/14	0	0	0	0
2	1/14	0	0	1/14	0	0	0
3	1/14	0	0	1/14	0	0	0
4	0	1/28	1/28	0	1/28	1/28	0
5	0	0	0	1/21	0	1/21	1/21
6	0	0	0	1/21	1/21	0	1/21
7	0	0	0	0	1/14	1/14	0   
Total:	1/7	3/28	3/28	5/21	13/84	13/84	2/21

Now we have a probability distribution for vf. Process 2 returns the charisma of vf, so we can compute the expected value of this process with one more table:

Vertex		Chance of Selecting	Charisma	Contribution
1		1/7			2		2/7
2		3/28			2		3/14
3		3/28			2		3/14
4		5/21			4		20/21
5		13/84			3		13/28
6		13/84			3		13/28
7		2/21			2		4/21	    
Total							39/14

So the expected value here is 39/14, which is about 2.79. The value for the first process was 2.57. This agrees with our hypothesis.
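
If you’d rather let a computer grind through the two tables, here’s a short sketch of my own (the function names are made up); the adjacency list is read straight off the conditional-probability table above:

    from fractions import Fraction

    # Friendships in Graph 1: vertex 1 is friends with 2 and 3, and so on.
    friends = {
        1: [2, 3],
        2: [1, 4],
        3: [1, 4],
        4: [2, 3, 5, 6],
        5: [4, 6, 7],
        6: [4, 5, 7],
        7: [5, 6],
    }

    def ev_process_1(friends):
        """Expected charisma of a uniformly random vertex."""
        n = len(friends)
        return sum(Fraction(1, n) * len(fs) for fs in friends.values())

    def ev_process_2(friends):
        """Expected charisma of a random friend of a random vertex."""
        n = len(friends)
        total = Fraction(0)
        for x, fs in friends.items():
            for y in fs:
                # p(vi = x ^ vf = y) = (1/n) * (1/#friends(x)); the value returned is #friends(y).
                total += Fraction(1, n) * Fraction(1, len(fs)) * len(friends[y])
        return total

    print(ev_process_1(friends))   # 18/7  (about 2.57)
    print(ev_process_2(friends))   # 39/14 (about 2.79)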

If I’m still not making sense, maybe another example will help:

Flower Graph

Process 1:

We select a random node and look at its charisma. We have a probability of 1/9 of selecting the central vertex, in which case the charisma will be 8. We have a probability of 8/9 of selecting one of the outer vertices, which will have a charisma of 1. Thus the expected charisma is (1/9)*8 + (8/9)*1 = 16/9.

Process 2:

We have a probability of 1/9 of selecting the central vertex as vi, in which case vf will be one of the outer vertices, which will have a charisma of 1. We have a probability of 8/9 of selecting one of the outer vertices as vi, in which case vf will be the central vertex, with charisma 8. Thus the expected charisma is (1/9)*1 + (8/9)*8 = 65/9, which is much greater than 16/9, the expected value we obtained from process 1.

So with process 1, we’ll usually pick one of the outer vertices, which has a low charisma. With process 2, we’ll usually start at one of the outer vertices, and then move to the central vertex, which has a high charisma. This is why the expected value of process 2 is larger.
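
The flower graph is small enough that the same check fits in a few lines (again just a sketch of mine, mirroring the arithmetic above):

    from fractions import Fraction

    # Process 1: pick one of the 9 vertices uniformly.
    # The central vertex has charisma 8; each outer vertex has charisma 1.
    ev_p1 = Fraction(1, 9) * 8 + Fraction(8, 9) * 1
    # Process 2: from the centre we always land on an outer vertex (charisma 1);
    # from an outer vertex we always land on the centre (charisma 8).
    ev_p2 = Fraction(1, 9) * 1 + Fraction(8, 9) * 8
    print(ev_p1, ev_p2)   # 16/9 65/9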

Maybe you’ve noticed that the expected value of process 1 is always twice the number of edges divided by the number of vertices. This will soon turn out to be important for…

The Proof:

Let G = (V, E) be a finite graph such that each vertex lies on at least one edge. If x is a vertex, then let friends(x) indicate the set of vertices which share an edge with x. Note that it is necessary to require each vertex to have at least one friend in order to define process 2.

I will be using the pound sign (#) to indicate the cardinality of a set. Thus #V is the number of vertices in the graph, #E is the number of edges, and #friends(x) is the charisma of vertex x.

Let’s look at process 1. The probability of selecting any given vertex is 1/#V, so the expected value of the process is:

EV(P1) = Σx∈V 1/#V * #friends(x)

We can factor 1/#V out of the sum, obtaining:

EV(P1) = (1/#V) Σx∈V #friends(x)

Since each edge is counted twice (e.g. if x and y are friends, the friendship adds 1 to the charisma of both x and y), the sum is simply twice the number of edges, so

EV(P1) = 2 * #E / #V

Which can be written as

EV(P1) = (1/#V) Σ{x, y} ∈ E 2

That is, each edge contributes 2/#V to the expected value.

Now let’s look at process 2. If our initial vertex is x, then our final vertex will be some y ∈ friends(x), and its charisma will be #friends(y). Once we select x as our initial vertex, each friend of x has a probability of 1/#friends(x) of being selected as the final vertex. So the expected value of process 2 given that x is the initial vertex is:

EV(P2 | vi = x) = Σy∈friends(x) #friends(y) / #friends(x)

Each vertex has a probability of 1/#V for being selected as vi, so the expected value becomes:

EV(P2) = (1/#V) Σx∈V Σy∈friends(x) #friends(y) / #friends(x)

Now we want to rewrite this as a sum over edges instead of a sum over vertices. Let’s consider the edge e = {x, y}. Process 2 involves crossing an edge from the initial to the final vertex. What is the probability that the edge crossed will be e, and if we do cross e, what value should we expect the process to return?

There are two ways this can happen. First, we could start on vertex x and cross the edge to vertex y. The probability of this is 1 / (#V * #friends(x)). The resulting charisma would be #friends(y), so this possibility contributes (1/#V) * (#friends(y) / #friends(x)) to the EV of the process.

Alternatively, we could start on vertex y and cross the edge to vertex x. This would contribute (1/#V) * (#friends(x) / #friends(y)) to the EV of the process. Summing over all edges, we obtain:

EV(P2) = (1/#V) Σ{x, y}∈E [#friends(x) / #friends(y) + #friends(y) / #friends(x)]

Compare this to:

EV(P1) = (1/#V) Σ{x, y} ∈ E 2

Now we wish to show that EV(P1) ≤ EV(P2). That is, we want to show that:

(1/#V) Σ{x, y} ∈ E 2 ≤ (1/#V) Σ{x, y}∈E [#friends(x) / #friends(y) + #friends(y) / #friends(x)]

Multiplying both sides by #V, we get:

Σ{x, y} ∈ E 2 ≤ Σ{x, y}∈E [#friends(x) / #friends(y) + #friends(y) / #friends(x)]

Since both sums have the same number of terms, it would be sufficient to show that each term on the left side is less than or equal to the corresponding term on the right side. That is, to show that

2 ≤ #friends(x) / #friends(y) + #friends(y) / #friends(x)

for all x and y. For conciseness, let a = #friends(x) and let b = #friends(y). Then we need to show that

a/b + b/a ≥ 2

Multiplying both sides by ab (which is positive, since a and b are each at least 1), we obtain:

a² + b² ≥ 2ab

a² + b² − 2ab ≥ 0

(a − b)² ≥ 0

…which we know to be true, because the square of a real number is never negative. Since each of these steps is reversible, the original inequality follows, and we’re done.
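
If you don’t trust the algebra, you can also check the claim by brute force. Here’s a sketch of mine that builds random graphs (discarding any with a friendless vertex, since process 2 isn’t defined there) and compares the two expected values:

    import itertools
    import random
    from fractions import Fraction

    def expected_values(friends):
        """Return (EV of process 1, EV of process 2) for an adjacency dict."""
        n = len(friends)
        ev1 = sum(Fraction(1, n) * len(fs) for fs in friends.values())
        ev2 = sum(Fraction(1, n) * Fraction(1, len(fs)) * len(friends[y])
                  for fs in friends.values() for y in fs)
        return ev1, ev2

    random.seed(0)
    for _ in range(1000):
        n = random.randint(2, 8)
        edges = [e for e in itertools.combinations(range(n), 2) if random.random() < 0.4]
        friends = {v: [u for e in edges if v in e for u in e if u != v] for v in range(n)}
        if any(not fs for fs in friends.values()):
            continue   # a vertex with no friends: process 2 is undefined, skip this graph
        ev1, ev2 = expected_values(friends)
        assert ev1 <= ev2
    print("no counterexamples found")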

Now What?

Now we know that you are less charismatic than your friends, at least if you’re an average guy like me. More precisely, we know that the expected charisma of a randomly-selected person is ≤ that of a randomly-selected friend of a random person.

Did you have fun? Because here are a couple of puzzles for you to think about:

  • Under what conditions will the expected values of the two processes be exactly equal?
  • Can you think of any other phenomena similar to this one?

Okay, thanks for tuning in! See you next time.

June 5, 2006

Games with Graphs

Filed under: Graphs — Peter de Blanc @ 10:01 pm

This is a graph:

graph 1

This is another graph:

graph 2

A graph is a collection of vertices and edges. Each edge connects two vertices. Mathematically, we can define a graph as an ordered pair (V, E), where V is the set of vertices and E is a set of unordered pairs of vertices. For example:

graph 1 (labeled)

This is the same as our first graph, except now the vertices are labeled.
Let’s call this graph G0 = (V0, E0), where:

V0 = {1, 2, 3, 4, 5, 6, 7}
E0 = {{1, 2}, {1, 3}, {2, 4}, {3, 4}, {4, 5}, {4, 6}, {5, 6}, {5, 7}, {6, 7}}
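
If you want to play with graphs on a computer, the definition translates almost word for word into Python (a sketch of mine; the helper name is made up). Vertices go in a set, and each edge is an unordered pair, i.e. a frozenset of two vertices:

    # G0 = (V0, E0), written as Python sets.
    V0 = {1, 2, 3, 4, 5, 6, 7}
    E0 = {frozenset(p) for p in [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5),
                                 (4, 6), (5, 6), (5, 7), (6, 7)]}
    G0 = (V0, E0)

    def adjacent(graph, x, y):
        """Two vertices are "friends" exactly when {x, y} is an edge."""
        V, E = graph
        return frozenset({x, y}) in E

    print(adjacent(G0, 1, 2), adjacent(G0, 1, 4))   # True False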

You can play many fascinating games with graphs, such as Igo (囲碁), called Go in English.

You can play Go on any graph, although it is usually played on this one:

The graph for Go

We will not be using this graph when discussing Go in the remainder of this article. For simplicity, we will use this graph:

linear 5

G1 = (V1, E1), where
V1 = {1, 2, 3, 4, 5}
E1 = {{1, 2}, {2, 3}, {3, 4}, {4, 5}}

In a Go position, each vertex may be occupied by a black or white stone (or it may be empty). For example, position q2:

Within a Go position, two adjacent stones of the same color are connected. Furthermore, connectivity is transitive: if stones a and b are connected, and stones b and c are connected, then stones a and c are connected. A group consists of a stone and all the stones which are connected to it. For example, position q2 has two groups, each consisting of one stone. There are also two groups in this position, consisting of two black stones and one black stone:

The liberties of a group are the empty vertices adjacent to it. The left group has one liberty (vertex 3), and the right group also has one liberty (vertex 4). A group is called dead if it has no liberties.

A situation consists of two pieces of information: a Go position, and a turn color. A turn color may be either black or white. For example, the situation (q2, white):

White to play.

In Go, there are two players: black and white. On a player’s turn, ve may either place a stone of vis color on an empty vertex, or pass. After a stone is placed, the player removes all dead groups of the opposite color, and then removes all dead groups of vis own color. After any action, the turn color is reversed.

A game is a finite sequence of alternating situations and actions. Each action transforms the current situation into a new situation, which is added onto the game sequence. It is forbidden to place a stone which would result in a situation that has already occurred in the game. If two consecutive actions are both passes, the game ends. The player with more stones wins. Here is a game of Go:

0:                       Black to play.

1: Black plays at 3.     White to play.

2: White plays at 4.     Black to play.

If black plays at 5, the white group (consisting of a single stone at 4) will have no liberties, so it will have to be removed from the board.

3: Black plays at 5.     White to play.

Black just captured the stone. Note that it would now be illegal for white to play at 4, because that would lead to a repetition of situation 2. It is forbidden to play a stone which would create a situation that has already occurred in the game.

4: White plays at 2.     Black to play.

Black can capture this stone too:

5: Black plays at 1.     White to play.

6: White plays at 4.     Black to play.

Note that it would be illegal for black to play at 5, because that would lead to a repetition of situation 5.

7: Black passes.         White to play.

8: White plays at 2.     Black to play.

Now black has no useful moves. If ve placed a stone at 1, 3, or 5, that stone would immediately be removed, leaving the board right back in the position of situation 8 with nothing gained. Passing, however, is never illegal.

9: Black passes.         White to play.

10: White plays at 3.    Black to play.

11: Black passes.        White to play.

12: White passes.        Game over. White wins by 3 stones.

So black lost. What could ve have done differently? How about this?

0:                       Black to play.

1: Black plays at 2.     White to play.

2: White plays at 4.     Black to play.

3: Black passes.         White to play.

4: White passes.         Game over. Draw.

These, then, are the rules of Go:

  • Go is played on a graph. One player takes the black stones and the other player takes the white stones.
  • Black and white stones may occupy vertices.
  • Contiguous stones of the same color form groups.
  • A group’s liberties are the empty vertices adjacent to it.
  • A group with no liberties is dead.
  • A situation consists of a position and a turn color.
  • On a player’s turn, ve may either place a stone or pass. Then it becomes the other player’s turn.
  • When a player places a stone, ve must then remove all dead groups of the other player’s color, and then remove all dead groups of vis own color.
  • It is forbidden to place a stone which would result in a situation that has already occurred in the game, but it is never illegal to pass.
  • Conventionally, Go is played on a 19×19 grid. The initial situation has no stones on the grid, and black plays first.
  • The player with more stones wins. If neither player has more stones, the game is a draw.

I will later refer to this last rule as the goal of Go.
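
Here’s a rough sketch, in Python, of the stone-placement rule described above (my own code, with made-up names; it handles groups, liberties, and captures, but not passing or the repetition rule):

    from collections import deque

    def group(position, adjacency, vertex):
        """All stones connected to the stone at `vertex` (including itself)."""
        color = position[vertex]
        seen = {vertex}
        queue = deque([vertex])
        while queue:
            v = queue.popleft()
            for n in adjacency[v]:
                if n not in seen and position.get(n) == color:
                    seen.add(n)
                    queue.append(n)
        return seen

    def liberties(position, adjacency, grp):
        """The empty vertices adjacent to a group."""
        return {n for v in grp for n in adjacency[v] if position.get(n) is None}

    def remove_dead(position, adjacency, color):
        """Remove every group of the given color that has no liberties."""
        for v in [v for v, c in position.items() if c == color]:
            if v in position:
                grp = group(position, adjacency, v)
                if not liberties(position, adjacency, grp):
                    for u in grp:
                        del position[u]

    def place(position, adjacency, vertex, color):
        """Place a stone, then remove dead enemy groups, then dead groups of one's own color."""
        new = dict(position)
        new[vertex] = color
        enemy = "white" if color == "black" else "black"
        remove_dead(new, adjacency, enemy)
        remove_dead(new, adjacency, color)
        return new

    # The linear graph G1 from above: 1 - 2 - 3 - 4 - 5.
    adjacency = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}

    # Moves 1-3 of the first example game: black 3, white 4, black 5 (capturing white 4).
    position = {}
    for vertex, color in [(3, "black"), (4, "white"), (5, "black")]:
        position = place(position, adjacency, vertex, color)
    print(position)   # {3: 'black', 5: 'black'}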

Now I want to talk about another game. This game is called Life, and it was invented by John Horton Conway in 1970. You can read more about Life here. It may be more correct to call it a simulation instead of a game, but I will call it a game anyway.

Although Life may be played on any graph, it is normally played on this one:

Graph for Life

This graph extends infinitely in all directions. The rules of Life are as follows (a short code sketch of one update step follows the list):

  • In a game state, each vertex is either active or inactive.
  • A game consists of a sequence of states. The initial state may be any state of your choosing.
  • Given the current state, obtain the new state as follows:
    • Set all vertices in the new state as inactive.
    • If a vertex is active in the current state, and it has either two or three active neighbors, then activate it in the new state.
    • If a vertex is inactive in the current state, and it has exactly three active neighbors, then activate it in the new state.
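
Here’s what one update step looks like in code (a sketch of mine; the names are made up, and the neighbors function tells the rule which graph we’re playing on):

    def step(active, neighbors):
        """Given the set of active vertices, return the set of active vertices in the next state."""
        # Only currently-active vertices and their neighbors can possibly be active next.
        candidates = set(active) | {n for v in active for n in neighbors(v)}
        nxt = set()
        for v in candidates:
            count = sum(1 for n in neighbors(v) if n in active)
            if v in active and count in (2, 3):
                nxt.add(v)
            elif v not in active and count == 3:
                nxt.add(v)
        return nxt

    # On the usual Life graph, a vertex's neighbors are the eight surrounding grid cells.
    def grid_neighbors(cell):
        x, y = cell
        return [(x + dx, y + dy)
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0)]

    # The glider: after four steps the same pattern reappears, shifted one cell diagonally.
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    state = glider
    for _ in range(4):
        state = step(state, grid_neighbors)
    assert state == {(x + 1, y + 1) for (x, y) in glider}

Here the pattern comes back shifted by (+1, +1); which diagonal direction that corresponds to on screen just depends on how you draw the axes.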

I’ll give you one example of an interesting pattern in Life. The active vertices are drawn with large circles, the inactive ones with small circles.

Glider 0
Glider 1
Glider 2
Glider 3
Glider 4

Note that the final pattern is the same as the initial pattern, except displaced up and to the left. This is called a glider.

Okay, we’ve looked at two games that can be played on graphs. I’d like you to imagine a graph as a universe, and the rules of Go and Life as possible laws of physics. Which one is more physics-like? Think about it for a minute, scroll down, and I’ll tell you my opinion.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

I think that Life is much more physics-like than Go. Given any state in Life, the rules uniquely specify which state comes next. The rules of Go do not specify, even stochastically, how to obtain the next situation from the current situation – it all depends on the actions of the two entities called players, which are not specified by the rules of Go.

Life has been proven to be Turing-complete. That means, among other things, that you can build a mind in it. The thoughts and actions of that mind would be completely determined by the initial state and the rules of the game. No such feat is possible in Go.

Another difference is that Go has a goal and Life does not. Does physics have a goal – an objective morality that exists independently of any beings that live in the universe?

Consider that in Go, the players exist outside of the game. The actions of the players are not specified by the rules of Go, but when the players read the goal of the game, this influences their decisions. So it might make more sense to think of the goal not as a rule, but as an explanation to the players of what they might want to do with the rules.

What if we added a goal to Life?

  • The goal of Life is to make gliders.

It is pretty clear that this goal does not have any effect at all on the outcome of a game of Life. In fact, no matter what goal we came up with, it would not affect what happens within the game, and any minds which exist within Life would have no way of perceiving this goal. Similarly, if physics possesses an objective morality, minds living within the universe would not be affected by it.

It is a common error, when one begins to think about morality, to suppose that the universe has a goal, and that human moral systems have already been influenced by it. Given this assumption, one might then infer that other minds could reason their way to a morality similar to that which we already possess. This is a mistake which is rooted in our species’ psychology. In our evolutionary past, we humans only had to deal with other humans, who shared many of our ideas about morality, so we could get away with thinking that these ideas were properties of the universe itself. They’re not.
