space and games

January 20, 2009

Intensional vs. Extensional Goals

Filed under: General — Peter de Blanc @ 12:39 am

We can distinguish two types of goals for an agent: intensional goals and extensional goals. An intensional goal can be defined in purely mathematical terms, while an extensional goal depends on the universe in which the agent finds itself.

Some examples of intensional goals:

  • Find a prime number at least 200 bits long.
  • Prove Fermat’s Last Theorem.
  • Unscramble a Rubik’s Cube.
  • Fill in a Sudoku puzzle.
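
The first goal on that list shows what “defined in purely mathematical terms” buys you: the agent can check a candidate answer without consulting the world at all. Here is a minimal sketch in Python; it uses a Miller-Rabin test, so the primality check is probabilistic rather than exact, but nothing in it depends on the universe the agent inhabits.

    import random

    def is_probable_prime(n, rounds=40):
        """Miller-Rabin primality test (error probability below 4**-rounds)."""
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
            if n % p == 0:
                return n == p
        # Write n - 1 as d * 2**r with d odd.
        d, r = n - 1, 0
        while d % 2 == 0:
            d //= 2
            r += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(r - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False
        return True

    def find_200_bit_prime():
        """Sample random 200-bit odd numbers until the checker accepts one."""
        while True:
            candidate = random.getrandbits(200) | (1 << 199) | 1
            if is_probable_prime(candidate):
                return candidate

    print(find_200_bit_prime())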

Some examples of extensional goals:

  • Predict the orbit of Mercury.
  • Drive a car across the Mojave Desert.
  • Win a trivia game.
  • Earn at least $500.

If we were coding a Go AI, we could try to build it to achieve either an extensional or an intensional goal. The obvious extensional goal is “win the game.” One possible intensional goal is “output a move that a minimax player would output.” In both cases we would probably include some sort of time limit.
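
To make the intensional version concrete, here is a sketch of a full minimax player in Python. The game interface used here (legal_moves, apply, is_over, winner) is an illustrative assumption, not any particular Go library, and running this recursion on a real Go board is hopelessly infeasible, which is part of what makes the goal interesting.

    def minimax_move(game, state, player):
        """Return the move a full minimax player would choose for `player`.

        Players are labelled +1 and -1; game.winner(state) is assumed to
        return +1, -1, or None for a draw. The `game` interface is a
        hypothetical one, used only for illustration.
        """
        def value(s, to_move):
            # Value of position `s` from `player`'s point of view.
            if game.is_over(s):
                w = game.winner(s)
                return 0 if w is None else (1 if w == player else -1)
            child_values = [value(game.apply(s, m), -to_move)
                            for m in game.legal_moves(s)]
            return max(child_values) if to_move == player else min(child_values)

        return max(game.legal_moves(state),
                   key=lambda m: value(game.apply(state, m), -player))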

“Output a move that a minimax player would output” stands out from the other examples of intensional goals listed above. In all of the other examples, the agent can be sure that its output is correct before it returns, but if I tell you to “output a move that a minimax player would output,” I haven’t given you an implementable procedure for checking whether you’ve achieved the goal.

It’s not so hard to think of other intensional goals with this property. Instead of asking an agent for a proof of Fermat’s Last Theorem, I could ask it to output 1 if a proof exists, and 0 otherwise.

Let’s say that these two are examples of “the hard kind of intensional goal,” and the four listed at the top of the page are “the easy kind of intensional goal.” The “easy” ones are not necessarily easier individually than the “hard” ones; it’s easier for me to output a 1 than to output a proof of Fermat’s Last Theorem. But the “easy” ones are easier to think about, and as a class they’re easier.

In fact, the “hard” intensional goals are so hard that, from a certain point of view, they include the extensional goals! Take any extensional goal, and replace all the unknown parts with a probability distribution (perhaps based on algorithmic complexity), and you have an intensional goal.

Well, that’s a bit of a cop-out, because I don’t actually know how to do that for any of the extensional goals I listed at the top of the page. But we can do it for any goal for which we have already coded a success-detector – a piece of code which can determine, after the fact, whether we have achieved our goal. In the Go example, we can do this. The Go AI interacts with its opponent through a predefined protocol in which the AI outputs its moves and the opponent’s moves are input to the AI. The board state can be built up from the list of moves, and so after any hypothetical series of plays, the AI can determine whether the game is over, and who won.
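
Under the same illustrative game interface as in the minimax sketch above (plus an assumed initial_state method), that success-detector is conceptually just a replay of the move list:

    def game_result(game, moves, player):
        """Rebuild the board from the move list and report, after the fact,
        whether `player` has won, lost, drawn, or the game is unfinished."""
        state = game.initial_state()
        for move in moves:
            state = game.apply(state, move)
        if not game.is_over(state):
            return "unfinished"
        winner = game.winner(state)
        if winner is None:
            return "drew"
        return "won" if winner == player else "lost"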

So to reformulate “win the game” as an intensional goal, we can suppose that our opponent’s moves are generated by some unknown Turing machine drawn from a given probability distribution. We use the list of moves which have occurred so far to do a Bayesian update on this distribution. Then any possible policy for generating moves has a probability of winning, and we output the move recommended by the policy with the greatest winning probability.
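
Here is a rough sketch of that recipe, under heavy assumptions: the opponent models and candidate policies are objects with hypothetical prob(position, move) and choose(position) methods, the game interface is the same illustrative one as before, and Monte-Carlo rollouts stand in for the exact winning probabilities (the exact version would have to consider every possible opponent program, which is why it cannot simply be run).

    import random

    def bayes_update(models, observed):
        """Reweight each opponent model by the probability it assigned to
        the opponent's moves observed so far. `models` maps model -> prior."""
        weights = {}
        for model, prior in models.items():
            p = prior
            for position, move in observed:
                p *= model.prob(position, move)
            weights[model] = p
        total = sum(weights.values()) or 1.0
        return {m: w / total for m, w in weights.items()}

    def best_move(game, state, player, policies, posterior, rollouts=200):
        """Estimate each candidate policy's winning probability against the
        posterior mixture of opponent models, then return the move the best
        policy recommends. Rollouts approximate the exact expectation."""
        def win_prob(policy):
            wins = 0
            for _ in range(rollouts):
                opponent = random.choices(list(posterior),
                                          weights=list(posterior.values()))[0]
                s, to_move = state, player
                while not game.is_over(s):
                    mover = policy if to_move == player else opponent
                    s = game.apply(s, mover.choose(s))
                    to_move = -to_move
                wins += (game.winner(s) == player)
            return wins / rollouts

        return max(policies, key=win_prob).choose(state)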

This way we can specify a program (called AIXI) which, if run on a big enough computer, would output what we want to output. And then our goal can be intensionally defined as “output whatever AIXI would output.”

A minimax player is a tall order. AIXI is an even taller order. We can’t actually run these programs, but we want to output, in a reasonable amount of time, whatever they would output. This may require uncertain reasoning about mathematics.
