WOLFRAM NOTEBOOK

ORIGINAL BOOKS: Gerald H. Thomas, A Field Theory of Games: Introduction to Decision Process Engineering, Volume 1. ISBN: 978-1-57955-047-9. Wolfram Media: https://wolfr.am/1heWhy9EV
A Field Theory of Games: Introduction to Decision Process Engineering, Volume 2. ISBN: 978-1-57955-079-0.
Wolfram Media: https://wolfr.am/1heXctLgz

Introduction

For decades, the study of strategic interactions between rational decision makers has formed the basis of game theory. In this two-volume series, Professor Gerald H. Thomas, instructor of a successful engineering course, extends game theory concepts to focus on dynamic games, introducing students to a new take on game theory referred to as the field theory of games. Thomas prioritizes conceptual understanding over mathematical equation solving, making the text accessible to not only engineering students but also to a more general audience, including business students.

By using a toolkit based on the Wolfram Language, readers can bypass the need to solve linear programming problems and partial differential equations by hand, allowing them to arrive at solutions with practical applications more efficiently. Though this book begins with classical game theory, it differs from the usual approaches to dynamic games and deals with incomplete information by using constraints in a geometric theory, where the shortest path provides a deterministic prediction of future behaviors. In Volume 1, students will learn to apply introductory ideas to a system without constraints. The next installment of the series, Volume 2, will explore the consequences of adding constraints and provide an application guide.
The following is a sample chapter from Volume 1.

1 | Game Theory

It is well documented in the game theory literature that understanding simple recreational games leads to deep insights into how we make decisions and thus helps us understand the decision process. Consider, as an example, one of the simplest recreational games, tic-tac-toe. At the outset, we assume that the reader has minimal knowledge of the Wolfram Language; they will gradually increase that knowledge as they apply it to our game theory examples. We start here with a call to Wolfram|Alpha, which can also be done using the shorthand query form (==) or free-form input (=) at the start of an input cell.
Wolfram|Alpha results for the query “Tic Tac Toe”:
In[]:=
WolframAlpha["Tic Tac Toe"]
Out[]=
Assuming "Tic Tac Toe" is referring to a mathematical game | Use as
referring to a mathematical definition
or
a board game
instead
Input interpretation:
tic-tac-toe (mathematical game)
Statement:
Two players alternately place pieces (typically X's for the first player and O's for the second) on a 3×3 board. The first player to get three matching symbols in a row (vertically, horizontally, or diagonally) is the winner. If all squares are occupied but neither player has three symbols in a row, the game is a tie.
Alternate names: naughts and crosses | three-in-a-row | ticktacktoe | wick wack woe
Models:
Example payoff matrix:
   1  2  3  4  5  6  7  8  9
1  0  1  0  1  0  1  0  1  0
2  0  0  0  1  1  0  1  1  0
3  0  1  0  1  1  1  1  1  1
4  0  1  0  0  0  1  1  0  1
5  0  1  1  1  0  1  1  1  1
6  0  0  1  1  0  0  0  1  1
7  0  1  1  1  1  1  0  1  1
8  0  1  1  0  1  1  0  0  0
9  0  1  1  1  1  1  1  1  0
Decision tree:
History:
Formulation date: 100 BC (2122 years ago)
Classes: fair | finite | futile | perfect information | sequential | two-player | zero-sum
This game, though simple, illustrates one of the challenges of applying game theory. We start with a game in extensive form that has a dynamic series of moves based on the detailed rules of the game. Often, as in this case, the optimal set of moves is well known. Showing that they are optimal requires reformulating the game into strategic form, or normal form. To use game theory in practice, this is where the hard work is done. For the simple game of tic-tac-toe, we sketch the process.
Note: We introduce special vocabulary using italics. This vocabulary list is summarized at the end of each chapter.
From the Wolfram|Alpha data, we see that we can convert the game of moves, where one player places an X, the second player places an O, and so on, into a matrix game in which we enumerate nine strategies (as decision trees) for each player. These normal strategies can be chosen by each player before the game starts; the choices for each player can be written on a piece of paper ahead of time, forming a matrix whose entries show win or lose (the payoff). The one additional caveat is that in the table, the normal choices are limited by assuming each player chooses the best move (they don't give away a win by failing to block the other player). Such choices could easily be added, though doing so would make the table larger while adding no new insights.
Given the above analysis, the extensive view of the progress of the game as moves of X and O is determined. The payoff of who wins (payoff = 1) or whether there is a tie (payoff = 0) is set. If we considered the moves that are not best, then we would also indicate losing (payoff = -1). This articulates the strategic possibilities for tic-tac-toe. The first player may win given a bad move by the second player but, at worst, can generate a tie. The second player can always make a good move that forces the tie.
In fact, the game of tic-tac-toe summarizes an important aspect of game theory. After playing such a game over and over again, each player may hit upon a static equilibrium choice. When each player has reached that choice, we have an equilibrium. Game theory gives us the rules for computing this static equilibrium.
In the simplest games, the optimal choice for each player is to choose one of their pure strategies. The game theory rules articulated here provide the “proof” of the well-known optimal method of play for tic-tac-toe. As part of the proof, we have replaced the details of the rules of the game and the moves with a single matrix where each player chooses one row (column) to determine the payoff. These are pure strategies.
However, in the general case, the optimal choice is for each player to choose a mixture of their pure strategies. The mixture specifies the frequencies that each player uses to pick from the available pure (normal) strategy choices. In other words, the player adopts a mixed strategy in which each pure strategy is played according to these frequencies.
Why do we need to go beyond static solutions? A realistic playing of tic-tac-toe by people unfamiliar with the game will not normally start with players picking such solutions. Assume they start with some mixed strategy. Over time, each play of the game will change these chosen frequencies as the players adjust to what their opponent has done. After transient behaviors have died out, we envision not only the static situations of game theory, but stable steady-state flow situations that are stationary but not static. In this chapter, we will define more precisely what we mean by stationary flow.
In this book, we apply our field theory of games to such stationary flow situations after transients have disappeared. This theory, which we have also called decision process theory, is an extension of classical game theory and is not related to ordinary decision theory. This is an “engineering” approach to the theory, which can be applied to strictly determined games, such as tic-tac-toe, in which the optimal choice for each player is a pure strategy. It can also be applied to the wide literature spanned by traditional game theory, including economics and strategic business and military decisions, and to a host of other applications.
There are multiple strands that lead to our decision process theory. The theoretical side consists of physical theories, differential geometry, and game theory. The empirical side stems from physical theories, engineering, and systems dynamics and takes cues from business and strategic planning applications. In particular:
  • From game theory (Von Neumann & Morgenstern, 1944), we take the idea that the mixed normal strategies reflect a causal behavior even though the instantaneous decisions humans make are not deterministic.
  • From the study of complex networks (Barabási, 2003), we take the idea that there are deterministic connections across the network.
  • From differential geometry, we take the idea of creating a spacetime using the game theory space of mixed normal strategies along with the game theory notion of utility as a measure (Thomas, 2006).
These statements require justification, which is done elsewhere (Thomas, 2016). In this book, we provide working notebooks that lay the groundwork for interested readers to apply these ideas to problems of their own choosing.

    1.1 Game Theory Strategies

    Von Neumann and Morgenstern (1944) demonstrated that recreational games have a close connection to economic and strategic games. They did an exhaustive analysis showing that any strategic decision process (including all of the moves of each player along with any possible random events) could be put into a standard form in which each player plays a single strategy, chosen from a list of pure strategies. Our dynamic theory builds on their approach.
The basic idea, illustrated previously for tic-tac-toe, is that the list is created by laying out all the moves (as in a decision tree) one player might make, countered by all the moves the other players might make. In this way, for each player, the strategic choice reduces to picking a single item from the list. They called this view of the game the normal form, as opposed to the form we often use based on individual moves, which they called the extensive form. The normal form is often called the strategic form of a game, in which the essential elements of the game are captured in a matrix of payoff numbers.
    By thinking about games in their normal form, we reduce the analysis of any game to a payoff matrix that gives the utility, or value, each player places on their single pure choice against their opponent’s single pure choices. In practice, the number of pure strategies may be enormous. To apply this approach, even with reasonable computation power, we may need to identify a reduced subset of these pure strategies. This is often possible, as demonstrated by the enormous success of game theory. When we can reduce the number of strategies, the added benefit is increased understanding of the process.
To illustrate these concepts, we give a second example, this time of a more realistic decision process that is not a recreational game. A situation that occurs in war is that of one side attacking another; we call the defender blue and the attacker red. Though there are many possibilities, it is instructive to simplify the choices down to a few for each side. One common example (Williams, 1966) reduces the problem to the normal form with two significant choices for each side: blue can defend one of two installations, one less valuable and the other more valuable according to the blue game matrix of values.
    Blue’s view of utility for the attack-defense game:
    In[]:=
    attackDefenseBlue:={{4,1},{3,4}}
    MatrixForm@attackDefenseBlue
    Out[]//MatrixForm=
4  1
3  4
Note: In each chapter, we will collect many of the formulas, such as this one, into an initialization section. Therefore, we use the delayed assignment (:=) to allow for running this code only if needed. We use the immediate assignment (=) whenever logic demands it be used.
The two assignments differ in terms of when the update to the values occurs. For example, in the following code (with output suppressed by the semicolon shorthand (;)), the immediate assignment fixes the value of z at 5, and it remains 5 even though we subsequently change the value of x.
    Example using the semicolon shorthand:
In[]:=
x=2;y=3;
    In[]:=
    z=x+y;
    In[]:=
    x=5
    Out[]=
    5
    In[]:=
    z
    Out[]=
5
    In[]:=
    Clear[x,y,z]
We can ensure that the dependent value is updated if we use the delayed assignment:
    In[]:=
    u:=2
    In[]:=
    v:=3
    In[]:=
    w:=u+v
    In[]:=
    u:=5
    In[]:=
    w
    Out[]=
    8
    In[]:=
    Clear[u,v,w]
This is a way to reflect that w depends on u and v.
    Another way is to make w a function of u and v using the delayed notation:
    In[]:=
    u:=2
    In[]:=
    v:=3
    In[]:=
    w[x_,y_]:=x+y
    In[]:=
    u:=5
    In[]:=
    w[u,v]
    Out[]=
    8
    In[]:=
    Clear[u,v,w]
    We will use both types of delayed assignments in this book, noting that the functional notation, though cleaner, becomes hard to read when there is a large number of parameters.
Red can attack these two installations according to the red game matrix of values. In a zero-sum game, red's game matrix is the negative of blue's.
    Red’s view of utility for the attack-defense game:
    In[]:=
attackDefenseRedView:={{-4,-1},{-3,-4}}
MatrixForm@attackDefenseRedView
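This displays as the negative of blue's matrix:
Out[]//MatrixForm=
-4  -1
-3  -4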
    In the literature, this is often represented as a vector for each position.
    Matrix view of the payoffs for blue and red:
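One way to construct this vector (bimatrix) view is to pair blue's and red's payoffs entrywise; the following cell is a minimal sketch of ours rather than the original display:
In[]:=
MapThread[List,{attackDefenseBlue,attackDefenseRedView},2]
Out[]=
{{{4,-4},{1,-1}},{{3,-3},{4,-4}}}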
In our approach, we adopt a more convenient but equivalent convention. For both blue and red, we label the rows by blue's choices and the columns by red's choices. The attack-defense scenario we have introduced is a zero-sum game in that both players see the same payoffs for any given pair of choices: when one player wins an amount, the other loses the corresponding amount.
As an aside, motivated by our theoretical analysis of the field theory of games, for any n-player game we assert that each player maintains their own internal zero-sum map of what they think every other player sees. Therefore, we characterize each player by the game as seen from that player's perspective. As an example, for a two-person game, we assume that the symmetric game for blue is:
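A plausible form for this display is the standard symmetrization of a two-person zero-sum game; this reconstruction is our assumption, and symmetrizedGame is a hypothetical helper name:
In[]:=
symmetrizedGame[g_]:=ArrayFlatten[{{0,g},{-Transpose[g],0}}] (* assumed: an antisymmetric block matrix over blue's and red's strategies *)
MatrixForm@symmetrizedGame[attackDefenseBlue]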
Returning to our example, we take the perspective of blue. For a zero-sum game, red's view of how blue sees the game is then the same as blue's own view.
    Red’s view of how blue sees the utility for the attack-defense game:
    In[]:=
    attackDefenseRed:={{4,1},{3,4}}
    MatrixForm@attackDefenseRed
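As expected, this is identical to blue's matrix:
Out[]//MatrixForm=
4  1
3  4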
    This gives a standardized way to describe the game consistent with the theory we present. At the level discussed here, it is equivalent to the vector view common in the literature in which each player is assigned their own game matrix. This generalizes to the case of non-zero-sum games in which the game matrices are not the same.
    The same convention works when there are many more choices available for each player and when there are many more players. The matrix for blue would consist of blue strategies along the rows and each of the other player strategies listed in sequence on the columns. The payoff matrix for each of the other players can also be listed in terms of how each sees blue’s payoffs.
We take as a basis for a dynamic theory this essential part of solving game theory problems, namely identifying the normal pure strategies. Moreover, even for a dynamic theory, it is extremely helpful if we can reduce the key set of pure strategies that need to be considered. For tic-tac-toe, this was done by eliminating those choices where a player makes an obviously stupid move, such as forgoing an immediate win by playing elsewhere. For the attack-defense model, this was done by a careful analysis of the problem by military strategists (Williams, 1966) before the problem was presented to us.

    1.2 Game Theory Max-Min Solutions

In the dynamic theory, the fixed points of the flow correspond to the optimal strategies of game theory. Therefore, we give an abbreviated version of how such optimal strategies are defined and provide the tools to compute them in game theory (strictly speaking, for finite matrix games). A reader wanting more detail can refer to the exhaustive literature on the subject.
    As an example, take the attack-defense model.
    Matrix form of the attack-defense model for blue:
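Using the earlier definition of attackDefenseBlue:
In[]:=
MatrixForm@attackDefenseBlue
Out[]//MatrixForm=
4  1
3  4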
    Von Neumann and Morgenstern (1944) proved that there is always a max-min mixed-strategy solution to games. This was later generalized by Nash (1951) to non-cooperative games. For zero-sum games, the idea of the max-min solution is for player 1 to consider for each choice they might make what the worst outcome (min) is for them based on choices made by player 2. They should then choose the best (max) of those choices. Player 2 makes the corresponding set of choices. If we stick to pure strategy choices, there is no solution that works for both players. However, if we generalize to mixed strategies, there is always a max-min solution.
In most game theory applications related to zero-sum games, the max-min solution is computed using linear programming techniques. To be able to compare our dynamic results to game theory results, we summarize the basic ideas and provide the computational means to obtain these solutions using linear programming. For more information, see the simplified explanation by Williams (1966) and the survey of the game theory literature, with a detailed explanation of the linear programming, by Thomas (2016), along with additional references. We adopt the latter description.
The technique requires that all entries of the game matrix be positive, which we can always arrange by adding a suitable constant to every entry. We need to apply this assumption in order to apply the linear programming technique. Once we have the solution, we subtract the constant to obtain the max-min game value.
    The game theory solution to any two-person game with an input game matrix g:
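The following definitions are a sketch consistent with the helper names (hedge1Fn, hedge2Fn, solP1, solP2) and with the game-value formulas used below; the hedge vectors are vectors of ones, one entry per strategy:
In[]:=
hedge1Fn[g_]:=ConstantArray[1,Length[g]] (* ones: one per player-1 strategy *)
hedge2Fn[g_]:=ConstantArray[1,Length[First[g]]] (* ones: one per player-2 strategy *)
(* shift the matrix so every entry is positive, then solve the standard linear program;
the unnormalized solution x satisfies shifted game value = 1/Total[x] *)
solP1[g_]:=With[{gp=g-Min[g]+1},LinearProgramming[hedge1Fn[g],Transpose[gp],hedge2Fn[g]]]
solP2[g_]:=With[{gp=-Transpose[g]-Min[-g]+1},LinearProgramming[hedge2Fn[g],Transpose[gp],hedge1Fn[g]]]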
    We now have what we want for future use. For any game, we have functions that will determine the max-min solutions.
For the attack-defense game introduced earlier as our example, linear programming provides the two solutions, which are proportional to the players' mixed strategies.
    Attack-defense game mixed strategies:
    In[]:=
    solP1[attackDefenseBlue]
    solP2[attackDefenseRed]
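Assuming the definitions sketched above, these evaluate to:
Out[]=
{1/13, 3/13}
Out[]=
{3/7, 1/7}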
    We can normalize each of these so that their sum is unity. They have the following interpretation in terms of frequencies: the best strategy for blue is to defend the more valuable installation 3:1 more often than defending the less valuable installation; and the best strategy for red is to attack the less valuable installation 3:1 more often than attacking the more valuable installation. It is important that the other player not know exactly which of these strategies is chosen at any given play of the game.
    The linear programming method will also determine the game value. We provide the general formulas, which, as noted previously, can then be used for any input game matrix.
    Normalized strategies and game values for a two-person game:
    In[]:=
normStrategy[s_]:=If[Tr[s]!=0,s/Tr[s],"NA"]
    gameValueP1[g_]:=1/solP1[g].hedge1Fn[g]+Min[g]-1
    gameValueP2[g_]:=-1/solP2[g].hedge2Fn[g]-Min[-g]+1
    Normalized strategies and game values for the attack-defense model:
    In[]:=
    normStrategy@solP1[attackDefenseBlue]
    gameValueP1[attackDefenseBlue]
    normStrategy@solP2[attackDefenseRed]
    gameValueP2[attackDefenseRed]
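With the same definitions, these four cells evaluate to:
Out[]=
{1/4, 3/4}
Out[]=
13/4
Out[]=
{3/4, 1/4}
Out[]=
13/4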
    It is worth noting that the linear programming method is effective at providing the mixed strategies and game values, even with large numbers of strategies (like hundreds of pure strategies). This is its appeal. Technically, the max-min solution is looking for certain points in a convex space, and there may in fact be more than one. It is a much harder problem to identify all of these points. In the dynamic theory, the same problem recurs: we have to determine all of the fixed points.

    1.3 Short List of Two-Person Games

Here is a short list of games that you can analyze to determine the mixed strategies and game values. These games, or variants of them, appear consistently in the game theory literature. Additional references as well as additional games can be found in Williams (1966) and, with some overlap, in Thomas (2016).
    Rock-paper-scissors zero-sum game:
    In[]:=
    rspGame:={{0,1,-1},{-1,0,1},{1,-1,0}}
    Rock-paper-scissors non-zero-sum game:
    In[]:=
    rspNZSGame:={{0,2,-1},{-1,0,1},{1,-1,0}}
    Base game:
    In[]:=
    baseGame:={{7,1,3,0,2},{0,1,6,4,2},{1,2,0,5,5}}
    Morra game:
    In[]:=
    morra:={{0,2,2,-3,0,0,-4,0,0},{-2,0,0,0,3,3,-4,0,0},{-2,0,0,-3,0,0,0,4,4},{3,0,3,0,-4,0,0,-5,0},{0,-3,0,4,0,4,0,-5,0},{0,-3,0,0,-4,0,5,0,5},{4,4,0,0,0,-5,0,0,-6},{0,0,-4,5,5,0,0,0,-6},{0,0,-4,0,0,-5,6,6,0}}
    The date game:
    In[]:=
    theDate:={{1/3,0,-(1/3)},{0,2/3,1/3},{1/3,-(1/3),1}}
    Colonel Blotto:
    In[]:=
    colonelBlotto:={{4,2,1,0},{1,3,0,-1},{-2,2,2,-2},{-1,0,3,1},{0,1,2,4}}
    The cat-mouse game:
    In[]:=
    catMouse:={{0,0,0,1,0,0,0,1},{0,1,1,1,1,1,1,0},{0,1,1,1,1,1,1,0},{1,1,1,1,1,1,1,0},{0,1,1,1,1,1,1,1},{0,1,1,1,1,1,1,0},{0,1,1,1,1,1,1,0},{1,0,0,0,1,0,0,0}}
    The prisoner’s dilemma:
    In[]:=
    prisonersDilemma:={{-(1/10),-1},{0,-(9/10)}}
    prisonersDilemma2:=-Transpose@(prisonersDilemma)
    The game of chicken:
    In[]:=
    chicken1:={{0,-1},{1,-10}} (* row labels are "swerve" and "go straight" *)
    chicken2:=-Transpose[chicken1]
    Economic chicken:
    In[]:=
    econChicken1:= {{0,4},{1,3}} (* row labels are "crash" and "swerve" *)
    econChicken2:=-Transpose[econChicken1]
    Horn’s game:
    In[]:=
    hornsGame :={{0,2,2,0,0,0,0,0,0},{0,0,0,0,3,3,0,0,0},{0,0,0,0,0,0,0,4,4},{3,0,3,0,0,0,0,0,0},{0,0,0,4,0,4,0,0,0},{0,0,0,0,0,0,5,0,5},{4,4,0,0,0,0,0,0,0},{0,0,0,5,5,0,0,0,0},{0,0,0,0,0,0,6,6,0}}
    Attack-defense model:
    In[]:=
    attackDefenseBlue:={{4,1},{3,4}}
    attackDefenseRed:=attackDefenseBlue
    Drinking game:
    In[]:=
    drinkingGame:= {{55,10},{10,110}}
    Stag hare model (the stag hunt):
    In[]:=
    stagHare:={{3,0},{2,1}} (* row labels are "stag" and "hare" *)
    Tragedy of the commons:
    In[]:=
    tragedyOfCommons:={{100, 70}, {140, 80}} (*fishing example, 1 fish, 2 fish options *)
    Tic-tac-toe game and subgames:
    In[]:=
    ticTacToeGame:={{0,1,0,1,0,1,0,1,0},{0,0,0,1,1,0,1,1,0},{0,1,0,1,1,1,1,1,1},{0,1,0,0,0,1,1,0,1},{0,1,1,1,0,1,1,1,1},{0,0,1,1,0,0,0,1,1},{0,1,1,1,1,1,0,1,1},{0,1,1,0,1,1,0,0,0},{0,1,1,1,1,1,1,1,0}}
ticTacToeSubGame[num_] := ticTacToeGame[[1 ;; num]] (* the first num rows of the payoff matrix *)
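For example, the subgame keeping the first three rows for X against all columns for O (used in the exercises below):
In[]:=
MatrixForm@ticTacToeSubGame[3]
Out[]//MatrixForm=
0  1  0  1  0  1  0  1  0
0  0  0  1  1  0  1  1  0
0  1  0  1  1  1  1  1  1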

    1.4 Non-Zero-Sum Game Example

The game of chicken is an example of a non-zero-sum game. Imagine two drivers approaching each other in cars at 100 mph. There are many potential moves as they approach: feints, accelerations, slow-downs, etc. These are all part of the rules of the game. However, game theory, along with some considerable thought, abstracts this contest into two normal strategies: each player makes one strategy choice, labeled "crash" or "swerve." These choices effectively summarize the decision tree of moves actually made. We choose the economic chicken model from our list of chicken models.
    Matrix form for each player:
    In[]:=
    MatrixForm[econChicken1]
    MatrixForm[econChicken2]
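These display as:
Out[]//MatrixForm=
0  4
1  3
Out[]//MatrixForm=
0  -1
-4  -3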
    Since each player sees the same scenario, they may each win and see a positive payoff. When expressed from the point of view of player 1, the player matrix for player 2 is the negative transpose of player 1. We use our formulas to construct the normalized strategies and game value for each player.
    Strategic properties of the game of chicken:
    In[]:=
    normStrategy@solP1[econChicken1]
    gameValueP1[econChicken1]
    normStrategy@solP2[econChicken2]
    gameValueP2[econChicken2]
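Assuming the linear programming definitions from the previous section, these evaluate to:
Out[]=
{0, 1}
Out[]=
1
Out[]=
{0, 1}
Out[]=
-1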
    Based on the game theory analysis, each player “swerves” and sees a game value of 1 (we say player 2 sees –1 as measured in units of player 1; player 1 thinks they have lost whenever player 2 wins). This equilibrium value is called the Nash equilibrium in the literature.
    Though the formal definition of the Nash equilibrium takes us away from our main thread, one way to capture it is to say the best strategy has the property that no player can do better against this strategy as long as all the other players adhere to their best strategy. The definition of the Nash equilibrium is more general than the max-min solution. For our general theory, we don’t build on the details of either the max-min solution or those of the Nash equilibrium. However, we want to get results that are basically in accord.
Based on this, there are actually two Nash equilibria. Player 1 believes that their best choice is to swerve, since they reason that player 2 will always crash. Player 2 reasons the same way about player 1. We shall see that this distinction is important, especially in dynamic behaviors. We will revisit this in the general theory.

    1.5 Strategic Flows

    We have selected the particular concepts already discussed from standard game theory because we believe they can be used to generalize game theory from statics to dynamics. We say game theory is static because once we specify the game, the optimal mixed strategy for each player is determined and does not change in time. If we consider the space of all possible mixed strategies, we may say that the optimal strategy is then a fixed point in that space.
    For dynamic theory, we make the further assertion that real-world decisions may not always be optimal but still correspond to points in this space. For example, players may not be experienced and make suboptimal decisions. A dynamic theory is one in which we follow the behaviors of these points in time. If, at each instance of time, we think of the actual point in space as the strategic position, then we define the strategic flow as the change in that strategic position over time. Note that the strategic space of mixed strategies includes all the players, so this flow reflects the motion of their collective decision choices. A dynamic solution is characterized by that point moving in space.
    Though our goal was to locally replicate game theory, our line of thinking leads to an unexpected result. We find that the essential property of the static equilibrium of game theory is not that the game is necessarily resting at a particular point; instead, it can be generalized to say that such games correspond to a constant flow through that point. As part of this change in thinking, we relax the idea that mixed strategies sum to unity for each player; for a max-min solution, the mixed strategies will lie along a line.
    We go a step further and say that the essential idea of a max-min solution is captured by the strategic flow with the condition that the relative flow values match the equilibrium mixed strategy frequencies. Thus, a generalized max-min solution corresponds to a uniform vector field that is parallel to the max-min line solution through the equilibrium mixed strategy value. In the general case, the strategic flow vector field becomes the important characteristic of the theory, which reduces to game theory for generalized max-min solutions.
    We now have defined a stationary flow, one in which the flow vector is constant in time and consists of a parallel vector field. If we relax the condition that the stationary flow is a constant parallel vector field, we no longer have a max-min solution. We propose that the general dynamic case allows such stationary flow behaviors. Such behaviors might be convective or circular.
    To arrive at such new behaviors, we replace the max-min static argument with a dynamic generalization (Thomas, 2006) to game theory that relies on thinking of the space of mixed strategy possibilities as locally flat and hence described by theories using differential geometry. We will say more about this in later chapters, but for the mathematically minded, we note here that such locally flat theories provide a natural generalization when we think that the behavior we have observed (such as static game theory) holds over a local region of points, around our equilibrium direction, and over small intervals of time. To stitch such behaviors together that hold at different points, we obtain a curved space geometry with non-flat (i.e. non-max-min) solutions. The simple geometric example is going from a Euclidean space to a spherical space, which is, in fact, locally flat.
In summary, our position is that game theory dictates that the static (Nash) equilibrium can be deduced from knowing the relative utility of the different normal mixed strategies, as summarized by the game matrix for the two players. We illustrated that previously with the non-zero-sum game of chicken, which is by far the most popular game with students. In this book, we extend this to cases in which the vector field of strategic flow does not correspond to the generalized max-min solution. Though in the most general case the vector field is time dependent, we focus on the special cases in which the transient behaviors die out, leaving stationary flow. We explore the structures of such stationary flows.
    In the next chapter, we expand our theoretical exposition of decision process theory. We focus on using the strategic flows associated with the normal form of the mixed strategies while keeping the basic ideas of payoffs that are associated with the underlying game. This expansion opens up a new computational area for exploration.

Vocabulary

extensive form | strategic form (normal form) | payoff | static equilibrium | pure strategy | mixed strategy | max-min solution | game value | Nash equilibrium | zero-sum game | non-zero-sum game | strategic position | strategic flow | stationary flow

    Exercises

    Analyze the children’s game of rock-paper-scissors. You can use the game matrix provided.
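A sketch of how to invoke the tools, assuming the linear programming definitions from earlier in the chapter:
In[]:=
normStrategy@solP1[rspGame]
gameValueP1[rspGame]
Out[]=
{1/3, 1/3, 1/3}
Out[]=
0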
In a similar fashion, analyze the other games provided. Of particular interest is the non-zero-sum game of rock-paper-scissors. Here, player 1 receives extra utility for one particular win, making the game personal. There will be a similar game matrix for player 2. How do you interpret the results for this game?
    Show that a submatrix of the payoff matrix for tic-tac-toe corresponding to the normal strategies is given by the Wolfram|Alpha expression. Does this submatrix capture all of the strategic possibilities of the game?
    Show that the following submatrix of tic-tac-toe gives the full game results based on the symmetry of the game. Here we take the first three rows for X and all of the columns for O.
    Games need not have the same number of rows and columns. In addition to the example in the previous exercise, here is another example. Solve the Colonel Blotto game from our short list.

    Q&A

    Is there a reason to focus on games with only a few strategies?
    We do this for simplicity of exposition. In practice, the games may have many pure strategies. Nevertheless, it is often true that common sense may rule out all but a manageable set of options.
    Do game theorists analyze games based on the moves (the extensive form)?
    Yes, there is a lot of literature on that.
    How many players can be involved in a game?
    There is no limit to the number of players that can be involved. Of course, the analysis becomes progressively more complicated as more players are added. There will be differences between how we generalize to multiplayer games and how that was done originally; the original approach had flaws that were not overcome. We believe the approach in Thomas (2016) fixes those flaws.
    What is the basis of the game payoff matrix?
    The game payoff matrix assumes that each player has a knowable utility that can be assigned to the combined set of choices made. This utility may not be convertible from one player to another, however. A dynamic theory needs to accommodate that reality and, in essence, provide a mechanism for convertibility. At the mathematical level, the theoretical basis for the game payoff matrix is that each player is assumed to have a worldview strategy that is an isometry; this means that all scalars and tensors of the geometry are independent of that value. It is then a theorem in differential geometry that to each isometry, there will be an antisymmetric tensor to which we can associate a payoff matrix (Thomas, 2016).
    Is the approach taken here Bayesian?
No, though much of the game theory literature relies on that approach. The initial concepts of game theory depend only on establishing mixed strategies as relative frequencies. There is no assumption that we can determine which choice a person actually makes. This part of game theory says that, nevertheless, the equilibrium value for the mixed strategies can be established as a deterministic value.
    Have other authors studied the network aspects of decisions?
    Yes, see Barabási (2003), who gives arguments for why network effects exist. In the context of this chapter, network effects would be seen as differences in flow vectors at two different points in the strategic space.

    Tech Notes

    The use of linear programming to compute the game theory strategies is documented in the References. In particular, see Thomas (2006) and Thomas (2016) for examples.

    More to Explore

    Read S. Parreiras’s article in The Mathematica Journal on Karush–Kuhn–Tucker equations for another example of the Blotto game (wolfr.am/Blotto)
    Try this classic prisoner’s dilemma game by S. Lichtblau (wolfr.am/PrisonersDilemma)
    Use Wolfram|Alpha to explore the simple game of tic-tac-toe: the normal strategies correspond to a branch on the decision tree (wolfr.am/tic-tac-toe)
