
AI for Tic-Tac-Toe (Part 4)

Hello, this article is certainly the last one about Tic-Tac-Toe and Artificial Intelligence. The first article (link) presented this little project and two bots with random behaviors: "Full Random" and "Partial Random". If two of its cells are already aligned, "Partial Random" automatically places the third symbol to win. The second article (link) presented two other bots: "The Trapper" and "The Brute". "The Trapper" places its symbols so as to leave itself two winning positions, so wherever the opponent plays, "The Trapper" wins. "The Brute" is an invincible bot: it knows every choice and its result, so it avoids the bad paths. The third article (link) presented the first learning bot and an interface for humans. That bot, named "The Bandit", makes its choices depending on the probability of winning or losing.

So, this article concerns a Neural Network bot. What is a Neural Network? I suggest you look inside the skull of your favorite neighbor: you should discover an object composed of strange matter. This object is a brain, and a brain is a Neural Network. The aim of mad scientists is to create robots which may replace humans: IBM has already simulated the brain of a monkey (results published during the International Conference for High Performance Computing, Networking, Storage and Analysis in Salt Lake City; SC12).

Neural Networks are a complex subject, and I used this article to understand the concept and implement the bot.

Neural Network

A Neural Network is composed of many neurons and of links between them. A neuron receives data from other neurons and defines a state (0 or 1) which it sends to other neurons.

A neuron connected to other neurons.

The value of the state depends on the states of the input neurons and on a weight associated with each link. The computation is decomposed into two steps: a weighted sum and a thresholding.

 sum = \sum_{i}^{\text{neurons_in}} weight_i \cdot input_i

 output = \begin{cases} 1, & \text{if } sum>\theta \\ 0, & \text{if } sum \le \theta \end{cases}

It is possible to use an activation function (for example a sigmoid) instead of the thresholding, but I prefer the latter because the program is easier to debug.
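
As an illustration, here is a minimal sketch of this two-step computation in Python (the project's code is not shown in this article, so the names and structure are mine):

    def neuron_output(inputs, weights, theta=0.0):
        # Weighted sum of the inputs, then a hard threshold against the bias theta.
        total = sum(w * x for w, x in zip(weights, inputs))
        return 1 if total > theta else 0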

Simple example

I consider Tic-Tac-Toe too complex for a first example. The simple example described in this part is a boolean function: XOR. XOR has two inputs and one output, and its behavior is defined in the next table.

 \begin{array}{c|c|c} \rm In 1 & \rm In 2 & \rm out \\ \hline 0 & 0 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{array}

In this example, I use five neurons. The more complex choice in a neural network is how to encode the inputs and outputs. For boolean inputs, I prefer using -1 and 1 instead of 0 and 1: if all the inputs were set to 0, the computed sum of each neuron would be null at the first iteration, and I think the system converges faster with -1/1. For the output, if the sum is positive the result is 1, else 0.

Example of a neural network. At the left, the inputs. At the right, the output. In the middle, the neurons.

So if the inputs are 1 and -1 (as in the figure), the neurons compute the weighted sum and the result is compared with the threshold \theta = 0 (bias).

 \begin{matrix} (1 * 0.68) + (-1 * -0.21) = 0.89 &, \text{ as } & 0.89+\theta_1>0 & \text{ then } Out_1=1 \\ (1 * 0.57) + (-1 * 0.59) = -0.02 &, \text{ as } & -0.02+\theta_2 \le 0 & \text{ then } Out_2=0 \\ (1 * 0.82) + (-1 * -0.60) = 1.42 &, \text{ as } & 1.42+\theta_3>0 & \text{ then } Out_3=1 \\ (1 * -0.32) + (-1 * 0.53) = -0.85 &, \text{ as } & -0.85+\theta_4 \le 0 & \text{ then } Out_4=0 \end{matrix}

Next, the same mechanism is applied to the global output.

 (1*-0.44) + (0*0.10) + (1*-0.045) + (0*0.25) = -0.485

As the sum is negative, the brain believes that 1 \oplus 0 = 0. In fact, the result is wrong because the brain has never learnt the rules.
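
To make the forward pass concrete, here is a small Python sketch which reproduces this 1 XOR 0 computation with the weights shown in the figure (again an illustration, not the project's code):

    def step(total, theta=0.0):
        # Hard threshold used by every neuron.
        return 1 if total > theta else 0

    hidden_weights = [(0.68, -0.21), (0.57, 0.59), (0.82, -0.60), (-0.32, 0.53)]
    output_weights = [-0.44, 0.10, -0.045, 0.25]

    x = (1, -1)  # the pair "1 XOR 0", with 0 encoded as -1
    hidden = [step(w1 * x[0] + w2 * x[1]) for w1, w2 in hidden_weights]  # [1, 0, 1, 0]
    out_sum = sum(w * h for w, h in zip(output_weights, hidden))         # -0.485
    print(hidden, out_sum, step(out_sum))  # the untrained network answers 0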

When a biological brain is punished, a chemical effect changes the weights of its connections. In a numerical neural network, the learning system is backward error propagation. Arbitrarily, we consider an error of -1 (here the value needs to be negative; it is equal to the current output minus the desired value).

Next, the weights of the connections are modified depending on the chosen error and on the output value of the neuron. The rate is a constant parameter which reduces the effect of the modification (here 0.05).

 weight {+}{=} (-1) * rate * output * error

The output may have two values, 0 and 1, so the weights are modified only if the state of the neuron is 1.

Backward error propagation in connections

The next step is similar to the normal use of the Neural Network: for each neuron, an error is computed as a weighted sum.

 sum_{\text{error}} = \sum_{i}^{\text{neurons_out}} error_i * weight_i

Next, the propagation depends on the value of the neuron:

 error_{\text{neuron}} = sum_{\text{error}} * output_{\text{neuron}}

Initially, the bias \theta is set to 0, but we need to modify it too:

 \theta_{\text{neuron}} {+}{=} (-1) * rate * error_{\text{neuron}}

Finally, the error is sent to the connections to modify their weights. The next figure shows the neural network after learning. The system made mistakes only nine times before converging to this solution.

Neural Network after learning phase
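
Putting the forward pass and the backward error propagation together, the whole learning loop for XOR can be sketched as below. This is my own Python reading of the rules described above (hard threshold, error equal to output minus desired, rate 0.05, biases initially 0); with a hard threshold, convergence depends on the random initial weights, so some runs need more iterations than others.

    import random

    RATE = 0.05  # learning rate, as in the text

    def step(s):
        return 1 if s > 0 else 0

    # 2 inputs -> 4 hidden neurons -> 1 output, random initial weights, biases at 0
    w_hidden = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(4)]
    t_hidden = [0.0] * 4
    w_out = [random.uniform(-1, 1) for _ in range(4)]
    t_out = 0.0

    def forward(x):
        hidden = [step(ws[0] * x[0] + ws[1] * x[1] + t)
                  for ws, t in zip(w_hidden, t_hidden)]
        out = step(sum(w * h for w, h in zip(w_out, hidden)) + t_out)
        return hidden, out

    # XOR truth table, inputs encoded as -1/1
    patterns = [((-1, -1), 0), ((-1, 1), 1), ((1, -1), 1), ((1, 1), 0)]

    for epoch in range(2000):
        for x, desired in patterns:
            hidden, out = forward(x)
            error = out - desired                      # Error = Output - Desired
            if error == 0:
                continue
            # error of each hidden neuron: weighted sum of output errors, times its state
            hidden_errors = [w * error * h for w, h in zip(w_out, hidden)]
            # weight += -rate * output * error, applied to every connection
            for j, h in enumerate(hidden):
                w_out[j] += -RATE * h * error
            t_out += -RATE * error
            for j, e in enumerate(hidden_errors):
                w_hidden[j][0] += -RATE * x[0] * e
                w_hidden[j][1] += -RATE * x[1] * e
                t_hidden[j] += -RATE * e

    print([forward(x)[1] for x, _ in patterns])        # hopefully [0, 1, 1, 0]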

8th competitor: 'The Brain'

The game of Tic-Tac-Toe is a more complex case than the XOR function, and a difficulty is the choice of the inputs and outputs. Each input corresponds to a cell of the board: the value is 0 if the cell is empty, 1 if the symbol is a cross and -1 if the symbol is a nought. Each output corresponds to a cell too, and the selected cell is the one with the maximal value.
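
For instance, this encoding can be sketched like this in Python (an illustration; the cell representation is an assumption of mine):

    def board_to_inputs(board):
        # board: list of 9 cells containing 'X', 'O' or None (empty)
        return [0 if cell is None else (1 if cell == 'X' else -1) for cell in board]

    def choose_cell(outputs):
        # the selected cell is the one with the maximal output value
        return max(range(len(outputs)), key=lambda i: outputs[i])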

"The Brain" need to learn two things:

  • It must choose an empty cell.
  • It must prevent the opponent from aligning 3 symbols.

For now, we limit the learning to the first point, so the bot only needs to select an empty cell.

Over the first 1000 games, the bot made 4624 selection errors. Finally, the system converges to 4614. The efficiency of the bot is low. The question "Why?" is complex to answer: maybe a problem with the number of neurons or their initialization, a bad choice of parameters (rate/error) or of the input/output mechanism, a bad programmer...

I have tested many solutions. For example, I have cheated and modified the input values: with other parameters (-1 for an empty cell and 1 otherwise), I hoped to solve the selection problem. For now, I abandon this bot, because I have no idea how to solve the problem.

Conclusion

The creation of this bot is a relative failure. In the article (link), the author tests the program with Tic-Tac-Toe and seems to have interesting results. But the learning concerns particular cases presented to the neural network; it doesn't learn by trial and error.

If we choose a constant error during the execution, the values diverge and tend toward infinity. If we want to limit the values, we need to use this formula:

 Error = Output-Desired

But we need to define a separate error for each cell, which means the learning needs to be assisted. I would like a bot which selects a cell and is then simply told by the program that it has made a mistake. With assisted learning, the program instead tells the bot where it can play (all the legal positions). In fact, assisted learning works and the bot finally selects empty cells without error.
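
One possible reading of this assisted error, sketched in Python (the exact desired values used in the project are not given, so this is an assumption):

    def assisted_errors(outputs, board):
        # the program tells the bot where it can play: desired = 1 on empty cells
        desired = [1 if cell is None else 0 for cell in board]
        return [o - d for o, d in zip(outputs, desired)]  # Error = Output - Desired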

Nevertheless, the Neural Network experiment doesn't provide the expected result: the bot isn't autonomous.

AI for Tic-Tac-Toe (Part 3)

Last time, I added several challengers to play Tic-Tac-Toe (link). These bots are able to detect and create traps. The last bot uses a tree of the different cases, so it knows the bad choices and avoids them. All of the presented challengers are non-intelligent bots. Before I describe the Neural Network, there is a last-minute challenger. It is also the first intelligent bot, because it can learn from its mistakes.

6th competitor: 'The Bandit'

The bandit is based on the Multi-armed bandit problem.

A bandit is a slot machine. The player inserts coins in the machine, pushes a button and, if he is lucky, he can win a lot of money. But when the player goes to the casino, he has the choice between many machines. Which machine has the better probability of winning?

The best approach is to observe the behavior of the machines and to attribute a score to each of them. If the player wins with a machine, its score is increased. If the player loses, the score is decreased. Before each game, the player chooses the bandit with the best score.

In the case of the Tic-Tac-Toe challenger, 'The Bandit' doesn't know the criteria to win, so it tests and learns. If the bandit loses, that means each choice of the game was bad and will be avoided. For each choice, 'The Brute' uses a tag to know the best path, while 'The Bandit' prefers to attribute a score.

The algorithm of 'The Brute' is more complex than the algorithm of 'The Bandit', because the programmer needs to define the win conditions and find a strategy to provide this information. 'The Bandit' only needs a tree of choices with scores, and it selects the most valuable path.

The score of a choice is increased or decreased depending on the result of a game. There are different strategies to determine the gain and the loss: is winning more important than losing?
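
The idea can be sketched as follows (a Python illustration; the data structures and constants are mine, not the project's):

    import random

    scores = {}  # (board, cell) -> score; boards must be hashable, e.g. tuples

    def choose(board, legal_moves):
        # pick the move with the best score, breaking ties randomly
        best = max(scores.get((board, m), 0.0) for m in legal_moves)
        return random.choice([m for m in legal_moves
                              if scores.get((board, m), 0.0) == best])

    def learn(history, result, gain=1.0, loss=1.0):
        # history: the (board, move) pairs played by the bot during one game
        for board, move in history:
            old = scores.get((board, move), 0.0)
            if result == "win":
                scores[(board, move)] = old + gain
            elif result == "loss":
                scores[(board, move)] = old - loss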

Battle #15: 'The Bandit' vs 'Full Random'

The next chart corresponds to the results of battles after the learning period. The X axis is the ratio between the value of gains and losses. At the left of the chart, the bot receives no gain and only losses modify the score, so the bot wants to avoid losing and prefers a tie. At the right of the chart, the bot receives only gains, so it wants to win and reduces the tie cases.

Strangely, 'Full Random' is the most dangerous opponent of the bandit, while a fifty-fifty strategy is sufficient against the other veterans. This is due to the random aspect of the opponent: even when 'Full Random' has the possibility to win, it may choose another cell and leave the victory to 'The Bandit'. Nevertheless, if the bot learns with another opponent, it has less difficulty fighting 'Full Random'.

Battle #16: 'The Bandit' vs 'Full Random' (2nd round)

Here 'Partial Random' is the master of 'The Bandit': it has taught it the basics of Tic-Tac-Toe. 'The Bandit' has no difficulty learning and it loses fewer than 400 times against 'Partial Random'. The only problem of the learning concerns the case where there is no loss: if 'The Bandit' forgets that it can lose, it has a higher probability of losing.

Now, 'The Bandit' is more efficient against 'Full Random'. I have also tested other masters. 'The Trapper' isn't a good teacher: the results are similar to battle #15. Maybe it is too specialized in trap cases. The teaching of 'The Brute' reaches the same level as that of 'Partial Random'. I think 'The Bandit' just needs to learn the rules of the game.

Battle #17: 'The Bandit V2' vs 'Full Random' (3rd round)

I have designed a second version of 'The Bandit' because I found several problems in the algorithm.

First, a fixed value (gain or loss) was added to the weight, which could become very high, so I had placed limits in the previous version. I wanted to test another solution for the weight incrementation, which consists in using variable gains/losses: the new weights tend toward the limits without ever crossing them. I think this modification has little impact, but it preserves an ordering among the weights. With fixed gains/losses and the use of limits, two paths may end with the same value even though they lead to a loss with different probabilities. So now, the gain/loss corresponds to a percentage of the gap between the current weight and the limit.

Secondly, it is possible to modify the weight when the result is a tie. But if the weight becomes negative due to ties, it may drop below the weight of a lost match; the path would then be modified, creating an instability in the results. Now, the weight of a tied path tends toward 0 instead of the negative limit.
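
A sketch of these first two changes, where the weight moves by a percentage of the remaining distance toward a limit (the constants are illustrative, not the project's values):

    def update_weight(weight, result, rate=0.1, win_limit=1.0, loss_limit=-1.0):
        # the weight tends toward a limit without ever crossing it
        if result == "win":
            return weight + rate * (win_limit - weight)
        if result == "loss":
            return weight + rate * (loss_limit - weight)
        return weight + rate * (0.0 - weight)   # tie: tend toward 0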

Thirdly, the learning is long and the game contains many similar cases up to a transformation (symmetry or rotation). Now, each time a weight is modified, the weights of the similar boards are modified too, to improve the learning speed. This choice is due to the next competitor, which is very special: it is not possible to prepare battles with thousands of games.
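
The equivalent boards can be generated as below (a Python sketch for a board stored as 9 cells, row by row; the chosen cell index must of course be transformed with the same mappings):

    ROTATE = [6, 3, 0, 7, 4, 1, 8, 5, 2]   # 90-degree rotation
    MIRROR = [2, 1, 0, 5, 4, 3, 8, 7, 6]   # horizontal mirror

    def transform(board, mapping):
        return tuple(board[i] for i in mapping)

    def symmetric_boards(board):
        # the 8 boards equivalent to `board` up to rotation and mirroring
        boards = set()
        b = tuple(board)
        for _ in range(4):
            b = transform(b, ROTATE)
            boards.add(b)
            boards.add(transform(b, MIRROR))
        return boards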

This chart corresponds to the case with variable gains/losses and accelerated learning. The bot seems more stable and efficient. When gains are superior to losses, the learning needs more games before stabilization. In the other cases, the learning is fast.

7th competitor: ‘The Homo Sapiens’

I have just said that this competitor is particular: it is a biological computer with lots of sensors and actuators. So if I want to use it to play Tic-Tac-Toe, we need to create an adapted interface. A possible interface could be pen and paper, but that is adapted to a match Homo Sapiens vs. Homo Sapiens, and we want to compare this competitor to the other bots. So, the interface is composed of a keyboard, a mouse and a screen. The screen displays a 3x3 grid. Each cell of the grid may have three different contents: empty, 'X' or 'O'. The Homo Sapiens uses the mouse to select a cell.

Interface between Homo Sapiens and TicTacToe application.

The user needs to select two competitors (here "Homo Sapiens" and "Full Random") and to click on the "play" button. Then, the competitors begin the battle.

Example of a match

The application and some bots have parameters. To set them, the user needs to click on the buttons with the cog-wheel icon.

TicTacToe Interface with parameters widget

"The Homo Sapiens" allows to understand the behavior of "The Bandit": In fact, it always plays in the same way. So, it is very no fun to play again "The bandit". Even if "The Brute" cannot lose, its randomize behavior is more interesting.

Conclusion

"The Bandit" uses a learning approach to play in TicTacToe. Rules of the game are not defined to the bot and if rules changes, "The Bandit" may adapt its choices.

Historically, the Homo Sapiens is the first player to have played Tic-Tac-Toe. Nevertheless, it needs more time to select a cell and its behavior may be strange: it may solve complex cases yet make stupid mistakes, choose to lose or refuse to play. So, comparisons of efficiency are difficult.

AI for Tic-Tac-Toe (Part 2)

Last time, I talked about non-intelligent bots playing Tic-Tac-Toe (link). The first bot is named 'Full Random' and chooses a cell randomly. The second bot is named 'Partial Random'; it works in the same way, except when there are two aligned cells with the same symbol: in this case, it fills the last cell to win or to avoid losing. Nevertheless, there are many possible traps.

'Partial Random' can align 3 cells and prevent the opponent from aligning cells (except in the cases where there are traps).

3rd competitor: 'The Trapper'

Now, I present 'The Trapper'. It is an evolved version of 'Partial Random': it creates 'traps' when possible. It tests each move and selects one which leaves it two ways to win. If the opponent blocks one, then the bot wins with the second.

Create a trap
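
The trap test can be sketched as follows (an illustration in Python; the cell numbering and the names are mine):

    LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

    def winning_moves(board, symbol):
        # cells that would immediately complete a line for `symbol`
        moves = set()
        for a, b, c in LINES:
            cells = [board[a], board[b], board[c]]
            if cells.count(symbol) == 2 and cells.count(None) == 1:
                moves.add((a, b, c)[cells.index(None)])
        return moves

    def is_trap(board, move, symbol):
        # a move is a trap if it leaves at least two ways to win next turn
        after = list(board)
        after[move] = symbol
        return len(winning_moves(after, symbol)) >= 2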

This competitor is not unbeatable. Many times it loses the initiative and the opponent can create traps.

Battle #5: 'The Trapper' vs 'Full Random'
Battle #6: 'Full Random' vs 'The Trapper'
Battle #7: 'The Trapper' vs 'Partial Random'
Battle #8: 'Partial Random' vs 'The Trapper'

4th competitor: 'Guided Trapper'

Last time, I talked about an unbeatable bot, but maybe I am a liar. In fact, I haven't tested all combinations and I can't find my old code, so I rely on my memory. I remember one tip: if the bot is the second player, its first move is to choose the central cell.

This choice allows the creation of a bot which prefers a tie over a loss.

Battle #9: 'Partial Random' vs 'The Guided Trapper (Version 1)'

You can compare battles #8 and #9. The difference concerns the position of the first move of 'The Guided Trapper'.

A weakness of the previous bots is the loss of initiative: many times the bots have no choice, the first one plays a position to avoid an alignment, the second does too, and this creates a trap. The previous bots can't anticipate such a situation.

5th competitor: 'The Brute'

The next competitor has tested all solutions by brute force, so it knows the bad choices and avoids them.

Initially, it creates a tree with all possible solutions (255168 cases).

  • the first player can win in 131184 cases (51%).
  • the second player can win in 77904 cases (31%).
  • there are 46080 tie cases (18%).

The mechanism is simple: there are two tags for each choice, "lose" or "win". If the current choice is winning, that means the previous player has selected a bad cell, and the previous choice is tagged "lose". Conversely, if all the future choices are losing, then the current choice is winning.

If there is a winning case, that means the previous choice is bad. And if there are only losing cases, then the previous choice is a winning one.
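
Written as a recursive search rather than the precomputed tree, the tagging can be sketched like this (a Python illustration; the article only mentions "win" and "lose" tags, so the "tie" outcome is an addition of mine needed to cover full games):

    LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

    def wins(board, symbol):
        return any(board[a] == board[b] == board[c] == symbol for a, b, c in LINES)

    def tag(board, player):
        # result ('win', 'lose' or 'tie') for the player about to move
        moves = [i for i, cell in enumerate(board) if cell is None]
        if not moves:
            return "tie"
        results = []
        for m in moves:
            after = list(board)
            after[m] = player
            if wins(after, player):
                results.append("win")            # immediate alignment
            else:
                opponent = "O" if player == "X" else "X"
                r = tag(after, opponent)
                # the opponent's result, seen from the current player
                results.append({"win": "lose", "lose": "win", "tie": "tie"}[r])
        if "win" in results:
            return "win"
        if "tie" in results:
            return "tie"
        return "lose"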
Battle #10: 'The Brute' vs 'Full Random'
Battle #11: 'Full Random' vs 'The Brute'

This new bot can't lose. It's the best, and displaying the other battles seems useless.

Battle #12: 'The Brute' vs 'Partial Random'
Battle #13: 'The Brute' vs 'The Trapper'
Battle #14: 'The Brute' vs 'The Brute'

Conclusion

This second article shows three other competitors: one which is able to create primitive traps, a second which is guided in a particular case, and a third which is unbeatable.

The Brute can be improved. Currently, it chooses a random cell among the winning and non-losing cases. The choices could be weighted to exploit the weaknesses of the opponent, according to a method such as the Multi-armed bandit.