This post merges two previous discussions. One is the familiar tactics-versus-strategy discussion, and the other is the computer-versus-human-opponent discussion. The point of the article is to compare the challenge of playing strategically against a computer with that of playing strategically against a human.
In a previous article about strategy versus tactics, I wrote that strategy is essentially the human aspect of game theory. That doesn't make any sense out of context, so I will recreate the context here.
Strategy and Tactics: Definitions
The first definition holds that strategy is the "grand view" of how you will achieve your goal, and tactics are the specific steps taken to implement a strategy. For instance, if the goal is to annihilate an army, one strategy is to attack their left flank, and the tactics to implement this strategy are to move under cover, take the high ground, and suppress the artillery on the left flank with burst fire, then follow with infantry.
A contrary view defines tactics and strategy as functionally identical. In this view, a strategy is merely a grander kind of tactic. For example, suppose the strategy is "attack the left flank", and the tactics are to "move here", "achieve cover", and "fire your weapons".
These tactics can also be broken down into further components. "Move here" itself requires tactical decisions about how to move, what to move, and so on. And the strategy is only part of a larger strategy, "annihilate the army", which breaks down into "aerial bombardment", "feint forward attack", and "attack the left flank".
So, the argument goes, there is no real division between strategy and tactics.
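This recursive view can be sketched as a simple tree, where every plan is a node: leaves are atomic actions, and inner nodes are "strategies" relative to their children. The plan names below are illustrative, taken from the example above:

```python
# Sketch of the "strategy is just a grander tactic" view: plans nest
# arbitrarily, so the strategy/tactic split is only a matter of depth.

class Plan:
    def __init__(self, name, subplans=None):
        self.name = name
        self.subplans = subplans or []

    def leaves(self):
        """Flatten the tree into its atomic tactical steps."""
        if not self.subplans:
            return [self.name]
        steps = []
        for p in self.subplans:
            steps.extend(p.leaves())
        return steps

annihilate = Plan("annihilate the army", [
    Plan("aerial bombardment"),
    Plan("feint forward attack"),
    Plan("attack the left flank", [
        Plan("move here", [Plan("choose route"), Plan("move under cover")]),
        Plan("achieve cover"),
        Plan("fire your weapons"),
    ]),
])
```

Whether "attack the left flank" counts as a strategy or a tactic depends only on where you stand in the tree, which is exactly the argument's point.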
Another view is that strategy is what you do when you don't know what the correct tactics are. For instance, as long as chess is unsolved, your strategy is to control the center or pin pieces. If chess becomes solved, then every act is either better or worse for achieving your goal of winning, so all acts devolve into better or worse tactical moves.
The next view is that strategy gives positional advantage without changing other metrics, while tactics achieve measurable movement towards your goal. For instance, if your goal is to kill 100 units, a strategic move is one that does not kill or weaken any units, but that puts your guns into positions that make killing easier. A tactical move is one that kills or weakens at least one unit.
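Under this last definition, the distinction reduces to a simple predicate over a move's measurable effects. The fields below are invented for illustration:

```python
from dataclasses import dataclass

# Hypothetical move record under the "positional vs. measurable progress"
# definition: a move is tactical iff it advances the goal metric (kills or
# weakens units), and strategic iff it only improves position.

@dataclass
class Move:
    units_killed: int = 0
    units_weakened: int = 0
    position_gain: float = 0.0  # e.g. better firing angles

def is_tactical(m: Move) -> bool:
    return m.units_killed + m.units_weakened >= 1

def is_strategic(m: Move) -> bool:
    return not is_tactical(m) and m.position_gain > 0

reposition_guns = Move(position_gain=2.5)  # strategic: no kills, better position
open_fire = Move(units_killed=3)           # tactical: measurable progress
```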
While these views have their differences, they overlap in some senses.
Strategy as a Human Element
Let's assume that a better strategy offers opportunities for more effective tactics, either through reduced resource expenditure, more available tactical options, or more effective progression towards your goal.
My interest is in describing what to do when you are faced with two relatively equal strategic options. Even if you know exactly what tactical superiority can be achieved, your decision as to which strategy to implement depends on non-measurable quantities.
One strategy might be the correct move to exploit a weakness of your specific opponent, which may induce him to make a mistake that only he would make. Actually, this may be measurable through previous experience with your opponent, or by having studied him.
Or, one strategy may simply have fewer variables associated with it. Familiarity with a strategy can make you more comfortable in implementing it. Actually, this too is a measurable advantage, as your mental energy is freed up, preserving resources.
These decisions require experience, both with your own style of play and with your opponent's. As computer games evolve, we see games becoming better and better at implementing strategies input by their creators. But what they still don't do well is change strategies during play, from one skirmish to the next or from one game to the next, based on assessments of wins and losses against particular opponents.
I won't say that these types of strategic decisions cannot be programmed into a computer. But they are, still, uniquely human experiences.
What Should an AI Do?
Computers can be programmed to learn about an opponent, from tactic to tactic, from game to game, to weigh previous battles and decide when they should learn from a victory or defeat to try something new. Computers can be programmed to examine an opponent's weaknesses to decide whether to play against them, or maybe even judge that that is what they are expecting and try something totally surprising.
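One way such adaptation could be sketched is a per-opponent record of which strategies have won or lost, biasing future choices towards the best performer while occasionally playing a deliberate surprise in case the opponent is expecting the obvious exploit. The class, strategy names, and weighting scheme below are all invented for illustration:

```python
import random
from collections import defaultdict

# Illustrative sketch: the AI tracks wins and losses per strategy against
# a particular opponent, usually exploits the best-scoring strategy, and
# sometimes picks a different one to stay unpredictable.

class OpponentModel:
    def __init__(self, strategies, surprise_rate=0.1, rng=None):
        self.strategies = list(strategies)
        self.record = defaultdict(lambda: [0, 0])  # strategy -> [wins, losses]
        self.surprise_rate = surprise_rate
        self.rng = rng or random.Random()

    def report(self, strategy, won):
        """Record the outcome of one game using `strategy`."""
        w, l = self.record[strategy]
        self.record[strategy] = [w + int(won), l + int(not won)]

    def score(self, strategy):
        """Laplace-smoothed win rate, so untried strategies score 0.5."""
        w, l = self.record[strategy]
        return (w + 1) / (w + l + 2)

    def choose(self):
        best = max(self.strategies, key=self.score)
        if self.rng.random() < self.surprise_rate:
            others = [s for s in self.strategies if s != best]
            return self.rng.choice(others) if others else best
        return best
```

After a few reported losses with one strategy, `choose()` drifts towards alternatives; the `surprise_rate` knob is the crude stand-in for "judge that this is what they are expecting and try something totally surprising".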
An AI with no ability to learn and adapt, or with glaring repetitions or weaknesses, is a boring AI.
However, an AI with no weaknesses is also a boring AI. AIs should not be perfect reactors to every situation. If they are, an entire facet of human strategy goes out the window. What is the fun if you can't exploit a weakness? If you have to choose between two strategies that are equal on paper, there is little reward or punishment for choosing either, because your opponent will handle both with aplomb. You expect your opponent to learn from being exploited, but not to have no weaknesses to exploit in the first place.
A perfect AI doesn't mean one that can't be fooled, however. Even a perfect AI can't (and shouldn't) know whether an advancing column is a feint or a main attack. It has to weigh risk against what it knows about you, while you have the same opportunity to do so about it.
One of Garry Kasparov's main arguments against the famous 1997 chess match was that the computer was fed enormous amounts of information about him, while he was not privy to information about the computer's previous matches. That gave the computer a decided strategic advantage.
Even if the game is solved, so long as both players aren't given enough time to exhaustively search the solution space, each needs to make strategic decisions against their particular opponent.