The Dutch and Norwegian navies are set to replace their submarine fleets and joint acquisition could be an option. Yet, as Bjorn-Olav Knutsen argues, cooperation is not without its challenges.
So, Britain goes to war. Ten hours of Parliamentary debate (including that speech by Hilary Benn) over whether or not to use forces that will be a marginal addition at best resulted in roughly two thirds of MPs voting yes. David Cameron’s strategy, if it is worth the word, appears to be a combination of the Underpants Gnome model (“Step 1: Use force, Step 2: … Step 3: Peace! Victory! Votes!”) and the Goldilocks approach to intervention (not enough to “win”, not enough to be irrelevant, just enough to make us beholden to events). From my perspective, the pitch of the Parliamentary debate, relative to what was actually at stake, was Sayre’s law in action, albeit with added high explosives. Professor Wallace Sayre’s original formulation, that “Academic politics is the most vicious and bitter form of politics, because the stakes are so low”, is all the more relevant since we were already using force against ISIS and committing ISR assets to the region.
So what changed? Or rather, what now? The problem, as I see it, is that we are now ultimately responsible for a civil war that doesn’t appear to have an acceptable end for anyone. Ending the Syrian civil war appears to be the top priority. Writing in The New York Times, Anatol Lieven argues that this will require working with Russia, and carving up both Syria and Iraq to a greater or lesser extent. I think he’s probably right, but it won’t end there, because, from my perspective, this option ends with complete and utter impunity for war crimes. Much is made of the need to put political pressure on Assad to make way, as this is a symbolic move that might allow the civil war to end. But what about the war crimes? Are we going to have a re-run of the ICTY in the Levant? If yes, please explain to me how we’re meant to make Assad give way, and convince his security forces and military to stop fighting. If no, I’m somewhat bemused that Parliament has managed to debate its way into a crusade for the common good and justice that is predicated on impunity.
(Editor’s note: Adam Elkus is a PhD student at George Mason University working on computation and strategy)
Recently, KCL’s Kenneth Payne published an article on the potential meaning of artificial intelligence for future strategy. Some of the complexities of tackling this science fiction-esque topic lie in the duality of AI itself as a scientific discipline. While many believe that AI is a discipline oriented around the engineering of synthetic intelligence, the field has also claimed that doing so will help us understand human (and other forms of) intelligence. For example, Herbert Simon and Allen Newell’s General Problem Solver was an endeavor with relevance for both AI and cognitive science. Simon and Newell derived the idea of means-end reasoning from a view of human problem solving and implemented it programmatically in a way that could be mechanized by a machine. The same holds true for artificial neural networks, which draw somewhat on ideas from computational neuroscience and much more on their engineering utility for problems in machine learning.
Payne and his co-author Kareem Ayoub focus in particular on the use of games and microworlds to develop AI systems:
More complex scenarios than Atari games are possible. Microworlds are abstract representations used by the military to assist in strategic decision-making. They have been used to conceptualise the terrain, force deployment, enemy responses and movements. The use of modular AI in this example domain allows users to create their own microworld simulation with its own rules of play and run limitless iterations of possible events. Jason Scholz and his colleagues found that a reinforcement-learning based AI outperformed human counterparts in these microworld wargames. Their ability to do this rested on two factors: (1) the machine could go through rounds much faster than a human counterpart, and (2) the machine could process every possible move simultaneously, providing previously unseen recommendations. Allowing that many military campaigns can be dimensionally reduced to microworlds – indeed many tabletop staff college exercises do precisely that – such an approach with modular AI proves valuable for rapid iteration of potential options.
A worthy addition to this observation, however, is that microworlds such as strategy games presume a certain view of human problem-solving behavior that is relatively new to strategic theory. [0] Consider the machine representation of chess, the most famous strategic game played by humans and computers. Like a zero-sum game in game theory, chess can be visualized via an extensive-form representation: a tree of moves branching down to payoff values at the terminal nodes.
The minimax algorithm visualizes strategy in terms of how both “min” and “max” players connect the initial moves to the payoff values in the terminal nodes at the bottom of the tree. The goal of min is to force the max player to the lowest payoff. Conversely, max would like to receive the highest payoff value. For a full explanation of minimax, readers are advised to consult the nearest friendly neighborhood game theorist, such as political scientist Phil Arena. [1] Yet despite the fact that zero-sum games in game theory and chess share the same basic representation and solution concept, they diverge in one peculiar way.
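For readers who prefer code to pictures, here is a minimal sketch of minimax in Python; the tree is a hand-built toy with the standard textbook payoffs rather than anything chess-like, so treat it as an illustration only:

```python
# Minimal minimax over a hand-built game tree (illustrative toy, not chess).
# A node is either a numeric payoff (terminal) or a list of child nodes.

def minimax(node, maximizing):
    if isinstance(node, (int, float)):   # terminal node: return its payoff
        return node
    values = [minimax(child, not maximizing) for child in node]
    # Max picks the largest attainable value, min the smallest.
    return max(values) if maximizing else min(values)

# A tiny two-ply tree: max moves first, then min replies.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree, maximizing=True))  # -> 3
```

The entire “strategy” here is exhausting the tree: every branch is followed down to its payoff before a move is chosen.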
In practice, however, a chess program does not build the full game tree; it expands only part of it. This is due to how chess must be represented on a machine: building the full game tree would be intractable given the sheer size of the game. Moreover, a chess program would not be able to reason about other games that lack chess’ peculiar characteristics. [2] Two methods that have been commonly used to explain how humans and machines deal with chess’ sheer complexity are knowledge representation and search:
Given the relatively slow rate at which moderately skilled players can generate analysis moves, estimated in Charness (1981b) to be about four moves per minute, it is obvious that much of the time that human players spend is not in generating all possible moves (perhaps taking a move per second) but in generating moves selectively and using complex evaluation functions to assess their value. Computer chess programs can achieve high-level play by searching many moves using fast, frugal evaluation processes that involve minimal chess knowledge to evaluate the terminal positions in search. Deep Blue, the chess program that defeated World Champion Garry Kasparov in a short match in 1997, searched hundreds of millions of positions per second. Today’s leading microcomputer chess programs, which have drawn matches with the best human players, have sophisticated search algorithms and attempt to use more chess knowledge but still generate hundreds of thousands or millions of chess moves per second. Generally, chess programs rely on search more heavily than knowledge; for humans it is the reverse. Yet, each can achieve very high performance levels because knowledge and search can trade off (Berliner & Ebeling, 1989).
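The trade-off described above, heavy search with a frugal evaluation function versus rich knowledge with little search, can be made concrete. The sketch below is my own illustration rather than Deep Blue’s actual algorithm: it cuts the search off at a fixed depth and falls back on a heuristic evaluation of the position, and both evaluate and children are placeholders for whatever game representation is actually in use:

```python
# Depth-limited minimax with a heuristic cutoff (illustrative sketch).
# `evaluate` stands in for a "fast, frugal" evaluation function;
# `children` generates successor states for a given position.

def depth_limited_minimax(state, depth, maximizing, evaluate, children):
    kids = children(state)
    if depth == 0 or not kids:   # out of depth, or no legal moves: use knowledge
        return evaluate(state)
    values = [depth_limited_minimax(k, depth - 1, not maximizing, evaluate, children)
              for k in kids]
    return max(values) if maximizing else min(values)
```

A program that can search very deeply gets away with a crude evaluate; a player with a tiny search budget needs evaluate to carry far more knowledge, which is the trade-off the quotation attributes to Berliner and Ebeling.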
Both knowledge and search, however, stem from the same fundamental way that old-school cognitive scientists and computer scientists define the problem of strategy, which is very different from how the strategic studies profession views it. First, note the representation of game states as a hierarchical tree that proceeds from the most abstract to the most primitive; it takes the whole game to move from the top of the tree down to the terminal nodes and the actual payoffs. Another example can be found in the way a hierarchical task network (HTN) planning algorithm in AI begins with composite tasks and breaks them down until it reaches simple actions, which roughly corresponds to the distinction between strategy and tactics familiar to Kings of War readers:
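A minimal sketch of such a decomposition, with purely hypothetical task names, might look like this:

```python
# A toy hierarchical task network: composite tasks decompose into subtasks
# until only primitive actions remain. All task names here are hypothetical.
HTN = {
    "win_campaign":    ["secure_airspace", "seize_objective"],      # strategy
    "secure_airspace": ["suppress_air_defences", "patrol_sector"],  # operations
    "seize_objective": ["move_to_contact", "assault_position"],     # tactics
}

def decompose(task, network):
    """Expand a task into the primitive actions at the leaves of the tree."""
    if task not in network:          # primitive action: nothing left to expand
        return [task]
    actions = []
    for subtask in network[task]:
        actions.extend(decompose(subtask, network))
    return actions

print(decompose("win_campaign", HTN))
# -> ['suppress_air_defences', 'patrol_sector', 'move_to_contact', 'assault_position']
```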
Computer-literate readers will also notice the similarity between this tree structure and the directory structure of a computer filesystem. [3]
Why does it make sense to view the world as a tree that moves from most general to most specific? This is an interesting topic about which a good deal of intellectual history has been written. Broadly speaking, it is not surprising that Cold War-era efforts to optimize hierarchically organized systems such as military bureaucracies produced a view of the world as a hierarchically decomposed tree. But that in and of itself does not fully explain the choice of representation. Ontologies, taxonomies, and other forms of hierarchical knowledge representation are common in science and philosophy. What computing did was make them dynamic processes: the interaction or composition of components produced behavior.
George Miller, the famous cognitive scientist, co-authored a book titled Plans and the Structure of Behavior. In contrast to behaviorist conceptions that did not envision much of an intermediary structure between stimulus and behavior, Miller and his counterparts in AI argued that the internal organization of cognition could tell us much about the outward manifestations of complex behaviors. Hence, it makes sense to study chess players in terms of how they organize their knowledge and search processes, as such internal representations could tell us much about how they are capable of producing complex strategies.
While deep neural networks are often viewed as oppositional to this broadly cognitivist view, this is not necessarily the case. [4] After all, one sees hierarchical representations (albeit defined quite differently) frequently in deep learning research. Hierarchical representations are key to recent research in reinforcement learning as well. And hierarchies also appear quite frequently both in AI work on evolving neural networks and in neuroscience research on computation in the brain. Finally, one should also note that the combination of hierarchy (differing levels of abstraction) and modularity (differing functions) appears to be one of the more interesting explanations for what ideas about animal behavior have in common with computing.
The consequences of this view are that the principal problems of strategy, seen computationally, lie in computational limitations.
The main problem for action selection is combinatorial complexity. Since all computation takes both time and space (in memory), agents cannot possibly consider every option available to them at every instant in time. Consequently, they must be biased, and constrain their search in some way. For AI, the question of action selection is: what is the best way to constrain this search? For biology and ethology, the question is: how do various types of animals constrain their search? Do all animals use the same approaches? Why do they use the ones they do? …. Ideally, action selection itself should also be able to learn and adapt, but there are many problems of combinatorial complexity and computational tractability that may require restricting the search space for learning.
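One crude way to picture that constraint, my own illustration rather than anything from the quoted source, is an agent that scores its options with a cheap heuristic and only deliberates seriously over the few most promising ones:

```python
import heapq

# Biased action selection (illustrative sketch): a cheap heuristic prunes the
# option space so that expensive deliberation only ever sees a few candidates.

def select_action(options, cheap_heuristic, deliberate, budget=3):
    # Keep only the `budget` most promising options by the cheap heuristic...
    shortlist = heapq.nlargest(budget, options, key=cheap_heuristic)
    # ...then spend real computation only on that shortlist.
    return max(shortlist, key=deliberate)
```

The bias lives in the heuristic and the budget: whatever they exclude, the agent will never consider, no matter how good it might have been.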
The core problem with a computational view of strategic behavior is that it views strategy in terms of the interface between an “outer environment” and an “inner environment.” If the inner environment of an artifact is well adapted to the outer environment that surrounds it, it will serve its purpose. In other words, if, say, the Department of Defense is able to configure its force structure and military operational concepts to meet the threat of X or Y adversary, its “inner environment” is well-adapted to realize the intended purpose of war and defense. This sort of view of strategy and defense underlies both systems analysis and net assessment, though net assessment is far more qualitative and eclectic. It also underlies the idea of ends, ways, and means held by many strategists – we must find the correct configuration of ways (actions) and means (resources) to meet the desired end. [5]
Let us contrast this to a more classical view of strategy, which would see strategy as the way in which a political community fulfills a desired purpose through the instrumental use of violence. Here, the problem is not really the combinatorial complexity of searching for a path to a goal or optimizing a utility function, but rather the difficult process of using social action to achieve a desired end. First, the end might be contested or ambiguously defined. As KCL PhD candidate Nick Prime and I noted, many strategic ends are essentially compromises and products of fractious politics. Second, what it means to fulfill that end is always fairly uncertain during the actual process of strategy formulation.
Quantitative criteria are useful for measuring the distance between intention and goal, but metrics of progress depend on highly subjective definitions not only of the goal but also of what it means to realize it. Defining the problem in Vietnam, for example, in terms of eradicating enemy infrastructure in South Vietnam presumes that the most important problem lies in Vietcong “shadow governments” that erode power and authority. This is a highly contestable view of the problem: a combination of targeted killings and the toll of the failed Tet Offensive wiped out enemy infrastructure inside South Vietnam, and we still failed to achieve our strategic goals.
Computation is likely a very useful model for thinking about strategy, especially (as Ayoub and Payne do) from a machine’s point of view. But it should also be observed just how alien this view is from the perspective of classical strategy, and recognized that no model is the territory. As a computer modeler, I never assume that any abstractions I build for coursework are anything but reductions of the “real” thing. [6] As computers become more and more present in strategy and command, we should keep these thoughts and the distinctions they suggest in mind. But is there any middle ground?
One meeting ground between the “system” view of strategy and the more humanistic view can be found potentially in the idea of “control” expressed by J.C. Wylie and others.
Control denotes the utility of strategy being found in the way in which an agent is able to manipulate the key features of the environment in a way that advantages the strategist and disadvantages the opponent. The classical view of computation and behavior in AI and cognitive science has been opposed by another set of views that de-emphasizes elaborate internal representation and emphasizes the way in which interaction with the environment produces intelligent behavior. [7]
The environment defines a relation between an environmental object and an organism that affords the organism the capability to perform a certain action. Control of the sea, for example, affords certain strategic capabilities that airpower and landpower do not, and vice versa. The simplest way of designing a mobile robot around its environment, for example, would be to start with basic behaviors (if X stimulus, perform Y action) and then use more complex control structures to inhibit or favor certain behaviors based on the situation. One behavior might be privileged over another even if they both correspond to the same environmental input. Hence, by changing the nature and pattern of the environment to your advantage, you in turn exert control over your opponent. If I am playing hide-and-seek with a TurtleBot, for example, I can thwart my Dalek-like adversary by re-arranging the topology of my apartment so as to frustrate it in numerous ways. [8]
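In code, the skeleton of such a behavior-based controller is surprisingly small. The sketch below is in the spirit of subsumption-style architectures; the behaviors and sensor names are hypothetical rather than drawn from any particular robot:

```python
# Behavior-based arbitration (illustrative sketch). Each behavior maps a raw
# percept to an action or to None; earlier behaviors inhibit later ones.

def avoid_obstacle(percept):
    return "turn_away" if percept.get("obstacle_close") else None

def seek_target(percept):
    return "move_toward_target" if percept.get("target_visible") else None

def wander(percept):
    return "wander"  # default behavior: always applicable

# Priority ordering does the "control": the same percept can trigger several
# behaviors, but only the highest-priority applicable one gets to act.
BEHAVIORS = [avoid_obstacle, seek_target, wander]

def act(percept):
    for behavior in BEHAVIORS:
        action = behavior(percept)
        if action is not None:
            return action

print(act({"obstacle_close": True, "target_visible": True}))  # -> 'turn_away'
```

Re-arranging the apartment works against a controller like this precisely because its behavior is a function of what the environment presents to it.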
Food for thought, certainly. Meanwhile I will continue to dump Golang code into my ParrotAR in the vain hope that I can engineer a taco copter to deliver me tacos while I do research. I at least know that robots can deliver coffee, which is a good start. I can live without tacos, but it’s hard to see how a PhD student can be “intelligent” without any coffee.
Adam Elkus is a PhD student in Computational Social Science at George Mason University and a 2015-2016 New America Foundation fellow in NAF’s Cybersecurity Initiative. He writes on strategy, technology, and other subjects while finding time to ponder how a drone can deliver tacos to his domicile.
[0] It is rather old in the social and behavioral sciences as well as other fields. See Margaret Boden’s Mind as Machine for a good history of the cognitive science view. Lawrence Freedman and Nils Gilman have aptly covered the social science literature.
[1] You can use Manhattan distance or some other metric to compute what is “near” in this statement.
[2] Chess and machines have a very old and interesting history. For more, see this handy overview of computer chess.
[3] This is a representation of the UNIX filesystem structure. See this article for an overview of the distinction between Linux and Windows filesystems. Linux and Mac OS X also differ in their interpretations of the basic UNIX structure. For more, see this and this.
[4] Connectionism (known as the Parallel Distributed Processing research program) in artificial intelligence and cognitive science is a different level of analysis. To see how the classical conception of AI and cogsci perceives mind, consult the physical symbol systems hypothesis.
[5] Indeed, Ends-Ways-Means can be viewed as a kind of organizational programming, as implied by Antulio Echevarria here and stated more bluntly by Christopher Paparone here.
[6] For a dense look at the philosophy of simulation, I recommend Manuel De Landa’s book on “synthetic reason.”
[7] It’s worth noting that the answer to understanding rationality probably lies in a combination of both. See this recent overview of new work in neuroscience and AI.
[8] There are two design strategies in AI, broadly. Make a simple organism that can be effective in a range of environments or build a highly brittle and complicated system for a well-defined environment. See Poole and Mackworth for more.