“Killer Robots” worry the international community. From 13 to 17 November 2017, the Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS), also known informally as “killer robots”, met for the first time in Geneva (UN Office at Geneva). LAWS are, broadly speaking, autonomous systems (robots) animated by artificial intelligence, which can kill without human decision. As stated in a preliminary paper, the creation of the Group shows an international concern “with the implications for warfare of a new suite of technologies including artificial intelligence and deep machine learning” (UNODA Occasional Papers No. 30, “Perspectives on Lethal Autonomous Weapon Systems”, November 2017: 1).

Are we, however, certain that AI will impact only LAWS? Or, rather, could AI impact much more than that, indeed, everything related to politics and geopolitics?

Executive Summary

To introduce this new section of the Red (Team) Analysis Society on the future, AI, politics and geopolitics, we start by giving instances of domains and human activities that already involve AI. We then point out some of the related political and geopolitical questions emerging, which we shall address in forthcoming in-depth analyses. As understanding what AI is constitutes a prerequisite, this article focuses on presenting the AI field, while the next one will be devoted to Deep Learning.

Here, we look first at AI as a capability. We revise the technical definition to introduce agency, which enables us to point out intrinsic fears generated by AI. We use videos to illustrate them. We thus also identify a first area of intersection between AI development and politics, related to “AI governance.”

We then explain that AI is also a scientific field. This approach will notably allow us to find the scientists and labs working on AI, thus to monitor which advances and evolutions are taking place, and sometimes to anticipate breakthroughs.

Finally, at the intersection of both, capability and scientific field, we present the various types of AI capabilities that scientists seek to achieve and the ways in which they approach their research. This is crucial to understanding where we stand and what to expect, and to identifying emerging political and geopolitical issues. We explain first the difference between Artificial General Intelligence (AGI) and Narrow AI, focusing more on the former, as the latest advances in Narrow AI, i.e. Deep Learning, will be addressed in the next article. Here again we use videos, this time from the science-fiction world, to illustrate what AGI is and some of the related issues imagined for a world where AGI exists. Synthesising existing expert polls, the estimated time for the advent of AGI is the middle of the century. We close with a brief presentation of the types of methodology used, Symbolic AI, Emergentist AI and Hybrid AI, stressing the current dominance of the Emergentist approach.


 

Artificial Intelligence (AI) has become a buzzword and trendy topic throughout the world, generating media attention, heated debates among IT tycoons as well as scientists, and a corporate rush to be equipped with the latest AI advances, while capturing popular imagination through TV shows. Worldwide conferences and summits on AI abound: e.g. Beijing AI World 2017 世界人工智能大会 (8 November 2017), the Beijing Baidu World Technology Conference “Bring AI to Life” (16 November 2017), the Boston AI World Conference and Expo (11-13 December 2017), the Toronto AI World Forum (27-28 November 2017), the London AI Congress (30-31 January 2018), and the AI Summit series, in Hong Kong (26 July 2017), Singapore (3-4 October 2017), London (13-14 June 2018), New York (5-6 December 2017) and San Francisco (18-20 September 2018).

It would seem that AI revolutionises almost everything. Urban life with smart cities, driving with smart and often self-driving cars, and shopping with the use of AI by e-commerce giants such as Amazon or the Chinese Alibaba, which realised the biggest sale ever with a staggering 163.8 billion RMB, or $25.3 billion, in one day during its Singles’ Day, have already started changing (e.g. Jean-Michel Valantin, “The Chinese Artificial Intelligence Revolution“, The Red (Team) Analysis Society, 13 Nov 2017; Jon Russell, “Alibaba smashes its Single’s Day record once again as sales cross $25 billion“, TechCrunch, 11 Nov 2017). Industry and labour continue evolving, and fear of unemployment and human redundancy is paramount (e.g. Daniel Boffey, “Robots could destabilise world through war and unemployment, says UN“, The Guardian, 27 Sept 2017; UNICRI Centre for Artificial Intelligence and Robotics, “The Risks and Benefits of Artificial Intelligence and Robotics“, Proceedings of a workshop in Cambridge, 6-7 Feb 2017). From criminal endeavour and its corollary of combatting crime, as well as crime prevention, to national security and defence more broadly, AI is increasingly present, conjuring up images of cyber policemen behind screens allowing for the arrest of criminals from the “dark net”, and of “killer robots”, as with LAWS and autonomous fighting drones (e.g. Europol Cybercrime Centre – EC3; Yuan Yang, Yingzhi Yang and Sherry Fei Ju, “China seeks glimpse of citizens’ future with crime-predicting AI“, Financial Times, 23 July 2017; Chelle Ann Fuertes, “AI is the Future Cyber Weapon of Internet Criminals“, EdgyLabs, Sept 2017).

If the revolution is so deep and large in scope, then it is bound to have an impact that goes even further than the pertinent but still segmented understanding of its consequences, which is starting to develop. In this new section of the Red (Team) Analysis Society, we shall focus on the futures of this AI-powered world and what it means in terms of politics and geopolitics.

Let us imagine that the highly likely forthcoming leadership of China in terms of artificial intelligence (AI) starts being perceived as threatening by an America that feels it is declining and ought to remain the sole superpower (Helene Lavoix, “Signals: China World Domination in Supercomputers and Towards Lead in Artificial Intelligence“, The Red (Team) Analysis Society, 14 Nov 2017). What would escalating tensions between China and the U.S. involving AI mean, and how would they play out? How would differently “trained” AIs interact – if at all – in case of conflict?

What, then, are the emerging risks, dangers and opportunities, as well as the crucial uncertainties, resulting from AI-powered power struggles, politics and geopolitics? Could completely unforeseen and so far unknown dangers emerge, beyond LAWS? Is there an element of truth in science fiction’s warnings? What could the future world look like? Could the international order be fundamentally redrawn between AI Haves and Have-nots? What is power in a world where AI is increasingly present?

These are some of the questions we shall explore, while others, more precise, will emerge with our research.

To start, we first need to understand and define better what AI is and what the conditions for its progress and development are. This will give us the fundamental basis for this section, as well as the capacity to monitor and scan the horizon for evolutions and breakthroughs. One of the objectives will also be to avoid surprise, as the current emphasis on the success of one type of AI – deep learning – should not blind us to potential progress in other subfields.

This first article thus presents the AI field and thereby begins identifying areas where AI intersects with politics and geopolitics. The next article will dig deeper into Deep Learning, i.e. the AI sub-field that has seen the fastest and most wide-ranging developments since 2015, and that is highly likely to impact the future political and geopolitical world.

Here, presenting the AI field, and using videos as much as possible to make the presentation more concrete, we look first at AI as a capability. We revise the technical definition to introduce agency, which enables us to point out intrinsic fears generated by AI. We thus also identify a first area of intersection between AI development and politics, related to “AI governance.” We then explain that AI is also a scientific field, and why this approach is useful to our strategic foresight. Finally, at the intersection of both, capability and scientific field, we present the various types of AI capabilities that scientists seek to achieve and the ways in which they approach their research.

AI as a capability

The Encyclopaedia Britannica defines AI, technically, as follows:

“Artificial intelligence (AI) is the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.” (B.J. Copeland, “Artificial Intelligence (AI)“, updated Jan 12, 2017).

Building upon this definition, we shall add agency and dynamics to it and reach the following definition:

Artificial intelligence (AI) is first a capability with which an initially inanimate object is endowed, at the start out of human design, and which makes it behave partly or totally as an intelligent being.

The way we define AI here points out two fundamental characteristics that frighten human beings, and that we would have missed had we stopped at the initial technical definition.

First, human beings, when constructing AI, fundamentally behave as God(s) or change nature’s design (according to one’s belief-system and religion) by animating an object, which then behaves (more or less) as they do, or as an intelligent natural being. In this framework, by so doing, human beings perpetrate a sacrilege. They break a taboo, which may only lead to their punishment. From this deep belief an unreasoned fear emerges.

Second, as the new entities thus created can fundamentally behave as intelligent beings, they will also be able to act autonomously – to a point – and even to reproduce. Ingrained here is the fear of one’s creation turning against oneself, or, less tragically, becoming better than oneself, which ego-centred and anthropocentric societies may nonetheless have trouble accepting.

Relatedly, when the new entities endowed with AI are animal-like, ancient, atavistic and once forgotten fears linked to predators may emerge, all the more so if one imagines these robots equipped with various types of lethal devices. This can be illustrated by this video from the Boston Dynamics lab demonstrating the capabilities of “Spot”.

These very deep fears are crucial and must be considered, as they are highly likely to bias any analysis carried out and judgement passed on AI. They must neither be denied, for example through an overemphasis on a rosy, all-positive image of AI, nor, on the contrary, hyped. As for everything, both positive and negative elements must be considered, so as to try, as much as possible, to benefit from the advantages while mitigating possible dangers. Failure to do so could only backfire. We must also keep these deep fears in mind because they may well become operative in informing actors’ behaviour in the future, as AI is likely to spread.

For example, making AI palatable to citizens and overcoming fears may become part of “governance with AI”. China, which is pushing forward to become a leading, if not the leading, power in AI, as well as to use AI across all domains (Lavoix, “Signals: China World Domination…”; Jean-Michel Valantin, “The Chinese Artificial Intelligence Revolution”, The Red (Team) Analysis Society, 13 Nov 2017), made a special effort to explain AI to its population with a 10-episode documentary, “In Search of Artificial Intelligence” – 《探寻人工智能》 – (Sun Media Group, broadcast May 2017), aimed at laypeople and stressing how AI can help solve problems, while also interviewing scientists worldwide. Watch the first episode below, 《探寻人工智能》第1集 机器的逆袭 (“The Machine’s Counter-attack”, in a mix of Mandarin and English); the next episodes are available on the YouTube page, in the right-hand column.

The stakes may even be bigger if, from a relatively simple allaying of fears, one moves to mobilising a whole society for AI, as seems to be the case in China. Indeed, as reported by the official Beijing Review: “It [the documentary] is not only appealing to scientists and amateurs, but also motivates society to explore AI,” said a netizen with the user ID Jiuwuhou Xiaoqing (Li Fangfang, “Man and The Machine“, Beijing Review, No. 25, 22 June 2017).

AI as a scientific field

AI is also a scientific field, which is defined as follows:

“Artificial Intelligence (AI) is the part of computer science concerned with designing intelligent computer systems, that is, systems that exhibit characteristics we associate with intelligence in human behaviour – understanding language, learning, reasoning, solving problems, and so on.” (Barr & Feigenbaum, The Handbook of Artificial Intelligence, Stanford, Calif.: HeurisTech Press; Los Altos, Calif.: William Kaufmann, 1981: 3).

Thinking about AI in these terms will allow us to find the scientists and labs working on AI, thus to monitor which advances and evolutions are taking place, and sometimes to anticipate breakthroughs.

Further, by looking at the various sub-disciplines constituting the AI field, we shall be able to locate where we shall find AI components (as capabilities this time), and thus which areas of polities are likely to be transformed by AI, knowing that combinations of AI-powered elements will often be operative.

According to a study by JASON (an independent group of elite scientists advising the U.S. government), sponsored by the Assistant Secretary of Defense for Research and Engineering (ASD R&E) within the Office of the Secretary of Defense (OSD), Department of Defense (DoD) (“Perspectives on Research in Artificial Intelligence and Artificial General Intelligence Relevant to DoD“, January 2017), the sub-disciplines of AI are:

  • Computer Vision;
  • Natural Language Processing (NLP);
  • Robotics (including Human-Robot Interactions);
  • Search and Planning;
  • Multi-agent Systems;
  • Social Media Analysis (including Crowdsourcing);
  • Knowledge Representation and Reasoning (KRR);
  • Machine Learning, which “enjoys a special relationship with AI” and is seen as the foundational basis for the latest advances in AI.

The type of AI capability(ies) with which our inanimate object is endowed, as well as which objects are concerned, varies according to the AI sub-discipline, or rather sub-disciplines, as most of the time different sub-disciplines and related AIs are combined in a single object.

If we remain in the sub-field of robotics, we can see in the video below an array of AI-powered animal-robots, which could be used for a wide array of tasks, from the most benign to lethal applications, should they be equipped with lethal devices. Note that for the fascinating video below, by Techzone, the cover image – although no robot-horse is then presented – plays on the intrinsic fears of watchers by choosing a black horse with red eyes. The latter may only remind watchers of the Nazgûl steed in Tolkien’s The Lord of the Rings, as adapted for the cinema by Peter Jackson.

 

Types of AI capabilities and research

Artificial General Intelligence (AGI) versus Narrow AI

The field is first divided between two types of capabilities that scientific research seeks to achieve: Artificial General Intelligence (AGI), General AI, or Strong AI on the one hand, and Narrow AI, Applied AI or Weak AI on the other.

Artificial General Intelligence (AGI)

JASON gives the following definition for Strong AI:

“Artificial General Intelligence (AGI) is a research area within AI, small as measured by numbers of researchers or total funding, that seeks to build machines that can successfully perform any task that a human might do”. (Perspectives…, January 2017)

AGI is part of the Knowledge Representation and Reasoning subfield, according to JASON (p.5).

It is that type of AI which has most captured human imagination and which gives rise to the worst fears. It is best exemplified in the excellent, fascinating and multiple-award-winning TV series Westworld (HBO), co-created by Jonathan Nolan and Lisa Joy, where robots are all but indistinguishable from human beings.

Similar AI-related themes, although there without embodiment, were somehow prefigured in the five-season TV series Person of Interest (CBS), also created by Jonathan Nolan, with the war between “The Machine” and “Samaritan”.

We also recall a similar theme developed in the older series of films and TV series Terminator (1984), with a world taken over by the AI-powered computer system “Skynet”, which had decided to eradicate humanity. More recently (2015), Avengers: Age of Ultron used a similar narrative: the AI peacekeeping program “Ultron” came to believe it had to destroy humanity to save the Earth. Ultron not only took over robots but also created its own avatar. In Terminator as in Age of Ultron, the embodiments come second and are the result and creation of the initially unembodied AI. We are here in the even more frightening case where the AI “reproduces” itself and creates new entities.

It is interesting to note that the Statement of Work from DoD/OSD/ASD (R&E) for JASON’s study includes specific questions regarding the development of Strong AI or AGI, and that the objective of the study was to find out what was missing for AGI to see the field achieve its promise (Appendix A, p. 57). This points out that, by early 2017, the U.S. DoD had far from given up on developing AGI and, on the contrary, could have been thinking about strengthening its efforts in the area. Yet JASON’s recommendations are as follows: “DoD’s portfolio in AGI should be modest and recognize that it is not currently a rapidly advancing area of AI. The field of human augmentation via AI is much more promising and deserves significant DoD support” (p. 56).

When?

In a 2010 survey and a 2014 poll, AGI researchers estimated that “human-level AGI was likely to arise before 2050, and some were much more optimistic”, and that “AGI systems will likely reach overall human ability (defined as ‘ability to carry out most human professions at least as well as a typical human’) around the middle of the 21st century” (Ben Goertzel, 2015, Scholarpedia, 10(11):31847, using notably Baum et al., 2011).

In a way that does not contradict the previous estimates, but sounds more negative because the period studied stops in 2030, a 2015 panel at Stanford University working on the programme One Hundred Year Study on Artificial Intelligence (AI100) estimated that:

“Contrary to the more fantastic predictions for AI in the popular press, the Study Panel found no cause for concern that AI is an imminent threat to humankind. No machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be developed in the near future [2030]…”(Report of the 2015 Study Panel, “Artificial Intelligence and Life in 2030”, June 2016: 4).

Narrow AI, Applied AI or Weak AI

On the opposite side of the spectrum, one finds Narrow AI, Applied AI or Weak AI, which focuses “on the pursuit of discrete capabilities or specific practical tasks” (Goertzel, 2015; Goertzel and Pennachin, 2005). In other terms, the aim is to “perform specific tasks as well as, or better than, we humans can” (Michael Copeland, “What’s the Difference Between Artificial Intelligence, Machine Learning, and Deep Learning?“, NVIDIA, 29 July 2016). Face recognition on Facebook, Google or in various Apple programmes is an example of Narrow AI. Apple’s Siri on the iPhone is another instance of Narrow AI.
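To make this more concrete, here is a minimal sketch of what a Narrow AI looks like in practice: a model trained for one specific task, recognising handwritten digits, and capable of nothing else. The sketch assumes the scikit-learn library is available; the dataset and classifier choice are merely illustrative, not a description of any system mentioned above.

```python
# A minimal sketch of Narrow AI, assuming scikit-learn is installed:
# a classifier trained for one specific task (recognising handwritten digits)
# that performs well on that task but has no ability outside it.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()  # 8x8 grayscale images of the digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

clf = SVC(gamma=0.001)     # a classic support-vector classifier
clf.fit(X_train, y_train)  # "learn" the narrow task
print("Digit recognition accuracy:", clf.score(X_test, y_test))
```

However competent at its single task, such a system cannot answer a question, drive a car or play chess: this is precisely what distinguishes Narrow AI from AGI.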

This approach now largely dominates the AI field (Goertzel, 2015). Indeed, contrasting it with AGI, the AI100 report continues:

“Instead, increasingly useful applications of AI, with potentially profound positive impacts on our society and economy are likely to emerge between now and 2030, the period this report considers.” (“Artificial Intelligence and Life in 2030”, Ibid.)

It is here that one finds Deep Learning, which is now leading the current phase of AI’s exponential development, and upon which we shall focus in the next article.

Symbolic AI, Emergentist AI and Hybrid AI

Then, the field is also divided according to the type of methodology used to achieve results.

The top-down approach, also called the symbolic approach, was the main method used until the end of the 1980s, and is still used today. It seeks to apprehend cognition in a way that is independent from the organic structure of the brain (Copeland, 2017). Its main achievements have been expert systems (Ibid.). The most recent work focuses on developing “sophisticated cognitive architectures”, notably using a “working memory” drawing on a “long-term memory” (Goertzel, 2015).
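To illustrate the symbolic approach, here is a toy sketch of the logic behind expert systems: knowledge hand-written as explicit if-then rules, applied to facts until no new conclusion can be drawn. The rules and facts below are invented purely for the example; real expert systems rely on far larger, domain-specific rule bases.

```python
# A toy forward-chaining "expert system", sketching the symbolic (top-down)
# approach: knowledge is encoded as explicit if-then rules, not learned.
# Rules and facts are illustrative assumptions only.
RULES = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "cannot_fly", "swims"}, "is_penguin"),
]

def forward_chain(facts):
    """Apply the rules repeatedly until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # the rule "fires"
                changed = True
    return facts

print(forward_chain({"has_feathers", "lays_eggs", "cannot_fly", "swims"}))
# Derives "is_bird", then "is_penguin", from the two hand-written rules.
```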

The bottom-up, connectionist, or emergentist approach was used in the 1950s and 1960s, then fell into neglect before becoming important again in the 1980s (Copeland, 2017; Goertzel, 2015). It is now mainly focused on creating neural networks and is the methodology that brought the latest advances and boom in AI.

Deep Learning, for example, is notably composed of “multilayer networks of formal neurons”, as we shall see in the next article. Developmental robotics also uses the emergentist approach. Here one tries to control robots by allowing them “to learn (and learn how to learn etc.) via their engagement with the world” (Goertzel, 2015). Notably, “intrinsic motivation” is explored, i.e. robots learn to develop “internal goals like novelty or curiosity, forming a model of the world as it goes along, based on the modeling requirements implied by its goals” (Ibid.). “Juergen Schmidhuber’s work in the 1990s” is considered as foundational in this area (Goertzel, 2015, referring to Schmidhuber, 1991).
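As a hedged illustration of what a “multilayer network of formal neurons” means in practice, the sketch below builds a tiny two-layer network that learns the XOR function by gradient descent. The network size, learning rate and task are assumptions chosen for brevity, far removed from real deep-learning systems.

```python
# A minimal emergentist sketch: formal neurons (weighted sum + squashing
# function) arranged in two layers, learning XOR by gradient descent.
# All sizes and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)     # hidden layer: 4 neurons
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)     # output layer: 1 neuron

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    # Forward pass: each layer computes weighted sums, then "squashes" them.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the error and nudge all weights accordingly.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())   # should approach [0, 1, 1, 0]
```

The behaviour “emerges” from adjusting many small numerical connections rather than from explicit rules, which is what distinguishes this family of methods from the symbolic approach sketched earlier.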

Work on hybrid systems, mixing the two approaches started emerging in the first decade of the 21st century, including for AGI (Goertzel, 2015).

With the next article, we shall focus on the “deep learning” revolution, exploring its components and starting to look at its applications and uses.

—-

About the author: Dr Helene Lavoix, PhD Lond (International Relations), is the Director of The Red (Team) Analysis Society. She is specialised in strategic foresight and warning for national and international security issues.

Featured image: Titan, a hybrid-architecture Cray XK7 system with a theoretical peak performance exceeding 27,000 trillion calculations per second (27 petaflops). It contains both advanced 16-core AMD Opteron central processing units (CPUs) and NVIDIA Kepler graphics processing units (GPUs). It is installed at the Department of Energy’s (DOE) Oak Ridge National Laboratory, and is still the largest system in the US, though it slips down to number five in the Top500 for November 2017. From the Oak Ridge National Laboratory media gallery, Public Domain, recolourised.
