An overview of AI
Scholars and policymakers struggle to define AI concretely, even at the European level. It has been defined as a complex system that displays intelligent behavior by analyzing its environment, or as machines or agents that observe, learn, and take appropriate action based on experience gained along the way. Overall, AI can be divided into three different classes or branches:
- The most basic is Artificial Narrow Intelligence (ANI), which is designed for specific and limited tasks (e.g. Google Translate algorithms, facial recognition technology);
- The second, not yet achieved, is Artificial General Intelligence (AGI), which would transfer knowledge from one domain to another, as a human brain does;
- The third, i.e. Artificial Super Intelligence (ASI), would exceed the capacity of the human brain. It is difficult to say how such a complex system would operate, or whether it would be a boon or a threat to mankind. Luckily (or unluckily), we are still very far from fully developing this technology.
To make the most of this development, new rules are necessary to guarantee principles and rights, specifically for the European community. To keep pace with the rapid advances of the United States of America and China, the old continent should respond by setting two objectives: first, to craft a favorable environment for AI investment; second, to establish itself as a quality brand for AI. Europe has a great chance to lay down global standards that secure a high level of welfare for Europeans and others, defining the rules for a stable and broad acceptance of AI. Understanding and analyzing AI is fundamental to meeting the challenges of the future. AI will change many aspects of our lives and bring important benefits to society and the economy through better healthcare, more efficient public administration, and safer transport.
AI and military worldwide
These developments will also profoundly affect the defense sector, both in weaponry and in operations. So-called Unmanned Aerial Vehicles (UAVs) can be widely used for security purposes, offensive or defensive actions, civil protection, disaster relief, or border control. It is not hard to see why AI could represent a great opportunity as well as a great danger. While reducing human oversight may genuinely accelerate the military decision-making process, that same absence of oversight could undermine human dignity and ethical conduct. In fact, AI already has various military uses among countries and non-state actors. For instance, in April 2017 the US launched a new project (i.e. Project Maven), in partnership with Google, to develop computer algorithms that help military and civilian analysts cope with the enormous volume of data. On the other hand, non-state actors such as Hamas, Hezbollah, Islamic State or the Houthis are already using this kind of technology for intelligence, surveillance, reconnaissance, and offensive purposes. For example, ISIS mounted high-definition cameras on drones to acquire information on the ground. Thanks to this air capability, during the battle of Mosul in 2017 Daesh was able to kill up to 30 Iraqi soldiers in just one week. UAVs would give terrorist groups the opportunity to plan mass attacks or targeted assassinations with no human sacrifice of their own.
A boost for the EU’s defense?
AI might represent an opportunity for the EU to boost its Common Security and Defence Policy (CSDP), especially in risk detection, in protection and preparation capabilities, and in the improvement of the EU's defense production capacities. For instance, AI could help the Union to collect data across large geographical areas; to better anticipate crises in certain regions and enable early action; to improve border assistance initiatives, using new techniques such as biometrics and facial recognition to handle customs checks and monitoring; to categorise or evaluate policy options; and to help the Political and Security Committee (PSC) plan a better allocation of resources and capabilities during CSDP initiatives. In addition, the application of AI to training and exercises, in both civilian and military operations, should not be underestimated: it could be used to create training simulations that prepare European personnel to anticipate the likely behavior of individual adversaries and terrorist groups.
The European Union also needs an ambitious AI strategy in the military field if it wishes to remain a relevant player. Of course, applying AI-enabled systems to CSDP initiatives might lead to unintended consequences, especially moral and legal ones. These concerns regard the ethical implications of using autonomous (or quasi-autonomous) systems for the identification, tracking and targeting of individuals, and the level of human control over AI systems. To address these questions about the morality of AI, it is necessary to build systems that are trustworthy, including the processes and people behind the technology.
In September 2017, Russian President Vladimir Putin declared that "whoever becomes a leader in this sphere (i.e. Artificial Intelligence), will be the ruler of the world". Warfare will be shaped by autonomy and autonomous weapons systems, as well as by the fusion of soldiers with new types of machinery. It is highly likely that countries and other entities, such as non-state actors, will use AI in warfare. It is important not to treat AI as something in the distant future, but as a present issue that should be regulated, as recent episodes between the US and Iran have shown. AI is more than science fiction.