The field of artificial intelligence (AI) has seen significant advances over the past few years, in areas such as smart vehicles and smart buildings, medical robots, communications, and intelligent education systems. These advances are expected to have implications in several policy areas (economic, societal, educational, etc.), and governments around the world are increasingly considering them. In October 2016, the US National Science and Technology Council released a report on Preparing for the Future of Artificial Intelligence and a National Artificial Intelligence Research and Development Strategic Plan. In the UK, the parliamentary Committee on Science and Technology published a Report on Robotics and Artificial Intelligence. Earlier this year, the Committee on Legal Affairs in the European Parliament (EP) released a draft Report with recommendations to the Commission on Civil Law Rules on Robotics (expected to be discussed in plenary in January 2017). The following main policy issues are covered in these four documents.
Economic and social
AI has significant potential to lead to economic growth. Used in production processes, AI systems bring automation, making processes smarter, faster, and cheaper, and thus delivering savings and increased efficiency. AI can improve the efficiency and quality of existing products and services, and also generate new ones, leading to the creation of new markets. For this potential to be fully realised, there is a need to ensure that the economic benefits of AI are broadly shared across society, and that possible negative implications are adequately addressed.
One such possible implication relates to the disruption that AI systems could bring to the labour market. Concerns are raised that automated systems will make some jobs obsolete and lead to unemployment. There are also opposing views, according to which AI advances will generate new jobs that compensate for those lost, leaving overall employment rates unaffected. All the analysed documents call for further monitoring of job trends, to better understand the real risks and opportunities brought by AI. The EP draft report goes one step further, looking into the implications of AI for the viability of social security systems.
One common point in all four documents is the need to better adapt education and training systems to new digital skills requirements. The rapid growth of AI generates an increasing need for individuals equipped with the skills not only to make use of AI technologies, but also to contribute to their development. The US reports outline the need for actions aimed at increasing ‘the size, quality, and diversity of the workforce’ in AI. The UK report calls for more governmental commitment to addressing the broader digital skills crisis, and emphasises that adapting the workforce to AI requirements means not only preparing the new generations, but also allowing the current workforce to re-skill and up-skill. Calls are also made for governments to actively encourage more gender and racial diversity within the AI workforce.
Safety and security
AI applications in the physical world (e.g. in transportation) bring into focus issues related to human safety, and the need to design systems that can react properly to unforeseen situations, with minimal unintended consequences. AI also has implications for cybersecurity. On the one hand, there are cybersecurity risks specific to AI systems: as AI is increasingly embedded in critical systems, these need to be secured against potential cyber-attacks. On the other hand, AI has applications in cybersecurity, and the US reports point in particular to the fact that such applications are expected to play an increasingly important role in both defensive and offensive cyber measures. AI is, for example, used in email applications to perform spam filtering, but it is also increasingly employed in applications aimed at detecting more serious cybersecurity vulnerabilities and addressing cyber-threats. The use of AI in weapon systems is another aspect tackled in both the US and UK reports. They note that autonomous weapons, just like conventional ones, must adhere to international humanitarian law.
Privacy and data protection
AI systems work with enormous amounts of data, and this raises concerns regarding privacy and data protection. The US reports note that AI applications need to ensure the integrity of the data they employ, as well as protect privacy and confidentiality. The UK report outlines that anonymisation and re-use of data are two key aspects that need to be addressed, and that difficulties associated with balancing privacy, anonymisation, security, and public benefit need to be further explored. The EP draft report points to the fact that any EU policy in the field of AI should embed privacy and data protection guarantees, in line with the principles of necessity and proportionality. It also calls for the development of standards for the concepts of privacy by design, privacy by default, informed consent, and encryption in AI systems.
Ethics
As AI systems involve judgement and decision-making – replacing similar human processes – concerns have been raised regarding ethics, fairness, justice, transparency, and accountability. The risk of discrimination and bias in decisions made by AI systems is one such concern. One way of addressing some of these concerns, the US reports point out, is to combine ethical training for AI practitioners with the development of technical methods for designing AI systems so that they avoid such risks (i.e. fairness, transparency, and accountability by design). Although there seems to be a general understanding of the need for algorithms and architectures to be verifiably consistent with existing laws, social norms, and ethics, achieving this may prove challenging, since ethical standards vary across cultures, religions, and beliefs.
Research and development
Recommendations are made in all the reports for governments to actively support research and innovation in the field of AI, and to increase related funding. The US National AI Research and Development Strategic Plan outlines concrete areas for government-funded research, especially in fields that industry is less likely to address, such as: effective methods for human-AI collaboration; understanding the ethical, legal, and societal implications of AI; safety and security of AI systems; AI standards; and general-purpose AI (‘systems that exhibit the flexibility and versatility of human intelligence in a broad range of domains’). The UK report calls for the adoption of a national strategy setting out the ‘government’s ambition and financial support’ for AI. The EP draft report notes that research should focus on exploring the risks and opportunities of AI technologies.
Intellectual property rights (IPR)
IPR issues are briefly tackled in the US and EP documents. The EP draft report calls for a balanced approach to IPR when applied to hardware and software standards, and outlines the need for codes that both protect and foster innovation. The US documents, on the other hand, emphasise the advantages of the increased availability of open-source software libraries and toolkits giving developers access to cutting-edge AI technologies, and note that the government should encourage the adoption of open AI resources.
Regulation and liability
One overarching question is whether AI-related challenges (especially regarding safety, privacy and data protection, and ethics) call for new legal and regulatory frameworks, or whether existing ones can be adapted to address them. The US reports underline that ‘the approach to regulation of AI-enabled products[…] should be informed by assessment of the aspects of risk that the addition of AI may reduce, alongside the aspects of risks it may increase’. Adapting current regulation is seen as the most suitable approach for the time being. A similar conclusion is reached in the UK report, which notes that ‘it is too soon to set down sector-wide regulation’ for AI. Both the US and UK documents note that, when considering regulatory approaches to AI, attention should be paid to ensuring that they do not hinder innovation and progress. Additionally, the UK report calls for the creation of a Commission on AI, tasked with identifying principles for the development and application of AI, providing advice to the government, and fostering public dialogue.
The EP draft report also acknowledges that existing legal regimes and doctrines can be applied to robotics, but notes that ‘the current legal framework would not be sufficient to cover the damage caused by the new generation of robots’. It therefore calls for an EU directive on civil law rules on robotics, as well as for a guiding ethical framework for the design, production, and use of robots. It further suggests the creation of a European Agency for robotics and AI, to provide technical, ethical, and regulatory expertise in support of the EU and its Member States.
Aspects related to accountability and liability in AI systems are also viewed as important legal issues to consider. As pointed out in the UK report, the question is ‘if something goes wrong, who is responsible?’ (e.g. in the case of automated vehicles, is it the manufacturer, the software developer, or the owner of the vehicle?). This question raises issues of civil, and even criminal, liability, and there is a need for further discussion on whether such issues should be tackled in the courts, or whether new legislation is needed. The EP draft report even considers the question of how AI machines could be held responsible for their acts or omissions, and explores aspects related to the legal status of AI machines (i.e. robots): should they be regarded as natural persons, legal persons, animals or objects, or should a new category be created?
International cooperation
As the US reports note, the policy implications of AI have also attracted the attention of intergovernmental organisations such as the United Nations, the G7, and the Organisation for Economic Co-operation and Development. Governments seem increasingly to believe that AI would benefit from international cooperation in promoting research and development, and in identifying suitable responses to related challenges. In the USA, a recommendation is made for the government to develop a strategy on international engagement related to AI, and to cooperate with other stakeholders in developing AI standards. The EP draft report calls for the international harmonisation of technical standards, mainly to avoid the risk of market fragmentation and to address consumer concerns in a uniform manner. It also encourages international cooperation in setting regulatory standards, under the auspices of the UN.