Since Homo sapiens evolved from earlier primates c. 300,000 years ago, we have constantly improved technology to serve our needs more efficiently and with less effort. This has allowed us to focus on advancing technology even further, resulting in exponential growth. Moore’s law (more a projection than a law) predicted that processing power would double roughly every two years; this remains approximately true even today. Our descendants in another 300,000 years will benefit from technology that we cannot even fathom today; they will probably think of us as unsophisticated primates engaging in tasks that could have been automated.
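The scale of that exponential growth is easy to underestimate. As a back-of-the-envelope illustration (the function name is ours, purely for demonstration), doubling every two years compounds to roughly a thousandfold increase in just two decades:

```python
# Illustrative only: if processing power doubles every 2 years,
# the growth factor over N years is 2 ** (N / 2).
def moores_law_factor(years, doubling_period=2):
    return 2 ** (years / doubling_period)

print(moores_law_factor(20))  # 1024.0 -> a ~1000x increase in twenty years
```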
It is reasonable to expect that much of this progress will come from artificial intelligence and robotics, and from our ability to diffuse their benefits effectively to most people. Providing remote healthcare through robotics and AI to populations who lack access to it is a prime example of how this technology can reach otherwise inaccessible places. As AI and robotics continue to develop, surgeons will be able to perform operations remotely, and AI systems will be able to guide healthcare staff through the rest. As innovation makes this technology more accessible, our quality of life will improve.
In a world where all basic human needs are taken care of and all repeatable tasks are automated, humans can use the natural wonder of consciousness to give direction to the machines they employ. Once we reach that level, people will no longer compete with each other to amass wealth, since robots and virtual reality will be able to serve all basic and recreational needs. Technological advancement started out with individual contributions, such as Ada Lovelace’s conceptual framework for a programmable machine and Alan Turing’s ideas in computing and AI. As technology progressed, innovation shifted from individual contributions to collaboration and iterative improvement of systems, such as the internet. The EU is introducing digital innovation hubs to communicate ideas already developed and to encourage participation in developing new ones.
We know that we are more intelligent than the other species that walk the earth, but our intelligence is limited, and we do not know what lies beyond that limit. Although it is impossible with current technology, we must be aware that artificial intelligence may one day exceed our intelligence, giving an advantage to its owners; developers should therefore act ethically in creating these systems and ensure that the systems they create do not inhibit human freedom.
Another concern is privacy. AI systems built on big data may be used maliciously to match anonymized data with individuals, creating datasets that can guide targeted campaigns to affect social structures, e.g. the democratic process. Data may also be obtained and/or kept illicitly. The European Commission is a pioneer regulator in this field; it has already taken steps against malpractice by introducing the General Data Protection Regulation (GDPR). The GDPR aims to protect users’ privacy and applies to any situation where data about individuals is held, including AI systems.
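The re-identification risk is worth making concrete. A minimal sketch (all names and records are hypothetical) shows how a dataset stripped of names can still be linked back to individuals by joining on quasi-identifiers such as postcode and birth year:

```python
# Hypothetical data: an "anonymized" health dataset and a public register.
anonymized_health = [
    {"zip": "1010", "birth_year": 1984, "diagnosis": "asthma"},
    {"zip": "2020", "birth_year": 1990, "diagnosis": "diabetes"},
]
public_register = [
    {"name": "Alice Example", "zip": "1010", "birth_year": 1984},
    {"name": "Bob Example", "zip": "2020", "birth_year": 1990},
]

def reidentify(anon_rows, known_rows):
    """Link records that share the same quasi-identifiers (zip, birth_year)."""
    matches = []
    for anon in anon_rows:
        for known in known_rows:
            if (anon["zip"], anon["birth_year"]) == (known["zip"], known["birth_year"]):
                matches.append({"name": known["name"], "diagnosis": anon["diagnosis"]})
    return matches

print(reidentify(anonymized_health, public_register))
# Each unique (zip, birth_year) pair links a name to a diagnosis.
```

This is exactly why the GDPR treats linkable data as personal data: removing names alone is rarely sufficient anonymization.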
Artificial intelligence also raises concerns about the allocation of resources, since corporations will require fewer human resources to produce their end product. Governments need to ensure that the distribution of wealth is fair. The Commission is already engaged in studies to assess the impact that AI and digital technologies will have on the population, and is already taking steps to ensure that all members of society can access basic services and have a basic income.
The Commission is aware that the EU may be lagging behind the US and China in artificial intelligence. To encourage development, the European Commission has issued a number of communications that set out policies to:
- Create a roadmap for collaboration between countries and adopt legislation to improve data sharing.
- Create a European approach to artificial intelligence, increase investment in AI, and use part of the funds to create an open “AI4EU” platform that makes AI accessible to all. The goal is to generate more than €20 billion of investment over the next decade.
- Build trust in human-centric artificial intelligence by putting forward “guidelines for trustworthy AI”.
The Commission issued a press release on 8 April 2019 putting forward “guidelines for trustworthy AI”, encouraging developers, suppliers and users of AI to apply these guidelines, which build on existing regulation.
The guidelines are built on the following prerequisites: AI applications should (1) comply with the law, (2) fulfil ethical principles and (3) be robust.
These guidelines establish seven key requirements:
- Human agency and oversight: AI-based systems should have appropriate control measures, including adaptability, accuracy and explainability. This should protect users’ fundamental rights and neither misguide users nor decrease or limit human autonomy.
- Technical robustness and safety: AI applications should be resilient against both overt attacks and subtler manipulation of their data. This requirement is designed to protect users against decisions made by the algorithm, and calls for developers to ensure they have fall-back plans in case of problems.
- Privacy and data governance: Individuals should have control over their data. Data collected should not be used to harm or discriminate against the user. Application developers should ensure that data is protected at every stage in the lifecycle of the AI application.
- Transparency: Users should be aware that they are interacting with an AI system, and should know the system’s capabilities and limitations. Developers should also keep logs of the decisions taken and the steps that led to those decisions.
- Diversity, non-discrimination and fairness: AI systems should be built on datasets that are truly representative of the population they serve. This is aimed at preventing algorithms, like the one Amazon used for recruiting, from being biased against a social cohort. Establishing good governance models and training the algorithm correctly should help eliminate biases.
- Societal and environmental well-being: The impact on society should be monitored closely, especially in situations where automated decisions affect society as a whole, such as the democratic process.
- Accountability: AI systems should have mechanisms in place to ensure that the system is auditable. Adequate redress for problems that may be encountered should also be in place.
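The transparency and accountability requirements can be made concrete with a minimal sketch (all names, the toy rule and the model version are hypothetical, not part of the guidelines): every automated decision is recorded with its inputs, so the system stays auditable and borderline cases can be referred to a human, reflecting the human-oversight requirement as well.

```python
# A minimal decision-logging sketch; the loan rule is a toy stand-in for a model.
import json
from datetime import datetime, timezone

decision_log = []  # in practice this would be durable, append-only storage

def log_decision(inputs, decision, model_version="demo-0.1"):
    """Record the inputs, the outcome and the model version for auditing."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "decision": decision,
    }
    decision_log.append(record)
    return record

def approve_loan(income, debt):
    # Toy, fully transparent rule; uncertain cases go to a human reviewer.
    decision = "approve" if income > 3 * debt else "refer_to_human"
    log_decision({"income": income, "debt": debt}, decision)
    return decision

print(approve_loan(60000, 10000))          # approve
print(json.dumps(decision_log[-1], indent=2))
```

Keeping the model version alongside each decision is what makes redress practical: a contested outcome can be traced to the exact inputs and system state that produced it.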
The next step for the Commission is to ensure that these guidelines can be implemented in practice and that a consensus on them is reached. The piloting phase for this process will have two strands: (i) a piloting phase in which developers who use AI can submit comments and suggest changes; (ii) presenting the guidelines to stakeholders in member states to collate feedback.
As part of this initiative, the EU launched the AI4EU project in January 2019, which brings together AI expertise to create tools and datasets that help organizations implement AI solutions. The goal of this platform is to eventually make AI accessible to all.
The Commission wants to extend its human-centric approach to AI to non-European countries, and will therefore continue multilateral discussions with them to reach a consensus on human-centric AI. Together with the member states, the Commission will push to implement a model for data sharing, focusing on transport, healthcare and industrial manufacturing.
The ultimate goal of the European Commission is to create an AI-capable hub within the European Union that is “human-centric”: one that encourages development but does not inhibit individual freedom and privacy. The Commission is also seeking ways to minimize the impact that such technologies could have on the population by discussing ways to ensure everyone can access basic services and a basic income. We think this approach is the best way forward, and developers should adopt these guidelines to build AI capabilities that are trustworthy and hence sustainable in the long term.