AI and robotics are set to be two of the defining technology accelerators of this decade. Beyond their business applications, these technologies have the potential to improve the quality of life of millions of people worldwide. One prime example is the use of AI and robotics to deliver healthcare services remotely, increasing the life expectancy of populations living in isolated regions of the planet. An early glimpse of this potential came in 2001, when a team of surgeons based in New York performed a surgical procedure on a patient in France. Yet even though some operations can be carried out remotely, robots still need human supervision to perform their duties correctly, and this raises several considerations surrounding the use of AI.
Safety measures should be put in place to guarantee that decisions taken by artificial intelligence do not end up harming human users. This echoes the first of the Three Laws of Robotics written by sci-fi author and visionary Isaac Asimov in 1942: "A robot may not injure a human being or, through inaction, allow a human being to come to harm."
In April 2019, the European Union put forward its "Ethics Guidelines for Trustworthy AI" for the development and deployment of responsible artificial intelligence applications. The guidelines emphasize several factors, including human oversight of AI processes, the need for a human operator to be able to override the AI's decisions, the protection of personal data collected by AI applications, and the necessity of monitoring the impact of AI on society at large.
Keep an eye on our news section: next week we will provide further insights into these guidelines and their implications for businesses and end users.