Does AI make better decisions? Can and should machines hold the same values as we do? The session explored the legal framework for AI decision-making (personal AI advisors, home and car insurance, and business insurance), the use of AI in military operations and questions of 'weaponisation', the morality and ethics of AI, and values in moral philosophy.
• Kumar Jacob – CEO, Mindwave Ventures Limited
• Prof. Marina Jirotka - Professor of Human Centred Computing, University of Oxford
• Dave Raggett - W3C Lead, The Web of Things
• Prof. Noel Sharkey - Professor of AI and Robotics, Sheffield University - Co-Director, Foundation for Responsible Robotics
• Ben Taylor - CEO, RainBird Technologies
• Rajinder Tumber - Senior Cyber Security Consultant & Auditor, BAE Systems
The first thought leader was Kumar Jacob, CEO of Mindwave Ventures Limited, a company developing digital products and services for health and care. He described how AI has impacted health, particularly in two fields: clinical decision-making and personalised healthcare. The biggest challenge, according to Kumar, is the debate around data exploitation: a clear framework is needed for how patient data, as well as anonymised data, may be used.
Professor Marina Jirotka from the University of Oxford spoke next, introducing a methodology called Responsible Research and Innovation (RRI) and inviting the APPG AI group to adopt this framework. RRI stresses inclusivity and democratic decision-making, engaging a variety of stakeholders to anticipate possible outcomes of research, reflect on motivations and resulting products, engage with the public, and act accordingly and responsibly (the AREA framework).
Dave Raggett, W3C Lead for the Web of Things, took the floor and focused his speech on the need for computers to start thinking and learning more like human beings. For AI to be successful, various sciences (cognitive science, neurolinguistics, social science, etc.) must be combined to produce technologies that can think on multiple levels. He argued that it would be unethical to deploy machines that cannot adapt as humans do.
Noel Sharkey, Professor of AI and Robotics at Sheffield University, was the next thought leader to speak. He highlighted that AI has great potential but that the government needs to create rigorous laws and guidelines to ensure society is protected from its drawbacks. The first task is to decide which decisions should be delegated to machines; life-and-death decisions, he argued, should not be among them.
Ben Taylor, CEO of Rainbird, a cloud-based AI platform enabling anyone to publish a virtual online expert with human-like decision-making capabilities, built on the previous speakers and emphasised a key term: liability. He asked the government to work with relevant stakeholders to build a framework that provides guidelines for liability. Society should be able to justify how machines make decisions, and he proposed a clear audit trail to follow AI's impact.
The final speaker was Rajinder Tumber, Senior Cyber Security Consultant and Auditor at BAE Systems. He widened the debate to take into account human nature and differences in values. Not all humans are the same and, certainly, not all humans always behave "ethically". Hence, he asked: should machines really use humans and human values as prototypes?
Stephen Metcalfe MP and Lord Tim Clement-Jones opened the discussion to questions from the floor. The thought leaders were asked several insightful questions, many of which centred on the debate over accountability and who is ultimately liable: the machine, the individual, or the corporate entity. The group reached a consensus that further evidence gathering must be conducted and use cases developed; only then can recommendations and regulation be drafted and implemented.