At a recent conference in San Juan, Puerto Rico, The Future of Life Institute (FLI) released an open letter addressing the opportunities and challenges presented by the potential of artificial intelligence (AI):
“There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide … ”
Its members are concerned that as increasingly sophisticated achievements in AI accumulate, especially where they intersect with advances in autonomous robotics, not enough attention is being paid to safety. If we can design decision-making machines or robots capable of “learning” or self-programming, they point out, how can we ensure that these devices don’t misinterpret human instruction, intent or vision?
Max Tegmark on “The Future of Life” - Video: Singularity Summit
The Institute believes that the time has come to be collectively prudent and for all sectors researching AI on any level to come together to talk about a shared and responsible vision. As the letter states, “Up to now the field of AI has focused largely on techniques that are neutral with respect to purpose. We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.”
The thousands of signatories to the letter include science celebrities such as physicist Stephen Hawking (who is also on the Institute’s Scientific Advisory Board) and philosopher Nick Bostrom, Director of the Future of Humanity Institute at Oxford; robotics specialists such as Bill Bigge of Creative Robotics, Tony Prescott of Sheffield Robotics and Alan Winfield of Bristol Robotics Lab; and a global host of scientists, developers, academics, programmers and entrepreneurs.
Alongside the recently released open letter, the Institute has published a list of research priorities that touches on many AI projects in development and ethical aspects under discussion.
For example, what does legal liability look like when a self-driving car causes a fatality? Or, where does legal and moral responsibility lie when an autonomous weapon or armed drone fails to comply with humanitarian law?
The document raises a multitude of questions that cross disciplinary boundaries. The Institute is clearly attempting to bring all parties to the table for this important discussion about the future of machine-human relations. As Institute co-founder Jaan Tallinn suggested in his conference presentation in San Juan, “… I hope that the letter will function as a corner stone for the official collaboration between the AI-safety and the AI-research communities.”
The Institute was established in 2014 by a volunteer group of concerned researchers, including Skype co-founder Jaan Tallinn and Tesla CEO Elon Musk. Based in Boston, the organization is global and multidisciplinary.
On January 15, Elon Musk announced that he would donate $10 million to help fund the research needed to “keep AI beneficial” to humanity.
Elon Musk and AAAI President Thomas Dietterich comment on the $10 million donation - Video: Future of Life