Stop or Go? Is runaway AI an existential threat to humanity?
Speaker: Prof. Colin Allen, Indiana University (Bloomington)
Colin Allen received his B.A. in philosophy from University College London in 1982 and his Ph.D. in philosophy from UCLA in 1989. His research interests span the philosophy of biology and cognitive science, with particular focus on animal behavior and cognition. He has received funding from the National Science Foundation and several grants from the National Endowment for the Humanities for his work in digital humanities. His work on the prospects for moral capabilities in machines has also been influential. Allen has published over 100 book chapters, journal articles, and conference proceedings papers. In 2010 he received a Humboldt Research Award from Germany's Alexander von Humboldt Foundation, granted in recognition of a researcher's entire achievements to date, and in 2013 he was awarded the Barwise Prize of the American Philosophical Association.
Abstract: Stephen Hawking warns that "The development of full artificial intelligence could spell the end of the human race." But what does he mean by "full" artificial intelligence, and how close is that, really? With IBM's DeepQA machine Watson winning the Jeopardy! challenge in 2011, and Google DeepMind's "deep learning" system AlphaGo having beaten one of the world's top Go players less than two months ago, "deep" is the A.I. buzzword of the moment. But is it just hype, or is it a major advance towards the scenario that troubles Hawking and others? From self-driving cars to military drones, the merger of A.I. and robotics increasingly puts people into contact with machines capable of acting without direct human control. As MIT computer scientist Rosalind Picard has put it, "The greater the freedom of a machine, the more it will need moral standards." Can the same technologies behind increasingly capable autonomous machines be deployed to help ensure that they behave ethically? Will that be enough to prevent the doomsday scenario? And what can engineers and philosophers each learn about ethics from the attempt to create artificial moral agents?
Copyright © 2016 School of History and Culture of Science, Shanghai Jiao Tong University. 沪交ICP备20160097