
Academic Lecture: Stop or Go? Is Runaway AI an Existential Threat to Humanity?

Lecture Topic:

Stop or Go? Is runaway AI an existential threat to humanity?

Speaker: Prof. Colin Allen, Indiana University (Bloomington)

Chair: Du Yanyong (Associate Professor, School of History and Culture of Science, Shanghai Jiao Tong University)

Time: Thursday, June 9, 2016, 14:30

Venue: Conference Room 401, Law School Building, Minhang Campus, Shanghai Jiao Tong University

 

Speaker Biography:

Colin Allen received his B.A. in philosophy from University College London in 1982 and his Ph.D. in philosophy from UCLA in 1989. His research interests span the philosophy of biology and cognitive science, with a particular focus on animal behavior and cognition. He has received funding from the National Science Foundation and several grants from the National Endowment for the Humanities for his work in the digital humanities. His work on the prospects for moral capabilities in machines has also been influential. Allen has published more than 100 book chapters, journal articles, and conference proceedings papers. In 2010 he received a Humboldt Research Award from Germany's Alexander von Humboldt Foundation, granted in recognition of a researcher's entire achievements to date. In 2013 he was awarded the Barwise Prize of the American Philosophical Association.

 

Professor Allen is an internationally renowned philosopher of cognition who has published seven books and more than 100 papers. In 2008–2009 he served as President of the Society for Philosophy and Psychology. In 2013 he received the Barwise Prize, awarded each year by the American Philosophical Association to one philosopher worldwide in recognition of a lifetime of outstanding contributions to philosophy and computing. Professor Allen collaborates actively with Chinese scholars and currently holds an appointment as Chair Professor in the School of Humanities at Xi'an Jiaotong University.

 

Abstract:

Stephen Hawking warns that "The development of full artificial intelligence could spell the end of the human race." But what does he mean by "full" artificial intelligence, and how close is it really? With IBM's DeepQA machine Watson winning the Jeopardy! Challenge in 2011 and Google DeepMind's "deep learning" system AlphaGo having beaten one of the world's top Go players earlier this year, "deep" is the A.I. buzzword of the moment. But is it just hype, or is it a major advance toward the scenario that troubles Hawking and others? From self-driving cars to military drones, the merger of A.I. and robotics increasingly puts people into contact with machines that are capable of acting without direct human control. As MIT computer scientist Rosalind Picard has put it, "The greater the freedom of a machine, the more it will need moral standards." Can the same technologies behind increasingly capable autonomous machines be deployed to help ensure that they behave ethically? Will that be enough to prevent the doomsday scenario? And what can engineers and philosophers each learn about ethics from the attempt to create artificial moral agents?
