Over the last decade, concerns about the power and danger of Artificial Intelligence have moved from the fantasy of “Terminator” to reality, and anxieties about killer robots have been joined by many others that are more immediate. Robotic systems threaten a massive disruption of employment and transport, while algorithms fuelled by machine learning on (potentially biased) “big data” increasingly play a role in life-changing decisions, whether financial, legal, or medical. More subtly, AI combines with social media to give huge potential for the manipulation of opinion and behaviour, whether to sell a product, influence financial markets, provoke divisive factionalism, or fix an election.
All of this raises huge ethical questions, some fairly familiar (e.g. concerning privacy, information security, appropriate rules of automated behaviour) but many quite new (e.g. concerning algorithmic bias, transparency, and wider impacts). It is in this context that Oxford is creating an Institute for AI Ethics, to open up a broad conversation between the University’s researchers and students in the many related disciplines, including Philosophy, Computer Science, Engineering, Social Science, and Medicine (amongst others).
The Ethics in AI seminars are intended to facilitate this broad conversation, exploring ethical questions in AI in a truly interdisciplinary way that brings together students and leading experts from around the University. Initially, a major aim will be to familiarise participants with the landscape of Oxford research, building links and encouraging new connections. Hence early seminars will cover a range of topics with a variety of speakers from different centres, and will, we hope, attract a wide audience from across the University. Later seminars will often be more specialist, focused on a particular area or topic. But importantly, all of the seminars will be followed by a social period with refreshments, to facilitate cross-disciplinary conversation and collaboration.