Ethics in AI Colloquium - AI Systems, Impacts, and Accountability


GPT-4 and other large language models are being used in various areas of daily life, including browsing, voice assistants, and coding tools. AI systems have the potential to benefit society by reducing costs in fields like healthcare and law, improving accessibility and learning opportunities, and aiding in addressing climate change and pandemic preparedness. However, we should not expect them to do so by default. In fact, in the absence of proper governance, we should expect increasingly advanced AI systems to enable severe harms to individuals and society. This talk emphasizes the need for accountability measures such as auditing, external scrutiny, and regulation to ensure that AI development benefits, rather than harms, all of humanity.

This event was livestreamed.

The Institute for Ethics in AI will bring together world-leading philosophers and other experts in the humanities with the technical developers and users of AI in academia, business and government. The ethics and governance of AI is an exceptionally vibrant area of research at Oxford and the Institute is an opportunity to take a bold leap forward from this platform.

Every day brings more examples of the ethical challenges posed by AI: from face recognition to voter profiling, brain-machine interfaces to weaponised drones, and the ongoing discourse about how AI will impact employment on a global scale. This is urgent and important work that we intend to promote internationally, as well as embed in our own research and teaching here at Oxford.

Speaker 

Gretchen Krueger portrait

Gretchen Krueger is an AI policy researcher at OpenAI whose work focuses on increasing accountability in AI development. She has recently published on the risks and limitations of AI systems such as CLIP, DALL·E, Codex, and GPT-4.

Gretchen’s research interests span the various societal dimensions of highly general and capable AI systems and the range of technical and non-technical mechanisms to support responsible development of such systems. In particular, she is interested in studying and incubating methods for strengthening accountability; developing new ways of evaluating and testing frontier AI systems; approaches to collective governance of complex systems; collaborative development of safety standards for advanced AI; and other efforts that reduce societal-scale risk.

Prior to joining OpenAI in 2019, she worked at the AI Now Institute at New York University and at the City of New York’s Economic Development Corporation. She holds a BA from Harvard University and an MA from Columbia University.


Commentators 

Elizabeth M. Renieris portrait

Elizabeth M. Renieris is a CIGI senior fellow and an internationally renowned privacy expert with European and US Certified Information Privacy Professional credentials. She is also a lawyer, researcher and author focused on the ethical and human rights implications of new and advanced technologies, with a specific emphasis on artificial intelligence (AI), machine learning, blockchain and digital identity.

A senior research associate at the University of Oxford’s Institute for Ethics in AI and an affiliate at Harvard University’s Berkman Klein Center for Internet & Society, Elizabeth has also held fellowships with Stanford University’s Digital Civil Society Lab and the Carr Center for Human Rights Policy at Harvard Kennedy School. She serves as guest editor for MIT Sloan Management Review’s Responsible AI project and was named to the 2022 list of “100 Brilliant Women in AI Ethics” by Women in AI Ethics.

Elizabeth is the founder and CEO of HACKYLAWYER, an innovative consultancy focused on law and policy engineering. She has advised the World Bank, the UK Parliament, the European Commission and the US Congress, along with a variety of start-ups, global corporations and international and non-governmental organizations, on law and policy questions related to AI/machine learning, blockchain and digital identity, as well as other new and advanced technologies. She is a frequent writer and speaker on these topics, with bylines in Wired, Slate, NPR, Forbes and The New York Times, among other outlets.

Elizabeth is also the author of Beyond Data: Reclaiming Human Rights at the Dawn of the Metaverse (MIT Press). She holds a master of laws from the London School of Economics and Political Science, a doctor of jurisprudence from Vanderbilt University and a bachelor of arts from Harvard College.


Carina Prunkl portrait


Dr Carina Prunkl is a Postdoctoral Research Fellow at the Institute and a Junior Research Fellow at Jesus College, Oxford.

Her research focuses on human autonomy and algorithmic fairness. She also works on responsible research and the governance of AI. Previous policy engagements include the UK’s Centre for Data Ethics & Innovation, the UK Ministry of Defence, the Mexican Senate, and the Delegation of the European Commission to Russia. 

Carina Prunkl previously completed a DPhil in Philosophy and an MSt in Philosophy of Physics at the University of Oxford, as well as a BSc and an MSc in Physics at Freie Universität Berlin.


Michael Cheng portrait


Michael Cheng is a doctoral candidate in AI engineering at Magdalen College studying on a Rhodes Scholarship. He is supervised by Philip H.S. Torr, FRS, FREng. 

Raised in an immigrant household in Pennsylvania, Michael grew up with a speech disability and went on to earn a full scholarship to Harvard University, where he rowed for the varsity men’s lightweight rowing team and graduated Phi Beta Kappa with a BA in History and Mathematics and an MS in Computer Science.

He was elected student body president by his Harvard classmates and enacted a reform program that attained 76 percent of the vote with a record-high 57 percent voter turnout. He studied African-American literature and political thought with Henry Louis Gates, Jr. and was a Scholastic Art and Writing Awards National Gold Medalist in humor writing.


Chair 

Vincent Conitzer portrait


Vincent Conitzer is Head of Technical AI Engagement at the Institute for Ethics in AI and Professor of Computer Science and Philosophy at the University of Oxford. He is also Professor of Computer Science (with affiliate/courtesy appointments in Machine Learning, Philosophy, and the Tepper School of Business) at Carnegie Mellon University, where he directs the Foundations of Cooperative AI Lab (FOCAL). Prior to joining CMU, he was the Kimberly J. Jenkins Distinguished University Professor of New Technologies and Professor of Computer Science, Professor of Economics, and Professor of Philosophy at Duke University.

Conitzer has received the 2021 ACM/SIGAI Autonomous Agents Research Award, the Social Choice and Welfare Prize, a Presidential Early Career Award for Scientists and Engineers (PECASE), the IJCAI Computers and Thought Award, and an honorable mention for the ACM dissertation award. He has also been named a Guggenheim Fellow, a Sloan Fellow, a Kavli Fellow, a Bass Fellow, an ACM Fellow, a AAAI Fellow, and one of AI's Ten to Watch.