The Alan Turing Institute Research Team
Professor David Leslie is the Director of Ethics and Responsible Innovation Research at The Alan Turing Institute and Professor of Ethics, Technology and Society at Queen Mary University of London. Before joining the Turing, he taught at Princeton’s University Center for Human Values, where he also participated in the UCHV’s 2017-2018 research collaboration with Princeton’s Center for Information Technology Policy on “Technology Ethics, Political Philosophy and Human Values: Ethical Dilemmas in AI Governance.” Prior to teaching at Princeton, David held academic appointments at Yale’s programme in Ethics, Politics and Economics and at Harvard’s Committee on Degrees in Social Studies, where he received over a dozen teaching awards including the 2014 Stanley Hoffman Prize for Teaching Excellence. He was also a 2017-2018 Mellon-Sawyer Fellow in Technology and the Humanities at Boston University and a 2018-2019 Fellow at MIT’s Dalai Lama Center for Ethics and Transformative Values. David now serves as an elected member of the Bureau of the Council of Europe’s Ad Hoc Committee on Artificial Intelligence (CAHAI). He is on the editorial board of the Harvard Data Science Review (HDSR) and is a founding editor of the Springer journal AI and Ethics. He is the author of the UK Government’s official guidance on the responsible design and implementation of AI systems in the public sector, Understanding artificial intelligence ethics and safety (2019), and a principal co-author of Explaining decisions made with AI (2020), a co-badged guidance on AI explainability published by the Information Commissioner’s Office and The Alan Turing Institute. He is also Principal Investigator of a UKRI-funded project called PATH-AI: Mapping an Intercultural Path to Privacy, Agency, and Trust in the Human-AI Ecosystem, a research collaboration with RIKEN, one of Japan’s National Research and Development Institutes, founded in 1917.
David was a Principal Investigator and lead co-author of the NESTA-funded Ethics review of machine learning in children’s social care (2020). His other recent publications include the HDSR articles “Tackling COVID-19 through responsible AI innovation: Five steps in the right direction” (2020) and “The arc of the data scientific universe” (2021), as well as Understanding bias in facial recognition technologies: An explainer (2020), a monograph published to support an award-winning BBC investigative journalism piece. David is also a co-author of Mind the gap: how to fill the equality and AI accountability gap in an automated world (2020), the Final Report of the Institute for the Future of Work’s Equality Task Force, and lead author of “Does ‘AI’ stand for augmenting inequality in the era of covid-19 healthcare?” (2021), published in the British Medical Journal. He is additionally the lead author of Artificial Intelligence, Human Rights, and the Rule of Law: A Primer (2021), prepared to support the CAHAI’s Feasibility Study and translated into Dutch and French. In his shorter writings, David has explored subjects such as the life and work of Alan Turing, the Ofqual fiasco, the history of facial recognition systems and the conceptual foundations of AI for popular outlets from the BBC to Nature.
Dr Mhairi Aitken is an Ethics Fellow in the Public Policy Programme at The Alan Turing Institute. She is a sociologist whose research examines the social and ethical dimensions of digital innovation, particularly relating to uses of data and AI. Mhairi has a particular interest in the role of public engagement in informing ethical data practices. Prior to joining the Turing, Mhairi was a Senior Research Associate at Newcastle University, where she worked principally on an EPSRC-funded project exploring the role of machine learning in banking. Between 2009 and 2018 Mhairi was a Research Fellow at the University of Edinburgh, where she undertook a programme of research and public engagement exploring the social and ethical dimensions of data-intensive health research. She held roles as a Public Engagement Research Fellow in both the Farr Institute of Health Informatics Research and the Scottish Health Informatics Programme (SHIP). Mhairi was the lead author of an international consensus statement setting out principles to underpin best practice in public engagement relating to data-intensive health research. Mhairi is a Visiting Fellow at the Australian Centre for Health Engagement, Evidence and Values (ACHEEV) at the University of Wollongong, Australia. Her recent publications include “Keeping it Human: A Focus Group Study of Public Attitudes Towards AI in Banking” (2020), “Involving the public in data linkage research” (2020), and the FAT* conference paper “The relationship between trust in AI and trustworthy machine learning technologies” (2020).
Antonella Perini is a Research Associate in Data Justice and Global Ethical Futures within the Public Policy Programme at The Alan Turing Institute. Prior to joining the team, she led projects on democratic innovation and collaborative governance at the Latin American think tank Asuntos del Sur, where she worked with grassroots organisations, activists, and policymakers. More recently, Antonella has assisted with research projects on disinformation and online hate speech at the Oxford Internet Institute and contributed to research led by the Centre of Technology and Society (CETyS) and Centro Latam Digital that aims to understand the levels of awareness and implementation of ethical principles in start-ups developing AI systems in Latin America. Antonella holds a BA in International Relations from Universidad de San Andrés (Argentina) and an MSc in Social Science of the Internet from the University of Oxford.
Cami Rincon is a Research Assistant in Public Sector AI Ethics and Governance. Cami has experience in community organising, participatory design, and emancipatory approaches to technology. Prior to joining The Alan Turing Institute, Cami researched risks and opportunities for LGBTQ+ people across branches of AI, with a focus on developing AI voice applications responsive to trans people's needs and experiences. This work was accepted at the 24th ACM Conference on Computer Supported Cooperative Work and developed into an intensive course for the University of the Arts London’s Creative Computing Institute (CCI). Cami’s project “Queering Voice-AI: Syb”, co-created with Feminist Internet, was selected for the New New Fellowship 2021 (founded by Bertelsmann Stiftung, Superrr Labs, the Goethe Institute, and Allianz Kulturstiftung), which funds projects building just and inclusive digital futures. Cami holds a BA in Human Development from The Evergreen State College and an MSc in Management of Innovation from Goldsmiths, University of London, awarded with distinction.
Dr Anjali Mazumder is the Theme Lead on AI and Justice & Human Rights and the Research Chair for Equality, Diversity and Inclusion at The Alan Turing Institute. She is motivated by questions that weigh the opportunities and benefits of AI to improve services and address society’s greatest challenges against its potential harms and risks, including bias and discrimination, privacy and surveillance, function creep, and downstream consequences, placing the law, human rights, diversity and inclusion at the core of responsible data and AI research, innovation and governance. As a statistician, her interests lie in causal reasoning, the value of evidence and data minimisation, the development of probabilistic decision support tools, and the creation of pathways to enable responsible and trustworthy data flows and socio-technical solutions that take account of relevant technical, legal, human rights, sociocultural and domain issues. She has secured approximately £1m to address modern slavery, a human rights abuse, by co-designing and developing responsible data and AI methods and practices. This work includes (a) community engagement with survivors (and sex workers) to understand their concerns, raise awareness of how data can be used, and enable them to contribute to data-driven solutions; and (b) stakeholder engagement to understand the modern slavery data landscape (including data gaps and uncertainty in the representativeness of hidden populations) and to examine the legal, cultural and technical barriers and opportunities to data sharing. Her work consistently examines the potential for differential outcomes or processes in statistics-based methods, spanning educational and health assessments, forensic and criminal justice tools, and biometric and personal databases used for differing purposes.
She has over 15 years’ experience tackling fundamental statistical problems of societal importance, spanning human rights, justice, security, the law, education, and public health and safety, working at the interface of research, policy and practice in the UK, the US, and Canada and fostering multi-disciplinary and cross-sector collaborations. Having held a ministerial appointment on Canada’s National DNA Databank Advisory Committee (2012-2017), she currently serves on the Research Advisory Board Panel of the Educational Testing Service (as the fairness, equality, and privacy advisor) and on the senior management board of the UK’s Policy and Evidence Centre for Modern Slavery and Human Rights. She serves the statistics community in various ways, including as an elected Council member of the Royal Statistical Society and as a member of the Statistics & Law Section and Data Science Section committees. She holds a doctorate in Statistics from the University of Oxford and two master’s degrees, in Measurement and Evaluation and in Statistics, from the University of Toronto. Her recent publications include a piece with Dr Leslie in the BMJ, “Does ‘AI’ stand for augmenting inequality in the era of covid-19 healthcare?” (2021), and “Psychometric analysis of forensic examiner behaviour” (2020).
Morgan Briggs is the Research Associate for Data Science and Ethics in The Alan Turing Institute’s Public Policy Programme. Briggs is trained as a data scientist and holds an MSc in Social Data Science from the University of Oxford, awarded with distinction. She has worked on topics related to social data science, data science research design, AI applications in a development context, data ethics and the SDGs, and the ethical considerations of data science methodologies and digital technologies. Prior to the Turing, Briggs worked as a project manager at the Rhodes Artificial Intelligence Lab (RAIL), collaborating with the World Food Programme’s Innovation Accelerator to develop AI solutions to reduce urban food insecurity in the context of SDG 2, Zero Hunger. Briggs has advised several NGOs on data collection and aggregation processes, as well as modelling techniques, during the Beirut explosion in 2020 and the locust plagues in East Africa. Briggs has also been asked to comment on AI’s applicability to the SDGs, particularly SDG 2, Zero Hunger, and SDG 10, Reduced Inequalities. Her blend of technical skills, past work at the intersection of AI and the SDGs, and knowledge of data science ethics provides a distinctive lens for considering the inclusion of low- and middle-income countries in efforts to advance data justice.
Smera Jayadeva is a Research Assistant in Data Justice and Global Ethical Futures in the Public Policy Programme. Prior to joining The Alan Turing Institute, Smera worked in a collaborative placement with the Austrian Institute for International Affairs, where she comparatively evaluated European and Indian approaches to medical AI. She has experience conducting research at the Synergia Foundation on themes ranging from geopolitics and policymaking to disruptive technology. Additionally, Smera has worked as an independent researcher on policy evaluation and governance in public and non-profit organisations.
Smera holds an International Master in Security, Intelligence and Strategic Studies, awarded with distinction jointly by the University of Glasgow, Dublin City University, and Univerzita Karlova. Her graduate dissertation, titled “Systems in the Subcontinent: Data, Power, and the Ethics of Medical Machine Learning in India”, evaluated the scope, challenges, and mediatory role of AI in Indian healthcare systems. She also holds a BA with a triple major in History, Economics, and Political Science from Christ University (Bengaluru).
Rosamund Powell is a Research Assistant in Public Sector AI Ethics and Governance. Prior to joining The Alan Turing Institute, Rosamund worked as a researcher with the Institute of AI. She has experience engaging with both legislators and multilateral organisations on the regulation of emerging technologies. She attended the University of Cambridge, where she received a BA in Natural Sciences and an MSc in the History and Philosophy of Science, awarded with First Class honours. Her Master’s project, titled “The Artificial Intelligentsia Reimagined”, explores how establishment structures influenced early discussions on AI and society and is forthcoming in the British Journal for the History of Science. She is also interested in exploring artificial intelligence as it relates to the philosophy of cognitive science, anthropology and political theory.
Dr Jat Singh is a Senior Research Fellow and Affiliated Lecturer at the Department of Computer Science & Technology (Computer Laboratory), University of Cambridge. He leads the multi-disciplinary Compliant and Accountable Systems research group, which works at the intersection of computer science and law. Broadly, his work focuses on making technology work in the public interest and accord with rights, taking an interdisciplinary approach towards issues of governance, control, agency, accountability and trust regarding emerging and data-driven technologies. He also co-chairs the Cambridge Trust & Technology Initiative, which drives interdisciplinary research exploring the dynamics of trust and distrust in relation to internet technologies, society and power, and he is a Fellow at The Alan Turing Institute. Singh is active in the tech-policy space, having served on advisory councils for the Department for Business, Innovation & Skills, the Financial Conduct Authority, and the International Association of Privacy Professionals. He has managed a range of successful interdisciplinary projects in the data governance space and undertaken commissions on issues regarding data, technology and society for a range of entities, including the UK Government, the Information and Privacy Commissioner NSW, the Ada Lovelace Institute, and media outlets (e.g. The Guardian). His work has achieved significant policy impact, being referenced in UK and EU government reports, parliamentary committees and court judgments. He holds a PhD in Computer Science (focusing on data governance in the health domain) from the University of Cambridge, is a Certified Information Systems Security Professional (ISC2 CISSP) and a Certified Information Privacy Professional (IAPP CIPP/E), and has a background in law. Singh is currently working on several projects, including Realising Accountable Intelligent Systems (RAInS), Towards a legally-compliant Internet of Things, and Contextual fairness in ML.
Singh recently presented “The Landscape and Gaps in Open Source Fairness Toolkits” at CHI 2021, as well as “Differential Tweetment: Mitigating Racial Dialect Bias in Harmful Tweet Detection” at FAccT 2021.
Dr Michael Katell is a Postdoctoral Research Associate for Data Science and Ethics in the Criminal Justice System working within the Public Policy Programme. He is a technology policy scholar and a philosopher of technology whose general concentrations include equity and social justice in digital systems and platforms. Dr Katell received his PhD in Information Science from the University of Washington (UW) in 2020. His dissertation research investigated the ethical implications of automated decision making and the construction of digital identity. During his time at the University of Washington, Dr Katell was a research assistant with the Tech Policy Lab and the Value Sensitive Design Lab. Dr Katell is also co-director of the Critical Platform Studies Group (CritPlat), a research collective whose work has included the legibility of automated systems, community participation in technology policy, and digital labour rights. In collaboration with CritPlat and the American Civil Liberties Union of Washington, Dr Katell led a team at the UW eScience Institute Data Science for Social Good programme to create the Algorithmic Equity Toolkit, a set of resources for social justice advocates involved in deliberations concerning governmental uses of algorithmic technologies. Dr Katell’s recent invited talks, presentations, and related activities include conference paper presentations of “An Action-Oriented AI Policy Toolkit for Technology Audits by Community Advocates and Activists” at the Conference on Fairness, Accountability, and Transparency (FAccT); “Policy versus Practice: Conceptions of Artificial Intelligence” at the AAAI/ACM Conference on AI, Ethics, and Society (AIES); and “Toward Situated Interventions for Fairness, Accountability, and Transparency: Lessons from the field” at the Conference on Fairness, Accountability, and Transparency (FAT*).
Dr Shyam Krishna is a Postdoctoral Research Associate in the Public Policy Programme, where he currently stewards the research output and partner engagement of the Advancing Data Justice Research and Practice project. As an engineer-turned-researcher, he has an interdisciplinary background in developing an ethical and social justice-oriented view of emergent digital innovations and the technopolitical ecosystems they inhabit. He has researched the surveillance and data justice impacts of the algorithmic and data practices of gig economy, digital identity, and fintech platforms.