About
The objective of this project is to fill a gap in data justice research and practice by providing resources that help policymakers, practitioners, and impacted communities gain a broader understanding of data governance. This includes considerations of equity and data justice, informed by affected communities, that encompass questions of access to, and visibility and representation in, the data used to develop AI/ML systems.
The project aims to provide (a) an assessment of the current state of research in this area and the identification of gaps in order to create a forward-looking research agenda and (b) a preliminary guide for three target audiences–policymakers, developers, and communities affected by AI/ML systems–consisting of practical questions to consider in their practice, use, and experience of AI/ML systems, specifically in consideration of realising the 2030 Sustainable Development Goals. We will engage individuals and civil society organisations in the development of each project deliverable. The findings from this engagement on decidim, a digital participation platform, will be integrated into the final literature review and preliminary guides.
Participatory Design
Participant Engagement
We will engage with individuals who have diverse and contextually specific lived experiences of injustice or marginalisation as these relate to data, as well as those actively combating such injustice. Our goal for one of the resources–an annotated bibliography of existing research and understandings–is to draw on the diverse intellectual and cultural perspectives of communities affected by AI/ML, picking up on aspects of current thinking, research, and writing on data justice that might otherwise go unrecognised. To formulate key questions for another resource–the preliminary guidance–our objective is to gain insight from a globally diverse pool of contributors. We intend to ground these questions in what matters to people affected by AI/ML, including an understanding of their greatest hopes and deepest concerns.
Using decidim, a digital platform for participation, we will engage individuals and civil society organisations by introducing an element of community-led co-design into the scoping of the state of the art in data justice research.
Positionality
Team Positionality
As scholars, advocates, and individuals, we are committed to social justice and to revealing the systemic bases of intersectional discrimination in our research practices and life choices. Some members of our team relate to marginalised stakeholders from both a position of kinship and one of solidarity, navigating their own lived experiences and confronting intersectional discrimination. Others reflexively acknowledge their inheritance of legacies of unquestioned privilege, along with the limited mindsets that derive therefrom. From such a critical self-acknowledgement of privilege and difference comes a deep sense of responsibility: the responsibility to marshal the advantages of carrying out research in power centres of the Global North and at well-funded research institutions in order to serve the interests of those on our planet who are all too often marginalised, de-prioritised, and exploited in the global data innovation ecosystem. We recognise how critically important diversity, equity, and inclusion are to carrying out substantively objective and reflexive research.
Domain Expertise
Advisory Board
The Advisory Board is composed of individuals involved in various data communities of practice connected to human rights, modern slavery, global public health, and sustainable development, representing diverse perspectives from lower- and middle-income countries and the marginalised contexts of high-income countries.
Members of the Advisory Board play an active part in our research, providing guidance not only on how to consider data justice more broadly, but also on how to carry out engagements within a variety of research environments and areas while working to co-design a unique, culturally informed, and international governance framework for responsible AI innovation. Drawing on the Board's experience and expertise within the contexts of engagement, the research team can connect with a wider network of relevant communities of practice, organisations, and impacted communities while ensuring engagement efforts are culturally aware, dialogically structured, and locally credible.
Project Team
Global Partnership on AI and The Alan Turing Institute
The Global Partnership on AI (“GPAI”) is an international, multi-stakeholder initiative established with a mission to “support and guide the responsible adoption of AI that is grounded in human rights, inclusion, diversity, innovation, economic growth, and societal benefit, while seeking to address the UN Sustainable Development Goals”. GPAI is supported in its mission by four working groups, currently comprising leading international experts and 19 national governments.
The Alan Turing Institute is the UK’s national institute for data science and artificial intelligence. The Institute is the Consultancy Partner for this project, and research will be conducted by members of its Ethics Theme, together with the Institute’s Theme Lead for AI and Justice and Human Rights and a Senior Research Fellow and Affiliated Lecturer at the Department of Computer Science and Technology (Computer Laboratory), University of Cambridge.
If you have any questions, comments, or feedback on this project or platform, please email us at advancingdatajustice@turing.ac.uk.
Thank you for your participation!