The Prism Alignment Project
Participatory - Representative - Individualised - Subjective - Multicultural

About


The PRISM Alignment Project collects diverse human feedback on conversations with AI systems. What sets us apart is our specific mission: to collect participatory, representative and individualised feedback to inform subjective and multicultural alignment norms.

In the early days of human feedback learning in AI systems, data was collected from a narrow and unrepresentative set of crowdworkers. This raises concerns about a limited set of voices steering language models that are now used by hundreds of millions of people around the world.

To address these concerns, we've collected diverse and disaggregated feedback from 1,500 participants born in 75 countries, including census-representative samples from the UK and the US. Our participants converse with over 20 LLMs in real time, giving rich signals on each response.

With this data, we aim to provide insights into how humans differ in their interactions with large language models across different sociocultural contexts.

Our Partners


Powered by

With support from

Our Team


Hannah Rose Kirk

University of Oxford

Scott A. Hale

University of Oxford

Katerina Margatina

University of Sheffield

Bertie Vidgen

University of Oxford

Paul Röttger

Bocconi University

Rafael Mosquera

MLCommons

Juan Ciro

MLCommons

Max Bartolo

Cohere

He He

New York University

Alexander Whitefield

University of Pennsylvania

Andrew Bean

University of Oxford

Adina Williams

Meta AI

Contact Us