Generative pasts of AI


In Discriminating Data, Wendy Hui Kyong Chun proposed countering harmful technological disruptions by using AI systems to provide evidence of past discrimination — in other words, by turning automated predictions of injustice back on themselves. This resonates with concurrent calls to use AI to resist erasure, loss of memory and other systemic outcomes of past oppressions. In this talk, I discuss the potential to reverse and rewind automated data extractivism. Inspired by Christiane Floyd's recent writing on conversational agents, I document interactions with vectorized word corpora and large language models as a way to learn about the relations and (mis)conceptions abstracted from data.

Large datasets precede and enable contemporary AI systems, serving as a source for training and fine-tuning statistical models. Data is produced in processes of meaning-making and struggle over value, captured through various measurement instruments, observations and scraping techniques. Cultural practices of measurement and standardisation have histories, and tracing these histories can provide insights into the ways unequal relations convert life into continuous data streams, as well as how to resist assumptions of access and neutrality in data.


Festival documentation produced and broadcast in cooperation with DORFTV

img © Martin Bruner


Date: 08.05.
Start: 19:30
End: 20:30
Format: Keynote
Contributor(s)