Canceled // Assessing AI Systems from the Outside: Shedding Light on Content Moderation Practices at Meta and TikTok
In 2024, elections will take place in 50 countries, including the United States and India, as well as in the European Union. Content moderation on social media, in which human moderators and AI systems review platform content daily for compliance with platform rules, is crucial to combating hate speech, racism, and misinformation. However, big tech companies have cut many jobs, including content moderation roles, and moderators often work under poor conditions. These companies also withhold information about how content moderation is carried out, including how AI is used and what content is deleted. The impending implementation of a new European law, the Digital Services Act, promises to unveil this previously shielded information by giving researchers access to data on moderated content for the first time. Our project aims to build a multidimensional picture of content moderation during elections by analyzing this data, incorporating the perspectives of the workers involved, and examining leaked documents.