Episode 4: Not Bias, Not For Us?


I'm joined by Thiago Dias Oliva, a PhD candidate at the University of São Paulo and former Head of Research at InternetLab, an independent law and technology research centre with ongoing initiatives researching hate speech and AI. Thiago explains how platforms are using AI and machine learning to moderate content online, and the risks this poses for marginalized groups, including the LGBTQ+ community.


Further reading:

  • The article Thiago Dias Oliva coauthored with Dennys Marcelo Antonialli and Alessandra Gomes, “Fighting Hate Speech, Silencing Drag Queens? Artificial Intelligence in Content Moderation and Risks to LGBTQ Voices Online” is available here.

  • You can also read Thiago Dias Oliva’s article, “Content Moderation Technologies: Applying Human Rights Standards to Protect Freedom of Expression” at this link.

  • Much has been written about the problem of biased AI — two must-reads are Thomas Davidson et al.’s “Racial Bias in Hate Speech and Abusive Language Datasets” and Maarten Sap et al.’s “The Risk of Racial Bias in Hate Speech Detection”.

  • For more on the ongoing work of InternetLab — a law and technology research centre — check out their website.

  • For more on Thiago’s work, follow him on Twitter.

Previous

Episode 5: Moderating Global Voices

Next

Episode 3: Everything in Moderation