‘AI models are good at taking down nudity, but less good at hate speech’

New Delhi: In just one year of its existence, Facebook’s oversight board has accepted 23 appeals on content moderation decisions and made more than 50 policy recommendations to the company. In an interview, Sudhir Krishnaswamy, the only Indian member on the board and vice-chancellor of the National Law School of India University, discussed the board’s functioning so far, its expansion plans and the need for algorithmic moderation of content. Edited excerpts:

Now that it’s been a year, what do you think of Facebook as a platform?

Facebook is a different kind of platform depending on the jurisdiction you are in. For example, in Myanmar or some countries in West Africa, Facebook is a primary media source. In jurisdictions like India, it’s mixed: Facebook is big, but other media is big too, and what is ostensibly private messaging but also serves as public media is even bigger.

I think a background understanding of the media environment in each jurisdiction is important, because Facebook plays a different role in each of them.

But I think what you are asking is what exactly Facebook might be. I think the promise is that it allows for a mediated community, that communities can form regardless of geography, class, background, and so on. That kind of faster and bigger community formation can happen; that’s its promise. But as we now know with the entire social media universe, where everyone is both a user and a publisher, the very format of the platform allows many other issues to come up. The idea that an organic community is an automatic result of peer-to-peer social media networks has been severely tested. It has been tested on all platforms, I think, and Facebook is no exception. The challenge of what to do about it has not yet been resolved in any jurisdiction.

We often say that Facebook’s own policies are evolving, as are global laws. As the board, are you equipped to make recommendations?

The Board is an extraordinary body in terms of the kinds of people on it. We take our work very seriously. If there are questions we have doubts about, such as the nature of the Amhara-Tigrayan conflict in Ethiopia, we seek an opinion on that. We will consult a group that is world renowned and expert in that field, get their response within a period of 7-12 days and take that opinion into consideration. We have no pride; anyone who knows, we will ask them.

We also get public submissions. If we raise an issue about hate speech, top civil society organizations around the world will offer a brief. They are very well researched and well reasoned, saying you should go in this direction on that matter. So my understanding is that our process is really strong.

All major platforms want to use algorithmic moderation, but problems remain. Is artificial intelligence (AI) a viable solution?

This is a developing area. A balance between legal and software-based approaches is being worked out in various fields. On content moderation, we find that AI models are quite efficient at handling certain content. For example, they are very good at taking down nudity, pornography, banned substances, and weapons and ammunition, but less good at hate speech or provocation, because provocation involves the subtle use of language. Herein lies the struggle. Even in areas of nudity there are tricky cases, such as images showing female breasts but related to breast cancer, that algorithms are not able to pick up very well. Algorithms are close in some areas, and are being trained and retrained. But in some areas they are quite off, and I think this is what frustrates a lot of users.

Take, for example, hate speech and counter-speech, where someone says something and you say something back, and it’s your post that gets deleted while the original message remains. These are tough questions, and I think people are trying to automate more effectively.

These platforms will have to have an automation layer to operate at scale. There is a definite misunderstanding of scale when people ask why you don’t use humans to do everything. Larger platforms have to use automation to some degree. How much, and how well, are the pertinent questions.
