Has AI amplified disinformation? What opportunities does it offer to counter disinformation and extreme speech? How do we envision "AI for Good" for disinformation research and activism?
These were some of the central questions that kicked off the first panel discussion of BOOM's AIxDisinformation event, held on January 20, 2022.
Moderated by Professor Sahana Udupa of Ludwig Maximilian University of Munich, the first panel discussion, titled "Threat and Promise of AI", saw a set of distinguished members of academia, policymaking and journalism make a strong case for collaboration between researchers and fact-checkers in using AI to spot disinformation, aid verification, and stop its spread.
The panel members included Elonnai Hickok, former COO of CIS India, Professor Ponnurangam Kumaraguru of the Indian Institute of Technology, and BOOM's Managing Editor Jency Jacob.
AI Makes Disinfo Easy To Make, Easy To Spread
Speaking on the justifications for using AI in fact-checking, Udupa said, "AI deployment is expected to bring scalability, reduce costs, and decrease human discretion and emotional labour in the removal of objectionable content."
Udupa then brought in Kumaraguru to unpack AI for the audience, and to highlight the role it plays in the creation and dissemination of disinformation.
"What do I like? What get's my attention today? These are the choices that AI makes for us today," said Kumaraguru, pointing out social media algorithms as an example of human-AI interaction in daily life. "Many of these (social media) platforms know more about us than what we know about ourselves."
Speaking on the role of AI in the spread of disinformation, Kumaraguru highlighted two points: ease of creation, and ease of reach.
"Creation of disinformation has become so easy today. Over a period of time the skills needed for creating such information which is compelling, influential and viral have gone down. All that has become quite simple," he said. Adding to his point, he remarks, "Because of the social media platforms, because of WhatsApp, reaching a hundred thousand people now is not at all hard with the content that we generate."
Holding Social Media Companies Accountable
Elonnai Hickok, who works in corporate social responsibility, highlighted how recommendation systems, and the parameters companies set for them, have contributed to the current problem, and how greater transparency could help address it.
"If we're using a parameters like 'what kind of content is getting the most attention', then such recommendation systems can most certainly amplify harmful content and disinformation," she said. "If we look at how policy makers are looking at the potentiality and actuality that such algorithms amplify disinformation, you see a number of different measures around corporate accountability being put in place."
Hickok then pointed to the Digital Services Act (DSA), a legislative proposal by the European Commission aimed at curbing the spread of such harmful content, as an example of what can be done from a policy standpoint.
"The Digital Services Act requires companies to disclose the parameters by which content is targeted by them. End users have the ability to choose the parameters they are targeted by. This information is to be disclosed in transparency reports. That when automation is used in content moderation, it is disclosed in transparency reports," she adds.
Competing With Bad Actors
Jency Jacob, BOOM's Managing Editor, said that the question of technology replacing a human in fact-checking is incorrectly framed.
"People who are bad actors, those who are part of ideologically driven narratives, or those who are part of political IT cells, they have been using technology far better than all of us," claims Jacob.
Jacob highlighted how such actors had long ago recognised the potential of social media to bring people together on the basis of ideology, by exploiting recommendation systems and engagement tools, and that fact-checkers need to catch up through technology like automation.
"We are always looking for opportunities to automate a large portion of our work. Through a WhatsApp tipline, if people are asking us to fact-check some content which we had fact-checked before, then the bot will recognise it and respond automatically with our fact-checks," he said.
Udupa then raised an important point, and a major challenge facing automation in fact-checking: the use of localised and culturally coded language to create hate speech and disinformation. "How can machine learning help us detect this?" she asked Kumaraguru.
Kumaraguru explained that this challenge looks hard to meet in the near future, as AI researchers do not have access to enough real-life data to work on. He then stressed the importance of increased collaboration between researchers and fact-checkers, especially in the sharing of data and practical knowledge to create better tools.
Echoing Kumaraguru, Jacob agreed on the importance of collaboration between the two fields, bringing together expertise in fact-checking and machine learning to build such tools.