
What can we do about the spread of AI-generated disinformation?

Summary of Disinformation and AI’s Role in Moderation and Regulation

Disinformation is escalating globally, fueled in large part by easily accessible AI tools. In one recent survey, 85% of respondents said they are worried about online disinformation, and organizations such as the World Economic Forum have identified AI-generated disinformation as one of the most severe global risks.

High-profile incidents in 2024 include a bot network on the social platform X focused on U.S. federal elections and an AI-generated robocall imitating President Biden that urged voters to stay home during a primary. Internationally, political candidates in South Asia spread fake videos, images, and articles, and a deepfake audio clip of London Mayor Sadiq Khan nearly incited violence around a pro-Palestinian march.

Combating Disinformation

Pamela San Martín, a co-chair of Meta’s Oversight Board, emphasizes that while AI has the potential to help combat disinformation, it is not without flaws. The Oversight Board, formed in 2020, reviews Meta’s content moderation decisions and pushes for improvements to its policies. San Martín acknowledges that AI has made mistakes, such as mistakenly flagging content from the Auschwitz Museum or misclassifying independent news sources. Nevertheless, she remains optimistic that AI will improve over time and become more effective at addressing disinformation.

AI already plays a significant role in moderating social media content, either flagging material for human review or taking preliminary actions such as applying warnings or removing posts. The challenge is that the cost of producing and spreading disinformation keeps falling, and it may fall faster than moderation technology improves.
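To make the flag-or-act distinction concrete, here is a minimal sketch of how a moderation pipeline might route posts based on a classifier's risk score. Everything in it is a hypothetical illustration: the classifier stub, the threshold values, and the action names are assumptions for the example, not Meta's or any platform's actual system.

```python
# Illustrative sketch of threshold-based moderation routing.
# The classifier stub, thresholds, and action names are hypothetical,
# not any platform's real pipeline.
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    text: str


def disinfo_risk(post: Post) -> float:
    """Stand-in for a trained classifier returning a 0-1 risk score."""
    return 0.0  # placeholder; a real system would run a model here


def route(post: Post,
          remove_at: float = 0.95,
          label_at: float = 0.80,
          review_at: float = 0.50) -> str:
    """Map a risk score to one of: remove, label, human_review, allow."""
    score = disinfo_risk(post)
    if score >= remove_at:
        return "remove"        # high-confidence violation: take the post down
    if score >= label_at:
        return "label"         # attach a warning, limit distribution
    if score >= review_at:
        return "human_review"  # uncertain case: escalate to human moderators
    return "allow"
```

The tiers capture the division of labor described above: fully automated action is reserved for high-confidence cases, while borderline content is escalated to human reviewers.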

Imran Ahmed, CEO of the Center for Countering Digital Hate, adds that the structure of social media itself encourages the spread of disinformation. Platforms like X incentivize it through revenue-sharing models, sometimes paying users substantial sums for viral posts that push conspiracy theories or AI-generated content, which keeps the cycle going. Ahmed describes this as a "perpetual bullshit machine" that undermines democracy's dependence on a shared factual basis.

San Martín argues that the Oversight Board has nonetheless driven some changes, such as pushing Meta to label AI-generated content. It has also pressed the company to improve its detection of non-consensual deepfake images, a growing concern.

The Limits of Self-Regulation

Despite the potential benefits of the Oversight Board, both Ahmed and Brandie Nonnecke, a professor at UC Berkeley who studies technology's impact on human rights, argue that self-regulation alone cannot combat disinformation effectively. Ahmed questions the Board's real authority: as long as Meta retains ultimate control over decision-making and policy implementation, he argues, true accountability is lacking.

An analysis by NYU's Brennan Center points to the Board's inherent limitations, noting that it can influence only a small fraction of Meta's decisions, since the company controls policy changes and how transparent its operations are. Meta has also threatened to reduce its support for the Board, raising concerns about the sustainability of its work.

Both Ahmed and Nonnecke therefore advocate for robust regulation rather than reliance on self-governance, which platforms like X are unlikely to adopt voluntarily. Nonnecke proposes applying product liability tort law, under which platforms could be held accountable for harm caused by their "defective" products, and she advocates watermarking AI-generated content so that its origin is clear.
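As a toy illustration of what watermarking for provenance could look like at its simplest, the sketch below writes and reads an origin tag in a PNG's metadata using Pillow. This is an assumption-laden example, not how any real system works: a plain metadata tag is trivially stripped, and production schemes such as C2PA rely on signed manifests or watermarks embedded in the pixels themselves.

```python
# Toy provenance tagging via PNG metadata (illustrative only, not a robust
# watermark). Real schemes use signed manifests or pixel-level watermarks
# designed to survive editing; a text tag like this does not.
from PIL import Image, PngImagePlugin


def tag_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Save a copy of an image with metadata marking it as AI-generated."""
    img = Image.open(src_path)
    meta = PngImagePlugin.PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", generator)
    img.save(dst_path, "PNG", pnginfo=meta)


def appears_ai_generated(path: str) -> bool:
    """Check for the provenance tag written above."""
    return Image.open(path).info.get("ai_generated") == "true"
```

Even this weak form makes the intent concrete: generators attach a machine-readable origin claim, and platforms can check for it when deciding whether to label or limit the content.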

She also suggests other regulatory levers, such as requiring payment providers to block transactions tied to disinformation operations, or requiring website hosts to apply stricter verification to users prone to malicious activity.

Challenges in Regulation

Recent regulatory efforts in the U.S. have run into obstacles: an October ruling blocked a California law that would have required people posting AI deepfakes to remove them or face fines. Ahmed nevertheless remains optimistic about regulation's prospects, citing steps such as OpenAI adding watermarking to its AI-generated images and the emergence of content moderation laws like the U.K.'s Online Safety Act.

He stresses the ongoing need for regulation, given the threat disinformation poses to democracy, public health, and individual well-being. As awareness of these societal costs grows, the expectation is that more concerted efforts will follow to build regulatory frameworks that address the problem effectively.

Conclusion

The fight against disinformation, particularly disinformation fueled by AI, requires a multifaceted approach that goes beyond self-regulation. Advances in AI moderation may improve our ability to handle false narratives, but the broader systemic issues of accountability, transparency, and regulation must also be addressed. Stakeholders, including tech companies, governments, and civil society, will need to collaborate on solutions that both curb the spread of disinformation and uphold the integrity of information on which democratic processes depend.


