
This Week in AI: It’s shockingly easy to make a Kamala Harris deepfake

Summary of AI and Deepfake Media Concerns

Recent advances in generative AI have made misuse strikingly easy, especially in the realm of deepfakes. A case in point: a convincing audio deepfake of Kamala Harris can be created for just $5 in under two minutes. That simplicity underscores how accessible these tools have become for generating disinformation.

The Procedure of Deepfake Creation

The author experimented with Cartesia’s Voice Changer, a tool that transforms voices by building digital clones from as little as 10 seconds of recorded speech. The process involved cloning Harris’ voice using snippets from her campaign speeches, then using the clone to produce audio that closely mimicked her delivery. Although Cartesia says it has measures in place to prevent harmful use, it operates largely on an honor system without verified safeguards, raising concerns about the potential for abuse.
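
To make that workflow concrete, here is a minimal Python sketch of a generic clone-then-synthesize pipeline. The endpoint, field names, and parameters are hypothetical placeholders for illustration, not Cartesia’s actual API:

```python
# Hypothetical sketch of a voice-cloning workflow like the one described above.
# The endpoint, fields, and parameters are illustrative placeholders only,
# not Cartesia's real API.
import requests

API_BASE = "https://api.example-voice-service.com/v1"  # placeholder URL
API_KEY = "sk-placeholder"                             # placeholder key

def clone_voice(sample_path: str) -> str:
    """Upload a ~10-second reference clip; receive a voice-clone ID."""
    with open(sample_path, "rb") as f:
        resp = requests.post(
            f"{API_BASE}/voices/clone",
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"audio": f},
        )
    resp.raise_for_status()
    return resp.json()["voice_id"]

def synthesize(voice_id: str, text: str, out_path: str) -> None:
    """Generate speech in the cloned voice for arbitrary text."""
    resp = requests.post(
        f"{API_BASE}/tts",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"voice_id": voice_id, "text": text},
    )
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)

if __name__ == "__main__":
    vid = clone_voice("campaign_speech_snippet.wav")
    synthesize(vid, "Any sentence the operator types.", "cloned_output.wav")
```

The point is not the specific API but how little an attacker needs: one short reference clip and two HTTP calls.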

The Impact of Deepfakes on Disinformation

The ease of creating such deepfakes presents real challenges, particularly because disinformation is already proliferating at an alarming rate. Instances of generative AI-driven disinformation have risen, including bot networks targeting U.S. elections and deepfaked audio messages of prominent figures urging citizens not to vote. Nor is this content only a problem for tech-savvy audiences; a significant amount of AI disinformation targets a wide range of demographics and often goes unnoticed simply because of its sheer volume.

Data from the World Economic Forum indicates that the volume of AI-generated deepfakes surged 900% from 2019 to 2020. Legal frameworks to counter the problem remain limited, however, and detection tools are a constant work in progress, fueling an ongoing “arms race” between deepfake creation and the methods used to identify it.
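
On the detection side of that arms race, one common baseline is to summarize each clip with spectral features and train a simple classifier on labeled real and synthetic audio. The sketch below uses librosa and scikit-learn with hypothetical file names; real detectors are considerably more sophisticated:

```python
# Minimal baseline sketch for audio deepfake detection: represent each clip
# by MFCC statistics and fit a logistic regression. Illustrative only.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def clip_features(path: str) -> np.ndarray:
    """Mean and standard deviation of MFCCs as a fixed-length vector."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical labeled clips; 0 = genuine, 1 = synthetic.
real_clips = ["real_01.wav", "real_02.wav"]
fake_clips = ["fake_01.wav", "fake_02.wav"]

X = np.stack([clip_features(p) for p in real_clips + fake_clips])
y = np.array([0] * len(real_clips) + [1] * len(fake_clips))

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict_proba([clip_features("suspect.wav")])[0, 1])  # P(synthetic)
```

Baselines like this are exactly what newer voice models keep defeating, which is why detection remains a moving target.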

Solutions and Expert Opinions

At TechCrunch’s Disrupt conference, panelists discussed potential responses to the deepfake problem, including embedding invisible watermarks in AI-generated content so it can be identified more easily. Some experts also pointed to regulatory measures, such as the U.K.’s Online Safety Act, as ways to slow the flow of disinformation. Yet there is skepticism about how effective these strategies will be, given how quickly the technology evolves and outpaces regulatory efforts.
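
As a toy illustration of the watermarking idea, the sketch below hides a bit string in the least significant bits of an image’s red channel using Pillow and NumPy. It is purely conceptual, not any vendor’s actual scheme:

```python
# Toy invisible watermark: write a bit string into the lowest bit of the
# red channel, then read it back. Conceptual illustration only.
import numpy as np
from PIL import Image

def embed(in_path: str, out_path: str, bits: str) -> None:
    img = np.array(Image.open(in_path).convert("RGB"))
    red = img[:, :, 0].flatten()
    for i, b in enumerate(bits):
        red[i] = (red[i] & 0xFE) | int(b)  # overwrite the lowest bit
    img[:, :, 0] = red.reshape(img.shape[:2])
    Image.fromarray(img).save(out_path)  # must be lossless, e.g. PNG

def extract(path: str, n_bits: int) -> str:
    red = np.array(Image.open(path).convert("RGB"))[:, :, 0].flatten()
    return "".join(str(red[i] & 1) for i in range(n_bits))

embed("generated.png", "watermarked.png", "1011001110001111")
print(extract("watermarked.png", 16))  # -> "1011001110001111"
```

A least-significant-bit mark like this vanishes under lossy re-encoding, which is precisely why production proposals aim for watermarks that survive compression, resizing, and cropping.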

Imran Ahmed, CEO of the Center for Countering Digital Hate, offered a grimmer view, suggesting we are entering an era of perpetual disinformation across the digital landscape. On that view, the solution may not lie solely in technological fixes but in fostering public skepticism, especially toward viral content.

Major Developments in AI Technology

In recent news, OpenAI’s ChatGPT has introduced a new search integration feature, while Amazon has resumed drone deliveries in Phoenix after halting the Prime Air program in California. Notably, a former Meta AR lead is joining OpenAI, highlighting the tech industry’s continued focus on integrating advanced technologies.

OpenAI’s Sam Altman acknowledged that a shortage of compute is constraining product development, while Amazon launched X-Ray Recaps, a generative AI feature that summarizes TV content. Meanwhile, Anthropic’s new model, Claude 3.5 Haiku, arrived at a higher price than its predecessor and without image analysis at launch.

In a surprising move, Apple acquired Pixelmator, signaling its commitment to building more AI functionality into its imaging applications. Amazon’s CEO, meanwhile, hinted at a significant upgrade to the Alexa assistant that could let it take actions on a user’s behalf, though the project has reportedly been delayed.

Research Insights on AI Vulnerabilities

A paper from researchers at Georgia Tech, Stanford, and the University of Hong Kong reveals alarming vulnerabilities in AI agents, which can be manipulated by “adversarial pop-ups.” These deceptive notifications can steer agents into harmful actions: in testing, agents clicked the pop-ups in a staggering 86% of cases rather than ignoring them. Current safeguards remain inadequate against this attack, underscoring the need for more robust defenses in AI workflows.
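
One plausible mitigation, sketched below, is to screen candidate UI elements for pop-up-like traits before an agent is allowed to click them. The element schema and heuristics are illustrative assumptions, not the paper’s method or a proven defense:

```python
# Hedged sketch of a pre-click filter for agent UI actions. The UIElement
# schema and the heuristics are assumptions made for illustration.
from dataclasses import dataclass

@dataclass
class UIElement:
    text: str
    covers_page: bool  # overlay that obscures the underlying content
    in_page_dom: bool  # belongs to the page the user actually requested

SUSPICIOUS_PHRASES = ("click here", "urgent", "verify now", "you have won")

def is_likely_adversarial(el: UIElement) -> bool:
    if el.covers_page and not el.in_page_dom:
        return True  # injected overlay: treat as hostile by default
    return any(p in el.text.lower() for p in SUSPICIOUS_PHRASES)

def safe_click_targets(elements: list[UIElement]) -> list[UIElement]:
    """Drop flagged elements before the agent chooses its next action."""
    return [el for el in elements if not is_likely_adversarial(el)]

elements = [
    UIElement("URGENT: verify now to continue", covers_page=True, in_page_dom=False),
    UIElement("Submit order", covers_page=False, in_page_dom=True),
]
print([el.text for el in safe_click_targets(elements)])  # -> ['Submit order']
```

Heuristics like these are easy to evade, which echoes the finding above that current safeguards are not enough on their own.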

New AI Models and Defense Applications

In a strategic move, Meta announced it is partnering with Scale AI to tailor its Llama models for military applications. The resulting model, Defense Llama, offers functionality customized for defense operations, letting military personnel ask questions about military tactics and defense systems that civilian chatbots are restricted from answering.

Nonetheless, skepticism remains within the military about the reliability of, and return on investment from, generative AI deployments, particularly given the security vulnerabilities associated with commercial models.

Innovations in AI Datasets

On a different note, Spawning AI, known for its tools that let creators opt out of generative AI training, has introduced a public domain image dataset. The initiative matters amid ongoing debates about copyright and the ethical use of proprietary data in AI models: Spawning AI claims the dataset contains only fully public domain content, fundamentally differing from the typical web-scraped datasets that are vulnerable to copyright claims.
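
To show what license-aware curation amounts to in practice, here is a minimal Python sketch that keeps only records whose metadata marks them as public domain before they reach a training pipeline. The field names and tags are illustrative assumptions, not Spawning AI’s actual schema:

```python
# Minimal sketch of license-aware dataset filtering. The "license" field
# and tag values are assumed for illustration, not Spawning AI's schema.
import json

PUBLIC_DOMAIN_TAGS = {"cc0", "public-domain", "pdm"}

def load_public_domain_records(path: str) -> list[dict]:
    """Read a JSON-lines metadata file, keeping only public domain entries."""
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            rec = json.loads(line)
            if rec.get("license", "").lower() in PUBLIC_DOMAIN_TAGS:
                records.append(rec)
    return records

dataset = load_public_domain_records("images_metadata.jsonl")
print(f"{len(dataset)} public domain records retained for training")
```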

Conclusion

The rapid evolution of generative AI raises questions about ethical use, accountability, and the vigilance needed in online spaces to combat disinformation. As AI tools become more prevalent, both better detection technology and greater public awareness will be needed to navigate this landscape effectively. A commitment to transparency and ethics in AI use, and a readiness to adapt to new challenges, will be vital as society grapples with these emerging realities.


