U.S. laws regulating AI prove elusive, but there may be hope
- November 4, 2024
- Posted by: chuckb
- Category: TC Artificial Intelligence
The future of AI regulation in the U.S. is fraught with challenges as lawmakers struggle to catch up with technological advancements. Recent efforts show some progress, with states enacting targeted policies, yet the overarching framework remains fragmented and sometimes ineffective. Notably, Tennessee has made strides in protecting voice artists from unauthorized AI cloning, while Colorado has introduced a risk-based regulatory approach to AI. California’s Governor Gavin Newsom recently signed several bills aimed at enhancing AI safety, including requirements for companies to disclose training details. However, a comprehensive federal AI policy akin to the EU’s AI Act is still lacking.
Significant obstacles have emerged at both the state and federal levels. Newsom, for instance, vetoed California bill SB 1047, which would have imposed broad safety and transparency requirements on AI development; the veto followed heavy lobbying from special interest groups. Meanwhile, a California law regulating AI-generated deepfakes on social media has been put on hold amid ongoing litigation.
Despite these setbacks, some experts remain optimistic about the prospects for AI regulation. Jessica Newman, co-director of the AI Policy Hub at UC Berkeley, noted that many existing federal laws, such as anti-discrimination statutes, could apply to AI technologies even though they were not written with AI in mind, offering ready-made pathways for regulation.
Newman also pushed back on the narrative that the U.S. is a regulatory "Wild West" for AI. The Federal Trade Commission (FTC) has forced companies that improperly obtained data to delete the AI models trained on it, and the Federal Communications Commission (FCC) has declared AI-voiced robocalls illegal and is weighing rules that would require disclosure of AI-generated content in political advertising.
At the federal level, President Biden has moved to establish a regulatory framework for AI, signing an executive order that emphasizes voluntary reporting and benchmarking by AI firms. That order led to the creation of the U.S. AI Safety Institute (AISI) under the National Institute of Standards and Technology, which now collaborates with leading AI labs such as OpenAI and Anthropic. This progress is fragile, however: the AISI could be dismantled if the executive order is repealed, and a coalition of more than 60 organizations is urging Congress to pass legislation codifying the AISI's mandate before year's end.
Elizabeth Kelly, the AISI's director, echoed the view that the responsible development of technology is a shared American interest. And despite the SB 1047 veto, California State Senator Scott Wiener, the bill's author, expressed optimism about future comprehensive regulation, arguing that dialogue around AI risks is critical and that major players in tech now acknowledge the dangers posed by AI warrant regulatory action.
Adding urgency to these discussions, Anthropic has warned of potential AI catastrophes if governments fail to implement regulation within the next 18 months. Although such calls have drawn pushback from industry leaders, including figures at Khosla Ventures and Microsoft, some experts see the pressure as a catalyst for consolidating state-level rules into a federal solution. Newman suggested that companies are eager to escape the unpredictability of a state-by-state regulatory patchwork, one that has already produced nearly 700 proposed pieces of AI legislation this year alone.
The path forward for comprehensive AI regulation in the U.S. remains uncertain, marked by a mix of optimism and resistance. But as states experiment with targeted rules and federal agencies take a more proactive stance, there is hope that a standardized framework will emerge to address the unique challenges AI presents.