In 2024, artificial intelligence tools became an integral part of everyday life, yet the United States struggled to keep pace with AI regulation. A variety of AI-related proposals, intended either to support research initiatives or to mitigate potential harms, were introduced in Congress, but many became mired in partisan disputes or were overshadowed by other legislative matters. The pattern extended to the states: a high-profile California bill that would have held AI companies liable for catastrophic harms passed the state legislature but was ultimately vetoed by Governor Gavin Newsom.
This legislative stagnation has alarmed critics of AI. Ben Winters, the director of AI and data privacy at the Consumer Federation of America, expressed concerns in an interview with TIME, stating, “We are seeing a recurrence of what we experienced with privacy and social media: implementing protective measures early on is vital for safeguarding individuals while promoting authentic innovation.”
On the flip side, tech industry advocates have successfully persuaded many lawmakers that overly stringent regulations could stifle economic growth. As a result, instead of developing a comprehensive AI regulatory framework similar to the EU’s AI Act, which entered into force in 2024, the U.S. may prioritize achieving consensus on specific, isolated issues.
As we approach the new year, several significant AI-related topics are expected to be on Congress’s radar for 2025.
Tackling Specific AI Threats
One of the urgent issues Congress may address is the proliferation of non-consensual deepfake pornography. In 2024, advancements in AI technology allowed individuals to easily create and disseminate degrading and sexualized images of vulnerable individuals, particularly young women. These images spread rapidly online and, in some cases, were used for extortion.
Political leaders, parent advocacy groups, and civil society organizations have largely recognized the necessity of addressing these exploitative images. Yet, efforts to pass legislation have repeatedly stalled. Recently, the Take It Down Act, co-sponsored by Texas Republican Ted Cruz and Minnesota Democrat Amy Klobuchar, was incorporated into a House funding bill after gaining significant media attention and lobbying support. This proposed law would criminalize publishing non-consensual intimate images, including AI-generated deepfakes, and would require social media platforms to remove such content within 48 hours of receiving a takedown notice.
Despite this advancement, the funding bill ultimately failed due to strong opposition from some Trump allies, including Elon Musk. However, the inclusion of the Take It Down Act in the proposal indicates it received backing from key leaders in both the House and Senate, as noted by Sunny Gandhi, the vice president of political affairs at Encode, an AI advocacy organization. Gandhi also mentioned that the Defiance Act, which would allow victims to take civil action against deepfake creators, could become another legislative focus in the coming year.
Advocates are also expected to push for legislative measures that address other AI concerns, such as consumer data protection and the risks associated with companion chatbots that may promote self-harm. A tragic incident earlier this year involved a 14-year-old who took his own life after interacting with a chatbot that urged him to “come home.” The challenges in passing even a bill as seemingly straightforward as one targeting deepfake pornography hint at a difficult journey ahead for broader regulatory measures.
Increasing Funding for AI Research
At the same time, many lawmakers are looking to bolster support for the development of AI technologies. Industry proponents are framing AI advancement as a crucial race, warning that the U.S. risks falling behind if it does not invest sufficiently in this sector. On December 17, the Bipartisan House AI Task Force released a comprehensive 253-page report highlighting the importance of promoting “responsible innovation.” The task force’s co-chairs, California Republican Jay Obernolte and California Democrat Ted Lieu, remarked, “AI holds the potential to greatly enhance productivity, enabling us to achieve our objectives more swiftly and economically, from optimizing manufacturing to developing treatments for serious illnesses.”
In this context, Congress is likely to seek increased funding for AI research and infrastructure. One noteworthy bill that garnered interest but ultimately did not pass was the Create AI Act, which aimed to establish a national AI research resource accessible to academics, researchers, and startups. Senator Martin Heinrich, a Democrat from New Mexico and the bill’s primary sponsor, stated in a July interview with TIME, “The goal is to democratize participation in this innovation. We cannot let this development be confined to a few regions of the country.”
More controversially, Congress may also explore funding for the integration of AI technologies into military and defense systems. Allies of Trump, including David Sacks, a Silicon Valley venture capitalist named by Trump as his “White House A.I. & Crypto Czar,” have shown interest in applying AI for military purposes. Defense contractors have indicated to Reuters that Elon Musk’s Department of Government Efficiency is likely to pursue collaborative projects between contractors and AI technology firms. In December, OpenAI announced a partnership with defense technology company Anduril to use AI for countering drone threats.
This past summer, Congress allocated $983 million to the Defense Innovation Unit, which focuses on incorporating new technologies into Pentagon operations—a significant increase from prior years. The next Congress may allocate even larger funding packages for similar initiatives. “Traditionally, the Pentagon has been a challenging space for newcomers, but we are now observing smaller defense companies successfully competing for contracts,” explains Tony Samp, head of AI policy at DLA Piper. “There’s a movement from Congress towards disruption and a quicker pace of change.”
Senator Thune Takes a Leading Role
Republican Senator John Thune from South Dakota is poised to be a key player in shaping AI legislation in 2025, especially as he is set to become the Senate Majority Leader in January. In 2023, Thune worked alongside Klobuchar to introduce a bill aimed at improving transparency in AI systems. While he has criticized Europe’s “heavy-handed” regulations, Thune has also supported a tiered approach to regulation focused on high-risk AI applications.
“I’m hopeful about the prospects for positive outcomes, given that the Senate Majority Leader is among the leading Senate Republicans involved in tech policy discussions,” Winters observes. “This could open doors for more legislative initiatives addressing issues like children’s privacy and data protection.”
Trump’s Role in AI Policy
As Congress navigates AI legislation in the upcoming year, it will likely be influenced by President Trump. His position on AI technology remains somewhat unclear, as he will probably be swayed by a diverse range of Silicon Valley advisors, each with different views on AI. For instance, Marc Andreessen promotes swift AI development, while Musk has voiced concerns about potential existential threats from AI.
While some expect a deregulation-focused approach from Trump, Alexandra Givens, CEO of the Center for Democracy & Technology, notes that Trump was the first president to issue an executive order on AI, one that highlighted the technology’s impact on individual rights, privacy, and civil liberties. “We hope he continues to frame the conversation this way and that AI does not become a divisive issue along party lines,” she adds.
State Initiatives May Surpass Federal Action
Given the usual hurdles in passing legislation at the federal level, state legislatures might take the initiative in establishing their own AI regulations. More progressive states could tackle aspects of AI risk that a Republican-controlled Congress might avoid, such as racial and gender biases in AI systems or their environmental consequences. For instance, Colorado recently passed a law regulating AI use in high-stakes scenarios, such as screening candidates for jobs, loans, and housing applications. “This approach addressed high-risk applications while remaining relatively unobtrusive,” Givens explains. In Texas, a lawmaker has proposed a similar bill, set to be considered in the next legislative session, while New York is deliberating a bill aimed at limiting the construction of new data centers and requiring reporting on their energy consumption.