Q&A
By CHARLES ANZALONE
Published June 12, 2025
With artificial intelligence rapidly expanding, it is imperative that states, with their unique concerns about the technology, retain the authority to regulate it, says UB legal scholar Mark Bartholomew.
His comments come after the House of Representatives approved a bill that would stop states from regulating artificial intelligence for 10 years. It is under consideration in the Senate.
“We need only look to social media to understand the dangers of regulatory hesitation,” says Bartholomew, professor and vice dean of research in the School of Law, whose research explores the impact of law, culture and technological change on society.
Federal lawmakers “should put in place minimum federal standards that establish basic safeguards while allowing states to address unique local issues and then learn from their successes and failures. The proposed decade-long waiting period assumes we have the luxury of time. We don’t,” he says.
In a Q&A with UBNow, Bartholomew, whose books include “Intellectual Property and the Brain” and “Adcreep: The Case Against Modern Marketing,” discusses the potential ban.
This principle holds that states should be free to experiment with different policy approaches, providing crucial real-world testing of regulatory frameworks before potential national adoption.
States facing different AI challenges — from facial recognition to algorithmic hiring — are uniquely positioned to craft their own responses. California’s approach to AI in Hollywood and Silicon Valley will necessarily differ from New York’s concerns about manufacturing automation or hiring on Wall Street. And different legislators naturally bring different experiences and strategies to the table. Allowing states to test different regulatory strategies reveals what works best and what does not — potentially paving the way for adopting some battle-tested national standards down the road. With AI development accelerating at breakneck speed, now is not the time to abandon this regulatory experimentation.
Absolutely. We need only look to social media to understand the dangers of regulatory hesitation. For nearly two decades, social media platforms expanded with minimal oversight while policymakers repeatedly delayed meaningful reforms in favor of industry self-governance. The consequences have been severe. Social media’s unregulated growth has contributed to a mental health crisis among teenagers, facilitated election interference and accelerated political polarization.
By the time Congress began seriously considering social media regulations, the power dynamics had shifted dramatically in favor of the tech giants, whose enormous wealth and influence allowed them to shape the eventual regulatory conversation. Just look at Mark Zuckerberg. He founded Facebook in 2004, but did not actually have to testify before Congress until 2018. These same tech giants are lobbying Congress for this decade-long AI regulatory ban — all the better to cement their existing economic advantages into place and avoid any pesky rules designed to protect consumers and facilitate competition.
AI presents even greater potential for harm than social media, from discrimination encoded in algorithms to mass surveillance capabilities to potentially severe labor displacement. We simply cannot afford to repeat the regulatory mistakes of social media with a technology that promises to be even more transformative.
Proponents of the moratorium argue that it would prevent a “patchwork” of state regulations that might stifle innovation and hamstring American businesses competing against international rivals. This is a real concern, but the proper response to varying state approaches isn’t to squelch all regulation. Instead, Congress should put in place minimum federal standards that establish basic safeguards while allowing states to address unique local issues and then learn from their successes and failures. The proposed decade-long waiting period assumes we have the luxury of time. We don’t.