Moving Past AI’s Regulatory Conundrum


This article was co-authored by Pablo Garcia Quint, Technology and Innovation Policy Intern.

How can we verify the authenticity of the information we consume when we see footage of the Pentagon being attacked? What about a picture of the pope wearing a designer jacket? Or reports that an FBI agent suspected of leaking Hillary Clinton’s emails was found dead in his apartment?

Disinformation in the age of AI makes people more skeptical about the information they consume, and it is becoming an urgent matter ahead of the upcoming presidential election. Regulators and industry leaders have proposed solutions to address this specific issue, but they are outliers in a broader conversation that calls on governments to levy bans, mandate reporting, and impose restrictions on anything “AI.”

Yet ultimately, the problems AI may present are more endemic to human communication than any overgeneralized AI regulation can solve. The most important question, then, is not how we should regulate AI, but which components of AI merit regulation at all. The answer to that question will shape the future of AI innovation.

Image: the letters AI on a blue tech background. Image credit: Claude AI UK.

Regulatory Frameworks to Combat AI Disinformation Lack Long-Term Viability

Regulatory proposals suffer from a classic problem: Do you write regulations by predicting harm, or do you wait to see what harms actually materialize?

Regulating in advance has its advantages. Light-touch regulations, especially guidance statements, are often relied upon to set industry-led standards for emerging industries. But when it comes to AI, regulators have done far more than issue mere guidance. 

An early example is the European Commission’s whitepaper on artificial intelligence, which proposes requiring AI systems to comply with existing data privacy restrictions and subjecting AI developers to regular conformity assessments.

Other countries, such as France and China, have imposed algorithmic regulations that force tech companies to disclose both the algorithms they use to reach specific populations and the information that circulates on their platforms, claiming this will increase transparency and accountability.

In the same vein, the recent executive order on AI follows Europe’s lead, laying out preemptive reporting and auditing standards for AI developers regardless of whether their actions cause harm.

Unfortunately, whether AI harms come into existence or not, preemptive regulatory action has the power to warp the landscape of AI development. Preemptive regulations run the risk that laws cannot adapt to the evolution of AI, meaning those same laws could become obsolete and hinder positive contributions to AI’s development. Speculating on what “could be” also raises the economic and legal costs of entry for upstarts, artificially narrowing the field of potential competitors.

So what is the alternative?

Some regulators have looked instead to the outputs that generative AI creates, rather than the development process.

Sen. Amy Klobuchar introduced a bill that would prohibit political campaigns from using AI-generated advertisements without disclosing that the content was created with generative AI. The general idea is that such disclosure would help viewers distinguish potential disinformation by providing accountability and transparency. This approach has been adopted by companies such as Meta and Google.

But other bills, like the bipartisan Protect Elections from Deceptive AI Act, also cosponsored by Sen. Klobuchar, aim to prohibit anyone from creating and distributing AI-generated content to influence federal elections. AI generates far more than just images and ads, however, which makes such regulation ambiguous in application.

Laws requiring disclosure of AI use in these cases could also trigger warnings that viewers do not need. Inconsequential uses of AI in political campaigns are not accounted for in current regulatory proposals either. As Neil Chilson testified before a US Senate committee, AI is entangled in content creation at many stages of political campaigns and beyond: staffers might brainstorm ideas for a political message using ChatGPT, enhance photos with AI, or edit videos with AI. Matt Perault highlighted many instances of inconsequential uses in his recent policy brief, like Ron DeSantis’s campaign photo with AI-generated jets flying overhead.

Even where such regulations could be beneficial, they leave the door to non-AI disinformation wide open. US regulators cannot force foreign actors to cease disinformation activity, let alone to disclose whether they used AI in an image. If a foreign actor’s sole intention is to disinform, only platform intervention to take down content will stop them. When it comes to domestic actors, like political candidates, current electoral laws can mitigate the problem, but foreign influence operations will likely carry on unaffected.

There is also a less obvious problem with forced disclosure of AI content: distrust and confusion. In a future where AI is embedded in ever more technology, whether users realize it or not, mandatory disclosure of AI content in political campaigns, or even more broadly, could confuse the public as an unintended consequence.

Minnesota Secretary of State Steve Simon’s testimony before a US Senate committee noted that regulation risks fostering undeserved suspicion of true messages, amplifying something called the liar’s dividend, which encourages viewers to dismiss or flag contradictory points of view as disinformation. When everything AI touches is treated as suspect, genuine content gets dismissed along with the fake.

Furthermore, while regulatory requirements to disclose AI content are at the forefront of the conversation, even the best AI image detectors are not yet reliable. False positives, which occur even with cutting-edge AI detection, only exacerbate the problem. Recent examples bear this out: images from the war in Israel and Palestine were labeled as fake by an AI image detector when, in all likelihood, they were not. Commenting on the issue, Matthew Brown, the creator of an AI image detector hosted on Hugging Face, said that AI image detectors “[are] not reliable enough on their own” because they are not 100 percent accurate.

Legislators Should Chase Problems, Not Solutions

Despite the hyped-up image of an all-powerful AI, we should not underestimate the power of government regulation to stifle useful innovation. Regulating some forms of AI could prove useful in the future, but there is no broad consensus on what needs to be regulated. The current environment of fear, built on an overgeneralized description of AI, will only lead to heavy-handed, disproportionate restrictions that put the US decades behind its foreign counterparts.

Instead of rushing to champion the latest regulatory restriction, regulators should be racing to learn about the risks, if any, that AI presents. 

This approach need not disregard the issues raised by developers and researchers; it could include a task force or study to dig deeper into those claims. If specific harms are identified that only regulation can solve, those problems will merit serious consideration. But as a broader response, legislative solutions that let cooler heads prevail will make all the difference as this emerging industry propels us into the future.