
Jordan Klavans | YPFP Member | October 26, 2024 | Photo Credit: Flickr
To date, the United States has no overarching, federal artificial intelligence (AI) policy. This, however, doesn’t mean that American AI companies are immune to regulation.
With the recent enactment of the European Union Artificial Intelligence Act, a first-of-its-kind international AI law, American AI companies are now de facto regulated by European standards. Because most large American AI companies also operate in Europe, they are choosing to comply with European Union (EU) standards rather than exit an extremely lucrative market.
In turn, companies adopt EU AI regulatory requirements across all markets to avoid duplicative expenditures on differentiated products, models, and data. This dynamic is known as the "Brussels effect": in a global, interconnected economy, influential regulation from one region can implicitly apply to all.
This shouldn’t be the case. American AI companies’ domestic products should be based on domestic policy. Just as telecommunication technologies have Federal Communications Commission (FCC) oversight and financial technologies have Securities and Exchange Commission (SEC) requirements, AI should have basic rules of the road. Therefore, the US should claim AI governance leadership and enhance innovation through targeted, minimalist policy.
American lawmakers and industry generally agree that AI guardrails have merit. For over a year, top AI companies have uncharacteristically lobbied for AI regulation. Although they do not agree on the same policy parameters, AI companies have anticipated regulation like the EU AI Act and want a refined version in the US that provides favorable conditions for their respective business models. This yields a ripe environment for action.
Given this timing and alignment, the US should enact a federal AI law that is less stringent than the EU AI Act but operates within a similar framework. This "sweet spot" would increase AI security without creating different sets of standards and, as a byproduct, different sets of operations.
What should this look like? The US should address a few key areas.
First, US policy should apply only to the highest-risk AI applications. The EU AI Act creates a risk-tiered scale in which uses are classified into four categories: unacceptable, high-risk, limited-risk, and minimal-risk. Each tier receives a corresponding degree of oversight, so a higher risk classification subjects an AI system to greater regulatory scrutiny.
The US should limit its oversight to known, high-risk applications. For example, the Department of Homeland Security (DHS) has identified chemical, biological, radiological, and nuclear (CBRN) threats and AI used to operate critical infrastructure as domains that clearly need protective measures.
Second, the US should require that data holders and users retain opt-out capabilities like those outlined in the White House's Blueprint for an AI Bill of Rights. While the EU AI Act similarly contains opt-out provisions, it also goes further: the EU outright prohibits AI systems that deploy deceptive techniques, exploit vulnerabilities, conduct social scoring, or infer sensitive attributes about individuals.
The US should take a more modest approach and not outright ban AI uses. An AI system should simply be required to alert data holders and users, who then decide whether to opt in or out. If data holders do not want their data used for training purposes, they can say no, much as a non-invasive pop-up window asking for consent to use "cookies" appears in a web browser. AI companies should pursue copyright agreements for access to training data and other internet-based intellectual property. Users can similarly choose not to interact with a given AI system. Some policymakers propose achieving this transparency by watermarking AI-generated content.
Third, the United States should create incentives for companies that build and deploy AI models. These incentives could include exemptions from higher-risk AI oversight and greater access to public-private AI research.
Even though there is no universal definition of "open source," America should encourage AI companies to adopt more open practices, such as publishing technical documentation and producing training data summaries. An open system lowers barriers to entry and, as seen with cybersecurity software, creates better resilience because a whole community can identify vulnerabilities and attack vectors better than a single company can. Rather than a regulatory requirement, an incentive structure would provide a directional push.
Through these measures, the United States would reorient the AI landscape and establish a nimble AI governance policy that continues to foster innovation while reinforcing necessary protections. With a pathway to garner industry support, the US would set basic guardrails and cement its place as the AI producer with the most advanced and secure products.
Most importantly, the American AI industry would not be obligated to differentiate its products by market or overshoot to comply with EU standards. America's comparative advantage is speed and innovation. Rather than be slowed by foreign requirements, America must act urgently to ensure that AI systems deployed in America are also governed by American standards. AI companies would then have more agility to compete and succeed in a fast-moving race. America's chief competitor, China, certainly isn't waiting around.
Jordan Klavans is a Managing Consultant for IBM in Washington, D.C. where he voluntarily supports research and policy initiatives. He holds a BS in Economics and a BA in Political Science from Penn State University. Follow him on LinkedIn.



