
AI Regulations Must Be Pro-Innovation, Pro-Consumer, and Pro-Safety


To maximize the benefits of innovation, regulation must treat complex, fast-moving technologies flexibly, bending to them as they evolve.  That means developing flexible, iterative frameworks that can evolve alongside the technology, rather than trying to fit rigid regulations to a moving target.

At its core, any existing or proposed AI law should be pro-innovation, pro-consumer, and pro-safety.

A sensible starting point is to identify and enforce the existing state and federal laws and regulations that already promote responsible AI innovation.  For example, well-tested federal consumer protections such as unfair-and-deceptive-practices laws, antitrust policies, and sector-specific regulations already exist.

Moreover, many of these federal policies already have state-level complements, including consumer protections and data security laws that apply to emerging technologies.

Some basic tenets:

Identify regulatory barriers to AI development.

Once individual barriers are clearly articulated, proposals to remove or work around them should follow.  This includes identifying gaps where existing laws are insufficient, as well as areas where they are overly burdensome.

Prohibit regulations that unfairly target or harm AI development.

Avoid unfair or harmful regulations that distort the market, stifle innovation, or undermine the country’s competitiveness.  For example, reject discriminatory taxes or other incentives that can radically distort the natural development of AI in the marketplace.  This includes laws that aim to pick winners and losers, or that promote or punish particular paths in the evolution of AI.  Instead, embrace laws that foster competition and innovation.

Enforce existing citizen rights.

Any regulation must preserve the traditional rights citizens already enjoy, narrowly tailoring laws to ensure “a fundamental right to own and make use of technological tools, including computational resources.”  Any government restriction on the lawful use of computational resources, including but not limited to hardware, software, algorithms, machine learning, cryptography, platforms, services, and quantum applications, should likewise be narrowly tailored to serve a compelling government interest.

For example, in April 2025, Montana Governor Greg Gianforte signed the state’s historic “Right to Compute Act” into law, making Montana the first state to protect such a fundamental right.  The new law ensures that state residents can use AI and other forms of algorithmic commerce as a tool unless the state enacts a “narrowly tailored” law necessary to address a “compelling government interest.”  The law has rightfully caught the attention of legislators in several other states and at the federal level.

Allow states to be our test laboratories.

Fundamentally, state experiments are best served by permissive, not restrictive, policy frameworks that are clear, consistent, and pro-consumer.

Allowing states to serve as test laboratories, where approved innovators are given room to experiment under the watchful eye of jurisdictional regulators, can prove incredibly advantageous, especially in industries still in their infancy.  Society benefits when such regulatory sandboxes pair experimentation with careful evaluation of results and corrective measures when needed.

Lessons learned can then help shape more comprehensive regulatory frameworks at the federal or state level.  Although a state-laboratory approach risks creating a patchwork of regulations, much can be gained from limited experiments monitored by regulators and interested parties, with problems identified and corrected along the way.

Allowing state test laboratories while AI regulation is in its infancy also helps the industry attract capital by clarifying the rules in a field where they are often opaque and confusing.

To maximize the benefits of innovation, regulatory policy must accommodate complex, fast-moving technologies, treating them flexibly and bending to the technology as it evolves.  It is therefore critical to develop flexible, iterative frameworks that can evolve with technology, rather than trying to fit rigid regulations to changing technologies.  States can play a huge role in shaping sensible, permissive AI regulations that are, at their core, pro-innovation, pro-consumer, and pro-safety.