What If Regulation Made AI Better? Rethinking the Innovation Myth

The belief that regulation hinders innovation is deeply rooted in Silicon Valley’s culture. Shaped by a “move fast and break things” ethos, early tech giants thrived in permissive environments with little oversight. Regulation was seen as a brake on progress rather than a catalyst for it.

But this framing is flawed. It assumes all innovation is inherently good and that risks can be corrected later. History suggests otherwise. In industries like pharmaceuticals, aviation, and automotive safety, regulation catalyzed advances in quality, reliability, and public trust—fueling sustainable and inclusive growth.

Take the U.S. Clean Air Act. Automakers initially resisted it, but it ultimately drove innovation in cleaner engine technologies and fuel efficiency. It also addressed environmental injustice, cutting pollution disproportionately affecting marginalized urban communities.

Similarly, the U.S. FDA's oversight of AI-enabled medical devices sets standards for explainability, bias mitigation, and traceability. Companies that meet these standards benefit from faster adoption and greater trust, while also advancing equity by validating systems across diverse populations.
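To make "validating systems across diverse populations" concrete, here is a minimal Python sketch of one such check: computing a screening model's sensitivity separately for each demographic subgroup and flagging any group that falls well behind the best-performing one. The data, names, and 0.8 tolerance are illustrative, not FDA requirements.

```python
# A minimal sketch of subgroup validation for a binary classifier.
# All data and the 0.8 tolerance are illustrative, not regulatory values.
from collections import defaultdict

def true_positive_rate_by_group(y_true, y_pred, group):
    """Compute sensitivity (true positive rate) per demographic subgroup."""
    tp = defaultdict(int)   # true positives per group
    pos = defaultdict(int)  # actual positives per group
    for t, p, g in zip(y_true, y_pred, group):
        if t == 1:
            pos[g] += 1
            if p == 1:
                tp[g] += 1
    return {g: tp[g] / pos[g] for g in pos}

def flag_disparities(rates, tolerance=0.8):
    """Flag subgroups whose sensitivity falls below tolerance x the best."""
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < tolerance * best}

# Toy example: a screening model that misses every case in group "B"
y_true = [1, 1, 1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
group  = ["A", "A", "A", "B", "B", "B", "A", "B"]

rates = true_positive_rate_by_group(y_true, y_pred, group)
print(rates)                    # {'A': 1.0, 'B': 0.0}
print(flag_disparities(rates))  # {'B': 0.0} -- disparity flagged for review
```

A real validation would use clinically meaningful metrics and sample sizes, but the principle is the same: performance is reported per population, not only in aggregate.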

How Regulation Can Stimulate Innovation

One of the most overlooked benefits of regulation is the clarity it provides. Developers and investors thrive when expectations are known. With clear rules, companies can confidently allocate resources, plan product roadmaps, and enter markets without fear of sudden policy shifts.

Innovation also requires adoption—and adoption depends on trust. Regulation helps ensure AI systems are fair, safe, and accountable, which is especially critical in today’s climate of public skepticism. When consumers know AI is built within ethical and legal boundaries, they are more willing to use it.

Finally, unchecked AI development favors large incumbents who can absorb risk. Well-designed regulation—particularly tools like regulatory sandboxes—can level the playing field. These environments let startups and smaller players test systems under guided oversight, spurring innovation without compromising public protection.

The European Union’s Regulatory Vision

The EU's approach offers a compelling alternative to the narrative that regulation stifles innovation. The AI Act, the first comprehensive AI law of its kind globally, applies a risk-based framework that imposes stricter requirements on higher-risk applications while leaving low-risk systems free to flourish. Rather than curbing innovation, it channels it toward safety, fairness, and social value.

Similarly, the GDPR—once feared by tech firms—has spurred investment in privacy-enhancing technologies like federated learning and synthetic data. These tools now drive global standards and new markets. Regulation has not stopped innovation; it has guided it.
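To illustrate why federated learning counts as a privacy-enhancing technology, here is a minimal sketch of federated averaging on a toy linear model, using numpy and synthetic data; every name and parameter is illustrative. The key property is that clients send only weight updates to the coordinating server, never their raw data.

```python
# A minimal sketch of federated averaging (FedAvg) on a toy linear model.
# Clients train locally and share only weights; raw data never leaves them.
import numpy as np

def local_step(weights, X, y, lr=0.1):
    """One gradient step on a client's private data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(weights, clients):
    """Each client updates locally; the server averages, weighted by size."""
    updates = [local_step(weights, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):  # three clients, each holding its own private dataset
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

w = np.zeros(2)
for _ in range(200):
    w = federated_round(w, clients)
print(w)  # approaches [2.0, -1.0] without pooling any raw data
```

Production systems add secure aggregation and differential privacy on top of this pattern, but even the bare sketch shows how a privacy rule can reshape an architecture rather than forbid it.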

In the mortgage sector, for example, our own experience showed how regulatory requirements for fair lending, particularly around disparate impact, encouraged the development of AI tools that helped identify and reach underserved communities. Without that regulatory push, such innovations might never have been pursued.
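As a concrete illustration of the kind of disparate-impact check fair-lending review starts from, here is a minimal Python sketch of the four-fifths rule. The approval counts are invented, and the 0.8 threshold follows EEOC guidance as a screening heuristic, not a legal verdict.

```python
# A minimal sketch of the "four-fifths rule," a common first screen for
# disparate impact. Counts are invented; a low ratio triggers review,
# not an automatic finding of discrimination.
def adverse_impact_ratio(approved_a, total_a, approved_b, total_b):
    """Ratio of the protected group's approval rate to the reference group's."""
    rate_a = approved_a / total_a  # protected group
    rate_b = approved_b / total_b  # reference group
    return rate_a / rate_b

ratio = adverse_impact_ratio(approved_a=45, total_a=100,   # 45% approval
                             approved_b=75, total_b=100)   # 75% approval
print(f"{ratio:.2f}")  # 0.60
print("flag for review" if ratio < 0.8 else "within the four-fifths rule")
```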

A Dangerous Divergence: The U.S. Push to Preempt State Regulation

In contrast, the U.S. House of Representatives recently proposed an amendment to ban all state-level regulation of AI for 10 years. This proposal, rooted in the fear that fragmented state laws would stifle innovation, ironically risks creating a vacuum that invites harm.

Such a freeze ignores the reality that states have led the way in enacting meaningful AI laws where federal guidance has lagged. Barring them from doing so removes critical safeguards at a time when AI's social impact is growing rapidly.

Even more dangerously, the amendment assumes AI will remain in its current state. Yet a decade ago, few foresaw the rise of generative AI or its ability to influence elections, create deepfakes, or disrupt education. Delaying regulation now would leave us unprepared for risks we cannot yet imagine.

Rather than a void, we need a layered, collaborative approach—one that empowers both federal and state actors to shape AI governance. The EU has shown that a harmonized but flexible model is possible. While not every proposal will be perfect, doing nothing is a far greater mistake.

What This Means for Policymakers and Industry Leaders

Policymakers must reject the binary choice between overreach and inaction. Regulation should be anticipatory and adaptive, not punitive. Banning oversight is not balance—it’s abdication. Instead, policymakers should promote a governance structure that is risk-based, proportional, and focused on applications with the greatest potential for harm.

Governance must also be rights-based, ensuring that AI systems protect privacy, prevent discrimination, and respect due process. Systems should be accountable and traceable, with documentation and auditability embedded throughout the AI lifecycle. And governance should be collaborative, developed through ongoing input from government, industry, academia, and civil society.
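One hedged sketch of what "documentation and auditability embedded throughout the AI lifecycle" can mean at the level of a single decision: an append-only audit record tying each automated decision to a model version, an input fingerprint, and a human-readable rationale. All field names below are illustrative, not a standard.

```python
# A minimal sketch of per-decision traceability. Field names are
# illustrative; real systems follow their own audit schemas.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionRecord:
    model_id: str    # which model version produced the decision
    input_hash: str  # fingerprint of the inputs, without storing raw PII
    decision: str
    rationale: str   # top factors, for explainability review
    timestamp: str

def log_decision(model_id, features, decision, rationale):
    record = DecisionRecord(
        model_id=model_id,
        input_hash=hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()).hexdigest(),
        decision=decision,
        rationale=rationale,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    return asdict(record)  # in practice, written to append-only storage

print(log_decision("credit-risk-v3.2", {"dti": 0.31, "fico": 712},
                   "approve", "low debt-to-income; strong credit history"))
```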

For industry leaders, regulation should be seen not as a barrier but as a guidepost for innovation that is resilient, trustworthy, and future-ready. Firms that lead in transparency, explainability, and fairness are increasingly rewarded in global markets.

Investor behavior confirms this shift. SoftBank Vision Fund now favors companies with robust data governance and explainable models. BlackRock is increasing scrutiny of AI risk exposure in its ESG evaluations. And the Omidyar Network actively invests in startups and nonprofits committed to algorithmic accountability and responsible design.

Responsible governance isn’t just the right thing to do—it’s also a strategic advantage.

Conclusion: Rethinking the Innovation Myth

The idea that regulation inherently suppresses innovation is a myth. In practice, well-designed AI regulation can enable innovation that is more sustainable, inclusive, and aligned with public values.

While there is a strong place for industry-specific rules, several overarching principles should form the foundation of all AI governance. Effective governance must be risk-based, ensuring proportional oversight. It must be rights-based, protecting privacy and human dignity. It should be outcome-oriented, emphasizing safety, fairness, and real-world impacts. It must be accountable and traceable, allowing for oversight and learning. And it should be collaborative and adaptive, evolving as AI technologies and social expectations change.

Blocking regulation may spare the industry some short-term friction, but it undermines the long-term trust and clarity that both society and industry need. If we want AI to serve people—not just profits—we must move past outdated myths and embrace regulation as a blueprint for innovation with integrity.


Brian Stucky

A recognized thought leader in decision management, Brian Stucky brings three decades of experience designing and implementing business rule and process management systems for commercial and federal clients. He has implemented and managed business rule development efforts across domains including the secondary mortgage market, credit card marketing and processing, mutual fund portfolio analysis, and insurance underwriting and risk management, as well as for various federal civilian agencies.

Brian's focus is now on ethical and responsible artificial intelligence for automated decision systems.

In addition, Brian is now in his sixth year as co-chairman of the Mortgage Industry Standards and Maintenance Organization (MISMO) Decision Modeling Community of Practice. His efforts there helped establish the Decision Model and Notation (DMN) standard as an official mortgage industry standard. He also participated in MISMO's Future State initiative. In January 2021, Brian began serving a two-year term on MISMO's Residential Governance Committee.