Technically Optimistic Podcast
We need a more balanced approach to the public debate about AI—and the regulation that governs it
The rise of artificial intelligence (AI) has captured the world's imagination and raised significant questions about how these technologies will affect our world. Just as critical as what we do with AI is how we think about it, particularly when it comes to legislation and regulation.
AI presents a paradox: it has the potential to vastly improve education, transportation, and healthcare, enhancing our quality of life, yet it is also feared as an uncontrollable force. Both the public and policymakers should heed experts' concerns about unintended consequences while also ensuring the technology's safe progression. To do that, we must embrace nuance. We have to educate the public on what AI is, how it works, and what it can do, both good and bad. Relying on government regulation or private-sector self-regulation alone isn't enough. The public needs to be engaged in this debate, and it needs to shape both private and public sector outcomes.
Today’s AI policy debates often focus on existential risks, but the broader discussion should include transparency, stress testing, deployment guidelines, human involvement, and ethics. These elements can shape daily AI use and drive legislation that safeguards society, promotes fairness, and fosters responsible AI development. Critics may claim government regulation stifles innovation, but a balanced approach can encourage responsible innovation while preventing abuses. At their most effective, regulations act as guardrails, aligning creative efforts with societal needs.
Take, for instance, the all-important area of accountability and transparency. Mandated transparency is one of the government's most effective functions: In sector after sector, reporting requirements and established bodies like the SEC bring necessary sunshine to industries. Legislation is the only way to hold AI system developers and deployers accountable for algorithmic decisions. Through mandated transparency, users can understand AI systems' workings, fostering trust and facilitating a meaningful dialogue between developers, users, and society at large.
Consider how AI collects and uses personal information. A well-balanced regulatory framework can define clear standards for data privacy and security, much as the government already does for telephone numbers, sensitive medical records, and financial information. Regulation should ensure individuals have control over their data, with robust safeguards against unauthorized AI access or misuse. That will allow innovation to flourish while safeguarding privacy rights and personal information.
Regulation done right can drive AI capital investment. What's often forgotten is that the US's regulatory and legal regime has helped create stock exchanges, powered American capital markets, and ensured a system of contracts and jurisprudence that is foundational to commerce. That infrastructure yields what emerging economies struggle to create: economic confidence. The government can bestow the same confidence on AI, which, in turn, attracts investment dollars. Through its deeds and words, the US could signal to the world that US-based AI firms are open for business, and safe businesses to boot.
That role could be paired with another key function the government can serve at this juncture in AI's development: ensuring that the biggest tech players do not win through regulatory capture. Already, mammoth tech firms have argued in favor of certain kinds of regulation, like licensing, that would lock out small- and medium-sized start-ups. If the government employs a balanced approach to regulation, it can encourage the kind of fast-moving innovation that comes from small teams and ensure that already-large tech firms do not simply grow exponentially on the back of this latest technology.
The government can also take a page out of its own book and devise a framework for AI stress-testing. Here it would borrow from the success of the Dodd-Frank legislation, passed in the aftermath of the 2008 financial crisis and designed to ensure banks maintained adequate capital reserves and avoided unnecessary risk. That same idea, basic minimum standards for risk and safety, can be applied to AI systems at scale. These technologies can undergo rigorous assessments to identify potential vulnerabilities and limitations. By subjecting AI systems to extensive testing, we can give companies the room to run, but with enough rigor to keep people safe.
As AI becomes increasingly sophisticated, it cannot exist in a regulatory vacuum, devoid of oversight. That said, the government cannot expect to simply lay down legislation and have it keep pace with AI technology's dynamism. Any laws written or agencies launched today must undergo regular self-examination, if only to help the US keep pace with its international counterparts. But the difficulty of keeping up is not a good enough reason not to regulate: just as AI technologies have evolved quickly, we must expect our AI regulatory frameworks and bodies to evolve too.
This is a delicate balancing act: We want innovation in spades, but we also need accountability if an AI system malfunctions. We want both committed creation and credible consequences, and we need to think through both before this technology achieves scale. But just as cars beget stoplights, so too can we strike the right balance with AI, one that generates the positives we seek and avoids the pitfalls that experts warn us about.
The AI revolution has brought us to a crossroads: We are on the cusp of potentially awe-inspiring AI technologies that can produce the next generation of breakthroughs, but we can also spot the risk of unintended consequences and unwelcome societal divisions. Thoughtful governance can help nurture AI's promise and ensure its widespread use.