Lawmakers think they are smart enough to control AI

SACRAMENTO – Quotation compilations are filled with jabs at lawmakers, as deep thinkers complain about the cravenness, venality and opportunism of politicians. Journalist H.L. Mencken complained that a “good politician is quite as unthinkable as an honest burglar.” Napoleon Bonaparte quipped that “in politics stupidity is not a handicap.” I’ve known good, honest and smart politicians. My main beef is their overall lack of humility.

Not so much personal humility, but a sense of limits on what government can accomplish. California is notoriously absurd on this front, as our top politicians routinely make grandiose pronouncements. Their latest ban will change the trajectory of Earth’s climate patterns! They will stand up to greed and Other Evil Forces! Every one of them aspires to sound like John F. Kennedy.

Sure, governments can occasionally accomplish something worthwhile, but the ones that make the most elaborate promises seem least able to deliver basic services. My local public utility promises only to keep the lights on and succeeds at the task virtually every day. By contrast, the state vows to end poverty, but can’t manage to distribute unemployment benefits without sending billions to fraudsters.

It’s with that backdrop that I present the latest hubris: Senate Bill 1047, which sits on the governor’s desk. It’s the Legislature’s “first-in-the-nation,” “groundbreaking,” “landmark” effort to take control of Artificial Intelligence before, as in the movie “Terminator,” AI gains self-awareness. I’ll always remember gubernatorial candidate Arnold Schwarzenegger’s visit to The Orange County Register while on break from filming the 2003 sequel, but Hollywood usually is no model for Sacramento.

Per a Senate analysis, the bill “requires developers of powerful artificial intelligence models and those providing the computing power to train such models to put appropriate safeguards and policies into place to prevent critical harms” and “establishes a state entity to oversee the development of these models.”

Once, when testifying about a bill in another state that imposed taxes and regulations on vaping devices, I watched lawmakers pass around sample devices and study them with apparent bewilderment. They had little understanding of how these relatively simple devices operated.

How many California lawmakers truly understand the nature of AI models, which are among the most complex (and rapidly developing) technologies in existence? “I’ll admit I don’t know a lot about AI … very little as a matter of fact … I like the way I may be doing this wrong, better than nobody else is doing anything at all,” said Assembly member Jim Wood, D-Healdsburg, before voting for the bill.

Do you suppose lawmakers will protect us from unforeseen “critical harms” from an almost unknowably complex technology in ways we have yet to fathom? The government sometimes is efficient at twisting new regulatory tools to abuse our rights, but rarely in service of our protection.

Some tech groups (including my employer, the R Street Institute) sent a letter to Gavin Newsom urging a veto. “SB 1047 is designed to limit the potential for ‘critical harm’ which includes ‘the creation or use of a chemical, biological, radiological or nuclear weapon in a manner that results in mass casualties,’” it argued. “These harms are theoretical. There are no real-world examples of third parties misusing foundation models to cause mass casualty events.”

Yet California lawmakers believe they have the savvy to stop some fictional catastrophe they’ve seen in a dystopian movie by imposing regulations that will, say, require a “kill switch” (like an easy button!). They will create another bureaucracy, where regulators will presumably understand this technology at the level of its designers. If they were that skilled, they’d be start-up billionaires living in Mountain View rather than state workers living in Orangevale.

While the benefits of such vague regulations are hard to imagine, the downsides are clear – especially in a state so dependent on its tech industry. California has been losing tech companies from the Bay Area, but AI is a growing hot spot. Is it wise to chase it away? AI designers can easily build their businesses in other communities with large tech-worker cohorts.

Lawmakers amended the bill to remove troubling provisions that could have subjected AI firms to attorney-general lawsuits and even potential criminal charges, but it will still leave the industry confused and subject to incalculable penalties. This is a start-up-heavy industry, yet these provisions will place particular burdens on companies that lack the compliance and legal resources to navigate the state-imposed thicket.

“The entire framework is built on the premise that these advanced models will pose a threat, which is an assumption that is highly contested,” wrote the American Enterprise Institute’s Will Rinehart. “And to top it off, if these AI models are truly dangerous … then California shouldn’t be regulating them anyway – it should be the purview of the federal government.” That analysis makes sense, but who believes Newsom has the humility to heed it?

Steven Greenhut is Western region director for the R Street Institute and a member of the Southern California News Group editorial board. Write to him at sgreenhut@rstreet.org.