By Khari Johnson | CalMatters
When Gov. Gavin Newsom vetoed California’s most high-profile artificial intelligence regulation last fall, he simultaneously asked the state’s deep bench of AI researchers to recommend guardrails that balance safety and innovation. The result of that work, The California Report on Frontier AI Policy, was released earlier this week.
The report stresses transparency-focused regulations such as whistleblower protections and audits by independent third parties, mirroring a draft of the report released in March. It also highlights how AI has changed in the past three months, including improvements in its ability to act independently and to help people make dangerous weapons or carry out cyberattacks.
In one example, a language model from the AI company Anthropic threatened to blackmail engineers by telling their partners they had cheated on them, according to an evaluation by the company. Another assessment found that highly advanced AI models, known as “frontier models,” can tell when they’re being evaluated.
These findings do not indicate an immediate risk, but they underscore how quickly the technology is progressing and why such risks must be taken seriously, Scott Singer, one of the report’s lead writers, said in an interview with CalMatters.
The California Legislature is considering numerous bills to regulate AI, including measures that would require online platforms to label AI-generated content and mandate protocols for how chatbots respond to people who talk about self-harm. Another would prevent developers from blaming AI for harm in court. State Sen. Scott Wiener, a San Francisco Democrat, said in a statement that his office is considering which of the report’s recommendations to incorporate into a bill that protects AI whistleblowers.
The report does not address the proposed decade-long pause on state AI regulation, beyond encouraging coordination with other governments to avoid the compliance burden on businesses that Republicans in Congress used to justify the proposed moratorium.