
The case for targeted regulation

anthropic.com
submitted 6 mos ago by sometimessober to technology

Summary

Governments should urgently take action on AI policy in the next eighteen months. The window for proactive risk prevention is closing fast. We suggest some principles for how governments can meaningfully reduce catastrophic risks while supporting innovation.

A year ago, we warned that frontier models might pose real risks in the cyber and CBRN domains within 2-3 years. Surgical, careful regulation will soon be needed.

At Anthropic, we try to deal with this challenge via our Responsible Scaling Policy (RSP). The RSP is an adaptive framework for identifying, evaluating, and mitigating catastrophic risks.

Responsible Scaling Policies (RSPs) are not intended as a substitute for regulation, but as a prototype for it. They become a key part of product roadmaps, rather than just being a policy on paper.

Currently, the public and lawmakers have no way to verify any AI company’s adherence to its RSP (or similar plan), or the outcomes of any tests run as part of it. Companies should also be required to publish a set of risk evaluations for each new generation of AI systems.

Regulations should be as surgical as possible. They must not impose burdens that are unnecessary or unrelated to the issues at hand. Any bill or law should also be simple to understand and implement.

Getting this right is essential to realizing AI’s benefits and addressing its risks. California has already tried once to legislate on the topic and made some significant progress.

We think the principles and approach we’ve outlined here are sufficiently simple and pragmatic that they could be helpful outside the US as well as inside it. We also expect that, as long as such policy approaches have a mechanism for standardization and mutual recognition, mandating certain common safety and security approaches for frontier AI companies could ultimately reduce the overall cost of doing business in diverse global regions.

Regulation of frontier models should focus on empirically measured risks, not on whether a system is open- or closed-weight. The RSP framework is designed to make it harder for both insider and outsider threats to compromise a company and exfiltrate its IP.

Regulation should not favor or discourage open-weight models. Instead, it should incentivize developers to address risks. Open-weight models can be used to create new datasets.

26

4 Comments

4
sneakattack
6 mos ago
It's not in anybody's (re: people making money off this) interest to push for this
2
sometimessober (OP)
6 mos ago
Of course not. They have too much to gain.
2
fitasafiddle
6 mos ago
We need to beat them back somehow
3
splitsecond
6 mos ago
It's awesome that they are pushing for regulation on themselves. They probably see some companies abusing their positions.