LW - Claude 3.5 Sonnet by Zach Stein-Perlman
Anthropic announced Claude 3.5 Sonnet, saying "we'll be releasing Claude 3.5 Haiku and Claude 3.5 Opus later this year."
They made a mini model card. Notably:
The UK AISI also conducted pre-deployment testing of a near-final model, and shared their results with the US AI Safety Institute . . . . Additionally, METR did an initial exploration of the model's autonomy-relevant capabilities.
It seems that UK AISI only got maximally shallow access - Anthropic would presumably have said so if it were deeper - and in particular the card describes "internal research techniques to acquire non-refusal model responses" as internal. This is better than nothing, but it would be unsurprising if an evaluator is unable to elicit dangerous capabilities while users - with much more time and with access to future elicitation techniques - ultimately are. Recall that DeepMind, in contrast, gave "external testing groups . . . . the ability to turn down or turn off safety filters."
Anthropic CEO Dario Amodei gave Dustin Moskovitz the impression that Anthropic committed "to not meaningfully advance the frontier with a launch." (Gwern and others got this impression from Anthropic too.) Perhaps Anthropic does not consider itself bound by this, which might be reasonable - but it's quite disappointing that Anthropic hasn't clarified its commitments, particularly after the confusion on this topic around the Claude 3 launch.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org