Claude Sonnet 4.5 includes AI Safety Level 3 (ASL-3) protections designed to prevent misuse related to chemical, biological, radiological, and nuclear (CBRN) weapons. These safety measures include filters called classifiers that detect potentially dangerous inputs and outputs.
Why was my message blocked?
Sonnet 4.5's safety filters are intended to prevent assistance with CBRN weapons-related tasks. If you received an error message, the filters detected content that matched patterns associated with these specific threats.
These filters are still being refined, and false positives can occur: legitimate queries may occasionally be flagged incorrectly. We're actively working to improve the precision of these classifiers to minimize disruption while maintaining safety.
What you can do
If you believe your message was blocked despite a legitimate use, you have several options:
Avoid patterns that trigger false positives
The classifiers are sensitive to certain patterns that may resemble jailbreak attempts or obfuscation techniques:
Avoid cipher-like content: Base64-encoded strings, git commit hashes, hexadecimal sequences, and other encoded data can trigger the filters. If you need to include such data, surround it with context that explains what it is and why it's there (see the sketch after this list).
Simplify instructions: Overly long or complex system prompts that include intricate conditional logic may resemble attempts to obfuscate behavior. Keep prompts clear and straightforward.
Be cautious with biology-related content: If your application doesn't specifically require biological or chemical information, consider rephrasing requests to avoid these topics when possible.
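To make the first tip concrete, here is a minimal sketch using the Anthropic Python SDK. The model identifier, the placeholder payload, and the wording of the surrounding explanation are all illustrative assumptions, not requirements:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# A short base64 payload, truncated for illustration (a PNG file header).
attachment_b64 = "iVBORw0KGgoAAAANSUhEUgAA..."

# Rather than pasting raw encoded data on its own, explain what the data is
# and why it appears, so the request doesn't resemble an obfuscation attempt.
response = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative model id; check the current model list
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": (
                "The string below is the base64 encoding of a PNG screenshot "
                "from our UI test suite. Please suggest a descriptive filename "
                "for this fixture:\n\n" + attachment_b64
            ),
        }
    ],
)
print(response.content[0].text)
```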
Other options
Continue with Claude Sonnet 4: You can switch to Claude Sonnet 4, which uses different safety measures, for the remainder of the conversation; it may be able to help with your request (API users can do this with a one-parameter change, as shown in the sketch after this list).
Send feedback: Let us know when a message is blocked incorrectly; your reports help us improve filter accuracy.
Edit your message: You can try rephrasing your question or providing additional context about your legitimate use case.
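For API users, the model switch in the first option above is just a change to the model parameter. A minimal sketch, again assuming the Anthropic Python SDK, with illustrative model identifiers:

```python
import anthropic

client = anthropic.Anthropic()

def ask(prompt: str, model: str) -> str:
    # Single-turn helper; a real conversation would pass prior turns in `messages`.
    response = client.messages.create(
        model=model,
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

# If a request is blocked on Claude Sonnet 4.5, one option is to retry the rest
# of the conversation on Claude Sonnet 4, which uses different safety measures.
# Both model ids here are illustrative; consult the current model list.
reply = ask("Explain how PCR thermal cycling works.", model="claude-sonnet-4-0")
print(reply)
```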
Why the filters?
As AI models become more capable, they require stronger protections against potential misuse. Sonnet 4.5's ASL-3 deployment measures are part of Anthropic's Responsible Scaling Policy, which ensures that increasingly capable models have appropriate safeguards.
The filters are specifically designed to prevent extended, end-to-end CBRN workflows that could pose catastrophic risks. They are not intended to block general scientific discussion, educational content, or commonly available information.
For researchers and dual-use applications
If you're working in scientific research or another dual-use field and need access for legitimate purposes, we've established access controls for vetted users. Contact our support team to learn more.