Understanding our policies on using Claude's Outputs for model training and development
When you use Claude, you own the Outputs generated from your Inputs. However, there are important restrictions, standard across the AI industry, on using these Outputs to train AI models. We prohibit customers from using our services to train or develop competing AI models without our written permission. This article explains which uses are permitted, which are prohibited, and why these policies exist.
Why we restrict model training
Anthropic invests significantly in making Claude safe, helpful, and harmless. We conduct rigorous pre-release testing, implement multiple safety layers, and continuously monitor our models' behavior. When Outputs are used to train new models without our oversight, additional risks emerge. Safety controls may be lost: models trained on Claude's Outputs won't carry our safety measures, potentially resulting in harmful or dangerous AI systems. We also have no visibility into deployment, meaning we cannot monitor how these distilled models are used or prevent misuse.
When customers use Claude to generate Outputs that then train competing models, they're essentially using our infrastructure and investment to build direct competitors to our service. Like other software and service providers, we expect that our services won't be used to undermine our product offerings.
What you can do with Outputs
You can use Claude's Outputs to train models that don't compete with Anthropic's own models. This includes creating specialized classifiers and tools such as:
Sentiment analysis tools
Content categorization systems
Summarization tools
Information extraction tools
Semantic search tools
Anomaly detection tools
Outputs can also be integrated into your applications to power features within your products, generate content for your customers, analyze and structure your data, or improve internal workflows and productivity.
What's prohibited
Our Terms do not allow using Outputs to train models that compete with Anthropic's own. It is also a violation of our Terms to support a third party's attempt to do the same.
Uses that are prohibited include:
Training general-purpose chatbots
Training models designed for open-ended text generation
Using Outputs as training targets for other models
Reverse engineering our training methods