To use Microsoft Foundry (Azure AI Foundry) as the inference provider, set `inferenceProvider` to `foundry` and supply the resource name and API key described below.

In this preview platform integration, Claude models on Microsoft Foundry run on Anthropic's infrastructure; the integration provides billing and access through Azure. The data-residency and "no conversation data is sent to Anthropic" statements elsewhere in these pages do not apply when using Microsoft Foundry. See the Overview for details.

Configuration keys

| Setting | Required | Description |
| --- | --- | --- |
| Microsoft Foundry resource name (`inferenceFoundryResource`) | Yes | Azure AI Foundry resource name used to construct the endpoint URL (`<resource>.services.ai.azure.com`). Two to sixty-four characters, lowercase alphanumeric and hyphens. |
| Microsoft Foundry API key (`inferenceFoundryApiKey`) | Yes | API key for the Foundry resource. May be supplied dynamically by an `inferenceCredentialHelper` executable instead. |
You must also set `inferenceModels` to a list of Foundry deployment names. See the Configuration reference.
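The resource-name constraint above (two to sixty-four characters, lowercase alphanumeric and hyphens) and the endpoint construction can be sketched as follows. This is an illustrative check only — the regex and helper name are not part of the app, and Azure may enforce additional naming rules:

```python
import re

# Pattern derived from the constraint stated above: 2-64 characters,
# lowercase letters, digits, and hyphens. (Azure may apply further
# rules, e.g. disallowing leading or trailing hyphens.)
RESOURCE_NAME_RE = re.compile(r"^[a-z0-9-]{2,64}$")

def foundry_endpoint(resource: str) -> str:
    """Build the endpoint URL for a Foundry resource name."""
    if not RESOURCE_NAME_RE.fullmatch(resource):
        raise ValueError(f"invalid Foundry resource name: {resource!r}")
    return f"https://{resource}.services.ai.azure.com"

print(foundry_endpoint("your-foundry-resource"))
# → https://your-foundry-resource.services.ai.azure.com
```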

Configure in the app

Open the in-app configuration window (Developer → Configure third-party inference). In the Connection section, set Inference provider to Foundry, then fill in the Foundry credentials card:
| Field | Value |
| --- | --- |
| Microsoft Foundry resource name | `your-foundry-resource` |
| Microsoft Foundry API key | your API key |
Under Identity & models, add at least one Model list entry using the Foundry deployment name. Then click Export to produce a `.mobileconfig` (macOS) or `.reg` (Windows) file for your MDM. See Installation and setup for the export and deployment workflow.
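For orientation, an exported Windows `.reg` file might look roughly like the sketch below. The registry path, value names other than the documented configuration keys, and the deployment name are illustrative assumptions — deploy the file the Export button produces rather than hand-authoring one:

```
Windows Registry Editor Version 5.00

; Illustrative path — the actual hive and key come from the exported file
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\ExampleVendor\ExampleApp]
"inferenceProvider"="foundry"
"inferenceFoundryResource"="your-foundry-resource"
"inferenceFoundryApiKey"="your API key"
; inferenceModels is a list; its on-disk encoding is taken from the export
"inferenceModels"="your-deployment-name"
```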