To use Microsoft Foundry (Azure AI Foundry) as the inference provider, set inferenceProvider to foundry and supply the resource name and API key described below.
In this preview integration, Claude models on Microsoft Foundry run on Anthropic’s infrastructure; billing and access are handled commercially through Azure. The data-residency and “no conversation data sent to Anthropic” statements elsewhere in these pages do not apply when using Microsoft Foundry. See the Overview for details.

Configuration keys

| Setting | Key | Required | Description |
|---|---|---|---|
| Microsoft Foundry resource name | inferenceFoundryResource | Yes | Azure AI Foundry resource name used to construct the endpoint URL (<resource>.services.ai.azure.com). Two to sixty-four characters, lowercase alphanumeric and hyphens. |
| Microsoft Foundry API key | inferenceFoundryApiKey | Yes | API key for the Foundry resource. May be supplied dynamically by an inferenceCredentialHelper executable instead. |
You must also set inferenceModels to a list of Foundry deployment names. See the Configuration reference.
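Since the API key may be supplied by an inferenceCredentialHelper executable rather than stored in the configuration, a helper could be sketched as below. This is a minimal illustration only: the stdout contract and the FOUNDRY_API_KEY environment variable are assumptions, not documented on this page; check the Configuration reference for the helper's actual interface.

```python
#!/usr/bin/env python3
"""Hypothetical inferenceCredentialHelper sketch.

Assumptions (not stated on this page): the helper is expected to print
the API key on stdout, and the key is available in a FOUNDRY_API_KEY
environment variable.
"""
import os
import sys


def read_api_key(env) -> str:
    """Return the API key from the given environment mapping, or raise."""
    key = env.get("FOUNDRY_API_KEY", "").strip()
    if not key:
        raise KeyError("FOUNDRY_API_KEY is not set")
    return key


if __name__ == "__main__":
    try:
        # Print only the key, nothing else, so the caller can consume stdout.
        print(read_api_key(os.environ))
    except KeyError as err:
        sys.exit(str(err))
```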

Example

```xml
<key>inferenceProvider</key>
<string>foundry</string>
<key>inferenceFoundryResource</key>
<string>your-foundry-resource</string>
<key>inferenceFoundryApiKey</key>
<string>your-api-key</string>
<key>inferenceModels</key>
<string>["claude-sonnet-4"]</string>
```
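The resource-name rule above (two to sixty-four characters, lowercase alphanumeric and hyphens) and the <resource>.services.ai.azure.com host pattern can be checked before rolling a configuration out. A minimal sketch; the https scheme is an assumption, as this page only specifies the host:

```python
import re


def foundry_endpoint(resource: str) -> str:
    """Validate a Foundry resource name and build the endpoint base URL.

    Name rule (2-64 chars, lowercase alphanumeric and hyphens) and host
    pattern <resource>.services.ai.azure.com are taken from the table
    above; the https scheme is an assumption.
    """
    if not re.fullmatch(r"[a-z0-9-]{2,64}", resource):
        raise ValueError(f"invalid Foundry resource name: {resource!r}")
    return f"https://{resource}.services.ai.azure.com"
```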