Public beta. Cowork on 3P is under active development. These docs are updated as features ship, and we’ll share a GA timeline once one is set.
Cowork on third-party (3P) is a deployment mode of Claude Desktop (Cowork and Code tabs) that routes all model inference through a provider you configure: Google Cloud Vertex AI, Amazon Bedrock, Azure Foundry, or any compatible gateway you operate. The app's web interface is bundled inside the desktop client rather than loaded from claude.ai, and conversation history is stored on the user's device. You get the same agentic Cowork experience (file creation, multi-step research, sub-agent coordination, the Code tab) with inference and billing handled by the provider you choose.
The data-residency, compliance, and “no conversation data sent to Anthropic” statements throughout these pages apply only when inferenceProvider is vertex or bedrock. They do not apply when using Azure Foundry or a gateway. Equivalent guarantees for Azure Foundry are coming; we will update these pages when they are available.

Who it’s for

Cowork on 3P is designed for organizations whose security, regulatory, or contractual requirements prevent them from sending data to Anthropic’s first-party infrastructure. Typical deployments include:
  • Highly regulated enterprises — organizations whose security or regulatory requirements mandate third-party inference and rule out Anthropic-hosted endpoints
  • International enterprises with data residency requirements — organizations that require in-region data residency and cannot send conversation data to the United States
  • Public sector and defense — agencies and contractors operating under FedRAMP, ITAR, or sovereign-cloud mandates
If your organization can use Anthropic's first-party products directly, standard Cowork on a Team or Enterprise plan is the better fit: it is simpler to deploy, offers an in-app UI for user management, analytics, and RBAC, and receives new features sooner than Cowork on 3P. Choose Cowork on 3P when routing inference through Anthropic's API is not an option.

Architecture

Cowork on 3P keeps the standard Cowork feature set and relocates inference to the provider you configure.
| Component | Standard Cowork | Cowork on 3P |
| --- | --- | --- |
| Model inference | Anthropic API | Your Vertex AI / Bedrock / Foundry / gateway endpoint |
| Web application | Loaded from claude.ai | Bundled inside the desktop app |
| User identity | Anthropic account | Local device identity only |
| Conversation storage | Anthropic backend | Local disk on the user's machine |
| Tool sandbox | Local VM | Local VM (identical) |
| Configuration | Admin console at claude.ai | OS-native configuration (MDM-managed or per-user) |
The desktop app detects 3P mode at launch from the configured inference provider. When a provider and its credentials are present, the sign-in screen offers the option to skip Anthropic authentication and start the app using your inference-provider configuration instead.
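For a single-machine evaluation, provider detection reduces to a couple of managed-configuration keys. The sketch below uses the two keys named in these docs for a Vertex AI deployment; the payload format (JSON here) is an illustrative assumption, and any credential or endpoint keys beyond these two are out of scope, so treat the Configuration reference as the authoritative key list.

```json
{
  "inferenceProvider": "vertex",
  "inferenceVertexRegion": "europe-west4"
}
```

With bedrock as the provider, the region key is inferenceBedrockRegion instead.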

Security posture

  • No conversation egress to Anthropic (Vertex AI and Bedrock only). Prompts, responses, files, and tool outputs are sent only to your configured inference endpoint and stored only on the local machine.
  • Sandboxed tool execution. Shell commands run in the hardened Cowork VM; file access is scoped to your allowed folders and web fetches to your egress allowlist.
  • Auditable telemetry. Crash reports and product analytics are scrubbed of conversation and user data before being sent to Anthropic, and can be fully disabled via configuration keys. Independently, you can export full session activity (prompts, tool calls, token counts) to your own OpenTelemetry collector.
  • Centrally managed. All configuration is delivered via your existing MDM (Jamf, Intune, Workspace ONE, Group Policy) and cannot be overridden by end users when an admin profile is present.
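If you use the session-activity export described above, the receiving side is a standard OpenTelemetry Collector that your organization operates. A minimal sketch of a collector configuration that accepts OTLP over gRPC and writes everything to a local file is shown below, assuming the contrib distribution of the Collector (the file exporter ships in otelcol-contrib); the app-side export settings themselves live in your managed configuration and are covered in the Configuration reference.

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch:

exporters:
  file:
    path: /var/log/cowork-sessions.json

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [file]
```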
For a detailed treatment of the threat model, sandbox boundaries, and data flows, request access to the Claude Cowork Desktop Security Architecture Overview on Anthropic’s Trust Center. For architecture, telemetry, and controls information specific to Cowork on 3P, see the Claude Cowork Security Overview (Third-Party Platforms) on the Trust Center.

Data residency and international deployment

This section applies when using Vertex AI or Bedrock. Inference requests go directly from the user’s machine to the regional endpoint you configure (inferenceVertexRegion or inferenceBedrockRegion). Because conversation data goes only to that endpoint and to local disk, residency is determined entirely by:
  1. The cloud region you select for inference
  2. The physical location of the user’s device, where conversations are persisted
For multi-region organizations, deploy distinct MDM configuration profiles per geography so each user population points at an in-region endpoint. Vertex AI and Bedrock each offer Claude models in EU, UK, APJ, and other sovereign regions; consult your provider’s model-availability documentation for the current list.
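Concretely, "one profile per geography" can be as small as the region key differing between otherwise identical payloads. The two fragments below are illustrative sketches; both regions are real Vertex AI regions, but confirm Claude model availability in your provider's documentation before pinning one.

```json
{ "inferenceProvider": "vertex", "inferenceVertexRegion": "europe-west4" }
```

```json
{ "inferenceProvider": "vertex", "inferenceVertexRegion": "us-east5" }
```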

Public sector and highly regulated environments

This section applies when using Vertex AI or Bedrock. Because inference runs in your cloud tenant, Cowork on 3P operates inside whatever compliance boundary your provider and region give you. The desktop application itself contacts Anthropic only for crash reporting, product analytics, and auto-updates, and each of these can be disabled independently via managed configuration. With Anthropic-bound telemetry and updates disabled, the compliance posture of your deployment is determined entirely by your inference provider. If you run inference in a FedRAMP High–authorized region of Vertex AI or Bedrock, model traffic stays within that boundary and the FedRAMP relationship is between your organization and that provider. See Telemetry and egress for the full set of network paths and how to lock them down.
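As an illustration of the lockdown described above: aside from inferenceProvider and inferenceBedrockRegion, every key name in this fragment is a hypothetical placeholder (the real switches are listed in the Configuration reference). The point is the shape of the profile: crash reporting, analytics, and auto-updates are independent toggles you pin off via MDM, and confirm model availability in your chosen region before deploying.

```json
{
  "inferenceProvider": "bedrock",
  "inferenceBedrockRegion": "us-gov-west-1",
  "crashReportingEnabled": false,
  "productAnalyticsEnabled": false,
  "autoUpdateEnabled": false
}
```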

Next steps

Installation and setup

Roll out Cowork on 3P to your organization with MDM, or configure a single machine for evaluation.

Configuration reference

Every managed-configuration key, what it does, and recommended security profiles.

Extensions

Deploy MCP servers, plugins, skills, and hooks across your fleet.

Telemetry and egress

What the app sends to Anthropic, how to turn it off, and the firewall allowlist you’ll need.