Why your sGTM container suddenly returns 403

A working container that starts returning 403 has three common causes, each with a fast diagnostic.

A 403 from a previously working sGTM container is unusual enough to be alarming and almost always traces to one of three root causes. Walk through them in order.

Cause 1: workspace token expired or rotated

If your container is managed via the management API (Terraform, custom CI scripts), the auth token has a short TTL. When it expires, every API request returns 403. The fix is to regenerate the token in your Google Cloud project and update it wherever it is stored (CI secrets, Terraform variables, and so on).

Diagnostic: try the same change through the GTM web UI. If the UI works but your script does not, the token is the cause.
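The scripted half of that check is easier if you capture just the HTTP status. A minimal sketch: the management-API URL pattern is real, but the account and container IDs and the stored TOKEN are placeholders, and the status is hardcoded here so the logic runs standalone.

```shell
# In a real pipeline the status would come from the management API, e.g.:
#   STATUS=$(curl -s -o /dev/null -w '%{http_code}' \
#     -H "Authorization: Bearer ${TOKEN}" \
#     "https://tagmanager.googleapis.com/tagmanager/v2/accounts/ACCOUNT_ID/containers/CONTAINER_ID")
# Hardcoded placeholder so the sketch is self-contained:
STATUS=403

if [ "$STATUS" = "403" ]; then
  echo "API returns 403: regenerate the token and update your secret store"
else
  echo "token looks fine; move on to causes 2 and 3"
fi
```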

Cause 2: container config URL changed

When you publish a new version of a server-side container, the config URL is updated. If your tagging server (the runtime, not the workspace) is configured to fetch from a stale URL, requests return 403 because the auth token embedded in that URL no longer matches.

Diagnostic: in your container's environment, check the CONTAINER_CONFIG variable. Compare it to the current config URL shown in the GTM admin UI. If they differ, update the runtime config.
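That comparison is easy to script. A sketch, with both config strings invented placeholders: in practice RUNTIME_CONFIG comes from the tagging server's environment and UI_CONFIG is pasted from the admin UI.

```shell
# Fallback is a placeholder so the sketch runs outside a real container.
RUNTIME_CONFIG="${CONTAINER_CONFIG:-aWQ9R1RNLU9MRA..}"
# Whatever the GTM admin UI currently shows (placeholder value).
UI_CONFIG="aWQ9R1RNLU5FVw.."

if [ "$RUNTIME_CONFIG" = "$UI_CONFIG" ]; then
  echo "config matches; look at causes 1 and 3"
else
  echo "config mismatch: redeploy the tagging server with the current value"
fi
```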

For SprTags-managed containers, the runtime config is updated automatically on publish. If you self-host on App Engine or similar, keeping it current is your responsibility.

Cause 3: IP allowlist on the destination

If you front your tagging server with a WAF, or a downstream destination enforces IP-based access controls, a change in the tagging server's source IP (cloud autoscaling can do this) means the destination starts returning 403. Your tagging server then propagates the failure back to the browser.

Diagnostic: check container logs for outgoing 403s separately from incoming ones. Outgoing 403s from a destination point to allowlist drift.
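If your logs tag request direction, the split is one grep each way. A sketch over an invented log format (real container logs will look different; adjust the patterns to match yours):

```shell
# Invented log format for illustration: direction, target, status.
cat > /tmp/sgtm.log <<'EOF'
IN  /g/collect 403
OUT https://analytics.example.net/collect 403
IN  /g/collect 200
OUT https://analytics.example.net/collect 403
EOF

IN_403=$(grep -c '^IN .* 403$' /tmp/sgtm.log)
OUT_403=$(grep -c '^OUT .* 403$' /tmp/sgtm.log)
echo "incoming 403s: $IN_403"   # browser -> tagging server
echo "outgoing 403s: $OUT_403"  # tagging server -> destination
```

Outgoing 403s outnumbering incoming ones is the signature of allowlist drift at the destination.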

The non-cause everyone suspects

"Did the X-Gtm-Server-Preview header expire?" usually does not return 403. An expired preview header just means events stop showing in the preview pane; production traffic continues normally. If you are seeing 403, look elsewhere.

A diagnostic curl

curl -v https://data.example.com/healthz

SprTags exposes a /healthz endpoint that returns 200 if the container itself is alive. If /healthz returns 200 but /g/collect returns 403, the container is fine and the cause is upstream config. If /healthz returns 403 too, the container itself is misconfigured and you need to look at the workspace settings.
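The two probes combine into a small decision tree. A sketch with the statuses hardcoded as placeholders; in practice each would be captured with curl -s -o /dev/null -w '%{http_code}' against the endpoint.

```shell
# Placeholder probe results; substitute the statuses you actually received.
HEALTHZ_STATUS=200
COLLECT_STATUS=403

if [ "$HEALTHZ_STATUS" -eq 200 ] && [ "$COLLECT_STATUS" -eq 403 ]; then
  DIAGNOSIS="container alive; 403 comes from upstream config"
elif [ "$HEALTHZ_STATUS" -eq 403 ]; then
  DIAGNOSIS="container misconfigured; check workspace settings"
else
  DIAGNOSIS="no 403 pattern; re-run the probes"
fi
echo "$DIAGNOSIS"
```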