BigQuery is the warehouse most server-side Google Tag Manager (sGTM) users already have access to, which makes it the obvious target for streamed event data. Here are the things we keep coming back to.
- Treat your tagging URL like product infrastructure. It is not a marketing toy. Put it on the same uptime monitoring as your checkout.
- Use a custom domain from day one. Switching later is annoying and almost always loses some history.
- Hash PII before it leaves your server. SHA-256 the email, normalise it first (trim whitespace, lowercase), and never log the plaintext.
- Keep a written event taxonomy. It does not need to be elegant. It does need to exist somewhere people can find it.
- Version your container changes. GTM gives you versions for free. Use them. Future-you will be grateful.
- Test in preview mode for everything. Including the things you are sure work. Especially those.
- Reconcile with backend data weekly. A small drift is normal. A growing drift is a fire.
- Partition by ingest date, cluster by event name. Both are cheap to set up and dramatically reduce query costs at scale.
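The hashing point deserves a concrete shape. A minimal sketch, using only the Python standard library; the `hash_email` name and the trim-plus-lowercase normalisation are our assumptions here, so match whatever canonical form your downstream match partners expect:

```python
import hashlib

def hash_email(email: str) -> str:
    """Normalise then SHA-256 an email so plaintext never leaves the server.

    Normalisation (trim + lowercase) is a minimal assumption; some match
    partners have stricter canonicalisation rules.
    """
    normalised = email.strip().lower()
    # hexdigest gives a stable 64-char lowercase string, safe to stream onward
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

# The same logical address always yields the same digest:
assert hash_email(" Jane.Doe@Example.com ") == hash_email("jane.doe@example.com")
```

Hash as early as possible in the server container, and treat the plaintext field as radioactive everywhere downstream.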
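The weekly reconciliation can be as simple as comparing two counts. A sketch of the drift check; `drift_pct` and the 2% threshold are illustrative assumptions, and the counts would come from your own backend database and a BigQuery query:

```python
def drift_pct(backend_count: int, warehouse_count: int) -> float:
    """Relative drift between backend truth and streamed events, in percent."""
    if backend_count == 0:
        return 0.0 if warehouse_count == 0 else float("inf")
    return abs(backend_count - warehouse_count) / backend_count * 100

# Hypothetical weekly check: tune the threshold to your own baseline drift.
WARN_THRESHOLD = 2.0  # percent

def check_week(backend_count: int, warehouse_count: int) -> bool:
    """True if the week's drift is within tolerance."""
    return drift_pct(backend_count, warehouse_count) <= WARN_THRESHOLD
```

The useful signal is the trend, not any single number: log each week's percentage and alert when it grows run over run.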
None of these are revolutionary. They are the boring habits that separate teams who trust their data from teams who argue about it. For teams already on GCP, this is one of the lower-friction paths to a real analytics warehouse.