Remote teams don’t get stronger from more meetings.
They get stronger from time together that feels human.
At Xata, Club Xata France 2026 was built around good conversations, shared meals, walks, and the unplanned moments in between.
Check the full story here: xata.io/blog/club-xa...
This is the first of several announcements. More next week.
Blog post with architecture deep-dive: xata.io/blog/open-so...
What's in the release:
• Branch operator managing all resources related to a branch
• Clusters & projects services for the control plane and REST APIs
• SQL gateway (routing, IP filtering, waking up scaled-to-zero clusters, serving the serverless driver over HTTP/WebSockets, and more)
Two use cases driving this:
• Preview and testing environments with real production data
• Platforms provisioning per-user Postgres at scale
Xata is now open source. Apache 2.0.
A Postgres platform with copy-on-write branching at the storage layer. Copy a TB-sized database in seconds. Inactive copies scale to zero automatically. 100% vanilla Postgres.
Built our product analytics warehouse in vanilla Postgres instead of adding an OLAP stack. Four data sources (Keycloak, PostHog, Orb, internal DB), materialized views to flatten JSONB, pg_cron for refreshes, and database branches to iterate on the schema safely: xata.io/blog/postgre...
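The pattern from that post can be sketched in a few lines of SQL. This is a minimal, hypothetical example (table, column, and job names are illustrative, not Xata's actual schema), assuming the pg_cron extension is installed:

```sql
-- Flatten JSONB payloads into typed columns with a materialized view
CREATE MATERIALIZED VIEW events_flat AS
SELECT id,
       payload->>'user_id'           AS user_id,
       (payload->>'ts')::timestamptz AS occurred_at,
       payload->>'event'             AS event_name
FROM raw_events;

-- REFRESH ... CONCURRENTLY requires a unique index on the view
CREATE UNIQUE INDEX ON events_flat (id);

-- Refresh nightly at 02:00 via pg_cron
SELECT cron.schedule(
  'refresh-events-flat',
  '0 2 * * *',
  $$REFRESH MATERIALIZED VIEW CONCURRENTLY events_flat$$
);
```

Iterating on the view definition happens on a database branch first, then gets promoted once the shape is right.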
Our CTO Tudor Golubenco is speaking tonight at the PostgreSQL Berlin Meetup, hosted at Zalando.
The talk: How we reduced CloudNativePG wake-up times from 20+ seconds to sub-second for Xata's scale-to-zero Postgres clusters.
20:15 CET if you're in Berlin: meetup.com/postgresql-m...
pgstream v1.0.0 is out, with a major architectural change.
Schema changes are now emitted directly into WAL as logical messages, without schema logs or stored schema state.
If you work with Postgres CDC, this might be interesting.
Details in the blog 👇
xata.io/blog/pgstrea...
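The underlying mechanism is core Postgres: `pg_logical_emit_message` writes an arbitrary payload into WAL, where logical decoding consumers can pick it up. A minimal sketch (the prefix and JSON payload here are illustrative, not pgstream's actual wire format):

```sql
-- Emit a logical message into WAL without touching any table.
-- Arguments: transactional flag, a prefix consumers filter on, the payload.
SELECT pg_logical_emit_message(
  true,                               -- transactional: decoded with the tx
  'schema_change',                    -- prefix for downstream filtering
  '{"op": "alter", "table": "users"}' -- payload
);
```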
Curious to dig deeper? Check out @apatheticmagpie.bsky.social's full breakdown on the blog 👉 xata.io/blog/constra...
Did you know Postgres lets you put **constraints on domains**, not just tables?
A domain is a custom data type with rules attached, and Postgres stores those CHECK constraints right in `pg_constraint` linked by `contypid` instead of `conrelid`.
A clean way to centralize data rules 👌
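A minimal sketch of the idea (the domain name and regex are illustrative):

```sql
-- A domain is a reusable type with rules attached
CREATE DOMAIN us_zip AS text
  CHECK (VALUE ~ '^\d{5}(-\d{4})?$');

-- Every column of this type inherits the rule
CREATE TABLE addresses (
  id  bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  zip us_zip NOT NULL
);

-- The CHECK lives in pg_constraint, linked via contypid (not conrelid)
SELECT conname, contypid::regtype
FROM pg_constraint
WHERE contypid = 'us_zip'::regtype;
```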
It is being planned, thanks for the interest!
Taking database snapshots and moving large volumes of data over the network is something our customers do regularly. While batching is the de facto way to make this efficient, choosing the right batch size is non-trivial given network variability, latency, and system load.
Read how we solved it👇🏽
@divyendusingh.com is doing a great job making agents do all sorts of stuff with databases. In our case, with a few simple instructions, they can do branching operations, run queries, validate bug fixes, and more.
The blog posts are paired with demo videos, have a look 👀 👇🏽
New in xata clone: AI-assisted PII removal config (schema → strict config → validated).
New in xata clone: xata clone config --mode=ai
Feed it your schema + prompt → get a strict, reviewable anonymization config that’s typically more complete than static heuristics.
Blog post: xata.io/blog/smarter...
AI agents get useful faster with guardrails, not plugins.
Repo playbook: gh issue → xata branch create + xata branch wait-ready → xata branch url (not $DATABASE_URL) → psql repro/verify → fix.
Video + write-up:
Batching is often used to process large volumes of data, but a batch size that works in one network setup can perform poorly in another.
We applied automatic batch size tuning to Postgres snapshots in pgstream to adapt across different network environments.
Check the post 👇
xata.io/blog/postgre...
If you want to understand how constraint enforcement works internally, @apatheticmagpie.bsky.social breaks it down beautifully in her latest blog:
👉 xata.io/blog/constra...
PostgreSQL stores *all* constraints (check, not-null, PK, FK, unique, exclusion, and domain constraints) as rows in the `pg_constraint` catalog.
In Postgres 18, even NOT NULL constraints now get their own entries here (before 18 they lived in `pg_attribute`!).
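You can see this for yourself with a quick catalog query against any database — `contype` is a single letter identifying the constraint kind:

```sql
-- c = CHECK, f = FOREIGN KEY, p = PRIMARY KEY, u = UNIQUE,
-- x = EXCLUSION, n = NOT NULL (own rows as of Postgres 18)
SELECT conname,
       contype,
       conrelid::regclass AS table_name,  -- zero for domain constraints
       contypid::regtype  AS domain_name  -- zero for table constraints
FROM pg_constraint
WHERE connamespace = 'public'::regnamespace
ORDER BY contype, conname;
```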
Moving a production DB is stressful. Don’t.
Keep prod on your infra. Stream a logical replica (WAL) into Xata, anonymize PII at ingest, then spin up copy-on-write Postgres branches in seconds - one per PR 👇
Testing one last change on prod before the holidays…
Claude reads the issue, spins up a branch named after the git branch, reproduces on realistic data, finds a classic off-by-one (missing `<=`), fixes it, validates, and ships.
Blog + video: xata.io/blog/teachin...
What Claude learns:
- `gh` → read the GitHub issue
- `xata` → create branch → `wait-ready` → `checkout`
- `psql` → query the *branch* URL (via `xata branch url`)
Setup (2 commands):
`curl -fsSL xata.io/install.sh | bash`
`xata ai download claude-skill`
Core idea: add a Claude Agent Skill (~20 lines) that tells Claude: when DB access is relevant, create an isolated copy-on-write Postgres branch and work there, never against $DATABASE_URL / production.
You wouldn’t let Claude Code touch prod.
But some bugs *need* real data to reproduce.
Here’s how we teach Claude to debug safely using Xata database branches 👇
It’s Friday and you want to ship to prod.
Giving agents access to realistic data via branches should scale feature development without driving up costs.
This shows how you can use the “scale to zero” feature of the Xata platform to improve feature development while keeping costs in check.