
PostgreSQL 19 enters beta this month, and if you maintain a production Postgres cluster, you will want to pay attention. This release does not hinge on one big feature; it closes three gaps that have been driving teams to external tools for years. Native graph queries via SQL/PGQ eliminate the main reason to reach for Neo4j on most workloads. ON CONFLICT DO SELECT fixes the ugly atomic-upsert workaround that has been generating dead tuples in production databases since 2015. And REPACK CONCURRENTLY brings the functionality of the pg_repack extension into core, without locking your table.
GA is scheduled for September 2026. Beta is available now.
SQL/PGQ: Stop Running a Separate Graph Database
The headline feature is SQL/PGQ — ISO SQL:2023 Part 16, the standard for property graph queries, now shipping in PostgreSQL 19. The pitch is simple: you define a graph view over your existing relational tables, and then query relationships using pattern-matching syntax without migrating a single row of data.
Here is what that looks like in practice. You have two tables: users and follows. You create a property graph over them, then query it using the GRAPH_TABLE function with pattern-matching syntax. Internally, PostgreSQL rewrites these graph queries into relational operations and runs them against your existing indexes. There is no separate graph storage engine and no query translation layer you have to debug.
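A minimal sketch of those two steps, assuming a users table keyed on id and a follows table of (follower_id, followee_id) pairs; the graph name, labels, and columns are illustrative, and the syntax follows the SQL:2023 GRAPH_TABLE form rather than anything verified against the beta:

```sql
-- Define a graph view over the existing tables; no data is copied
CREATE PROPERTY GRAPH social
  VERTEX TABLES (
    users KEY (id) LABEL person PROPERTIES (id, name)
  )
  EDGE TABLES (
    follows KEY (follower_id, followee_id)
      SOURCE KEY (follower_id) REFERENCES users (id)
      DESTINATION KEY (followee_id) REFERENCES users (id)
      LABEL follows
  );

-- "Who does alice follow?" expressed as a pattern match
SELECT t.followed_name
FROM GRAPH_TABLE (social
  MATCH (a IS person)-[IS follows]->(b IS person)
  WHERE a.name = 'alice'
  COLUMNS (b.name AS followed_name)
) AS t;
```

Because the pattern is rewritten to ordinary joins, existing indexes on users.id and the follows columns apply unchanged.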
The honest caveat: SQL/PGQ is not a full Neo4j replacement. Shortest-path algorithms are not in the first implementation, and deeply recursive billion-node traversals are still better handled by a purpose-built graph database. But for the workloads that most often send teams to Neo4j (knowledge graphs, recommendation relationships, fraud detection, supply-chain traversal, org hierarchies) SQL/PGQ is sufficient. And it runs in the same database you already operate, with the same backup strategy, the same connection pool, and the same monitoring stack.
For AI applications specifically, this is notable. Running pgvector for semantic search alongside SQL/PGQ for graph traversal in the same database eliminates an entire class of infrastructure complexity that has been pushing teams toward multi-database agent architectures.
ON CONFLICT DO SELECT: The Upsert Fix That Should Have Shipped Years Ago
The atomic get-or-create problem is one of the most common patterns in production SQL — you want to insert a row and get its ID back, or get the existing row’s ID if it already exists. The standard approach has been a no-op DO UPDATE, which works but creates dead tuple churn. Every conflict generates a new row version, updates all indexes, and writes WAL for data that did not actually change. On tables with high conflict rates — tag resolution in a content platform, entity lookups in an ETL pipeline — this adds measurable bloat and I/O overhead.
PostgreSQL 19 adds a third action: ON CONFLICT (name) DO SELECT RETURNING id. Benchmarks from Cybertec show DO SELECT is approximately 4x faster than the DO UPDATE no-op on high-conflict tables, and 24% faster than the equivalent CTE workaround. It generates no dead tuples and works with multi-value inserts, making it viable for bulk operations.
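As a sketch of the before and after, assuming a tags table with a unique name column (table and column names here are illustrative):

```sql
CREATE TABLE tags (
  id   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  name text NOT NULL UNIQUE
);

-- Pre-19 workaround: the no-op update writes a new row version,
-- touches every index, and emits WAL on every conflict
INSERT INTO tags (name) VALUES ('postgres')
ON CONFLICT (name) DO UPDATE SET name = EXCLUDED.name
RETURNING id;

-- PostgreSQL 19: return the existing row's id without writing anything
INSERT INTO tags (name) VALUES ('postgres')
ON CONFLICT (name) DO SELECT
RETURNING id;
```

The same DO SELECT clause applies when the VALUES list carries many rows, which is what makes it usable in bulk ETL-style loads.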
REPACK CONCURRENTLY: pg_repack Is Now Optional
Table bloat management has long required the third-party pg_repack extension for anyone who wanted to reclaim space without taking a table offline. VACUUM FULL does the job but requires an exclusive lock. PostgreSQL 19 ships a native REPACK command: REPACK TABLE orders CONCURRENTLY and REPACK INDEX orders_customer_idx CONCURRENTLY.
The implementation uses logical decoding to capture concurrent changes while the new table copy is built, then applies those changes before a brief final lock-and-swap. The only ACCESS EXCLUSIVE lock is taken at the very end, and for most tables it is sub-second. pg_repack still has features REPACK does not, but the core functionality covering the vast majority of use cases is now built in. depesz has a detailed technical breakdown of the implementation if you want to understand the mechanics before trusting it in production.
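Usage is a single statement; wrapping it in a before/after size check (a sketch, assuming an orders table) makes the reclaimed space visible:

```sql
SELECT pg_size_pretty(pg_table_size('orders'));  -- size before the rewrite

REPACK TABLE orders CONCURRENTLY;                -- online table rewrite
REPACK INDEX orders_customer_idx CONCURRENTLY;   -- single-index variant

SELECT pg_size_pretty(pg_table_size('orders'));  -- size after bloat is reclaimed
```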
Two More Changes Worth Noting
Parallel autovacuum arrives in PostgreSQL 19, allowing multiple workers to process indexes simultaneously on a single table. For schemas with heavily indexed tables (common in e-commerce, financial systems, and SaaS platforms with wide column sets), vacuum runs that currently take hours can complete significantly faster. New GUCs (autovacuum_max_parallel_workers and autovacuum_parallel_workers) let you tune the worker pool at the cluster and per-table level.
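A sketch of the tuning knobs: the cluster-level GUC is as named above, while the per-table spelling shown here as a storage parameter is an assumption worth checking against the beta documentation:

```sql
-- Cluster-wide cap on workers available to parallel autovacuum
ALTER SYSTEM SET autovacuum_max_parallel_workers = 4;
SELECT pg_reload_conf();

-- Per-table request (assumed storage-parameter form)
ALTER TABLE orders SET (autovacuum_parallel_workers = 2);
```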
Temporal table support also reaches completion via FOR PORTION OF, which lets you update or delete the overlapping portion of a time range while automatically splitting and preserving the non-overlapping periods. If you manage reservation systems, billing periods, or slowly changing dimensions, Neon’s full PostgreSQL 19 feature breakdown is worth reading.
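A sketch of the semantics, assuming a room_rates table whose validity is tracked in a valid_at range column (names illustrative): updating July 1 to July 15 against a row valid for all of July splits it into three rows, with only the middle slice changed.

```sql
-- Only the slice of valid_at between the bounds gets the new price;
-- PostgreSQL preserves the untouched leading and trailing periods
-- by inserting split-off rows automatically.
UPDATE room_rates
  FOR PORTION OF valid_at FROM '2026-07-01' TO '2026-07-15'
  SET price = 200.00
WHERE room_id = 42;
```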
How to Test Beta Now
Beta packages are available at postgresql.org/developer/beta/. The PostgreSQL project explicitly requests that developers test their applications against beta builds and report bugs before the September GA window. If you have a non-trivial schema, that is exactly the kind of real-world feedback the project needs right now.
The Bigger Picture
The “Just Use Postgres” argument has been circulating for years, and PostgreSQL 19 gives it the most ammunition yet. Native graph queries close the most-cited gap. Atomic upserts and in-core table repacking reduce the extension and workaround surface area. None of this makes Postgres the answer to every problem — the ClickHouse engineers are still right about their workloads — but it continues moving the line of what teams can accomplish without reaching for a second database.
Beta is now. GA is September.