PaulHoule 12 hours ago

The best part I think is "pg_query is special. It doesn’t actually implement parsing SQL. It works by extracting C source code directly from Postgres and wraps it with a nice Rust interface. This allows PgDog to understand all queries that Postgres can."

One of the things that puts metaprogramming on wheels is being able to pull in grammars, schemas and such for external systems. Think how many cool tools we could have if we could do this in a more general way.

  • thenonameguy 9 hours ago

    > One of the things that puts metaprogramming on wheels is being able to pull in grammars, schemas and such for external systems.

    That's what I've been trying to do with: https://github.com/schemamap/schemamap

    For a well-constrained Postgres schema, a deterministic SQL compiler can be built (with plenty of logic programming/constraint-solving/hypergraph-analysis) that can integrate arbitrary external systems data.

    While this is NP-hard, with some clever use of embeddings and ontologies, and use of every single constraint AST within the DB to reduce the search space, this becomes a feasible problem.

    For any Clojurists interested, I've packaged `pg_query`, so you can use it in your applications: https://github.com/schemamap/pg-query-clj

    At the moment I'm saving up for the next development cycle, so that not only PG->PG schema mappings can be solved for (JSON Schema is next!). Hope this sounds interesting :)

    • Inviz 3 hours ago

      JSON schema layer support sounds interesting. Truth be told, I didn't immediately figure out how your project works.

  • mdaniel 12 hours ago

    I had such high hopes for tree-sitter but once it went all "and then, $CC -c -o" all was lost :-(

    • jitl 12 hours ago

      What do you mean?

      • mdaniel 2 hours ago

        Unless the system that wishes to consume tree-sitter grammars has access to a linker, it being written in C, and then compiled to machine code, gravely limits the places it can be consumed. That's in contrast to any one of the hundreds of grammar compiler compiler systems that allow targeting a platform of choice without mandating a platform of choice. I do see that tree-sitter alleges to emit wasm, but I am not deep enough in that ecosystem to know if it means "wasm for the browser," or "wasm for extism[1]," or "wasm, but if you already have WASI" or what

        1: e.g. https://github.com/1Password/onepassword-sdk-go/blob/v0.2.1/...

  • boomskats 12 hours ago

    Supabase's postgres LSP works in a similar way iirc.

    • bri3d 10 hours ago

      Yes, the same way. It's all based on the extremely useful `https://github.com/pganalyze/libpg_query` project, which is where the "extracted the parser from Postgres" part comes in.

      Supabase's LSP also uses tree-sitter for corrections and autocomplete, because one drawback of using the server's source is that pg_query only works on well-formed/executable SQL - when it detects a malformed query, it formulates an error and exits, since that's what you want in an SQL server. So for partially-correct syntax and fill-forward, tree-sitter covers the gaps.

  • pphysch 12 hours ago

    I agree. Does anyone know how much heavy lifting is done by pg_query in wrapping the Postgres code, vs. by Postgres in expressing that code in a manner that makes pg_query possible?

    • ZiiS 12 hours ago

      Tends to be a matter of opinion. Postgres does not expose the relevant functions, so https://github.com/pganalyze/libpg_query has to do some heavy lifting to convert their source code into a nice library. Conversely, Postgres is very well written code, in an extremely common language, with a stable release cadence, and such a long track record it is seen as the reference implementation for correctly parsing SQL.

      • lfittl 8 hours ago

        Yeah, as one of the main authors of libpg_query, I think the primary thing that makes this easier is that Postgres has good abstractions internally, and the parser works independently from other parts (e.g. the community discourages adding settings that affect parser behavior).

        Over the years we've only had to maintain a small set of patches on top of the Postgres source [0], together with some mocks and our libclang-based extraction logic [1]. Of course it would be nice if Postgres just packaged this directly like it packages the client libraries, but there is non-trivial effort involved to do that. From what I recall, the main issue is that error handling and memory allocations work differently in the client-side libraries (and so that would have to either also be moved out of the backend source, or use some other abstraction).

        [0]: https://github.com/pganalyze/libpg_query/tree/17-latest/patc...

        [1]: https://github.com/pganalyze/libpg_query/blob/17-latest/scri...

        • dwmcc an hour ago

          Unrelated to your comment but a big fan of pganalyze. Makes Postgres infinitely more grokable for developers who aren’t experts at running and tuning a database. Keep up the good work!

submerge 26 minutes ago

Am I right to think this could be used to "inject" limits on the number of rows returned by a user query, or otherwise restrict what users are allowed to do?

I know it sounds silly/crazy but I have a use case where I would like to allow "mostly" trusted users to access the database directly and not through an API, but I need to apply some restrictions on their output.

film42 3 hours ago

Having written several pg txn poolers, I like the implementation. I also love your simple branding and I think this could make a fantastic product for enterprise customers. However, my biggest concern as a buyer is longevity. PgCat (another popular pooler in rust) will likely lose funding shortly. Meanwhile, PgBouncer is battle tested, small in scope, and is starting to get consistent new contribution.

  • levkk 2 hours ago

    Thanks!

    Re: pgcat and longevity, it's actually the opposite. Pgcat is/was an open source project that I worked on in my spare time. Thankfully I found a couple of engineers at large corps (plus awesome community contributions) to push it over the line.

    PgDog is a startup, so this is now my full time job.

    • film42 an hour ago

      Glad to see you found some footing! I just saw the email about PgML today.

      You should mention the timeline of projects leading up to PgDog in your marketing. I was looking for references to pgcat (very briefly) but didn’t see any. Your background will bring a lot of peace of mind to technology adopters in big enterprise.

      • levkk an hour ago

        Will do. You're not the first one to recommend this, it's about time I listen.

skeptrune 13 hours ago

>we need to ... understand SQL syntax using a parser, Rust ecosystem has a great library called pg_query

I'm impressed both that the Rust ecosystem had the right library available and that it was high enough quality code for the author's team to use it as a reference for building their own additional functionality.

  • craigkerstiens 12 hours ago

    pg_query was created by the pganalyze team for their own purposes, I believe initially for features like index recommendation tooling, but it was planned as open source from the start. It is indeed a very high quality project, with the underlying C implementation having wrappers for a number of languages[1].

    [1] https://github.com/pganalyze/libpg_query/blob/15-latest/READ...

  • bri3d 10 hours ago

    Oddly, it actually originates from the Ruby ecosystem - `pganalyze` initially created `libpg_query` to parse Postgres in Ruby. `libpg_query` in C does the "magic" of reformulating Postgres's query parser into an AST generator, and serializes the resulting AST into a Protobuf. Then, `pg_query` bridges that into Rust.

  • levkk 12 hours ago

    We're using it to rewrite queries too. It's a pretty cool library.

Sytten 3 hours ago

I feel I am missing something with the approach of these routers. If I am doing cursor pagination over a couple million rows with some ordering, this will pull a huge amount of data from each server and then perform the sort in memory at the router level, if I understand correctly.

The neon approach of decoupling storage from processing but keeping the processing a query local to one server seems better to me, but I am maybe missing something.
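To make the concern concrete: if each shard returns rows already sorted by the cursor key, the router's job is a k-way merge before applying the cursor and limit. A minimal Python sketch with made-up shard data (illustrative only, not PgDog's actual implementation):

```python
import heapq

# Hypothetical sorted result streams from three shards (ordered by id).
shard_results = [
    [(1, "a"), (4, "d"), (7, "g")],
    [(2, "b"), (5, "e")],
    [(3, "c"), (6, "f"), (8, "h")],
]

def merge_and_paginate(streams, after_id, limit):
    """K-way merge of per-shard sorted rows, then apply cursor + limit."""
    merged = heapq.merge(*streams, key=lambda row: row[0])
    page = [row for row in merged if row[0] > after_id]
    return page[:limit]

print(merge_and_paginate(shard_results, after_id=2, limit=3))
# → [(3, 'c'), (4, 'd'), (5, 'e')]
```

The merge itself is cheap when shards return sorted streams; the cost the comment worries about comes from each shard still having to scan up to its own cursor position.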

  • levkk 2 hours ago

    Paginating over millions of rows isn't really done in OLTP use cases that PgDog is targeting, as far as I know. Would be great to learn about yours though. Feel free to reach out!

jedberg 11 hours ago

Wrapping up the actual Postgres code is a brilliant idea. Then you never have to worry about updating your code to keep up with their changing API.

And the fact that this works at all is another example of why Postgres is the default data store on the internet. It's battle tested and has a massive ecosystem of support tools, like this one.

avinassh 11 hours ago

I am long on this project and excited about it.

I am wondering why it took so long for something like this to happen for Postgres, whereas Vitess has existed for MySQL for more than a decade. I am not an expert, but I hear Citus is not really a fair comparison with Vitess.

  • rubiquity 9 hours ago

    pgdog represents maybe just the VTGate portion of Vitess. There's a whole lot more to Vitess, or to any replication-on-top-of-a-database type of service, than just the query router.

    • avinassh 9 hours ago

      > pgdog represents maybe just the VTGate portion of Vitess

      That’s today. The project is developing fast, so I am sure more things will be added :)

      • levkk 8 hours ago

        Exactly :)

antirez 7 hours ago

> with 4 bytes added for itself

Why would one want to do that? The only reason I can think of is so that values < 4 can be reserved for future uses, like new versions of client/server that agree on a 64-bit payload if the first 4 bytes are zero or alike. But there are better ways.
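For context, Postgres wire protocol messages carry an Int32 length that counts itself but not the leading message-type byte. A hedged Python sketch of framing a simple-protocol Query message under that convention:

```python
import struct

def frame_query(sql: str) -> bytes:
    """Frame a simple-protocol Query ('Q') message: one type byte,
    then an Int32 length that includes its own 4 bytes,
    then the NUL-terminated query string."""
    payload = sql.encode() + b"\x00"
    return b"Q" + struct.pack("!I", len(payload) + 4) + payload

msg = frame_query("SELECT 1")
# The length field equals payload length + 4, so it can never be zero,
# even for messages with an empty body.
assert struct.unpack("!I", msg[1:5])[0] == len(msg) - 1
```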

  • levkk 7 hours ago

    I'm guessing so it's never zero and can't be confused with something else, like a bunch of NULs. There are a few messages that don't have a payload, e.g. ParseComplete.

    • antirez 6 hours ago

      Makes sense, thanks.

gourabmi 4 hours ago

Is there something like pg_query for Oracle? I want to use it for making ASTs out of Oracle SQL queries.

grep_it 11 hours ago

> More complex examples, like IN (1, 2, 3) or id != 25 can be handled as well. For the former, we can hash all values and route the query to the matching shards. For the latter, we can do the opposite.

This makes it sound like the query would be routed to all shards except the one matching HASH(25). But wouldn't id != 25 need to go to all shards, since there could be values in the id set that hash to the same shard as 25?
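The routing rule in question can be sketched in a few lines. This is a hypothetical illustration (simple modulo hash standing in for the real sharding function), with `!=` broadcast to all shards for the reason given above:

```python
# Hypothetical sketch of shard routing for equality predicates.
NUM_SHARDS = 4

def shard_for(value: int) -> int:
    # Stand-in for the real sharding hash function.
    return hash(value) % NUM_SHARDS

def shards_for_predicate(op: str, values: list[int]) -> set[int]:
    if op in ("=", "IN"):
        # Route only to shards that can hold matching rows.
        return {shard_for(v) for v in values}
    # id != 25 (and other non-equality predicates) must hit every shard:
    # rows with other ids may hash anywhere, including 25's own shard.
    return set(range(NUM_SHARDS))

assert shards_for_predicate("IN", [1, 2, 3]) <= set(range(NUM_SHARDS))
assert shards_for_predicate("!=", [25]) == {0, 1, 2, 3}
```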

  • levkk 11 hours ago

    You're right. Good catch, I'll fix this in the article.

hamburglar 10 hours ago

It would be nice if they could go into more detail about how they decide whether a query is a read or a write. The casual mention of this makes it sound like they originally took a too-naive approach (looking at the first word), but even ignoring CTEs, you can still have an UPDATE within a SELECT subquery, which makes the problem very hard without parsing the whole query. Perhaps the C code extracted from pg handles this for them and returns some metadata that indicates whether there are any writes buried in the query?
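The failure mode of the first-word heuristic is easy to demonstrate. A tiny Python sketch (illustrative only, not PgDog's actual logic):

```python
def is_read_naive(sql: str) -> bool:
    """Naive classifier: inspect only the first keyword."""
    return sql.lstrip().split(None, 1)[0].upper() in ("SELECT", "WITH")

# A data-modifying CTE defeats the heuristic: the statement starts
# with WITH but updates rows.
tricky = "WITH t AS (UPDATE accounts SET v = 0 RETURNING id) SELECT * FROM t"
assert is_read_naive(tricky)  # wrongly classified as a read

# Catching this requires a full parse, e.g. walking the AST that
# pg_query produces and looking for write nodes anywhere in the tree.
```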