Show HN: SnapQL – Desktop app to query Postgres with AI
7 days ago (github.com)
SnapQL is an open-source desktop app (built with Electron) that lets you query your Postgres database using natural language. It’s schema-aware, so you don’t need to copy-paste your schema or write complex SQL by hand.
Everything runs locally — your OpenAI API key, your data, and your queries — so it's secure and private. Just connect your DB, describe what you want, and SnapQL writes and runs the SQL for you.
This is nice -- we're heavy users of postgresql and haven't found the right tool here yet.
I could see this being incredible if it had a set of performance-related queries, or ran EXPLAIN ANALYZE and offered some interpreted results.
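Something like this, as a rough sketch (assuming the pg client and the Vercel ai + OpenAI setup the app already uses; the function name and prompt are made up):

    import { Client } from "pg";
    import { generateText } from "ai";
    import { openai } from "@ai-sdk/openai";

    // Run EXPLAIN ANALYZE on a generated query and ask the model to
    // interpret the plan in plain English.
    async function interpretPlan(client: Client, sql: string) {
      const plan = await client.query(`EXPLAIN (ANALYZE, FORMAT JSON) ${sql}`);
      const { text } = await generateText({
        model: openai("gpt-4o"),
        system:
          "You are a Postgres performance expert. Point out sequential scans, " +
          "bad row estimates, and likely missing indexes.",
        prompt: JSON.stringify(plan.rows[0]),
      });
      return text;
    }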
Can this be run fully locally with a local llm?
just opened a PR for local llm support https://github.com/NickTikhonov/snap-ql/pull/11
Merged! Thanks Stephan
Thank you for the feedback. Please feel free to raise some issues on the repo and we can jam this out there
Looks useful! And the system prompt didn't require too much finessing. I wonder how it would do with models newer than gpt-4o; in my own dabbling, gpt-4o wasn't quite there yet, and the latest models are getting really good.
For analytical purposes, this text-to-SQL is the future; it's already huge with Snowflake (https://www.snowflake.com/en/engineering-blog/cortex-analyst...).
Appreciate the input! I'd love to be able to support more models. That's one of the issues in the repo right now. And I'd be more than happy to welcome contributions to add this and other features
Would love to contribute. I have made a fork, will try and raise a PR if contributions are welcome.
Question: how are you testing this? Doing it on dummy data is a bit too easy. These models, even 4o, falter when it comes to something really specific to a domain. (I work with supply-chain data and column names specific to my team's work, names that only make sense to me and my team but wouldn't mean anything to an LLM unless it somehow knows what those columns are.)
I'm using my own production databases at the moment. But it might be quite nice to be able to generate complex databases with dummy data in order to test the prompts at the higher levels of complexity!
And thank you for offering to contribute. I'll be very active on GitHub!
I might test this out, but I worry that it suffers from the same problems I ran into the last time I played with LLMs writing queries. Specifically, not understanding your schema. It might understand relations, but most production tables have oddly named columns, columns that changed function over time, deprecated columns, internal-lingo columns, and the list goes on.
Granted, I was using 3.5 at the time, but even with heavy prompting and trying to explain what certain tables/columns are used for, feeding it the schema, and feeding it sample rows, more often than not it produced garbage. Maybe 4o/o3/Claude4/etc can do better now, but I’m still skeptical.
I think this is the Achilles heel of LLM-based AI: the attention mechanisms are far, far inferior to a human's, and I haven't seen much progress here. I regularly test models by feeding in a 20-30 minute transcript of a podcast and asking them to state the key points.
This is not a lot of text, maybe 5 pages. I then skim it myself in about 2-3 minutes and I write down what I would consider the key points. I compare the results and I find the AI usually (over 50% of the time) misses 1 or more points that I would consider key.
I encourage everyone to reproduce this test just to see how well current AI works for this use case.
For me, AI can't adequately do one of the first things that people claim it does really well (summarization). I'll keep testing, maybe someday it will be satisfactory in this, but I think this is a basic flaw in the attention mechanism that will not be solved by throwing more data and more GPUs at the problem.
> I encourage everyone to reproduce this test just to see how well current AI works for this use case.
I do this regularly and find it very enlightening. After I’ve read a news article or done my own research on a topic I’ll ask ChatGPT to do the same.
You have to be careful when reading its response to not grade on a curve, read it as if you didn’t do the research and you don’t know the background. I find myself saying “I can see why it might be confused into thinking X but it doesn’t change the fact that it was wrong/misleading”.
I do like when LLMs cite their sources, mostly because I find out they're wrong. Many times I've read a summary, then followed it to the source, read the entire source, and realized it says nothing of the sort. But almost always, I can see where it glued together pieces of the source, incorrectly.
A great micro example of this are the Apple Siri summaries for notifications. Every time they mess up hilariously I can see exactly how they got there. But it’s also a mistake that no human would ever make.
[dead]
I got better results with Claude Code + a PostgreSQL MCP. I let Claude understand my Drizzle schema first, and I can instruct it to also look at how some entities are used in the code. Then it's smarter about understanding what the data represents.
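Part of why this works, I suspect, is that a Drizzle schema carries readable TypeScript names and comments the model can cross-reference with usage in the code. A made-up example of what I mean:

    import { pgTable, serial, integer, text, timestamp } from "drizzle-orm/pg-core";

    // Hypothetical table: the property names and comments give the model
    // more to go on than raw information_schema output would.
    export const invoices = pgTable("invoices", {
      id: serial("id").primaryKey(),
      customerId: integer("customer_id").notNull(),
      status: text("status").notNull(), // 'draft' | 'sent' | 'paid'
      issuedAt: timestamp("issued_at").defaultNow(),
    });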
This is not a difficult problem to solve. We can add the schema, columns and column descriptions in the system prompt. It can significantly improve performance.
All it would take is a form where the user supplies details about each column and relation. For some reason, most LLM-based apps don't add this simple feature.
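A minimal sketch of what that could look like (the descriptions map and helper here are hypothetical):

    // User-supplied column notes folded into the system prompt.
    const columnDescriptions: Record<string, string> = {
      "users.status_cd": "0 = inactive, 1 = active",
      "orders.amt_loc": "order amount in the customer's local currency",
    };

    function buildSystemPrompt(schemaDdl: string): string {
      const notes = Object.entries(columnDescriptions)
        .map(([col, desc]) => `- ${col}: ${desc}`)
        .join("\n");
      return `You write PostgreSQL queries.\nSchema:\n${schemaDdl}\n\nColumn notes:\n${notes}`;
    }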
It's not a difficult problem to solve; I did it last year, with 3.5, and it didn't help. That's not to say that newer models wouldn't do better, but I have tried this approach. It is a difficult problem to actually get working.
3 replies →
might be possible to solve this with prompt configuration. e.g. you'd be able to explain to the llm all the weird naming conventions and unintuitive mappings
I did that the last time (again, only with 3.5, things have hopefully improved in this area).
And I could potentially see LLMs being useful to generate the “bones” of a query for me but I’d never expose it to end-users (which was what I was playing with). So instead of letting my users do something like “What were my sales for last month?” I could use LLMs to help build queries that were hardcoded for various reports.
The problem is that I know SQL, I'm pretty good at it, and I have a perfect understanding of my company's schema. I might ask an LLM a generic SQL question, but trying to feed it my schema just leads to (or rather "led to", in my trials before) prompt hell. I spent hours tweaking the prompts, feeding it more context, begging it to ignore the "cash" column that has been deprecated for 4+ years, etc. After all of that it would still make simple mistakes that I had specifically warned against.
Genuinely do not understand the point of these tools. There is already a practically natural language for querying an RDBMS; it's called SQL. I guarantee you, anyone who knows any other language could learn enough SQL to do 99% of what they wanted in a couple of hours. Give it a day of intensive study and you'd know the rest. It's just not that complicated.
SQL is simple for simple needs: basic joins and some basic aggregates. Even that you won't learn in 2 hours, and it's just scratching the surface of what can be done in SQL and what you may need to query. With LLMs and tools like this, you simply say what you need in English; you don't need to understand normalization, m:n relation tables, CTEs, functions, JSON access operators, etc.
For reference, I’m a DBRE. IMO, yes, most people can learn basic joins and aggregates in a couple of hours, but that is subjective.
> you don't need to understand normalization
You definitely should. Normalizing isn't that difficult of a concept; Wikipedia has terrific descriptions of each level.
As to the rest, maybe read docs? This is my primary frustration with LLMs in general: people seem to believe that they’re just as good of developers as someone who has read the source documentation, because a robot told them the answer. If you don’t understand what you’re doing, you cannot possibly understand the implications and trade-offs.
1 reply →
Without having looked at it, I would assume the value comes from not having to know the data model in great detail, such that you can phrase your query using natural language, like
"Give me all the back office account postings for payment transfers of CCP cleared IRD trades which settled yesterday with a payment amount over 1M having a value date in two days"
That's what I'd like to be able to say and get an accurate response.
In a business, a management decision-maker has to rely on a DB analyst whenever a query can't be answered by the front-end tools they've been given, and that introduces latency into the process.
A 100% accurate AI-powered solution would have many customers.
But can this generation of LLMs produce 100% accuracy?
and yet this was on the front page of hacker news for an entire day :D
It's all about friction. Why spend minutes writing a query when you can spend 5 seconds describing the result you want and get 90-100% of the way there?
Mostly because you don’t know if it’s correct unless you know SQL. It’s entirely too easy to get results that look correct but aren’t, especially when using windowing functions and the like.
But honestly, most queries I’ve ever seen are just simple joins, which shouldn’t take you 5 minutes to write.
7 replies →
Can you please add support for adding descriptions of each column, and for enumerated types?
For example, if a column contains 0 or 1 encoding the absence or presence of something, the LLM needs to know what 0 and 1 stand for. The same goes for column names, since they can be cryptic in production databases.
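A hypothetical shape for that kind of per-column metadata, which could be folded into the prompt alongside the schema:

    // Hypothetical config: descriptions plus enumerated-value meanings.
    const columnMeta = {
      "users.is_act": {
        description: "account state flag",
        values: { 0: "inactive", 1: "active" },
      },
    };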
Oh, I'm looking forward to this client being available as a Flatpak, although especially in this case some folks might prefer to have it available via Snap :D
I like this a lot. I am looking forward to having something similar built into Metabase.
Great tool!
Pardon my technical ignorance, but what exactly is OpenAI's API being used for in this?
The OpenAI LLM is used to generate SQL based on a combination of the user prompt and the database schema.
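Roughly like this, as a simplified sketch (not the exact code; the app does use the Vercel ai package with OpenAI):

    import { Client } from "pg";
    import { generateText } from "ai";
    import { openai } from "@ai-sdk/openai";

    // Sketch: introspect the schema, hand it to the model with the
    // user's question, then run whatever SQL comes back.
    async function askDatabase(client: Client, question: string) {
      const schema = await client.query(
        `SELECT table_name, column_name, data_type
           FROM information_schema.columns
          WHERE table_schema = 'public'`
      );
      const { text: sql } = await generateText({
        model: openai("gpt-4o"),
        system: `You write PostgreSQL queries. Schema:\n${JSON.stringify(schema.rows)}`,
        prompt: question,
      });
      return client.query(sql);
    }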
[dead]
Wow, this looks really cool.
What's the current support for OpenAI-compatible proxies or non-GPT models?
For example, using locally deployed Qwen or LLaMA models.
Which MCP is the recommended or "official" one for SQLite and PostgreSQL for use with Cursor?
Looks like a good idea. Any reason you didn't use React Native?
Not really - I had some previous experience with electron and wanted to finish the core feature set in a few hours, so just went with what I already know.
Are there plans to support other LLM sources, in particular ollama?
Yes! https://github.com/NickTikhonov/snap-ql/issues/1
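In the meantime, one possible approach, since Ollama exposes an OpenAI-compatible endpoint (a sketch; the model name and port are whatever your local setup uses):

    import { createOpenAI } from "@ai-sdk/openai";

    // Point the existing OpenAI client at a local Ollama server.
    const ollama = createOpenAI({
      baseURL: "http://localhost:11434/v1", // Ollama's OpenAI-compatible API
      apiKey: "ollama", // Ollama ignores the key, but one must be set
    });

    const model = ollama("llama3"); // any model you've pulled locally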
Awesome, looking forward to trying it with a self-hosted model.
I was looking for something like this that supports graphs.
Graph generation is next on the list.
Neo4j?
What's the underlying model enabling this?
Currently OpenAI 4o
So did you train on a knowledge base already, or fine-tune? I'd love to know how you evaluate correctness.
1 reply →
congrats on the launch! This looks very interesting
data engineering about to be eaten by llms
Am I misunderstanding something? How is this "Everything runs locally" if it's talking to OpenAI's APIs?
This app is using OpenAI via the ai package[0][1], so "Everything runs locally" is definitely misleading.
[0]: https://github.com/NickTikhonov/snap-ql/blob/409e937fa330deb...
[1]: https://github.com/vercel/ai
I guess he means there is no proxy between you and OpenAI, your API key won't leak, etc.
What I meant was that it isn't a web app and I don't store your connection strings or query results. I'll make this more clear
It is a web app, though. You just aren't running the server, OpenAI is. And you're packaging the front end in electron instead of chrome to make it feel as if it all runs locally, even though it doesn't.
Side note: I don't see a license anywhere, so technically it isn't open source.
You might not, but OpenAI does.
3 replies →
[dead]
If you can do this, can't you create a read-only user and use it with a database MCP like https://github.com/executeautomation/mcp-database-server ? Am I missing something?
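For reference, the least-privilege setup I have in mind is just a handful of statements (a sketch; the role and database names are placeholders):

    import { Client } from "pg";

    // Sketch: create a least-privilege role for any AI tool that runs SQL.
    async function createReadOnlyRole(admin: Client) {
      await admin.query(`CREATE ROLE ai_readonly LOGIN PASSWORD 'change-me'`);
      await admin.query(`GRANT CONNECT ON DATABASE mydb TO ai_readonly`);
      await admin.query(`GRANT USAGE ON SCHEMA public TO ai_readonly`);
      await admin.query(`GRANT SELECT ON ALL TABLES IN SCHEMA public TO ai_readonly`);
    }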
You can set up an MCP and use it in your existing AI app, but AFAIU this is the first open-source standalone app that gives you an interface familiar from other SQL workspace tools. I built it to be a familiar but much more powerful experience for both technical and non-technical people.
There are competitors with a GUI too, such as https://www.sqlchat.ai/ and https://www.jetbrains.com/datagrip/features/ai/
I wish you luck in refining your differentiation.
3 replies →
https://dbeaver.com/docs/dbeaver/AI-Smart-Assistance/
[flagged]
[flagged]
Interesting lead. What else would they be looking for in a tool like this? My bad re the video, I'll make sure not to toggle dark mode in the next one.
awesome work nick, literally been asking for a vibe coding SQL interface for months
thanks Jaimin. happy you finally found what you were looking for :D