Show HN: Use Claude Code to Query 600 GB Indexes over Hacker News, ArXiv, etc.
10 hours ago (exopriors.com)
Paste my prompt into Claude Code (it carries an embedded API key for accessing my public read-only SQL+vector database), and you have a state-of-the-art research tool over Hacker News, arXiv, LessWrong, and dozens of other high-quality public commons sites. Claude whips up monster SQL queries that run safely on my machine to answer your most nuanced questions.
There's also an Alerts functionality: you can just ask Claude to submit a SQL query as an alert, and you'll be emailed when the ultra-nuanced criteria are met (and the output changes). Like I want to know when somebody posts about "estrogen" in a psychoactive context, or uses enough biology metaphors when talking about building infrastructure.
Currently embedded: posts: 1.4M / 4.6M, comments: 15.6M / 38M. That's with Voyage-3.5-lite. And you can do amazing compositional vector search, like searching @FTX_crisis - (@guilt_tone - @guilt_topic) to find writing that was about the FTX crisis and distinctly without guilty tones, but that can mention "guilt".
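The compositional search above is just vector arithmetic over unit-normalized embeddings. A minimal sketch with toy random vectors standing in for the real Voyage-3.5-lite embeddings (names like `ftx_crisis` are illustrative, not the project's actual API):

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

# Toy stand-ins for concept embeddings; real ones come from an embedding model.
rng = np.random.default_rng(0)
ftx_crisis = normalize(rng.normal(size=8))
guilt_tone = normalize(rng.normal(size=8))
guilt_topic = normalize(rng.normal(size=8))

# Compositional query: about FTX, subtracting the "guilty tone" direction
# while adding back the "guilt as a topic" direction.
query = normalize(ftx_crisis - (guilt_tone - guilt_topic))

# Rank a small corpus of document embeddings by cosine similarity.
docs = np.stack([normalize(rng.normal(size=8)) for _ in range(5)])
scores = docs @ query          # dot product == cosine, since all vectors are unit-length
ranking = np.argsort(-scores)  # best match first
print(ranking)
```

The same arithmetic works whether the nearest-neighbor lookup happens in numpy, pgvector, or any other vector index.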
I can embed everything and all the other sources for cheap, I just literally don't have the money.
I like that this relies on generating SQL rather than just being a black-box chat bot. It feels like the right way to use LLMs for research: as a translator from natural language to a rigid query language, rather than as the database itself. Very cool project!
Hopefully your API doesn't get exploited and you are doing timeouts/sandboxing -- it'd be easy to do a massive join on this.
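One cheap defense against runaway joins is a per-query time budget enforced by the database driver. A minimal sketch using SQLite's progress handler for illustration (a Postgres setup would instead use a read-only role plus `SET statement_timeout`; the table and budget here are made up):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE posts(id INTEGER PRIMARY KEY, body TEXT);
    WITH RECURSIVE n(i) AS (SELECT 1 UNION ALL SELECT i+1 FROM n WHERE i < 5000)
    INSERT INTO posts(body) SELECT 'x' FROM n;
""")

DEADLINE = time.monotonic() + 0.5  # half-second budget per query

def abort_if_over_budget():
    # Returning nonzero makes SQLite abort the running statement.
    return 1 if time.monotonic() > DEADLINE else 0

conn.set_progress_handler(abort_if_over_budget, 1000)  # check every 1000 VM ops

try:
    # A deliberately expensive triple self-join: the "massive join" to guard against.
    conn.execute("SELECT COUNT(*) FROM posts a, posts b, posts c").fetchone()
    timed_out = False
except sqlite3.OperationalError:
    timed_out = True
print(timed_out)  # True: the query was interrupted at the deadline
```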
I also have a question mostly stemming from me being not knowledgeable in the area -- have you noticed any semantic bleeding when research is done between your datasets? e.g., "optimization" probably means different things under ArXiv, LessWrong, and HN. Wondering if vector searches account for this given a more specific question.
I don’t have the experiments to prove this, but from my experience it’s highly variable between embedding models.
Larger, more capable embedding models are better able to separate the different uses of a given word in the embedding space; smaller models are not.
I've been thinking about this a fair bit lately. We have all sorts of benchmarks that describe many factors in detail, but they're very abstract and don't seem to map clearly to observed behaviors. I think we need a different way to characterize models.
This may exist already, but I'd like to find a way to query 'Supplementary Material' in biomedical research papers for genes / proteins or even biological processes.
As it is, the Supplementary Materials are inconsistently indexed so a lot of insight you might get from the last 15 years of genomics or proteomics work is invisible.
I imagine this approach could work, especially for Open Access data?
I just built something like this a week ago: https://github.com/eamag/papers2dataset
I wanted to find all cryoprotective agents that were tested at different temperatures, but it should be extendable to your problem too. It uses OpenAlex to traverse a citation graph and open-access PDFs.
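The traversal part is essentially a bounded BFS over `referenced_works` lists, which is the field OpenAlex work records expose for outgoing citations. A minimal sketch (not the papers2dataset code itself) with an injectable `fetch` so a real version can call the OpenAlex API while this demo uses a stub:

```python
from collections import deque

def crawl_citations(seed_id, fetch, max_works=100):
    """Breadth-first traversal of a citation graph.

    `fetch(work_id)` should return a work record (a dict with a
    `referenced_works` list of work IDs) or None if unavailable.
    """
    seen, queue, works = set(), deque([seed_id]), []
    while queue and len(works) < max_works:
        wid = queue.popleft()
        if wid in seen:
            continue
        seen.add(wid)
        work = fetch(wid)
        if work is None:
            continue
        works.append(work)
        queue.extend(work.get("referenced_works", []))
    return works

# Stubbed graph: W1 cites W2 and W3; W2 cites W3 again (deduplicated by `seen`).
FAKE = {
    "W1": {"id": "W1", "referenced_works": ["W2", "W3"]},
    "W2": {"id": "W2", "referenced_works": ["W3"]},
    "W3": {"id": "W3", "referenced_works": []},
}
result = crawl_citations("W1", FAKE.get)
print([w["id"] for w in result])  # BFS order: ['W1', 'W2', 'W3']
```

Swapping the stub for an HTTP call against `https://api.openalex.org/works/{id}` gives the real crawler; the dedup and cap keep it from exploding on dense citation neighborhoods.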
This is a pretty cool project! Thank you for open sourcing it!
I think a prompt + an external dataset is a very simple distribution channel right now to explore anything quickly with low friction. The curl | bash of 2026
Exactly. Prompt + Tool + External Dataset (API, file, database, web page, image) is an extremely powerful capability.
Seems like you're experiencing the hacker news hug of death.
Should be squared away now! It was my fault: a missing health check for a recent weird bug, not a load issue.
The console / login pages are showing an error still.
> a state-of-the-art research tool over Hacker News, arXiv, LessWrong, and dozens
what makes this state of the art?
The scale. How many tools do you know that can query the content of all arXiv papers?
It's just marketing.
It is not a protected term, so anything is state-of-the-art if you want it to be.
For example, Gemma models at the moment of release were performing worse than their competition, but still, it's "state-of-the-art". That doesn't mean it's a bad product at all (Gemma is actually good), but the claims are very free.
Juicero was state-of-the-art on release too, though hands were better, etc.
> It's just marketing. [...] It is not a protected term, so anything is state-of-the-art if you want it to be.
But is it true?
I think we ought to stop indulging and rationalizing self-serving bullshit with the "it's just marketing" bit, as if that somehow makes bullshit okay. It's not okay. Normalizing bullshit is culturally destructive and reinforces the existing indifference to truth.
Part of the motivation people have seems to be a cowardly morbid fear of conflict or the acknowledgment that the world is a mess. But I'm not even suggesting conflict. I'm suggesting demoting the dignity of bullshitters in one's own estimation of them. A bullshitter should appear trashy to us, because bullshitting is trashy.
1 reply →
just like "cruelty free" and "not tested on animals" in usa
It's the first, so it's the best at this?
The tool is state of the art, the sources are historical.
Anyone tried to use these prompts with Gemini 3 Pro? it feels like Claude, Gemini and GPT latest offerings are on par (excluding costs) and as a developer if you know how to query/spec a coder llm you can move between them at ease.
Really useful. I'm currently working on an autonomous academic research system [1] and thinking about integrating this. Currently using a custom prompt + the Edison Scientific API. Any plans of making this open source?
[1] https://github.com/giatenica/gia-agentic-short
That's just not a good use of my Claude plan. If you can make it so a self-hosted Llama or Qwen 7B can query it, then that's something.
It's ultimately just a prompt, self-hosted models can use the system the same way, they just might struggle to write good SQL+vector queries to answer your questions. The prompt also works well with Codex, which has a lot of usage.
I think that’s just a matter of their capabilities, rather than anything specific to this?
It's a very nifty tool, and could definitely come in handy. Love the UX too!
This is great:

> @FTX_crisis - (@guilt_tone - @guilt_topic)
Using LLMs for tasks that could be done faster with traditional algorithmic approaches seems wasteful, but this is one of the few legitimate cases where embeddings are doing something classical IR literally cannot. You could also make the LLM explain the query it's about to run. Before execution:
“Here’s the SQL and semantic filters I’m about to apply. Does this match your intent?”
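That confirm-before-execute flow can be a thin wrapper around whatever runs the query. A minimal sketch (the function names and plan shape are illustrative, not the project's actual API):

```python
def confirm_and_run(plan, run_query, ask=input):
    """Show the generated SQL and semantic filters; run only on user approval."""
    print("Here's the SQL and semantic filters I'm about to apply:")
    print(plan["sql"])
    for f in plan.get("semantic_filters", []):
        print("  filter:", f)
    answer = ask("Does this match your intent? [y/N] ")
    if answer.strip().lower() != "y":
        return None  # user vetoed; the caller can ask the model to regenerate
    return run_query(plan["sql"])

# Hypothetical plan using pgvector-style `<=>` cosine-distance syntax.
plan = {
    "sql": "SELECT id FROM posts ORDER BY embedding <=> :query LIMIT 10",
    "semantic_filters": ["@FTX_crisis - (@guilt_tone - @guilt_topic)"],
}
result = confirm_and_run(plan, run_query=lambda sql: "ran", ask=lambda _: "y")
print(result)  # "ran" when the user confirms
```

The key property is that the long-running query is gated behind an explicit human check, so a misread intent costs one round of regeneration rather than minutes of wasted compute.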
Great idea! I just overhauled the prompt to explain the SQL + semantic filters better, and give the user clearer adjustment opportunities before long-running queries.
> I can embed everything and all the other sources for cheap, I just literally don't have the money.
How much do you need for the various leaks, like the Paradise Papers, the Panama Papers, the Offshore Leaks, the Bahamas Leaks, the FinCEN Files, the Uber Files, etc., and what's your Venmo?
Nice, but would you consider open-sourcing it? I'm (and I assume others are) not keen on sharing my API keys with a 3rd party.
I think you misunderstood. The API key is for their API, not Anthropic.
If you take a look at the prompt you'll find that they have a static API key that they have created for this demo ("exopriors_public_readonly_v1_2025")
The quick setup is cool! I’ve not seen this onboarding flow for other tools, and I quite like its simplicity.
Is the appeal of this tool its ability to identify semantic similarity?
The use case could vary from person to person. When you think about it, Hacker News has a large enough data set (and one that is widely accessible) to allow all sorts of fun analyses. In a sense, the appeal is:
who knows what kind of fun patterns could emerge
The problem with HN isn't that the patterns are hard to discern, it's that no one wants to acknowledge them.
1 reply →
Seems very cool, but IMO you’d be better off doing an open source version and then hosted SAAS.
Would you mind walking through the logic of that a bit for me? I'm definitely interested in productizing this, and would be interested in open sourcing as soon as I have breathing room (I have no money).
Does that first generated query really work? Why are you looking at URIs like that? First you filter for a uri match, then later filter out that same match, minus `optimization`, when you are doing the cosine distance. Not once is `mesa-optimization` even mentioned, which is supposed to be the whole point?
"Claude Code and Codex are essentially AGI at this point"
Okaaaaaaay....
Just comes down to your own view of what AGI is, as it's not particularly well defined.
While a bit 'time-machiney' - I think if you took an LLM of today and showed it to someone 20 years ago, most people would probably say AGI has been achieved. If someone wrote a definition of AGI 20 years ago, we would probably have met that.
We have certainly blasted past some science-fiction examples of AI like Agnes from The Twilight Zone, which 20 years ago looked a bit silly, and now looks like a remarkable prediction of LLMs.
By today's definition of AGI we haven't met it yet, but eventually it comes down to "I'll know it when I see it". The problem with this definition is that it's polluted by what people have already seen.
I've got to disagree with this. All past pop-culture AI was sentient and self-motivated; it was human-like in that it had its own goals and autonomy.
Current AI is a transcript generator. It can do smart stuff but it has no goals, it just responds with text when you prompt it. It feels like magic, even compared to 4-5 years ago, but it doesn’t feel like what was classically understood as AI, certainly by the public.
Somewhere along the way, marketers changed AGI to mean "does predefined tasks with human-level accuracy" or the like. That's closer to the definition of a good function approximator (how appropriate) than to what people think (or thought) about when considering intelligence.
1 reply →
> most people would probably say AGI has been achieved
Most people who took a look at a carefully crafted demo. I.e. the CEOs who keep pouring money down this hole.
If you actually use it you'll realize it's a tool, and not a particularly dependable tool unless you want to code what amounts to the React tutorial.
1 reply →
> If someone wrote a definition of AGI 20 years ago, we would probably have met that.
No, as long as people can do work that a robot cannot do, we don't have AGI. That was always, if not the definition, at least implied by the definition.
I don't know why the meme of AGI being not well defined has had such success over the past few years.
4 replies →
Charles Stross published Accelerando in 2005.
The book is a collection of nine short stories telling the tale of three generations of a family before, during, and after a technological singularity.
I want to know what the "intelligence explosion" is, sounds much cooler than AGI.
When AI gets so good it can improve on itself
3 replies →
I have noticed that Claude users seem to be about as intelligent as Claude itself, and wouldn't be able to surpass its output.
This made me laugh. Unfortunately, this is the world we live in. Most people who drive cars have no idea how they work, or how to fix them. And people who get on airplanes aren't able to flap their arms and fly.
Which means that humans are reduced to a sort of uselessness / helplessness, using tools they don't understand.
Overall, no one tells Uncle Bob that he doesn't deserve to fly home to Minnesota for Christmas because he didn't build the aircraft himself.
But we all think it.
You seem to be very confused about what intelligence even is.
You, of course, are smarter than them.