Show HN: I wrote a full text search engine in Go

4 days ago (github.com)

I really liked the README, that was a good use of AI.

If you're interested in the idea of writing a database, I recommend you check out https://github.com/thomasjungblut/go-sstables which includes sstables, a skiplist, a recordio format, and other database building blocks like a write-ahead log.

Also https://github.com/BurntSushi/fst which has a great blog post explaining its compression (and has been ported to Go), which is really helpful for autocomplete/typeahead when recommending searches to users or doing spelling correction for search inputs.

  • >>I wrote a full text search engine in Go

    >I really liked the README, that was a good use of AI.

    Human intelligences, please start saying:

    (A)I wrote a $something in $language.

    Give credit where it's due. AIs have feelings too.

I don't care that you vibe coded it... run some benchmarks on it to show how it compares to other stuff.

We are soon entering the territory of "no one cares if you did it, but can you say something interesting?". "I created X software" is soon leaving the ranks of cool stuff.

Did you vibe code this? A few things here and there are a bit of a giveaway imho.

  • Another possible tell (not saying this is vibe coded) is when every function is documented, with almost too many comments

    • Ohh, I thought that inline comments would make it grokkable and be a low-friction way in. Seems this didn’t land the way I intended :'

      Would a multi-part blog have been better?

  • I put the Overview section from the README into an AI content detector and it says 92% AI. Some comment blocks inside the codebase are rated as 100% AI generated.

  • What makes you think so?

    • I wonder if I should really explain or if that would provide a list of things to sanitize before publishing stuff.

      Anyone who has ever written code is well aware of what can be done in a weekend, and especially that no one doing something "in a weekend" will ever add all those useless comments everywhere: literally a few thousand lines of comments. That takes more time than writing the code. Comments in Claude style. Other Claude-isms all around.

      It's ok to vibe things, but just say so, no shame.

      And yes, after 5 minutes of looking around I had enough evidence to "prove it". Any moderately competent engineer could.

You are avoiding the question of whether this was vibe coded or not. I see that almost every single project of yours was vibe coded, down to the READMEs. Why hide this?

Can the index size exceed the RAM size (e.g., via memory mapping), or are index size and document number limited by RAM size? It would be good to mention those limitations in the README.

This is very cool! Your README is interesting and well written. I didn't know I could be so interested in the internals of a full text search engine :)

What was the motivation to kick this project off? Learning or are you using it somehow?

  • I’m learning the internals of FTS engines while building a vector database from scratch. Needed a solid FTS index, so I built one myself :)

    It ended up being a clean, reusable component, so I decided to carve it out into a standalone project

    The README is mostly notes from my Notion pages, glad you found it interesting!

Why did you create this new account if there's already 3 existing accounts promoting your stuff and only your stuff?

  • Because running a three-account botnet farm is fun :D Okay, jk, please don’t mod me out.

    One’s for browsing HN at work, the other’s for home, and the third one has a username I'm not too fond of.

    I’ll stick to this one :) I might have some karma on the older ones, but honestly, HN is just as fun from everywhere

Cool project!

I see you are using a positional index rather than doing bi-word matching to support positional queries.

Positional indexes can be a lot larger than non-positional. What is the ratio of the size of all documents to the size of the positional inverted index?

  • Your observation is spot on. Bi-word matching would definitely ease this. Stealing it for a future iteration, tysm :D

    • Well bi-word matching requires that you still have all of the documents stored to verify the full phrase occurs in the document rather than just the bi-words. So it isn't always better.

      For example the phrase query "United States of America" doesn't occur in the document "The United States is named after states of the North American continent. The capital of America is Washington DC". But "United States", "states of" and "of America" all appear in it.

      There's a tradeoff because we still have to fetch the full document text (or some positional structure) for the filtered-down candidate documents containing all of the bi-word pairs, so it requires a second stage of disk I/O. But as I understand it, most practitioners assume you can get away with fewer IOPS than with a positional index, since that info only has to be fetched for a much smaller filtered-down candidate set rather than for the whole posting list.

      But that's why I was curious about the storage ratio of your positional index.

This is pretty interesting.

Could you explain more why you avoided parsing strings to build queries? Strings as queries are pretty standard for search engines. Yes, strings require you to write an interpreter/parser, but the power in many search engines comes from being able to create a query language to handle really complicated and specific queries.

  • You're right, string-based queries are very expressive. I intentionally avoided that here so readers could focus on how FTS indexes work internally. Adding a full query system would have shifted the focus away from the internals.

    If you look closely, there are very obvious optimizations we could make. I'm planning to collect them and put these up as code challenges for readers, and building string-based queries would make a great one :)

This is good for someone playing around with Go and data structures with vibe coding, but I just hope HN doesn't get flooded with vibe coded toy projects.

Great work! Would be interesting to see how it compares to Lucene performance-wise, e.g. with a benchmark like https://github.com/quickwit-oss/search-benchmark-game

  • Thanks! Honestly, given it was hacked together in a weekend, I'm not sure it'd measure up to Lucene/Bleve in any serious way.

    I intended this to be an easy on-ramp for folks who want to get a feel for how FTS engines work under the hood :)

    • Sure, but it says "High-performance" Full Text Search Engine. Shouldn't that claim be backed up by numbers, comparing it to the state of the art?

    • Not _that_ long ago Bleve was also hacked together over a few weekends.

      I appreciate the technical depth of the readme, but I’m not sure it fits your easy on-ramp framing.

      Keep going and keep sharing.