Comment by mrkeen
3 hours ago
Yeah I can't figure out if this is something the author stands by or if it's just a project to mess around with goroutines or something. And it's unfair to criticise if it isn't meant to be good.
> The server reads all the documents into memory at start-up. The corpus occupies about 600 MB, so this is reasonable, though it pushes the limits of what a cloud server with 1 GB of RAM can handle. With 2 GB, it's no problem.
1200 books per 1GB server? Whole-internet search engines are older than 1GB servers.
> queries that take 2,000 milliseconds from disk can be done in 800 milliseconds from memory. That's still too slow, though, which is why fast-concordance uses [lots of threads]
No query should ever take either of those amounts of time. And the "optimisation" is just to use more threads, threads that other consumers could have used to run their own searches, but now can't.
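For context, the fan-out approach being criticised, scanning shards of an in-memory corpus on separate goroutines, can be sketched roughly like this (a minimal illustration, not the project's actual code; `parallelSearch` and the shard split are hypothetical):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
	"sync"
)

// parallelSearch splits the corpus into shards and scans each shard in
// its own goroutine, collecting the indices of matching documents.
// This is the brute-force pattern: every query still touches every
// document, so throughput is bought by burning cores.
func parallelSearch(corpus []string, query string, shards int) []int {
	out := make(chan int, len(corpus))
	var wg sync.WaitGroup
	size := (len(corpus) + shards - 1) / shards
	for start := 0; start < len(corpus); start += size {
		end := start + size
		if end > len(corpus) {
			end = len(corpus)
		}
		wg.Add(1)
		go func(docs []string, offset int) {
			defer wg.Done()
			for i, d := range docs {
				if strings.Contains(d, query) {
					out <- offset + i
				}
			}
		}(corpus[start:end], start)
	}
	wg.Wait()
	close(out)
	var matches []int
	for idx := range out {
		matches = append(matches, idx)
	}
	sort.Ints(matches)
	return matches
}

func main() {
	corpus := []string{"the quick brown fox", "lazy dog", "quick start guide"}
	fmt.Println(parallelSearch(corpus, "quick", 2)) // [0 2]
}
```

The complaint above is exactly about this shape: the goroutines speed up one query by taking cores away from concurrent ones, whereas an inverted index would make each query cheap in the first place.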
https://www.pingdom.com/blog/original-google-setup-at-stanfo...