Comment by PaulHoule
1 year ago
You can write an external-memory spell checker with a tiny amount of RAM, something like (sketched below):
- sort the words in the document
- eliminate duplicate words (they sort together)
- merge the sorted words with the sorted dictionary and keep only the missing words
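A minimal Python sketch of that idea, with everything held in memory for illustration (a real external-memory version would run each pass against disk or tape; the function name and word lists here are made up for the example):

```python
# Sketch of the sort / dedupe / merge spell check described above.
# Both lists are assumed to fit in memory; on a 32k machine each
# phase would stream to and from disk or tape instead.

def misspelled(doc_words, sorted_dictionary):
    # 1. sort the document's words (normalizing case for the example)
    words = sorted(w.lower() for w in doc_words)

    # 2. eliminate duplicates -- they sort together, so one pass suffices
    deduped = [w for i, w in enumerate(words) if i == 0 or w != words[i - 1]]

    # 3. merge against the sorted dictionary, keeping the missing words
    missing, i, j = [], 0, 0
    while i < len(deduped):
        if j >= len(sorted_dictionary) or deduped[i] < sorted_dictionary[j]:
            missing.append(deduped[i])   # not in the dictionary
            i += 1
        elif deduped[i] == sorted_dictionary[j]:
            i += 1                       # spelled correctly
        else:
            j += 1                       # advance the dictionary
    return missing

print(misspelled("teh quick brown fox".split(),
                 ["brown", "fox", "quick", "the"]))   # ['teh']
```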
I saw this in BASIC in Creative Computing and got it working on my TRS-80 Color Computer, which had much less than 32k of available RAM, so that was the first thing I thought of when I saw the headline.
Now this blew people away when it came out
https://winworldpc.com/product/turbo-lightning/1x
it had a compressed dictionary that would fit in memory alongside the other programs you were running on a PC and spell-check as you typed; there was a 640k limit for the PC, but it could only use a fraction of that so as not to interfere, and in the early days of the PC you couldn't actually afford to fill it out anyway.
Worth noting that the article mentions this alternative as their first PoC, along with its drawbacks: "Because of its simplistic implementation, it was not very accurate, and also slow because of dictionary lookups on the disk."
I have a feeling the algorithm used was not the smart merge mentioned in the grandparent. That "slower" spell was in v6 Unix { https://en.wikipedia.org/wiki/Spell_(Unix) }, which came out in 1975, and by 1973 there were already Winchester drives doing 885 kB/s with 25ms seeks { https://en.wikipedia.org/wiki/History_of_IBM_magnetic_disk_d... }. A 250e3-word dictionary with an average word length of 10 bytes would have taken only about 2500e3/885e3 = 2.8 seconds to scan, and the unique words in most documents would in practice have easily fit in RAM (as mentioned in Doug's 1982 paper). Not great, but not so bad, either. People didn't spell check on every key press for another 10 years. ;-)
Someone could probably look all this up in the "unified version control of all Unix" history, but the way Doug describes it in the 1982 paper linked in the article, it sounds like the v6 spell did a "document word at a time" loop instead of "sort all document words at once and merge against a pre-sorted dictionary", and in fact it sounds like Johnson selected a small dictionary to facilitate that instead of merging.
It's easy to compress a sorted dictionary with a shared-prefix scheme: each entry records how many leading characters it shares with the previous word, followed by the rest of the word (and the count might be coded as a byte as opposed to a printable digit). This would be a simple form of compression to code up in BASIC, unlike Huffman or most LZW variants, which involve bit twiddling and maintaining tree data structures... though I do remember writing SQ and USQ [1] in BASIC.
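A rough Python sketch of that scheme (often called front coding); the example words and helper names are made up, and the shared-prefix count is kept as a plain integer rather than a single byte:

```python
# Front-coding sketch: each entry stores how many leading characters it
# shares with the previous word, plus the differing suffix.

def compress(sorted_words):
    out, prev = [], ""
    for w in sorted_words:
        shared = 0
        while shared < min(len(prev), len(w)) and prev[shared] == w[shared]:
            shared += 1
        out.append((shared, w[shared:]))   # (shared-prefix length, suffix)
        prev = w
    return out

def decompress(entries):
    words, prev = [], ""
    for shared, suffix in entries:
        w = prev[:shared] + suffix
        words.append(w)
        prev = w
    return words

words = ["apple", "application", "apply", "apricot"]
coded = compress(words)
print(coded)   # [(0, 'apple'), (4, 'ication'), (4, 'y'), (2, 'ricot')]
assert decompress(coded) == words
```

Decoding only ever needs the previously decoded word, so it works fine in a purely sequential, low-memory pass over the dictionary.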
[1] https://techtinkering.com/articles/compression-and-archiving...
I guess you really used the fact that most words are repeated to keep the byte count in check? On the old C=64 I had, it was a bit of a problem not to blow out the memory with just the text of the document once you started using it for more than a one- or two-page paper. Keeping a second sorted copy seems almost luxurious.
I guess you could save the working copy to disk first, then do the sort, then compare, then reload the working copy. I think the C=64 developers probably avoided that strategy because the disk interface was so damn slow.
I may be wrong, but from "external memory" in the description I think the idea is that each of those steps can be done on disk, not RAM. An external merge sort is a pretty standard database primitive, and the other two only require purely sequential access, so are friendly to spinning disks and the like.
Knuth's Sorting and Searching book talks a lot about that sort of algorithm.
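For a concrete flavor, a toy Python sketch of an external merge sort over a file of lines; the run size and temp-file handling are arbitrary choices here, and a real implementation would size runs to the available RAM and possibly merge in multiple passes:

```python
# Toy external merge sort: sort a file of lines using a small, fixed
# amount of RAM by writing sorted runs to disk and merging them.
import heapq, os, tempfile

RUN_SIZE = 1000  # max lines held in memory at once (arbitrary for the sketch)

def external_sort(in_path, out_path):
    # Pass 1: read fixed-size chunks, sort each in memory, spill to a temp file.
    run_paths = []
    with open(in_path) as f:
        while True:
            run = [line for _, line in zip(range(RUN_SIZE), f)]
            if not run:
                break
            run = [w if w.endswith("\n") else w + "\n" for w in run]
            run.sort()
            fd, path = tempfile.mkstemp(text=True)
            with os.fdopen(fd, "w") as run_file:
                run_file.writelines(run)
            run_paths.append(path)

    # Pass 2: k-way merge of the sorted runs; only one line per run is in RAM.
    run_files = [open(p) for p in run_paths]
    with open(out_path, "w") as out:
        out.writelines(heapq.merge(*run_files))
    for rf in run_files:
        rf.close()
    for p in run_paths:
        os.remove(p)
```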
The old IBM 360 had a 16MB address space at best, but in 1968 there were very few installations anywhere near filling it. This kind of tape
https://en.wikipedia.org/wiki/9-track_tape
could have a capacity of 80 MB or so, which is (1) much larger than RAM, and (2) mainly sequential (it could fast-forward for a bit and try to find a particular block, but it was pretty slow). Contrast this with the floppy drives in the 70-140 kB range that the 8-bit micros had. Thus there was a lot of literature on external-memory algorithms that would work on tapes in the early days -- though similar methods are still of interest today for "big data": RAM is a lot faster when you access it sequentially, you want to minimize round trips in distributed systems, etc.
(It was funny though when I talked to computer scientists around 2004 and was told to forget about external memory algorithms because 'main memory' was the thing; people realized, for instance, that Google Maps could store all the tiles in RAM in half a rack of 1U servers and if utilization was high it was much cheaper than serving the tiles off disk; Hadoop came along and was a blast from the past that itself was obsolete in just a few years)
Link seems to be broken.
works4me