Comment by dapperdrake
13 days ago
Not quite. For persistence latency, yes.
For read-only access there could be way better caching, especially for common use cases like listing the contents of a filesystem directory. But stuff like this was excluded on purpose.
NFS is really stupid.
NFS made the assumption that a distributed system with over 100 times the latency of a local system could be treated like a local system in every single way.
I am not sure why "NFS is really stupid" follows from users assuming that a distributed file system can be treated just like a local one. That it provides the same interface is exactly what makes NFS extremely useful.
And also, this is what makes NFS useless.
Latency is at least two orders of magnitude higher. That is the (relevant) difference here. And treating it like a local system, with all the incidental non-optimizations that entails, made the NAS use case take 40 hours for colored "ls" output.
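The reason colored "ls" is such a pathological case: it stats every directory entry to pick a color, so each entry costs a full network round trip instead of one batched directory read. A back-of-the-envelope cost model (the latency numbers below are illustrative assumptions, not measurements):

```python
# Illustrative per-operation latencies; real values vary widely.
LOCAL_STAT_S = 1e-5   # assume ~10 us for a cached local stat()
REMOTE_RTT_S = 1e-3   # assume ~1 ms round trip per uncached NFS GETATTR

def colored_ls_cost(n_entries: int, per_entry_s: float) -> float:
    """Colored ls stats every entry, so cost scales linearly with entries."""
    return n_entries * per_entry_s

n = 1_000_000  # a large directory tree
print("local: ", colored_ls_cost(n, LOCAL_STAT_S), "s")
print("remote:", colored_ls_cost(n, REMOTE_RTT_S), "s")
```

With these assumed numbers, two orders of magnitude of extra per-operation latency turn directly into a 100x slowdown, because nothing in the access pattern amortizes the round trips.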
I find it extremely useful and it works well for many use cases. This already implies that "it is useless" is pure nonsense. If it does not work for your usecase, just don't use it.
1 reply →
It wasn't "NFS", it was always the users who made that mistake. NFS can be used in a proper and productive manner, but it requires adjustments.
Which all boil down to "replace NFS with something that has a better data model."
Any remote data system would have the same problems. Looping over a list and synchronously fetching objects one by one is just as foolish with S3.
1 reply →