Comment by quelsolaar
4 years ago
Wouldn't this be an argument to go in the opposite direction? If you are using high-level functionality whose implementation details you don't know, you are running the risk of unintended consequences.
I am a C programmer who has implemented string-to-number parsing for this very reason. I know exactly what it does and how fast it is.
If you do use code you didn't write, the chance of a standard library being poorly implemented is probably lower than for most other libraries, so picking a non-standard library as a guarantee against bad performance seems misguided.
I think it goes both ways: either you go fully low-level and write everything yourself (for questionable benefit), or you use a (possibly higher-level) language with a sane standard library. Either way, the important thing is the quality of that library.
I find that writing things yourself, especially simple things like a text-to-integer parser, is very valuable because it takes very little time and it levels up your understanding of the system. I'm starting to believe that you rarely understand something until you have implemented it. Therefore implementation is the best way to learn.
I’m totally with you here; it is a truly invaluable learning tool. But, as with science “standing on the shoulders of giants”, we would not be anywhere if everyone started everything from scratch. It’s okayish to reimplement a vector or something, but even a sort gets harder (especially if you want one that is performant on both short and long lists). And algorithms are just one thing: will you also dive into the totally foreign world of, e.g., audio processing?
Basically the only silver bullet for increasing productivity is shared code. We just have to place a higher emphasis on software correctness and quality instead of the functionality churn seen in certain parts of the software world.