Comment by structural
2 days ago
1. If you want to advance the state of the art as quickly as possible (or have many, many experiments to run), being able to iterate quickly is the primary concern.
2. If you are publishing research, any time spent beyond what's necessary for the paper is a net loss, because you could be working on the next advancement instead of doing all the work of making the code more portable and reusable.
3. If you're trying to use that research in an internal effort, you'll take the next step to "make it work on my cloud", and any effort beyond that is also a net loss for your team.
4. If the amount of work needed to prototype something you can write a paper about is 1x, then scaling it on one specific machine configuration is something like 10x or more, and turning it into a reusable library that can build and run anywhere is more like 100x.
So it really comes down to: who is going to do the majority of this work? How is it funded? How is it organized?
This can be compared to other human endeavours, too. Take the nuclear industry and its development as a parallel. The actual physics of nuclear fission is understood at this point and can be taught to lots of people. But getting from there to a functional prototype reactor is a larger project (10x scale), and scaling that to an entire power plant complex that can be built in a variety of physical locations and run safely for decades is again orders of magnitude larger.