Comment by ipunchghosts

2 months ago

Overall, I like this guy's papers, but they strike me as the work of someone who is very smart but hasn't looked through the literature carefully. Many of the techniques he is proposing were already tried about 5-6 years ago. However, I imagine that because the field is flooded with newcomers, they either aren't aware of this research or think it will lead to a fruitful end (when many other researchers have already thought of it, it didn't lead anywhere, and that's why it was abandoned). Overall, it seems we are starting to recycle ideas because there isn't enough literature review, and/or mentoring from senior deep learning / ML folks who can quickly look at a paper and tell the author where the work has already been investigated.

Reviving old ideas and comparing them to SOTA is not necessarily bad, especially if they provide benefits over the SOTA model. It brings the old ideas back into the community's idea cache, if you will. It's somewhat annoying if the authors do it thinking it's a novel idea when in fact it's a 20-30 year old one.

This reminds me of some HN comments about rocketry ideas; one comment in the thread was "Everything in rocket science has been theorized/tried by some Russian scientist 40-50 years ago," and it still gives me a chuckle.

> Overall, it seems we are starting to recycle ideas because there isn't enough literature review, and/or mentoring from senior deep learning / ML folks who can quickly look at a paper and tell the author where the work has already been investigated.

Arguably, the literature synthesis and knowledge discovery problem has been overwhelming in many fields for a long time; but I wonder whether, in ML lately, an accelerated (if not frantic) level of competition may be working against the collegial spirit.

  • I think it's been accelerated by the review community being overwhelmed and by the lack of experienced researchers with the combination of classic ML, deep learning, transformers, and DSP backgrounds -- a rare breed, but sorely needed.