Comment by almoehi
3 months ago
I'd written up a proposal for a research grant to work on basically exactly this idea.
It got reviewed by two ML scientists and one neuroscientist.
It got totally slammed (and thus rejected) by the ML scientists due to "lack of practical application" and highly endorsed by the neuroscientist.
There's so much untapped potential in interdisciplinary research, but nobody wants to fund it because it doesn't "fit" into one of the boxes.
Make sure the ML scientists don't take credit for your work. Sometimes they reject a paper so they can work on it on their own.
Grant reviews are blind, so you don't know who reviewed it. Also, and even worse, there is no rebuttal process: it gets rejected without you having a chance to clarify or convince the reviewers.
Instead, you'd have to resubmit and start the entire process from scratch. What a waste of resources…
It was the final nail in the coffin that made me quit pursuing a scientific career path, despite having good pubs and a PhD with honours.
Unfortunately, it's what I enjoy the most.
That's unfortunate. My personal sense is that while agentic LLMs are not going to get us close to AGI, a few relatively modest architectural changes to the underlying models might actually do that, and I do think mimicry of our own self-referential attention is a very important component of that.
While the current AI boom is a bubble, I actually think the AGI nut could get cracked quietly by a company with even modest resources if they get lucky with the right fundamental architectural changes.
I agree, and I think an interdisciplinary approach is going to increase the odds here. There is a ton of useful knowledge in related disciplines, often just named differently, that turns out to be investigating the same problem from a different angle.
Sounds like those ML "scientists" were actually just engineers.
A lot of progress is made through engineering challenges.
That is also "science".