Dude, we're running unrestricted recursion and closures on GPUs! If that's not cool to you, I apologize, but that's mind-blowingly cool to me, and I wanted to share it, even though the codegen is still initial. Hell, I was actually going to publish it with the interpreters only, but I coded an initial compiler anyway because I thought people would like to see where it could go :(
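To make that concrete, here's a minimal sketch of the kind of program shape being talked about: a recursive function that fans out into many independent calls, each applying a closure. This is plain Python running on a CPU and purely illustrative; the thread doesn't show the actual language or runtime involved, so only the shape matters here.

    def make_adder(k):
        # closure: 'add' captures k from the enclosing scope
        def add(x):
            return x + k
        return add

    def tree_sum(depth, leaf):
        # unrestricted recursion: each call forks into two independent subcalls,
        # which is the shape that's normally awkward to run on a GPU
        if depth == 0:
            return leaf(0)
        return tree_sum(depth - 1, leaf) + tree_sum(depth - 1, leaf)

    add_three = make_adder(3)         # a closure over k = 3
    print(tree_sum(10, add_three))    # 2**10 leaves, each applying the closure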
The closure part was when I had to stop for a moment and go "wait, really?! ... COOL!" and I'm definitely going to try and remember to check back every so often (emphasis on 'try' given I have a brain like a sieve but still ;).
This is very cool and it's being treated unfairly, though it's also obviously not ready for prime time; it's an existence proof.
To illustrate that, many people on here have been losing their minds over Kolmogorov-Arnold Networks, which are almost identically positioned: interesting idea, kind of cool, does what the paper claims, potentially useful in the future, and definitely not of any use for any real use case right now.
(In part that's probably because the average understanding of ML here is _not_ strong, so there's more deference and credulousness around those claims.)
There’s a big difference between developing something and announcing loudly that you have something cool; the developers have done the latter here.
I think it's clearly pretty cool, even if it's not as fast as people expect it to be.
That's completely unfair. They have developed something cool; it just doesn't have all the holes plugged yet.
It's a pretty cool milestone, just not production-ready.