ML is still a thing. I believe most AI research is still non-LLM ML - things like CNNs for computer vision, RL, etc. In my opinion, the hype around LLMs has a lot to do with their accessibility to the general public, compared to existing ML techniques, which are highly specialised.
To be fair, I remember that some five years ago a lot of ML was already quite accessible to programmers - it was often just a couple of lines of Python using TensorFlow, or later PyTorch.
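To illustrate how little code that took - a toy sketch with scikit-learn on a built-in dataset (my own example, not anything specific from back then):

```python
# Train and evaluate a classifier in a handful of lines.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(clf.score(X_test, y_test))  # typically well above 0.9 on this toy dataset
```

The deep-learning equivalents in Keras or PyTorch were longer, but not by much.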
I am almost in disbelief that LLMs are the thing that reached the "tipping point" for most companies to suddenly care about ML. The number of products that could have been built properly five years ago, but exist now in a slower form because of "reasoning" LLMs, is likely astonishing.
scikit-learn too, I think. LLMs probably took off because companies think they're a legal shield against "IP infringement/theft". Amazon has access to some huge corpus of books, and so does Google; so if FB, X, Mistral (actually I'm not sure - is that a university-led project? if so, they probably got books too), or OpenAI want to return decent results about books, they have to get the books too. Buying, scanning, OCR, copyediting, feeding the training scripts (JSON-ifying the input, most likely)? Forget that - Anna, Russia, and the high seas are literally right there, four octets away.
I'd hope that the very first commercially successful "AI media" - be it a 1-minute commercial or a 10-minute TV segment or whatever - brings the lawsuits. I really want to know if I can feel any vindication about arguing about this (IP, specifically) for the last three decades of my life.
More to your member-berries: whole swaths of interesting research disappeared, either abandoned or bought and closed-sourced. Genetic algorithms, artificial life, stuff with optics, 3-atom-thick transistors (hey, IBM patented that, but Microsoft also did basically the same thing with their STP qubits - everything has to be arranged at atomic widths or whatever). IBM also built a USS Enterprise out of atoms (unsure if it was the D - I am not a huge fan; I forget if it was to scale), in like 2003. Microsoft spent 17 years playing catch-up with *the* hardware people.
yeah. is the conclusion that moneyed interests suck?
Convolutional neural networks, for image recognition and more generally image processing. They are much better than they were a few years ago, when they were all the rage, but the hype has disappeared. These systems improve the performance of radiologists at detecting clinically significant cancers. They can also detect invasive predators or endangered native wildlife using cameras in the bush, in order to monitor populations, allocate resources for trapping pests, etc.
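The core building block of all of these is just 2D convolution over pixel grids. A minimal NumPy sketch with a hand-picked edge-detection kernel - purely illustrative, since a real CNN learns its kernels from data:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: the basic op a CNN layer stacks."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A Sobel-style kernel responds strongly where intensity jumps left-to-right.
image = np.zeros((5, 5))
image[:, 3:] = 1.0                       # bright right half
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
response = conv2d(image, sobel_x)
print(response)                          # strong response along the edge only
```

A CNN stacks many such filters, with learned weights and nonlinearities in between, which is why the same machinery works for radiology scans and trail-camera footage alike.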
ML generally is for pattern recognition in data. That includes anomaly detection in financial data, for example. It is used in fraud detection.
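For instance, a minimal sketch of unsupervised anomaly detection with scikit-learn's IsolationForest on made-up transaction amounts (toy data, not a production fraud model):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# 200 ordinary transactions around $50, plus one obviously anomalous $5000 one.
amounts = rng.normal(loc=50, scale=5, size=200).reshape(-1, 1)
amounts = np.vstack([amounts, [[5000.0]]])

model = IsolationForest(contamination=0.01, random_state=0).fit(amounts)
labels = model.predict(amounts)          # -1 = anomaly, 1 = normal
print(labels[-1])                        # the $5000 transaction gets flagged
```

Real fraud systems use far richer features (merchant, time, location, history), but the principle is the same: flag what doesn't fit the learned pattern.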
Image ML/AI is used in your phone's facial recognition, in various image filtering and analysis algorithms in your phone's camera to improve picture quality or allow you to edit images to make them look better (to your taste, anyway).
AI image recognition is used to find missing children by analysing child pornography without requiring human reviewers to trawl through it - they need only check the much smaller set of flagged images.
AI can be used to generate captions on videos for the deaf, or to power text-to-speech for the blind.
There are tons of uses of AI/ML. Other examples: video game AI, video game upscaling, and chess and Go engines. NNUE makes chess engines far stronger, and in really cool, creative ways that have changed high-level chess and made it less drawish.
Figure.ai's Helix: A Vision-Language-Action Model for Generalist Humanoid Control
https://news.ycombinator.com/item?id=43115079
https://github.com/microsoft/OmniParser/ is trending on GitHub
Bad image generation
Chat bots that tell you to kill yourself