Comment by flumpcakes

2 years ago

AI leaves a bad taste in my mouth, but I think that's because we have moved away from ML/vision problems with a strong grounding in academic research, and away from the high-impact, purposeful development of that research into products.

We are now exposed to companies hyping huge general-purpose models built on whatever tech is the latest fad, which resonates with the average person who wants to generate memes, etc.

This is impressive only at the surface level. Take a specific application: prompt one of these models to write you an algorithm. Outside of anything it can copy and paste from a textbook, it will generate bad or incorrect code and then confidently explain why it works.

It's like having an incompetent junior on your team who has the bravado of a senior 10x-er.
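
To make that concrete, here's a hypothetical illustration (a sketch I wrote in Python, not output from any particular model) of the kind of failure I mean: a binary search that reads like the textbook version and would be explained as correct, but silently misses edge cases.

    # A plausible-looking binary search of the sort I'm describing.
    # It reads like the textbook version, but the loop condition is
    # subtly wrong: with `lo < hi`, the final candidate index is
    # never checked.
    def binary_search(items, target):
        lo, hi = 0, len(items) - 1
        while lo < hi:  # should be `lo <= hi`
            mid = (lo + hi) // 2
            if items[mid] == target:
                return mid
            elif items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

    binary_search([5], 5)     # returns -1, but 5 is right there
    binary_search([1, 3], 3)  # also returns -1

The accompanying explanation then walks through the loop as if the condition were right, which is exactly the bravado part.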

That's not to say "AI" doesn't have a purpose, but currently it seems hyped up mostly by salespeople looking for Series A funding or an IPO cash-out. I want to see models developed for specific tasks that will have a big impact, rather than the sleight-of-hand and circus tricks we currently get.

Maybe that time has passed, general models are the future, and we will just have to wait until they're as good as a purpose-built model at any task you can ask of them.

It will be interesting to see what happens when these "general" models are used without much thought and their unchecked results lead to harm. Will we still find companies culpable?

I think you hit on some good points. It seems like in common usage, “AI” has come to mean “general purpose” rather than satisfying some criterion of the futurist definition.

Personally, I care very little about whether the machine is intelligent or not. If true machine intelligence actually arrives in my lifetime, I believe it will be unmistakable.

I am interested in how people solve problems. If you built and trained a model that solves a challenging task, THAT is something I find noteworthy and want to read about.

Apparently utility is boring, and “just ML” now. There are tons of academic papers that fly under the radar, probably because they solve specific problems the average person doesn't even know exist. Much of ML doesn't cross over into “popular science” enough to hold the general public's interest.

I dread the coming "age of bugginess", when imprecise LLMs pervade UIs and make everything always a little broken.

I don't deny that LLMs represent a coming revolution in computer interaction. But as someone who has already mastered the command line, programming, etc., I already know how to use computers. LLMs will actually be slower for me for a huge variety of tasks, like finding information. English is so clumsy compared to programming languages.

I feel like for nerds like me, "user-friendliness" is often just a hindrance. For me this has been the case with GUIs in general, and touch GUIs especially, and it probably will be for most LLM applications that don't fundamentally do something I can't do myself (like Stable Diffusion).