Comment by simianwords
5 hours ago
I don't think this author has a good mental model of how capable LLMs are. This is what he has to say about AI search. AI-based search is one of the biggest leaps to happen in search and retrieval.
> AI search is still a bad idea.
https://pluralistic.net/2024/05/15/they-trust-me-dumb-fucks/
This is the most charitable thing he has to say about AI.
> AI is a bubble and it will burst. Most of the companies will fail. Most of the data-centers will be shuttered or sold for parts. So what will be left behind?
> We'll have a bunch of coders who are really good at applied statistics. We'll have a lot of cheap GPUs, which'll be good news for, say, effects artists and climate scientists, who'll be able to buy that critical hardware at pennies on the dollar. And we'll have the open source models that run on commodity hardware, AI tools that can do a lot of useful stuff, like transcribing audio and video, describing images, summarizing documents, automating a lot of labor-intensive graphic editing, like removing backgrounds, or airbrushing passersby out of photos. These will run on our laptops and phones, and open source hackers will find ways to push them to do things their makers never dreamt of.
You can imagine that a guy who seriously thinks the only things AI will be doing in the future are summarising, describing images, and transcribing is either completely clueless or deliberately misleading.
Not a person to be taken seriously.
Getting back to a functional search engine is the most interesting part of this technology to me. Something that just gives links to the most relevant pages, without a bunch of LLM editorializing on top of it.
But do current LLMs solve that, or do they still ultimately depend on making calls to other search indexes? It seems like they could theoretically be trained to semantically match URLs from their training set, but I think the models would have to be specifically architected for that, so I'm curious if anyone knows more about this.
I'd also be interested to hear if there are any small open models working towards that.
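From what I've read, the research line closest to what I'm describing is "generative retrieval" (e.g. the Differentiable Search Index, Tay et al. 2022), where a model is trained to emit document identifiers directly; most deployed systems instead do embedding-based semantic search over a separately built index. To make concrete the kind of matching I mean, here's a minimal sketch using the sentence-transformers library; the model name and the toy corpus are illustrative placeholders, not a real index:

```python
# Minimal embedding-based semantic search over (url, snippet) pairs.
# Illustrative sketch only: the model choice and toy corpus are placeholders.
from sentence_transformers import SentenceTransformer, util

# A small open embedding model that runs comfortably on a laptop CPU.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Toy index standing in for a crawled corpus of pages.
corpus = [
    ("https://pluralistic.net/2024/05/15/they-trust-me-dumb-fucks/",
     "Cory Doctorow argues that AI search is a bad idea"),
    ("https://example.com/dense-retrieval",
     "Dense retrieval encodes queries and documents as vectors"),
    ("https://example.com/bm25",
     "BM25 is a classic keyword-based ranking function"),
]
doc_embeddings = model.encode(
    [snippet for _, snippet in corpus], convert_to_tensor=True
)

def search(query: str, top_k: int = 2):
    """Return the top_k URLs whose snippets are most similar to the query."""
    query_embedding = model.encode(query, convert_to_tensor=True)
    scores = util.cos_sim(query_embedding, doc_embeddings)[0]
    best = scores.argsort(descending=True)[:top_k]
    return [(corpus[i][0], float(scores[i])) for i in best]

# Plain ranked links, no editorializing on top.
print(search("why is LLM-powered search controversial?"))
```

Note this still requires an index built outside the model; nothing here matches URLs purely from the model's training-set memory, which is why I suspect the architecture would have to change.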
It's strange reading people who I see as very intelligent and very interesting being so, so AI-skeptical, especially in this case, where Doctorow has interacted with other people who I assume are very smart and not prone to buzzword psychosis, and who see AI as an imminent existential threat à la sci-fi novels. We have a lot of very smart and capable people who are split on this, although I think the split is heavily weighted in favor of people who see the tech as really freaking amazing/scary.
The answer to your question is that society at large finds skepticism or pessimism more interesting, which is why we end up with dilettantes like this guy.
I think those are likely the only useful or net-positive things AI will do for society, at least for some time, until there's a fundamental advancement beyond LLMs. It can obviously do more than that now: impersonate people for scams, induce psychosis in vulnerable people, shill and astroturf at a scale we haven't seen before, spam open source projects with terrible PRs and vulnerability reports, and quite a bit more.
Why do people believe stuff like this? It is obviously untrue: AI is already solving open problems in mathematics.
Seeing how it sucks at languages, you may be right; even transcription may be dubious.
How does it suck at languages?