
Comment by advael

3 days ago

AI continues to be a stupidly vague term, and the example I keep going back to is present in this article

Meaningful advances in medical diagnosis are not coming from chatbot companies; some are coming from machine learning methods. Perhaps measuring public sentiment about such a vague term is not a very productive way to quantify anything.

That said, I also continue to be frustrated with people using the abstract concept of a new technology as a substitute for the institutions that use that technology to exert power in the world, and for what they do with that power - which, as many in the comments already point out, is what the vast majority of people are actually mad about, and they're right to be.

Right now, as I'm writing this comment, AI = LLMs and image generation. That's it. It's as simple as that.

  • > Right now, as I'm writing this comment, AI = LLMs and image generation. That's it. It's as simple as that

    I think agentic harnesses add a lot to LLMs, even if many are just simple loops. They are a separate thing from LLMs, are they not?

    I get the feeling that even if we stopped shipping new models today, new far more useful products would be getting shipped for years, just with harness improvements. Or, am I way off base here?

  • Okay, then why do these surveys ask folks about medical diagnosis? Do they mean to imply that advances in that field come from transformer-based chatbots and image generation? Because that framing is used in the "clear benefits of AI" section of every damn article about public opinion and controversy surrounding "AI". If you're right about the public perception of the term, this implies that the people who write these articles - "journalists" and tech PR people and surveyors alike - are either ignorant of this general usage or deliberately being deceptive.
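    For concreteness on the "agentic harnesses are just simple loops" point made above, here is a minimal sketch of what such a loop looks like. Everything here is hypothetical - `fake_model`, the `TOOLS` table, and the message format are stand-ins for a real LLM API and its tool-calling protocol - but the shape (call the model, execute any requested tool, feed the result back, repeat until it answers) is the whole trick:

    ```python
    # Minimal sketch of an agentic "harness": a plain loop that feeds tool
    # results back to a model until it stops requesting tools.

    def fake_model(messages):
        # Stub standing in for a real LLM API call. It requests one tool
        # call, then answers using the tool's result.
        if any(m["role"] == "tool" for m in messages):
            return {"type": "answer", "text": f"Result was {messages[-1]['content']}"}
        return {"type": "tool_call", "tool": "add", "args": (2, 3)}

    # Hypothetical tool registry; a real harness might expose search,
    # file I/O, code execution, etc.
    TOOLS = {"add": lambda a, b: a + b}

    def run_harness(prompt, model=fake_model, max_steps=5):
        messages = [{"role": "user", "content": prompt}]
        for _ in range(max_steps):
            reply = model(messages)
            if reply["type"] == "answer":
                return reply["text"]
            # Execute the requested tool and append its result for the
            # model to see on the next iteration.
            result = TOOLS[reply["tool"]](*reply["args"])
            messages.append({"role": "tool", "content": result})
        return None  # step budget exhausted

    print(run_harness("what is 2 + 3?"))  # → Result was 5
    ```

    The harness genuinely is separate from the model: swapping in a better model, a bigger tool registry, or smarter stopping logic are all independent axes of improvement, which is the thrust of the "products could improve for years without new models" argument.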