
Comment by TeMPOraL

3 days ago

AI marketing isn't extreme - not on the LLM vendor side, at least; the hype is generated downstream of it, for various reasons. And it's not the marketing that's saying "you're using it wrong" - it's other users. So, unless you believe everyone reporting good experience with LLMs is a paid shill, there might actually be some merit to it.

It is extreme, and on the vendor side. The OpenAI non-profit vs. for-profit saga was framed as profit-seeking versus the future of humanity. People are talking about programming 3.0.

I can appreciate that it's other users who are saying "you're using it wrong", but that doesn't address the point about ignoring the context.

Moreover, it's unhelpful communication. It gives up on acknowledging a mutually shared context, the natural confusion that would arise from the ambiguous, high-level hype, and the actual down-to-earth reality.

Even if you have found a way to make it work, you can't get someone to understand your workflow without connecting the dots between their frame of reference and yours.

  • It really is. For example, here is a quote from AI 2027:

    > By early 2030, the robot economy has filled up the old SEZs, the new SEZs, and large parts of the ocean. The only place left to go is the human-controlled areas. [...]

    > The new decade dawns with Consensus-1’s robot servitors spreading throughout the solar system. By 2035, trillions of tons of planetary material have been launched into space and turned into rings of satellites orbiting the sun. The surface of the Earth has been reshaped into Agent-4’s version of utopia: datacenters, laboratories, particle colliders, and many other wondrous constructions doing enormously successful and impressive research.

    This scenario prediction, co-authored by a former OpenAI researcher (now at the Future of Humanity Institute), received almost a thousand upvotes here on HN and attention from the NYT and other large media outlets.

    If you read that and still don't believe the AI hype is _extreme_ then I really don't know what else to tell you.

    --

    https://news.ycombinator.com/item?id=43571851

You have to be pretty naive to think VCs don't astroturf forums and let random mobs steer discussions about their investments. Even dinosaurs like Microsoft have been caught doing exactly that many times, including fake "letters to the editor" campaigns back when newspapers were a thing.

  • My experience with web forums has been: everything a poster disagrees with is astroturf and bots, everything a poster agrees with is brave people speaking truth to power. I don't doubt that LLM companies are astroturfing comments, just like I don't doubt that anti-LLM people are sharing threads in their internal Discords and asking their friends to brigade a thread. Trying to infer conspiracy to invalidate an opinion on the Internet is fraught.

I think the relentless podcast blitz by the OpenAI and Anthropic founders suggests otherwise. They're both keen to confirm that yes, in 5-10 years, no one will have any jobs anymore. They're literally out there discussing a post-employment world like it's an inevitability.

That's pretty extreme.

  • This was present (in a positive way, though) even in Soviet films for children.

        Позабыты хлопоты,
        Остановлен бег,
        Вкалывают роботы,
        Счастлив человек!
    
        Worries forgotten,
        The treadmill doesn't run,
        Robots are working,
        Humans have fun!

  • Those billions won't raise themselves, you know.

    More generally, these execs are talking their book: they're in low-margin, capital-intensive businesses whose future is entirely dependent on raising a bunch more money, so hype and insane claims are necessary for funding.

    Now, maybe they do sort of believe it, but if so, why do they keep hiring software engineers and other staff?

> And it's not the marketing that's saying "you're using it wrong" - it's other users.

No, it's the non-coding managers who vibe-coded a half-working prototype, not other users. And here, the Dunning-Kruger effect is at play - those non-coding types do not understand that AI is not working for them either.

Full disclosure: I do rely on vibe-coded jq lines in one-off scripts that will definitely not process more data after the single intended use, and this is where AI saves my time.
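For illustration, the kind of throwaway one-liner I mean (the file name and fields here are hypothetical):

    # hypothetical one-off: pull failed job IDs out of a CI export, then never run it again
    jq -r '.jobs[] | select(.status == "failed") | .id' ci-export.json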

It's called grassroots marketing. It works particularly well in the context of GenAI because it is fed with esoteric and ideological fragments that overlap with common beliefs and political trends. https://en.wikipedia.org/wiki/TESCREAL

Classical marketing is therefore less dominant, although it is more visible among downstream sellers.

  • Right. Let's take a bunch of semi-related groups I don't like, and make up an acronym for them so any of my criticism can be applied to some subset of those groups in some form, thus making it seem legitimate and not just a bunch of half-assed strawman arguments.

    Also, I guess you're saying I'm a paid shill, or have otherwise been brainwashed by the vendors' marketing, and therefore my positive experiences with LLMs are a lie? :).

    I mean, you probably didn't mean that, but part of my point is that you see those positive reports here on HN too, from real people who've been in this community for a while and are not anonymous Internet users - you can't just dismiss that as "grassroots marketing".

    • > I mean, you probably didn't mean that

      Correct, I think you've read too much into it. Grassroots marketing is not a pejorative term, either. Its strategy is precisely to trigger positive reviews of your product, ideally from independent, credible community members.

      That implies those community members have motivations other than being paid; ideologies and shared beliefs can be among them. Being happy with the product is a prerequisite, whatever that means for the individual user.