Comment by pizzathyme

9 days ago

My anxiety about falling behind with AI plummeted after I realized many of these tweets are overblown in this way. I use AI every day; how is everyone getting more spectacular results than me? Turns out: they exaggerate.

Here are several real stories I dug into:

"My brick-and-mortar business wouldn't even exist without AI" --> meant they used Claude to help them search for lawyers in their local area and summarize permits they needed

"I'm now doing the work of 10 product managers" --> actually meant they create draft PRD's. Did not mention firing 10 PMs

"I launched an entire product line this weekend" --> meant they created a website with a sign up, and it shows them a single javascript page, no customers

"I wrote a novel while I made coffee this morning" --> used a ChatGPT agent to make a messy mediocre PDF

Going viral on X is the current replacement for selling courses for {daytrading,amazon FBA,crypto}.

The content of the tweets isn't the point; bull-posting or invoking Cunningham's Law is. X is the destination for formula posting, and some of those blue checkmarks are getting "reach" rev-share kickbacks.

  • Same with LinkedIn. I've seen a lot of posts telling you to comment something to get a secret guide on how to do Y.

    If it were successful, they wouldn't be telling everyone about it

  • Yeah, if you get enough impressions, you get some revenue, so you don't need to sell any courses, just viral content. Which is why some (not ALL) exaggerate as suggested.

    • It's a bit insane how much reach you need before you'd earn anything impactful, though.

      I average 1-2M impressions/month, and have some video clips on X/Twitter that have gotten 100K+ views, and average earnings of around $42/month (over the past year).

      I imagine you'd need hundreds of millions of impressions/views on Twitter to earn a living with their current rates.

The worst is Reddit these days.

I pretty much never went there for technical topics, just funny memes and such. But recently I started seeing crazy AI hype stories getting posted, and sadly I made the huge mistake of clicking on one. Now it's all I get.

Endless posts from subs like r/agi, r/singularity, as well as the various product-specific subs (for Claude, OpenAI, etc.). These aren't even links to external articles; they are supposedly personal accounts of someone being blown away by what the latest release of this or that model or tool can do.

Every single one of these posts boils down to some irritating "game over for software engineers" hype fest, sometimes with skeptical comments calling out the clearly AI-generated text and overblown claims, sometimes not. Usually comments pointing out flaws in whatever's being hyped are dismissed with a hand wave: the flaw may have been true at one time, but the latest and greatest version has no such flaws and is truly miraculous, even if it's just a minor update for that week. It's always the same pattern.

There’s clearly a lot of astroturfing going on.

I read through the logs and the code in the rare instances someone actually posts their prompts and the generated output. If I'm being overly cynical about the tech, I want to know.

The last one I did it on was breathlessly touted as "I used [LLM] to do some advanced digital forensics!"

Dawg. The LLM grepped for a single keyword you gave it and then faffed about putting it into json several times before throwing it away and generating some markdown instead. When you told it the result was bad, it grepped for a second word and did the process again.

It looks impressive with all these JSON files and bash scripts flying by, but what it actually did was turn a single-word grep into blog-post markdown, and you still had to help it.
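For the curious, the whole "forensics" workflow reduces to something like the sketch below. This is a hypothetical reconstruction, not the actual session: the log lines, filename-free setup, and keyword are all made up for illustration.

```python
# Hypothetical reconstruction of the "advanced digital forensics" session:
# grep a log for one user-supplied keyword, then wrap the hits in markdown.
log_lines = [
    "alice logged in",
    "bob failed login",
    "alice logged out",
]
keyword = "alice"  # the single keyword the user actually supplied

# The "forensics": a substring search with line numbers.
hits = [(i, line) for i, line in enumerate(log_lines, 1) if keyword in line]

# The "report": the same hits, reformatted as blog-post markdown.
report = "\n".join(
    [f'## Findings for "{keyword}"']
    + [f"- line {i}: {line}" for i, line in hits]
)
print(report)
```

Everything between the grep and the markdown (the JSON detours, the bash scripts) added spectacle, not information.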

Some of you have never been on enterprise software sales calls and it shows.

  • > Some of you have never been on enterprise software sales calls and it shows.

    Hah—I'm struggling to decide whether everyone experiencing it would be a good thing in terms of inoculating people's minds, or a terrible thing in terms of what it says about a society where it happens.

“I used AI to make a super profitable stock trading bot” --> using fake money with historical data

“I used AI to make an entire NES emulator in an afternoon!” --> a project that has been done hundreds of times and posted all over GitHub with plenty of references

  • > “I used AI to make a super profitable stock trading bot” --> using fake money with historical data

    Stocks are another matter. There were wonder "algorithms" even before "AI". I helped some friends tweak some. They had the enthusiasm and I had the programming expertise and I was curious.

    That was a couple years ago. None of them is rich and retired now - which was what the test runs were showing - and I think most aren't even trading any more.
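The "great test runs, nothing in live trading" pattern usually comes down to tuning a rule against the same historical data you evaluate it on. A minimal sketch of the effect, using made-up random-walk returns (everything here is illustrative, not a real strategy):

```python
import random

random.seed(0)

# A driftless random walk: by construction there is nothing to exploit.
returns = [random.gauss(0.0, 0.02) for _ in range(500)]
train, test = returns[:250], returns[250:]

def strategy_return(rets, threshold):
    """Toy momentum rule: hold the asset the day after any return above threshold."""
    return sum(cur for prev, cur in zip(rets, rets[1:]) if prev > threshold)

# "Tune" the threshold on historical data: the best of many candidates
# tends to look profitable purely by chance...
in_sample, best_t = max(
    (strategy_return(train, t / 1000.0), t / 1000.0) for t in range(-20, 21)
)

# ...then evaluate the same rule on unseen data, where in expectation
# it earns nothing.
out_of_sample = strategy_return(test, best_t)

print(f"best in-sample return: {in_sample:+.4f}")
print(f"out-of-sample return:  {out_of_sample:+.4f}")
```

The in-sample number is flattered simply because it was selected as the maximum; the out-of-sample number is what live trading would deliver.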

  • I vibe-coded a few ideas I'd had in my mind for a while. My basic stack is HTML, single page, local storage, and lightweight JS.

    It is really good at doing this.

    Those ideas are UI experiments or small tools that help me do stuff.

    It's also super great at ELI5'ing anything.

I feel the same. I understand some of the excitement. When I use it I feel more productive, as it seems I get more code done. But I never finish anything earlier, because it never fails to introduce a bizarre bug or behaviour that no sane human doing the task would.

> "I wrote a novel while I made coffee this morning" --> used a ChatGPT agent to make a messy mediocre PDF

There was a story years ago about someone who made hundreds of novels on Amazon, in aggregate they pulled in a decent penny. I wonder if someone's doing the same but with ChatGPT instead.

  • Pretty sure there was a whole era where people were doing this with public domain works, as well as works generated by Markov chains spitting out barely-plausible-at-first-glance spaghetti. I think that well started to dry up before LLMs even hit the scene.

  • It happened in Japan too. There was one author who was updating 30+ series simultaneously on Kakuyomi, the largest Japanese web-novel site. A few of them got top-ranked.

  • Afaik, the way people are making money in this space is selling courses that teach you how to sell mass-produced AI slop on Amazon, rather than actually doing it.

At the end of the day, it doesn't really get you that much if you get 70% of the way there on your initial prompt (which you probably spent some time discussing, thinking through, and clarifying requirements on). Paid, deliverable work is expected to involve validation, accountability, security, reliability, etc.

Taking that 70% solution and adding these things is harder than if a human got you 70% there, because the mistakes LLMs make are designed to look right, while being wrong in ways a sane human would never be. This makes their mistakes easy to overlook, requiring more careful line-by-line review in any domain where people are paying you. They also duplicate code and are super verbose, so they produce a ton of tech debt, which means more tokens for future agents to clog their contexts with.

I like using them, they have real value when used correctly, but I'm skeptical that this value is going to translate to massive real business value in the next few years, especially when you weigh that with the risk and tech debt that comes along with it.

  • > and are super verbose...

    Since I don't code for money any more, my main daily LLM use is for some web searches, especially those where multiple semantic meanings would be difficult to specify with a traditional search or even compound logical operators. It's good for this, but the answers tend to be too verbose, in ways no reasonably competent human's would be. There's a weird mismatch between the raw capability and the need to explicitly prompt "in one sentence" when it would be contextually obvious to a human.

  • Imo getting 70% of the way is very valuable for quickly creating throwaway prototypes, exploring approaches and learning new stuff.

    However getting the AI to build production quality code is sometimes quite frustrating, and requires a very hands-on approach.

    • Yep - no doubt that LLMs are useful. I use them every day, for lots of stuff. It's a lot better than Google search was in its prime. Will it translate to massively increased output for the typical engineer (esp. senior/staff+)? I don't think it will without a radical change to the architecture. But that is an opinion.

Pretty much every non-political/celeb X account with 5K+ followers is a paid influencer shill lol.

Welcome to the internet

One of my favorite stories from the dotcom bust is when people, after the bust, said something along the lines of: "Take Pets.com. Who the hell would buy 40lb dogfood bags over the internet? And what business would offer that?? It doesn't make sense at all economically! No wonder they went out of business."

Yet here we are, 20 years later, routinely ordering FURNITURE on the internet, often delivered "free".

My point being, sure, there is a lot of hype around AI but that doesn't mean that there aren't nuggets of very useful projects happening.

  • True, but I think the point of that story is it’s really hard to predict what’s crap and what’s just too early.

    It doesn’t guarantee the skeptics are wrong all the time.

  • "Look at what happened with the internet!" also doesn't mean the same will happen with AI

    Neither argument works

  • There's 0 requirement that [new technology] must follow the path of the internet though. So it's kind of an irrelevant non sequitur.

  • Pets.com was both selling everything at a loss and spending millions on advertising. It wasn't the concept that was the issue.