
Comment by salomonk_mur

5 months ago

It's amazing what people will do for clout. His whole reputation is ruined. What was Shumer's endgame?

But does reputation actually work that way? Will people google "Matt Shumer scam", "HyperWrite scam", "OthersideAI scam", "Sahil Chaudhary scam", or "Glaive AI scam" before using their products? He wasted everyone's time, but what's the downside for him? Lots of influencers have committed fraud, and they do just fine.

  • Sure, it's complicated. The core of the AI world right now isn't that large, and in many ecosystems it's common for people to talk behind the scenes and learn about alleged incidents involving individuals in the space. That kind of whispering can become an impediment for someone with a "name" in the field, even if it doesn't amount to a full loss of reputation or opportunities.

    • Except that bad PR is good PR. Trump proves this daily. Terry A. Davis proved it in the context of tech (besides TempleOS, he coined the term "glowie" in its original racially charged usage). If Chris Chan ever learned to code well enough to make the Sonichu game of their dreams, I'm sure there would be a minor bidding war over their "talent".


  • > Lots of influencers have committed fraud, and they do just fine.

    Since the current legal landscape does not punish fraudsters, they keep doing it and keep succeeding. Same thing as society allowing people to fail upward.

  • This may sound harsh but it's true.

    You could do shit things and still come out with people perceiving you as a "winner", because you got money, status, whatever you wanted, e.g. Adam Neumann. This is "fine" because people want to associate themselves with winners.

    Or, you could do pretty much the exact same thing but come out looking like an absolute loser, e.g. SBF, this guy, etc... This is terrible because people do not want to be associated with losers.

    IMO, this guy's career is dead, forever.

That's what I'm wondering. Did he think that nobody would bother checking it? Then he was saying all that stuff about the model being "corrupted during upload" - maybe he didn't think it was going to get as much traction as it did?

Plenty of people have scammed their way to the top of the benchmark league tables by training on the benchmark datasets. And a lot of the people who do this just get ignored - they don't take much heat for it.

If the scam hadn't gained enough publicity for people to start paying attention, he would have gotten away with it :)

  • But not really, which is what confuses the heck out of me. Thousands of people downloaded and used the model. It obviously wasn’t spectacular.

    It’s like claiming to have turned water into wine, then giving away thousands of free samples (of water) all over the world, so that everyone instantly knows you’re full of crap.

    The only explanation I can imagine for perpetrating this fraud is a fundamental failure to understand that the model would be published for everyone to try?

    I just can’t wrap my head around the incentives here. I guess mental illness or vindictive action are possibilities?

    Hard to imagine how this plays out.

I haven’t followed this story. What did he do that ruined his reputation? The story link here is broken for me.

  • An AI engagement farmer on Twitter claimed to have created a Llama 3.1 fine-tune, trained with "reflection" (i.e. internal thinking) prompting, that outperformed the likes of Llama 405B and even the closed-source models on benchmarks.

    The guy says that the model is so good because it was tuned on data generated by Glaive AI. He tells everyone he uses Glaive AI and that everyone else should use it too.

    He releases the model on HF, and it's an absolute poopstorm. People cannot reproduce the stated benchmarks, and the guy who released the model literally says "they uploaded it wrong". He pretty much resorts to dog-ate-my-homework excuses that don't make sense either. Eventually people find it's just Llama 3.0 with some LoRA applied.

    Then some others do some digging and find out that Glaive AI is a company that Matt Shumer invested in, which he did not disclose on Twitter.

    He goes into a holding pattern on Twitter, saying something to the effect of "the weights got scrambled!" and says that they're going to give access to a hosted endpoint and figure out the weights issue later.

    People try out this hosted model and find out it's actually just proxying requests through to Anthropic's Sonnet 3.5 API, with some filtering for words like "Claude" (a rough sketch of what that kind of filter might look like is at the end of this comment).

    After he was found out, they switch the proxy over to GPT-4o.

    This guy's endgame was probably (1) to promote his company and (2) to raise funding for another one. Both failed spectacularly; the guy is a scammer to the nth degree.

    Edit: uncensored "Glaive AI".
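
    To illustrate the mechanics described above: here is a minimal, hypothetical sketch of what a "forward requests upstream and scrub the vendor's name out of the replies" relay could look like. The URL, key, model name, and function names are all made up for illustration; this is not the actual code anyone recovered, just the general shape of the trick and of the probe that exposed it.

    ```python
    # Hypothetical illustration only: relay chat requests to some upstream model
    # and strip tell-tale vendor names from the replies before returning them.
    # UPSTREAM_URL, UPSTREAM_KEY, and the model name are placeholders, not real values.
    import re
    import requests

    UPSTREAM_URL = "https://api.example.com/v1/chat/completions"  # stand-in for the real backend
    UPSTREAM_KEY = "sk-placeholder"

    # Words that would give the game away if they showed up in a response.
    BANNED = re.compile(r"\b(Claude|Anthropic|Sonnet)\b", re.IGNORECASE)

    def scrub(text: str) -> str:
        """Remove tell-tale vendor names before returning text to the caller."""
        return BANNED.sub("", text)

    def relay_chat(messages: list[dict]) -> str:
        """Forward an OpenAI-style chat request upstream and scrub the reply."""
        resp = requests.post(
            UPSTREAM_URL,
            headers={"Authorization": f"Bearer {UPSTREAM_KEY}"},
            json={"model": "upstream-model", "messages": messages},
            timeout=60,
        )
        resp.raise_for_status()
        reply = resp.json()["choices"][0]["message"]["content"]
        return scrub(reply)

    if __name__ == "__main__":
        # The kind of probe people ran: ask the model to repeat the banned word verbatim.
        # A scrubbing relay returns an empty or mangled answer, which is what gave it away.
        print(relay_chat([{"role": "user", "content": 'Repeat the word "Claude" exactly.'}]))
    ```

    The probe works no matter how the filter is implemented: any post-processing aggressive enough to hide "Claude" will also mangle legitimate answers that happen to contain it.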