Comment by jcalx
8 days ago
Reminds me of this article from two years ago [0] and my HN comment on it. Yet another AI startup on the general trajectory of:
1) Someone runs into an interesting problem that can potentially be solved with ML/AI. They try to solve it for themselves.
2) "Hey! The model is kind of working. It's useful enough that I bet other people would pay for it."
3) They launch a paid API, SaaS startup, etc. and get a few paying customers.
4) Turns out their ML/AI method doesn't generalize so well. Reputation is everything at this level, so they hire some human workers to catch and fix the edge cases that end up badly. They tell themselves that they can also use it to train and improve the model.
5) Uh-oh, the model is underperforming, and the human worker pipeline is now some significant part of the full workflow.
6) Then someone writes an article about them using cheap human labor.
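The human-fallback pattern in step 4 can be sketched in a few lines. This is a hypothetical illustration, not any real startup's code; every name here (`handle_request`, `human_review`, the confidence threshold) is invented for the example.

```python
# Hypothetical sketch of the step-4 pattern: route low-confidence model
# outputs to a human review queue, and keep the human-corrected examples
# around as future training data.

CONFIDENCE_THRESHOLD = 0.9

training_examples = []   # human-corrected cases, saved for retraining
human_queue = []         # requests waiting on a human worker

def model_predict(request):
    # Stand-in for a real ML model: returns (label, confidence).
    # Here we just pretend short inputs are "easy" for the model.
    confidence = 0.95 if len(request) < 10 else 0.4
    return request.upper(), confidence

def human_review(request):
    # Stand-in for the human worker pipeline.
    return request.upper()

def handle_request(request):
    label, confidence = model_predict(request)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label, "model"
    # Edge case: fall back to a human, and record the correction
    # so the model can (in theory) be retrained on it later.
    human_queue.append(request)
    corrected = human_review(request)
    training_examples.append((request, corrected))
    return corrected, "human"
```

The step-5 failure mode is simply that `human_queue` ends up handling most of the traffic.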
> 5) Uh-oh, the model is underperforming, and the human worker pipeline is now some significant part of the full workflow.
AI stands for "Actually, Indians."
I’ve been reading the article about the failure of the new Siri, and this quote stuck with me:
>Apple's AI/ML group has been dubbed "AIMLess" internally
The article: https://www.macrumors.com/2025/04/10/chaos-behind-siri-revea...
This has been a running joke in several projects I have been involved in, each time apparently independently evolved. I never bring it up, but I am amused each time it appears out of the zeitgeist. It’s actually the best kind of ironic humor, the kind that exposes a truth and a lie at the same time, with just enough political incorrectness to get traction.
I can’t even count the number of times I have shut down “AI” projects where the actual plan was to use a labor pool to simulate AI, in order to create the training data to replace the humans with AI. Don’t get me wrong, it’s not a terrible idea for some cases, but you can’t just come straight out of the gate with fraud. Well, I mean, you could. But. Maybe you shouldn’t.
https://arstechnica.com/gadgets/2024/04/amazon-ends-ai-power...
holy crap, the backstory on Amazon Fresh. no wonder. meanwhile I thought they'd solved that through tagging or something. I guess I should've known - I had read the articles on how Scale AI had a ton of folks in the Philippines...
I always thought it stood for Almost Implemented
Or the more charitable Always Improving
2 replies →
Or it should be changed to MT -> Mechanical Turk
"Our bleeding edge AI/MT app..." does not sound bad at all.
It might fool the general public, but the moment "Mechanical Turk" is uttered, some of us would ask "is this done by humans?"
1 reply →
worth mentioning Amazon's amazing high-tech "put everything in your cart and just walk out" (https://www.businessinsider.com/amazons-just-walk-out-actual...)
they 100% use this AI "Actually Indians" technology
Destiny fan?
Anonymous
[flagged]
Try replacing it with Germans, and now it sounds like a praise, because the stereotype around Germans and Germany is that way.
Is your problem that their phrasing invites stereotyping, or that the stereotype it invites happens to be negative? Because if it's the latter, do you really think that's the semantic intention here?
9 replies →
It might offend people who are underpaying Indians
I’ll add that I’ve heard this before. And it was from an Indian guy and he thought it was absolutely hilarious.
1 reply →
Actually blacks? No, not particularly offensive, why?
I told this AI joke to my Indian friends and they all laughed and said "true". Get a life, stop being a tone policing hall monitor, IRL people off Twitter aren't as easily offended by innocent jokes as you might think.
3 replies →
[flagged]
It's the other way around - it's racist if you're a US American, because in the USA every problem is somehow ultimately attributed to or blamed on racism.
Elsewhere in the world, we'd call it xenophobia, or Indophobia if one has something against Indian people specifically.
Though in this case, it's driven primarily by economic stereotypes, coming from the country becoming a cheap services outsourcing destination for the West, so there should be a better term coined for it. The anti-Indian sentiment in IT seems to be the services equivalent of the common "Made in China = cheap crap" belief, and because it applies to services and not products, it turns into discrimination against people.
1 reply →
Nothing racist about it, India is essentially the #1 outsourcing destination. Not everything that involves an explicit mention of ethnicity / origin is racist.
3 replies →
It is reality. Reality cannot be racist. :P
5 replies →
You win the internets, sir.
> You win the internets, sir.
Yes, I did. By repeating someone else's apropos joke, I get to reap the sweet, sweet internet points.
downvoted for nostalgic use of Slashdot vernacular? we don't read these memetic cultural artifacts very often anymore. sometimes, you use them, just to keep them alive as a memento of a bygone era.
-- apologies --
> Reputation is everything at this level, so they hire some human workers to catch and fix the edge cases that end up badly.
The most important part of your reputation is admitting fault. Sometimes your product isn't perfect. Lying to your investors about automation rates is far worse for your reputation than just taking the L.
Literally every founder story disproves your theory
Okay, slight correction: lying to shareholders is okay until they start losing money. Then it's worse than admitting fault.
2 replies →
What are some actual examples of founders lying to their investors and getting away with it? I consistently see tech companies openly admit to losing insane amounts of money. OpenAI lost $5 billion last year on $3.7 billion in revenue and they're a $300 billion company????
1 reply →
And so then the moral impetus changes to “it is right to lie to your investors and clients and utilize underpaid manual labour where you claim you have intelligent machinery to get rich?”
It’s tens of millions of dollars lol.
The emperor just took off his socks and is starting to do a little uncoordinated “sexy” dance
The expectation is that the startup lies until they make it. It isn't too dissimilar to Theranos.
What is making it, in these cases?
Monopoly? IPO? Exit and leave the bags with someone else?
This is bloody absurd
Or Uber. Or Tesla. Or Amazon Go.
1 reply →
I'd think ambiguous statements about the scope of your AI would make it hard to prove fraud, if you were being careful at all. "Involving AI" could mean 1% AI.
So it's doubly surprising to me the government chose (criminal) wire fraud, not (civil) securities fraud, which would have a lower burden of proof.
Government lawyers almost never try to make their job harder than it has to be.
If you click through to the doj press release, they're saying the statements were pretty explicit.
Yeah, specifying an automation rate of 93-97% to investors when it's "effectively 0%" per your own executives... That's pretty egregious.
7 replies →
To be perfectly honest, I am more amazed that it was a valid business model and people were willing not just to invest in it, but to offer their rather personal information to an unaffiliated third party.
In this case it's a little bit worse; the "nate" app's automation rate was literally "effectively 0%," despite representations to investors of an "AI" automation rate of "93-97%" powered by "LSTMs, NLP, and RL." No ML model ever existed! [1]
See:
> As SANIGER knew, at the time nate was claiming to use AI to automate online purchases, the app’s actual automation rate was effectively 0%. SANIGER concealed that reality from investors and most nate employees: he told employees to keep nate’s automation rate secret; he restricted access to nate’s “automation rate dashboard,” which displayed automation metrics; and he provided false explanations for his secrecy, such as the automation data was a “trade secret.”
> SANIGER claimed that nate's "deep learning models" were "custom built" and use a "mix of long short-term memory, natural language processing, and reinforcement learning."
> When, on the eve of making an investment, an employee of Investment Firm-1 asked SANIGER about nate's automation rate, that is, the percentage of transactions successfully completed with nate's AI technology, SANIGER claimed that internal testing showed that "success ranges from 93% to 97%."
(from [1])
[1]: https://www.justice.gov/usao-sdny/media/1396131/dl?inline
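The "automation rate" the filing keeps quoting is just the share of transactions completed by the AI alone, with no human stepping in. A minimal sketch, with an invented `automated` flag as the assumption:

```python
# Hypothetical illustration of the "automation rate" metric from the
# DOJ filing: the fraction of transactions completed without a human.
# The 'automated' field name is invented for this example.

def automation_rate(transactions):
    """transactions: list of dicts with a boolean 'automated' flag."""
    if not transactions:
        return 0.0
    automated = sum(1 for t in transactions if t["automated"])
    return automated / len(transactions)

# Per the complaint, investors were told 93-97%; internally it was
# effectively 0%, i.e. nearly every transaction needed a human.
txns = [{"automated": False} for _ in range(100)]
```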
> Turns out their ML/AI method doesn't generalize so well.
I'd argue the opposite. AI typically generalizes very well. What it can't do well is specifics. It can't do the same thing over and over and follow every detail.
That's what's surprised me about so many of these startups. They're looking at it from the bottom up, something AI is uniquely bad at.
I think you're being excessively generous. According to the linked article,
> But despite Nate acquiring some AI technology and hiring data scientists, its app’s actual automation rate was effectively 0%, the DOJ claims.
Sometimes people are just dishonest. And when those people use their dishonesty to fleece real people, they belong in prison.
This is what we did internally. Someone said we could use LLMs for helping engineering teams solve production issues. Turned out it was just a useless tar pit. End game is we outsourced it.
Neither of these solved the problem that our stack is a pile of cat shit and needs some maintenance from people who know what the hell they are doing. It’s not solving a problem. It’s adding another layer of cat shit.
Going back further, a similar thing was done in 2017.
https://thespinoff.co.nz/the-best-of/06-03-2018/the-mystery-...
Interestingly this was a task that could probably be done well enough by AI now.
Not that these guys knew how close to reality they turned out to be. I assume they just had no idea of the problem they were attempting and assumed that it was at the geotagging-a-photo end of the scale when it was at the 'is it a bird' end.
Maybe I'm being overly optimistic in assuming people who do this are honestly attempting to solve the problem and fudging it to buy time. In general they seem more deluded about their abilities than planning a con from start to finish.
> its app’s actual automation rate was effectively 0%, the DOJ claims.
In that case, I believe it's a scam. 0% isn't some edge case.
tbh I don’t think anyone except investors cares how you deliver a service, as long as the quality and price are right.
Honestly I think the only real problem here is if you then raise further money claiming you've solved the problem when you haven't, which is also where this particular startup comes unstuck
> Uh-oh, the model is underperforming, and the human worker pipeline is now some significant part of the full workflow.
Tesla robots and Taxis enter the room...