Comment by jcalx

2 years ago

I don't think Kaedim set out to rely so heavily on manual human labor (although given the bad actors in the space, I would not be terribly surprised if they did) — their HN post from last year [1] seems sincere and driven by an interesting, personally motivated problem. But the general trajectory this always seems to follow is:

1) Someone runs into an interesting problem that can potentially be solved with ML/AI. They try to solve it for themselves.

2) "Hey! The model is kind of working. It's useful enough that I bet other people would pay for it."

3) They launch a paid API, SaaS startup, etc. and get a few paying customers.

4) Turns out their ML/AI method doesn't generalize so well. Reputation is everything at this level, so they hire some human workers to catch and fix the edge cases the model handles badly. They tell themselves that the manual corrections can also be used to train and improve the model.

5) Uh-oh, the model is underperforming, and the human worker pipeline is now some significant part of the full workflow.

6) Then someone writes an article about them using cheap human labor.

Last point aside, this isn't a bad trajectory! You really can get to a point where you've automated most of your work, and there will (and arguably, should) always be some humans in the loop. And the manual work really can help you train the automation. But it's getting to that point that can be dicey, and that's why you have articles like these.
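
To make steps 4 and 5 concrete, here's a minimal sketch of that kind of human-in-the-loop fallback (hypothetical Python — I know nothing about Kaedim's actual pipeline): route low-confidence model output to a human worker, and log the correction in the hope of retraining later.

    # Hypothetical sketch of the step-4/5 pipeline described above; none
    # of these names come from Kaedim or the article.
    import random

    CONFIDENCE_THRESHOLD = 0.9   # assumed cutoff for "ship it unreviewed"
    human_review_queue = []      # edge cases waiting for a worker
    training_log = []            # human corrections saved for retraining

    def run_model(request):
        """Stand-in for the ML model: returns (output, confidence)."""
        return {"result": f"auto({request})"}, random.random()

    def human_fix(request, draft):
        """Stand-in for the manual labor pipeline."""
        return {"result": f"manual({request})"}

    def handle_request(request):
        output, confidence = run_model(request)
        if confidence >= CONFIDENCE_THRESHOLD:
            return output  # the automated happy path of steps 2-3
        # Step 4: an edge case the model handles badly goes to a human.
        human_review_queue.append(request)
        corrected = human_fix(request, output)
        # "The manual work really can help you train the automation":
        training_log.append({"input": request, "label": corrected})
        return corrected

    if __name__ == "__main__":
        for req in ("chair", "teapot", "spaceship"):
            handle_request(req)
        # Step 5 is when this fraction quietly stops shrinking:
        print(f"{len(training_log)}/3 requests needed a human")

Step 5 is just the fraction of requests hitting the human path creeping up instead of down.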

[1] https://news.ycombinator.com/item?id=30552988

Theranos ran the same playbook, but instead of AI models and human labor, it was blood tests and competitors' lab machines.

I don't see why it's so difficult to be up front about practical limitations and avoid getting in trouble between steps 3 and 4.

  • I agree. There are ways not to slide down the slippery slope. Open-sourcing the project at step 2 sidesteps the whole thing, as does being transparent and realistic about practical limitations.

    But in large part it is the psychology of being an "ML/AI startup" that is the trap — thinking SaaS is about the software and not about the service. Then everything else is secondary to the holy algorithm. Manual human labor is seen as just a stopgap measure until the automation is perfected, and to acknowledge that at all is tantamount to admitting imperfection, and thus failure.

    Theranos is an excellent non-software example. Presumably at some point in her life, Holmes really did want to make blood tests more convenient for patients. But Theranos' eventual obsession with the Edison device made them willing to sacrifice more and more on its altar — money, credibility, patients' safety — until it destroyed them utterly.

  • The difficulty is that in a VC world, admitting you will more-or-less permanently need humans in the loop kills the margin and scalability story. At Google/Meta, the number of SERP raters and content moderators is only a few tens of thousands, serving a population of billions, and even that only after major success. But at Uber or DoorDash, every sale requires a human in the loop from the start. It's better to be in the first category than the second. As of now, AI startups are seen as being in the first, "pure software" category, so their margins are expected to be sky high. Of course, the risks here are hardware costs (which will likely come down within the decade for reasonably useful and general models) and humans in the loop (which will likely remain a permanent fixture, given the hallucinations and opaqueness of the current generation of LLMs).

Theranos set out without any ability to do what they claimed to do, and lied about it to investors, customers, and the press. A 3D model outsourcing shop isn't going to raise a lot of investment money compared to an "AI" startup.