Comment by snowwrestler

2 years ago

The insight here is that the speed of correction is a crucial component of the perceived long-term value of an interface technology.

It is the main reason that handwriting recognition did not displace keyboards. Once the handwriting is converted to text, it’s easier to fix errors with a pointer and keyboard. So after a few rounds of this, most people start thinking: might as well just start with the pointer and keyboard and save some time.

So the question is, how easy is it to detect and correct errors in generative AI output? And the unfortunate answer is that unless you already know the answer you’re asking for, it can be very difficult to pick out the errors.

I think this is a good rebuttal.

Yeah, the feedback loop with consumers has a higher likelihood of being detrimental, so even if the iteration rate is high, each step potentially carries a high cost.

I think the current trend is to nerf the models or otherwise put bumpers on them so people can’t hurt themselves. That approach is brittle at best, and someone with more risk tolerance (OpenAI) will exploit that risk gap.

It’s a contradiction at best, and depending on the level of unearned trust created by the misleading marketing, it will certainly lead to some really odd externalities.

Think “man follows Google Maps directions into pond,” but for vastly more things.

I really hated marketing before, but yeah, this really proves the warning I make in the AI addendum to my scarcity theory (in my bio).