Comment by mieubrisse

19 days ago

I was looking for exactly this comment. Everybody's gloating, "Wow, look how dumb AI is! Haha, schadenfreude!", but this seems like just a natural part of the evolution process to me.

It's going to look stupid... until the point it doesn't. And my money's on, "This will eventually be a solved problem."

The question, though, is the time horizon of “eventually”. Very different decisions should be made if it’s 1 year, 2 years, 4 years, 8 years, etc. To me it seems as if everyone is making decisions that are only reasonable if the time horizon is 1 year. Maybe they’re correct and we’re on the cusp. Maybe they aren’t.

Good decision making would weigh the odds of 1 vs 8 vs 16 years. This isn’t good decision making.

  • Or _never_, honestly. Sometimes things just don't work out. See, for example, the various 3D optical memory techs that were constantly about to take over the world but never _quite_ made it to being actually useful.

  • > This isn’t good decision making.

    Why is doing a public test of an emerging technology not good decision making?

    > Good decision making would weigh the odds of 1 vs 8 vs 16 years.

    What makes you think this isn't being done?

> It's going to look stupid... until the point it doesn't. And my money's on, "This will eventually be a solved problem."

AI can remain stupid longer than you can remain solvent.

  • Haha, I like your take!

    My variation was:

    "Leadership can stay irrational longer than you can stay employed"

Sometimes the last 10% takes 90% of the time. It'll be interesting to see how this pans out, and whether it will eventually get to something that could be considered a solved problem.

I'm not so sure they'll get there. If the solved problem is defined as sub-standard but low-cost, then I wouldn't bet against that. A solution better than that, though? I don't think I'd put my money on it.

  • You just inspired a thought:

    What if the goalpost is shifted backwards, to the 90% mark (instead of demanding that AI get to 100%)?

    * Big corps could redefine "good enough" as "what the SotA AI can do" and call it good.

    * They could then lay off even more employees, since the AI would be, by definition, Good Enough.

    (This isn't too far-fetched, IMO, given that we're already seeing calls for copyright violation to be classified as legal-when-we-do-it.)

People seem to be gloating because the message received in this period of the hype cycle is that AI is as good as a junior dev, without caveats, and is in no way supposed to be stupid.

To some people, it will always look stupid.

I have met people who believe that automobile engineering peaked in the 1960s, and they will argue that until you are blue in the face.