
Comment by atleastoptimal

2 years ago

It’s harder than we thought, so we leveraged machine learning to grow it rather than creating it symbolically. The leaps in the last 5 years are far beyond anything in the prior half century, and they make predictions of near-term AGI much more than a “boy who cried wolf” scenario to anyone really paying attention.

I don’t understand how your second paragraph follows. It just seems to be whining that text and art generative models are easier to build than a fully fledged servant humanoid, which follows naturally from training data availability and deployment cost.

> I don’t understand how your second paragraph follows. It just seems to be whining that text and art generative models are easier to build than a fully fledged servant humanoid, which follows naturally from training data availability and deployment cost.

No, it's pointing out that "text and art generative models" are far less useful [1] than machines even just a little smarter at boring, ordinary work, which would relieve real, normal people from drudgery.

I find it rather fascinating that one could fail to understand that.

___

[1]: At least to humanity as a whole, as opposed to Silicon Valley moguls, oligarchs, VC-funded snake-oil salesmen, and other assorted "tech-bros" and sociopaths.

  • > No, it's pointing out that "text and art generative models" are far less useful [1] than machines even just a little smarter at boring, ordinary work, which would relieve real, normal people from drudgery.

    That makes no sense. Is AlphaFold less useful than a minimum-wage worker because AlphaFold can’t do dishes? The past decades of machine learning have revealed that the visual-spatial capacities commonplace to humans are difficult to replicate artificially. That doesn’t mean the things AI can do well are necessarily less useful than the simple hand-eye coordination that is beyond its current means. Intelligence and usefulness aren’t a single dimension.