Comment by atleastoptimal

19 hours ago

You seem to be making two separate claims: first, that it would be difficult to achieve AGI with current or proposed technology, and second, that it would be difficult to control AGI, making it too risky to use or deploy.

The second is a significant open problem (the alignment problem), and I'd wager it is a very real risk that companies need to take more seriously. However, whether it is feasible to control or direct an AGI toward reliably safe, useful outputs has no bearing on whether reaching AGI is possible via current methods. Current scaling gains and the rate of improvement (see METR's measurements of the time horizon of work an AI model can do reliably on its own) make it fairly plausible, at least more plausible than the flat denial that AGI is possible, which I see around here backed by very little evidence.