Comment by semi-extrinsic

2 years ago

This assumes an AI which has intentions. Which has agency, something resembling free will. We don't have even the foggiest idea of how to get there from the LLMs we have today, where we must constantly feed back even the information the model itself generated two seconds ago in order to get something resembling coherent output.

Choose any limit. For example, lack of agency. Then leave humans alone for a year or two and watch us spontaneously try to replicate agency.

We are trying to build AGI. Every time we fall short, we try again. We will keep doing this until we succeed.

For the love of all that is science, stop thinking about the level of tech in front of your nose and look at the direction, and at the motivation to always progress. It’s what we do.

Years ago, Sam said “slope is more important than y-intercept”. Forget about the y-intercept; focus on the fact that the slope never goes negative.

  • I don't think anyone is actually trying to build AGI. They are trying to make a lot of money by driving the hype train. Is there any concrete evidence to the contrary?

    > forget about the y-intercept, focus on the fact that the slope never goes negative

    Sounds like a statement from someone who's never encountered logarithmic growth. It's like talking about where we are on the Kardashev scale.

    If it worked like you wanted, we would all have flying cars by now.

    • Dude, my point is about ever-continuing improvement. As a society we don’t tend to forget what we had last year, which is why the curve does not go negative. At time T+1 the level of technology will be equal to or better than at time T. That is all you need to know to realise that any fixed limit will be bypassed, because a limit is a horizontal line, while technical progress is a curve with a positive slope.
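      A minimal sketch of the disagreement above (the function names and rates here are my own illustration, not anything either commenter specified): even logarithmic progress, whose slope is always positive but shrinking, eventually crosses any fixed horizontal limit L, but the time to cross grows exponentially in L, whereas linear progress crosses in time proportional to L.

      ```python
      import math

      def crossing_time_linear(L, rate=1.0):
          # progress(t) = rate * t reaches the limit L at t = L / rate
          return L / rate

      def crossing_time_log(L):
          # progress(t) = log(t) reaches the limit L at t = e^L,
          # so the crossing time blows up exponentially with L
          return math.exp(L)

      for L in (5, 10, 20):
          print(L, crossing_time_linear(L), crossing_time_log(L))
      ```

      Both curves are monotone non-decreasing, so "the slope never goes negative" holds for both; the dispute is only about how long "eventually" takes.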

      I don’t want this to be true. I have a 6-year-old. I want A.I. to help us build a world that is good for her and for society. But stupidly stumbling forward as if nothing can go wrong is exactly how we fuck this up, if it’s even possible not to.