Comment by weego

2 years ago

Honestly, what does it matter? We're many lifetimes away from anything. These people are trying to define concepts that don't apply to us or to anything we're currently capable of.

AI safety and AGI discourse are just a form of tech philosophy at this point; all of this is academic grift, just with mainstream attention and backing.

This goes massively against the consensus of experts in the field. Per the survey below, the aggregate forecast of AI researchers puts a 50% chance of "high-level machine intelligence", roughly AGI, by 2047. Given the rapid pace of development since then, timelines would likely be even shorter if the survey were run today.

https://www.vox.com/future-perfect/2024/1/10/24032987/ai-imp...

  • I am in the field. That supposed consensus is manufactured by a few loudmouths. No serious front line researcher I know believes we're anywhere near AGI, or will be in the foreseeable future.

    • So the researchers at Deepmind, OpenAI, Anthropic, etc, are not "serious front line researchers"? Seems like a claim that is trivially falsified by just looking at what the staff at leading orgs believe.

  • I don't understand how you got 2047. For the 2022 survey:

        - "How many years until you expect: - a 90% probability of HLMI existing?" 
        mode: 100 years
        median: 64 years
    
        - "How likely is it that HLMI exists: - in 40 years?"
        mode: 50%
        median: 45%
    

    And from the summary of results: "The aggregate forecast time to a 50% chance of HLMI was 37 years, i.e. 2059".

Many lifetimes? As in upwards of 200 years? That's wildly pessimistic if so; imagine predicting today's computer capabilities even one lifetime ago.

> We're many lifetimes away from anything

ENIAC was built in 1945; that's roughly a lifetime ago. Just think about it.