Comment by dathinab

6 months ago

I _hope_ AGI is not right around the corner; for socio-political reasons we are absolutely not ready for it, and it might push the future of humanity into a dystopian abyss.

but also, just taking what we have now with some major power-usage reductions and minor improvements here and there already seems like something which can be very usable/useful in a lot of areas (and to some degree we aren't even really ready for that either, but I guess that's normal with major technological change)

it's just that for the companies creating foundation models it's quite unclear how they can recoup their already-spent costs without either a major breakthrough or forcefully (or deceptively) pushing the tech into a lot more places than it fits into

When it comes to recouping costs, a lot of people don't consider the insane depreciation expense brought on by the up to $1 trillion (depending on which estimate) that has been invested in AI buildouts. That depreciation expense alone could easily exceed the combined revenue of all AI companies; a back-of-the-envelope version is below.
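
A rough sketch of that arithmetic (the ~$1T total and the five-year straight-line schedule are assumptions for illustration, not reported figures):

```python
# Back-of-the-envelope: straight-line depreciation on AI buildout spend.
# Both inputs are illustrative assumptions, not reported financials.
capex = 1_000_000_000_000   # assumed ~$1T invested in AI buildouts
useful_life_years = 5       # datacenter gear is often depreciated over ~4-6 years

annual_depreciation = capex / useful_life_years
print(f"annual depreciation: ${annual_depreciation / 1e9:.0f}B")  # -> $200B

# Combined AI revenue would need to exceed this just to cover depreciation,
# before any other operating costs.
```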

If birth rates are as much a cause for concern as many people seem to think, and we absolutely need to solve that problem (instead of solving, for instance, the fact that the economy purportedly requires exponential population growth forever), perhaps we should hope that AGI comes soon.

  • We're not in a stable state now; it's not about population growth, it's about not literally dying out.

    With a birth rate of 1, the population will halve every generation. That's an apocalyptic scenario, incompatible with industrial civilization.

    • I am worried about what will happen to various nations' economies relatively soon, long before the population actually halves, but I'm not worried that the fertility rate would continue on its trend as demographics change. Ignoring the potential second-order effects of economic collapse, wars over resources, etc., I think fertility rate would stabilize given that culture and genetics would by definition quickly become dominated by the people who do reproduce.

      1 reply →

I think it's rather easy for them to recoup those costs: if you can disrupt some industry with a full-AI company with almost no employees and outcompete everyone else, that's free money for you.

  • I think they are trying to do something like this(1) by, long term, providing a "business suite", i.e. something comparable to G Suite or Microsoft 365.

    For a lot of the things which work well with current AI technology, it's super convenient to have access to all your customers' private data (even if you don't train on it; e.g. RAG systems for information retrieval are one of the things that already work quite well with the current state of LLMs). Having all the relevant information in the LLM's context window instead of relying on its "learned" training data generally gets you better results, and it lets you compensate for hallucinations and the LLM's lack of understanding by providing (working) links to, or included snippets of, the sources the information came from. RAG systems already worked well for some information-retrieval products even without LLMs. (A minimal sketch of the retrieval flow is below.)

    And the thing is, if your users have to manually upload all potentially relevant business documents, you can't really make it work well. But what if they upload all of them to your company anyway, because they use your company's file-sharing/drive solution?

    And let's not even consider the benefits you could get from a cheaper plan where you are allowed to train on the company's data after anonymizing it (aimed at micro companies; too many people think "they have nothing to hide", and it's anonymized, so it's okay, right? (no)). Or from going rogue and just stealing trade secrets to then break into other markets; it's not like some bigger SF companies have been found doing exactly that (I think it was Amazon/Amazon Basics).

    (1:) Though in that case you still have employees, until your AI becomes good enough to write all your code instead of "just" being a tool for developers to work faster ;)
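
    To make the RAG point concrete, here's a minimal sketch of the flow; everything is illustrative, and toy word-overlap scoring stands in for the embeddings and vector index a real system would use:

    ```python
    # Minimal RAG sketch: retrieve relevant snippets, put them in the prompt.
    # Toy word-overlap scoring; real systems use embeddings + a vector index.
    documents = {
        "contract_acme.txt": "Acme contract renews every January, 30-day notice.",
        "pricing_2024.txt": "Enterprise tier is $49/seat/month, billed yearly.",
    }

    def retrieve(query: str, k: int = 1) -> list[tuple[str, str]]:
        words = set(query.lower().split())
        scored = sorted(
            documents.items(),
            key=lambda kv: len(words & set(kv[1].lower().split())),
            reverse=True,
        )
        return scored[:k]

    query = "When does the Acme contract renew?"
    context = "\n".join(f"[{name}] {text}" for name, text in retrieve(query))
    # Keep the facts in-context and cite source files, so the model answers
    # from retrieved text instead of whatever it memorized during training.
    prompt = f"Answer using only these sources, citing them:\n{context}\n\nQ: {query}"
    print(prompt)
    ```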

  • Possibly but not necessarily. Competition can erode all economic rents, no matter how useful a product is.

Must "AGI" match human intelligence exactly or would outperforming in some functions and underpformin in others qualify?

  • For me, "AGI" would start with being able to reliably perform simple open-ended tasks without needing any specialized aid or tooling. Not necessarily very well, just being capable of it in the first place.

    For a specific example of what I mean, there's Vending-Bench - even very 'dumb' humans could reliably succeed on that test indefinitely, at least until they got terminally bored of it. Current LLMs, by contrast, are just fundamentally incapable of that, despite seeming very 'smart' if all you pay attention to is their eloquence.

    • If someone handed you an envelope containing a hidden question, and your life depended on a correct answer, would you rather pick a random person out of the phone book or an LLM to answer it?

      On one hand, LLMs are often idiots. On the other hand, so are people.

      4 replies →

  • At the very least, it needs to be able to collate training data, design, code, train, fine-tune and "RLHF" a foundation model from scratch, on its own, and have it show improvements over the current SOTA models before we can even begin to have the conversation about whether we're approaching what could be AGI at some point in the future.

  • no, it doesn't have to; it just has to be "general"

    as in, it can learn by itself to solve any kind of generic task it can practically interface with (at least any that isn't way too complicated)

    to some degree LLMs can theoretically do so, but:

    - learning (i.e. training them) is way too slow and costly

    - domain adaptation (learning later on) often has a ton of unintended side effects (like forgetting a bunch of important, previously learned things)

    - they can't really learn by themselves in an interactive manner

    - "learning" by e.g. retrieving data from a knowledge database and including it in answers (e.g. RAG) isn't really learning, just information retrieval, and it also has issues with context windows and planning

    I could imagine OpenAI, in the not-too-distant future, putting together multiple LLMs + RAG + planning systems etc. to create something which technically could be called AGI, but which isn't really the breakthrough people associate with AGI. (A toy sketch of such a composition is below.)
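
    Purely as illustration, a hypothetical sketch of that kind of composition; `call_llm` and `retrieve` are stubs standing in for a real model API and knowledge base:

    ```python
    # Hypothetical planner loop: LLM + retrieval glued together.
    # call_llm / retrieve are stubs, not any real API.
    def call_llm(prompt: str) -> str:
        return "DONE: (stubbed model output)"

    def retrieve(query: str) -> str:
        return f"(knowledge-base snippets relevant to: {query})"

    def run_agent(task: str, max_steps: int = 5) -> str:
        scratchpad = ""  # "memory" lives in the prompt, not in the weights
        for _ in range(max_steps):
            step = call_llm(
                f"Task: {task}\nNotes so far: {scratchpad}\n"
                f"Relevant info: {retrieve(task + scratchpad)}\n"
                "Plan and take the next step, or reply 'DONE: <answer>'."
            )
            if step.startswith("DONE:"):
                return step.removeprefix("DONE:").strip()
            scratchpad += "\n" + step
        return "gave up"

    print(run_agent("summarize our renewal obligations"))
    ```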

  • Where would you draw the line? Any ol' computer outperforms me in doing basic arithmetic.

    • I'd suggest anything able to match a professional doing knowledge work. Original research from recognisably equivalent cognition, or equal abilities with a skilled practitioner of (eg) medicine.

      This sets the bar high, though. I think there's something to the idea of being able to pass for human in the workplace. That's the real, consequential outcome here: AGI genuinely replacing humans, without need for supervision. That's what will have consequences. At the moment we aren't there (pre-first-line support doesn't count).

    • This is a question of how we quantify intelligence, and there aren’t many great answers. Still, basic arithmetic is probably not the right guideline for intelligence. My guess has always been that it’ll lie somewhere in the ability to think critically, which they still have not even attempted, because it doesn’t really work with LLMs as they’re structured today.

  • That would be human; I've always understood the General to mean 'as if it's any human', i.e. perhaps not absolute mastery, but trained expertise in any domain.

[flagged]

  • "that fool" created a $1.8 trillion company.

    • He created a company that tracks and profiles people, psychologically manipulates them, and sells ads. And has zero ethical qualms about the massive social harm they have left in their wake.

      That doesn't tell me anything about his ability to build "augmented reality" or otherwise use artificial intelligence in any way that people will want to pay for. We'll see.

      Ford and GM have a century of experience building cars but they can't seem to figure out EVs despite trying for nearly two decades now.

      Tesla hit the ball out of the park with EVs but can't figure out self-driving.

      Being good at one thing does not mean you will be good at everything you try.

      3 replies →

    • I’m always fascinated when someone equates profit with intelligence. There are many very wealthy fools and there always have been. Plenty of ingredients to substitute for intelligence.

      Neither necessary nor sufficient.

      9 replies →

    • Aren't there enough examples of successful people who are complete buffoons to nuke this silly trope from orbit? Success is no proof of wisdom or intelligence or whatever.

      17 replies →

    • past performance does not guarantee future results

      also, great for Wall Street, a mixed bag for us, the people

    • $1.8 trillion in investor hopes and dreams, but of course they make zero dollars in profit, don’t know how to turn a profit, don’t have a product anyone would pay a profitable amount for, and have yet to show any real-world use that isn’t kinda dumb because you can’t trust anything it says anyways.

      4 replies →

what socio-political reasons? can you name some of them? we are 100% ready for AGI.

  • Are you ready to lose your job, permanently?

    • > Are you ready to lose your job, permanently?

      You're asking the wrong question and, predictably, some significant portion of people are going to answer "yes".

      Better to ask "Are you ready to starve to death?", which is a more accurate version of "Are you ready to lose your income, permanently?"