Comment by goolulusaurs

16 hours ago

The reality is that o1 is a step away from general intelligence and back towards narrow AI. It is great for solving the kinds of math, coding, and logic puzzles it has been designed for, but for many kinds of tasks, including chat and creative writing, it is actually worse than 4o. It is good at the specific kinds of reasoning tasks it was built for, much like AlphaGo is great at playing Go, but that does not mean it is more generally intelligent.

LLMs will not give us "artificial general intelligence", whatever that means.

  • AGI is currently an intentionally vague and undefined goal. This lets businesses work towards a goal whose parameters they define themselves, and bask in rocket-launch-style hype, all without leaving the vague umbrella of AI. It also lets businesses claim a double pursuit: not only are they building AGI, but all their work will surely benefit AI as well. How noble, right?

    Its vagueness is intentional and allows you to ignore the inconvenient truth and fill in the gaps yourself. You just have to believe it’s right around the corner.

    • "If the human brain were so simple that we could understand it, we would be so simple that we couldn’t." - without trying to defend such business practice, it appears very difficult to define what are necessary and sufficient properties that make AGI.

  • No, but they gave us GAI. The fact that they flipped the frame problem(s) upside down is remarkable but not often discussed.

  • In my opinion it's probably closer to real AGI than not. I think the missing piece is learning after the pretraining phase.

  • An AGI will be able to do any task any human can do. Or all tasks any human can do. An AGI will be able to get any college degree.

      > any task any human can do

      That doesn’t seem accurate, even if you limit it to mental tasks. For example, would a system need to be able to meditate, to mentally introspect itself like a human, or to describe its inner qualia, in order to constitute an AGI?

      Another thought: the way humans perform tasks is affected by involuntary aspects of the respective individual mind, in a way where the involuntariness itself is relevant (for example, being repulsed by something, or something not crossing one’s mind). If it is involuntary for the AGI as well, then it can’t perform tasks in all the different ways that different humans would. And if it isn’t involuntary for the AGI, can it really reproduce the way (all the ways) individual humans would perform a task?

      To put it more concretely: for every individual, there is probably a task that they can’t perform (with a specific outcome) that another individual can. If the same is true for an AGI, then by your definition it isn’t an AGI, because it can’t perform all tasks. On the other hand, if we assume it can perform all tasks, then it would be unlike any individual human, which raises the question of whether this is (a) possible, and (b) conceptually coherent to begin with.

  • It must be wonderful to live life with such supreme unfounded confidence. Really, no sarcasm, I wonder what that is like: to be so sure of something when many smarter people are not, when we don’t know how our own intelligence fully works or evolved, and when we don’t know if ANY lessons from our own intelligence even apply to artificial ones.

    And yet, so confident. So secure. Interesting.

    • Social media doesn't punish people for overconfidence. In fact social media rewards people's controversial statements by giving them engagement - engagement like yours.

  • I think it means a self-sufficient mind, which LLMs inherently are not.

    • What is "self-sufficient" in this case?

      Lots of debate since ChatGPT and Stable Diffusion can be summarised as A: "AI cheated by copying humans, it just mixes the bits up really small like a collage" B: "So like humans learning from books and studying artists?" A: "That doesn't count, it's totally different"

      Even though I am quite happy to agree that differences exist, I have yet to see a clear answer as to what people even mean when they assert that AI learning from books is "cheating", given that such learning is *mandatory* for humans in most places.

Yes, I don't understand their ridiculous AGI hype. I get it: you need to raise a lot of money.

We need to crack the code for updating the base model on the fly, or at least daily or weekly. Where is the regular learning by doing?

Not over the course of a year, spending untold billions to do it.

  • Technically, the models can already learn on the fly. It's just that the knowledge they can learn is limited to the context length. They cannot, to use the trendy word, "grok" it and internally adjust the weights in their neural network yet (the sketch after this sub-thread illustrates the difference).

    To change this you would either need to let the model retrain itself every time it receives new information, or give it such a great context length that there is no effective difference. I suspect even meat models like our brains still struggle to do this effectively and need a long rest cycle (i.e. sleep) to handle it. So the problem is inherently more difficult to solve than just "thinking". We may even need an entirely new architecture, different from the neural network, to achieve this.

    • Google just published a paper on a new neural architecture that does exactly that, called Titans.

    • > Technically, the models can already learn on the fly. Just that the knowledge it can learn is limited to the context length.

      Isn't that just improving the prompt to the non-learning model?
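
    To make the distinction in this sub-thread concrete, here is a minimal, hypothetical sketch: a toy linear model stands in for an LLM (nothing here comes from any real system). In-context "learning" only changes the input, so the knowledge vanishes once it leaves the context, whereas a gradient step actually changes the weights and persists.

    ```python
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    model = nn.Linear(4, 2)      # toy stand-in for an LLM: feature vector -> logits

    base_input = torch.randn(4)  # analogue of the user's question
    new_fact = torch.randn(4)    # analogue of new information

    # 1. In-context "learning": the fact only changes the input (the prompt).
    print("with fact in context:", model(base_input + new_fact))
    # Remove it from the input and nothing has persisted -- the weights never moved.
    print("fact forgotten:      ", model(base_input))

    # 2. Actual learning: a gradient step bakes the fact into the weights.
    target = torch.tensor([1.0, 0.0])  # what the fact implies the output should be
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    loss = nn.functional.mse_loss(model(new_fact), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()  # the knowledge now lives in the parameters
    print("after weight update: ", model(base_input))  # output changed even with the fact absent
    ```

    In this framing, the reply above is right: route 1 is literally just a bigger prompt. The open problem the parent comment describes is making route 2 cheap and safe enough to run daily, or on the fly.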

  • I understand the hype. I think most humans understand why a machine responding to queries like nothing before it in the history of mankind is amazing.

    What you’re going through is hype overdose. You’re numb to it. I can get it if someone disagrees, but it’s a next-level lack of understanding of human behavior if you don’t get the hype at all.

    There exist living human beings, whether children or people with brain damage, whose intelligence is comparable to an LLM’s, and we classify those humans as conscious but we don’t classify LLMs that way.

    I’m not trying to say LLMs are conscious, just that the creation of LLMs marks a significant turning point. We crossed a barrier two years ago somewhat equivalent to landing on the moon, and I am just dumbfounded that someone doesn’t understand why there is hype around this.

    • The first plane ever flies, and people think "we can fly to the moon soon!".

      Yet powered flight has nothing to do with space travel, no connection at all. Gliding through the air via low/high pressure doesn't mean you'll ever get near space with that tech, no matter how you try.

      AI and AGI are like this.

This is kind of true. I feel like the reasoning power of o1 is really only available on the kinds of math/coding tasks it was trained on so heavily.