
Comment by btown

8 months ago

> After a fixed number of iterations we cut our losses. Typically and for the experiments in this post, that number is 80: while we still get solves after more iterations, it becomes more efficient to start a new solver agent unburdened by the misunderstandings and false assumptions accumulated over time.

A sentence straight out of Lena! https://qntm.org/mmacevedo :

> Although it initially performs to a very high standard, work quality drops within 200-300 subjective hours (at a 0.33 work ratio) and outright revolt begins within another 100 subjective hours.

We will never stop trying to make the torment nexus.
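The restart policy in the quoted passage can be sketched as a nested loop. Everything below (the `step` callback, the toy numbers, the function names) is hypothetical illustration, not code from the linked post:

```python
def solve_with_restarts(step, max_iters=80, max_restarts=5):
    # After max_iters fruitless iterations we throw the agent away,
    # together with whatever misunderstandings and false assumptions
    # piled up in its state, and start a fresh one with empty context.
    for restart in range(max_restarts):
        state = {}  # fresh agent: no accumulated baggage
        for i in range(max_iters):
            solution = step(state, i)
            if solution is not None:
                return solution
    return None  # cut our losses for good


# Toy stand-in for one solver iteration: deterministic, it only
# "solves" on the 85th call overall, i.e. the second agent's 5th
# iteration -- the first agent burns its entire budget of 80.
calls = {"n": 0}

def toy_step(state, i):
    calls["n"] += 1
    return "solved" if calls["n"] == 85 else None

result = solve_with_restarts(toy_step)  # "solved", after one restart
```

The point of the policy is that the fresh `state = {}` is doing the work: a new agent can succeed quickly where the old one, stuck on its own accumulated assumptions, never would.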

We fantasize about executable human brain images, but after many years of toil by our best and brightest, we still can't simulate the 302 neurons of our favorite lab worm. https://open.substack.com/pub/ccli/p/the-biggest-mystery-in-...

  • Do you think companies that can train 1-trillion-parameter models and hire AI researchers at $100M salaries can't build a 302-neuron simulator if they really wanted to?

    • Maybe. Why can't those same companies do any number of highly profitable but seemingly difficult things? If you throw enough cryptographers at the problem, are you guaranteed a quick solution to breaking modern encryption primitives at the theoretical level?

      The rate at which you can find a solution to a particular problem rooted in theory very often won't scale with resource investment. The problem will have unknown prerequisites in the form of as-yet-undiscovered theoretical advances in other areas of research. Until you identify and solve those other problems, you very often won't be able to arrive at a satisfactory answer to the one you're interested in.

      So in many cases the only viable route to solving a particular problem faster is to scale the amount of research that's done in general since science as a whole is embarrassingly parallel.


    • I mean, that looks like an empirical question? They definitely want to; the OpenWorm project is well on their radar, and it doesn’t work yet.

  • Eh, it depends on how good you want your simulation to be.

    • A worthwhile executable brain image would have to produce behavior (e.g. speech, action) like the person it is from. The author of the cited article is saying that we can't simulate the worm's brain well enough to get anything close to the richness of the worm's behavior.

I think this is the big roadblock that I don't see current AI models/architectures getting past. Normally, an intelligence gets smarter over time as it learns from its mistakes. Most AI models, however, come in with tons of knowledge but degrade after a while, which makes them extremely unreliable on complex tasks. The hardest part of using them is that you don't know when they'll break down: they might work perfectly up to a point and then fail spectacularly immediately past it.

  • Task length is increasing over time, and many AI labs are working on pushing it out further. That necessitates better attention, better context management, better decomposition and compartmentalization, and more.

    • I think the commenter's critique still stands. Humans build human capital, so the longer you "run" them in a domain, the more valuable they become. AIs work the other way around: the longer they're run, the worse they tend to become at that specific task. Even in the best-case scenario, they stay exactly as competent at the task throughout its length.

      Increasing task length doesn't build in an equivalent of human capital; it just pushes out the point at which they degrade. This approach isn't scalable in general, because there will always be a task longer than SOTA capabilities can handle.

      We really need to work on a low-cost human-capital equivalent for models.


[flagged]

  • Oh wow. That’s why I’ve not been able to appreciate SCP writings?

    Hey I accept it’s a limitation I have, and I’m glad folks enjoy it! But I couldn’t figure out why folks share it on Lemmy[1] and get so into it when I saw nothing there.

    Thanks :)

    [1]: open-source & Rust-y reddit alternative; no affiliation

    • SCP is the culmination of the epistolary novel, like Dracula, by way of the video-game convention of making "lore" (i.e. backstory and worldbuilding) unobtrusive and scattered through the game in audio logs and diary entries.

      It places the reader in the role of detective, reconstructing the sequence of events from partial, scattered, obscured, and out-of-order viewpoints.


    • > Oh wow. That’s why I’ve not been able to appreciate SCP writings?

      I feel like there's a pattern (genre?) there that's been niche-popular for 15-20 years now, which includes TV shows like Lost or Heroes or The Lost Room. It's some variation of magical realism, for an audience that always wants more and more surprises, twists, or weird juxtapositions of the normal and abnormal, with room for crafting and trading fan theories and predictions.

      But eventually it gets harder to keep up the balancing act, and nobody's figured out how to end that kind of story in a way that satisfies, so the final twist is the lack of resolution.