
Comment by mdp2021

2 months ago

> doesn't mean there's a known viable pathway [...] solution [...] BTW seems

It means it must be researched with high commitment. // LLMs are an emergency patch at best (I would not call them «decent»; they are inherently crude). This is why I insist they must be overcome with urgency, precisely because they are already here (if a community needed wits and a lunatic appears, wits become all the more needed). // And no, I am not «crossing»: but people do that, hence I am stating an urgency.

We do not need to simulate the brain; we only ("only") need to implement intelligence. That means the opposite of repeating hearsay: it means checking every potential utterance and storing the results (and also reinforcing the pathways that led the system to sophisticated thoughts and conclusions).

It is not given that LLMs cannot be part of such a system. They surely produce plenty of provisional utterances to criticize.
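
To make that loop concrete, here is a minimal sketch in Python, assuming a hypothetical generate/check interface over a toy arithmetic domain. Every name below (`generate`, `check`, the sources, the weights) is an illustrative assumption, not anything specified in the thread:

```python
import random

# Hypothetical sketch of "check every utterance, store results,
# reinforce successful pathways". The toy arithmetic domain and all
# names are illustrative assumptions, not from the comment above.

verified: dict[str, bool] = {}          # stored check results
weight = {"llm": 1.0, "rules": 1.0}     # reinforced proposal pathways

def generate(source: str) -> str:
    """Stand-in proposal generator; an LLM would slot in here."""
    pool = {"llm": ["2+2=4", "2+2=5"], "rules": ["3+3=6"]}
    return random.choice(pool[source])

def check(claim: str) -> bool:
    """Stand-in verifier: trivially evaluates toy arithmetic claims."""
    lhs, rhs = claim.split("=")
    return eval(lhs) == int(rhs)        # toy domain only

for _ in range(20):
    sources = list(weight)
    src = random.choices(sources, [weight[s] for s in sources])[0]
    claim = generate(src)
    ok = check(claim)                   # check the potential utterance
    verified[claim] = ok                # store the result
    if ok:
        weight[src] *= 1.1              # reinforce the successful pathway

print(verified)
print(weight)
```

The only point of the sketch is the shape of the loop: proposals from any source (an LLM included) are checked before being stored as knowledge, and sources that keep producing verified claims are reinforced.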

There's already a lot of research happening on the next thing. The AGI race has been on not only among companies but also among the largest nations. Everybody's doing their best and then some.

It could very well be that the highest intellectual functions require a closely brain-like substrate. We don't know yet, but we'll get there eventually. And it is very likely to be something emergent, not a set of specifically programmed features, as you seem to be insinuating with "... checking [...] and storing results..."

  • > as you seem to be insinuating with

It is the implementation details that are not clear, not the goals.

    I never said that the feature has to be coded explicitly. I said it has to be there.