Comment by nick32661123

5 hours ago

> Our only hope is that AI in the long run is both powerful and benevolent enough to be its own "whistleblower" in cases of misuse.

I struggle so hard with this anthropomorphism of LLMs. At the end of the day it's a statistical next-token predictor trained by gradient descent, with a bunch of "shit" bolted on top to try to steer outputs in a specific way.

They don't have an actual concept of "benevolent"... or a concept of anything at all. Given an input, they walk down a path of "what is the most probable next token to output," and that's fucking it, with the bolted-on shit nudging those outputs a bit.
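To make the point concrete, here's a toy sketch of the loop being described: repeatedly pick the most probable next token, with a bias term "bolted on" to steer the choice. A real LLM replaces the count table with a neural net trained by gradient descent, and the steering with things like RLHF or logit biases, but the decoding loop has the same shape. Every name and number below is made up for illustration.

```python
# Hypothetical bigram "model": next-token counts given the current token.
BIGRAM_COUNTS = {
    "the": {"cat": 5, "dog": 3, "end": 1},
    "cat": {"sat": 1, "ran": 2},
    "dog": {"ran": 3, "sat": 1},
    "sat": {"the": 2, "end": 3},
    "ran": {"the": 1, "end": 4},
}

# The "bolted-on" part: a bias that steers outputs toward preferred
# tokens, loosely analogous to a logit bias or fine-tuning pressure.
STEERING_BIAS = {"sat": 2}

def next_token(current: str) -> str:
    """Return the highest-scoring next token after steering is applied."""
    candidates = BIGRAM_COUNTS.get(current, {"end": 1})
    return max(candidates,
               key=lambda t: candidates[t] + STEERING_BIAS.get(t, 0))

def generate(start: str, max_len: int = 10) -> list[str]:
    """Greedily roll out tokens until 'end' or the length cap."""
    tokens = [start]
    while tokens[-1] != "end" and len(tokens) < max_len:
        tokens.append(next_token(tokens[-1]))
    return tokens

print(generate("the"))  # prints ['the', 'cat', 'sat', 'end']
```

Note that after "cat", the raw counts favor "ran" (2 vs 1), but the steering bias flips the choice to "sat": the model's statistics haven't changed, only the score used to pick the output. Nowhere in any of this is there a concept of anything.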

I don't doubt that at some point there will be some other AI leap, but I'm not even sure it'll be built on this foundation.

What really needs to be developed is an actual artificial brain of sorts. Much like an infant learning language from first principles, a real AI would have a phase of continuous growth, forming actual memories and being able to reflect on them. I daresay context windows are not that.

I'd really encourage everyone to pump the brakes a bit and look at how these things actually work, and what they actually are. There is a reason sama is pivoting away from video et al. and into corporate software coding, much like Anthropic.