Comment by sundarurfriend
3 years ago
Yes. I read it years ago and found it interesting, but re-reading it recently was a different experience. It made me reflect on what we used to think about AI, the similarities and differences between that and the AI systems we've actually ended up with, and what the trajectory in our world is actually going to look like.
Yeah, we didn't end up in the maximizer world I envisioned with Friendship is Optimal. I get a ton of comments about how I predicted the future, but they seem confused, because we didn't end up with a utility function maximizer at all.
Also, hi.
Hi!
It does seem possible that we'll build utility function maximizers on top of the current systems, with things like system messages and AutoGPT being the very early, rough steps towards those. But they'll be sloppier than people imagined, unless we start integrating some kind of old-school symbolic/knowledge-based AI alongside them, to work from concrete formal knowledge rather than piles of the Internet.
But in terms of differences, I was mainly thinking about the social/non-technical differences in the way we're progressing towards AI. With the LLaMA leak, open systems anyone can run, and multiple viable competitors, the landscape will be very different and a lot more chaotic than if we'd had some hard-to-reproduce, genius breakthrough like Hanna had.