Comment by stevenjgarner

2 years ago

While we have a strong grasp of the fundamental algorithms and architectures that power generative LLMs, many nuances of their emergent behavior, generalization, and internal decision-making remain poorly understood.

How can we pursue the goal of "Safe Superintelligence" when we do not understand what is actually going on?