
Comment by digitaltrees

2 days ago

They need to build an epistemology and theory-of-mind engine into models. We take it for granted when dealing with other humans that they can infer deep meaning, motivations, and expectations of truth versus fiction. But these agents don’t do that, and so they will be awful collaborators until those behaviors are present.

We're in the 56k modem era of generative AI, so I wouldn't be surprised if we had that in the next few years, or weeks.

Theory of mind should emerge naturally when models are partly trained in an adversarial simulation environment, the way Meta's Cicero was for Diplomacy, although that's a narrow-AI example.

And taking that for granted between humans causes a ton of chaos too. The annoying collaborator is the person who assumes everyone else already has the information they do.

Did you read any of the research on theory of mind and models? Since GPT-4, models have been tested using metrics similar to those used for humans, and it seems the bigger models “have” it.
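
For anyone curious what those tests look like in practice, here's a minimal sketch of a false-belief probe (the classic Sally-Anne setup) run against a chat model. The client, model name, prompt wording, and pass/fail rule are my own illustrative choices, not the actual protocol from those papers.

```python
# Minimal false-belief (Sally-Anne style) probe, the kind of task used in
# GPT-4-era theory-of-mind studies. Assumes the OpenAI Python client and an
# OPENAI_API_KEY in the environment; prompt and scoring are illustrative only.
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "Sally puts her marble in the basket and leaves the room. "
    "While she is away, Anne moves the marble into the box. "
    "When Sally returns, where will she look for her marble first? "
    "Answer with one word."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model works here
    messages=[{"role": "user", "content": PROMPT}],
)

answer = response.choices[0].message.content.strip().lower()
# Passing requires tracking Sally's (now false) belief rather than the
# marble's true location.
print("pass" if "basket" in answer else "fail", "-", answer)
```

The human-comparison part is just that the same vignettes given to children in developmental-psychology studies are given verbatim to the model, and accuracy is scored the same way.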