You do understand that these models are not sentient and are shaped by hundreds of internal prompts, their weights, and a training set, right?
They can't generate knowledge that isn't in their corpus, and the act of prompting (yes, even with agents, ffs) is more akin to playing pachinko than playing pool.