Comment by thomassmith65
12 hours ago
If a chatbot that can carry on an intelligent conversation about itself doesn't have a 'semblance of consciousness', then the word 'semblance' is meaningless.
Reply, 12 hours ago
> If a chatbot that can carry on an intelligent conversation about itself doesn't have a 'semblance of consciousness', then the word 'semblance' is meaningless.
Would you say the same about ELIZA?
Moltbook demonstrates that AI models simply do not behave the way humans do. Compare Moltbook to Reddit and the difference should be obvious.
Yes, when your priors are not being confirmed, the best course of action is to denounce the very thing itself. Nothing wrong with that logic!