Comment by SketchySeaBeast
8 days ago
> untrained people
What training are you referring to here? Therapy, mentalism, or being an AI guru?
Psychology knowledge, both theoretical (think: first year of undergrad in psych at a good university) and practical (e.g. the ability to translate an arbitrary inflammatory statement into NVC), etc.
That seems to make it a non-starter for most people, given that most won't have that first-year knowledge.
But also, I hold a minor in psychology. Despite that, I never attended a course I would describe as any sort of "therapy 101", so I fear your bar is a bit low for any sort of efficacy, though I would guess that's probably because I'm in the "aware of my own ignorance" part of the psychological knowledge curve.
When I think about it again, it is less about one's absolute knowledge of psychology, and more about (as you said) knowing one's own ignorance and having some mental model of an LLM.
One model I have found useful to communicate is this: imagine meeting a random person in a bar who seems to know a lot, but about whom you otherwise know nothing, and who has absolutely no context about you. If you treat what they say with a grain of salt, it is fine. They may say something inspiring, or insightful, or stupid, or random. If they say something potentially impactful, you would rather double-check it with others (and no, not with some other random person in a bar).
I do know people for whom LLMs were helpful (one way or another), but again, they treated it more like a conversation with a stranger.
Worse (not among my direct friends, but e.g. a parent of one) is when people treat it as something omniscient that will give them a direct answer. Fortunately, in their case GPT-4 was rather defensive and kept giving options (in a situation like "should I stay or break up"), refusing to answer for them (they were annoyed, but better to be annoyed than to give away agency that way).
When it comes to anything related to diagnosis (fortunately, it has some safeguards), it might be dangerous. I have used it to see if it could diagnose something based on hints (and it was able to make really fine observations), but it needs really careful prompts and doesn't always work anyway. In other cases, its overly agreeable nature is likely to get you into a self-confirmation loop (you mention "anxiety" somewhere and it will push for Generalized Anxiety Disorder).
Again, if a person treats it as a random discussion, they will be fine; it's like meeting a House MD who sees lupus everywhere. It's worse if they stop searching, take it as gospel, or get triggered by a (likely wrong) diagnosis.