
Comment by saghm

7 hours ago

There are times when trying to use Claude for coding that I genuinely get annoyed at it, and I find it cathartic to include this emotion in my prompt, even though I know it doesn't have feelings; expressing emotions rather than bottling them up can often be an effective way to deal with them. Sometimes this even influences how it handles things: it notes my frustration in its "thinking" and then tries to solve my immediate problem more directly rather than cleverly working around things in a way I didn't want.

What are the odds that Anthropic is building a psychological profile on you based on your prompts and when and how quickly you lose control over your emotions?

  • I guess if they think they can monetize the fact that I get upset when I ask it to make a certain change to the code and it doesn't do it several times in a row, they probably already are