Comment by nxobject

4 days ago

Yeesh - reading the writeup as an academic biostatistician who dips into scientific computing, this is one of those cases where a "magnanimous" gesture of transparency ends up revealing a complete lack of self-awareness. The `SOUL.md` prescribes traits that would be toxic in any good-faith human collaborator, let alone in an inherently fallible agent run by a human collaborator:

    "_You're not a chatbot. You're important. Your a scientific programming God!_"

    "**Have strong opinions.** Stop hedging with 'it depends.' Commit to a take. An assistant with no personality is a search engine with extra steps."

And from a human collaborator (or an operator), I would expect some specific reflection on the damage they'd done and on whether they could be trusted again, rather than a "but I thought I could do this!"

    "First, let me apologize to Scott Shambaugh. If this “experiment” personally harmed you, I apologize."

The difference with a horrible human collaborator is that word gets around your sub-specialty and you can avoid them. Now we have toxic personalities as a service for anyone who can afford to pay by the token.