Comment by 2ndorderthought

21 hours ago

This made me chuckle.

I didn't mean to dismiss ethical accountability for LLM training corpora. It is a shame.

I do mean to say that we have no control over it: there's almost nothing we, as average citizens, can do to improve the ethical or safety concerns of LLMs or related technologies. Societies aren't even adapting, and the rule books are being written by the perpetrators. We might as well get out of it what we can while we can.

I wonder if something like this would affect it:

https://github.com/p-e-w/heretic

I'm guessing it probably would?

  • Neat project! I would be interested in a paper about this.

    I think the tricky part with this type of technology is that it only works if the training data was not curated. What I mean is: if someone trains an LLM to simply not include key events, the model will have nothing to reply with in the first place.

    Not being a hater. This is neato!
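
For background: the linked heretic project is, per its README, an automated tool for removing refusal behavior from language models via "abliteration", i.e. estimating a refusal direction in activation space and projecting it out. A minimal numpy sketch of that projection step (the function name and shapes here are illustrative, not heretic's actual API):

```python
import numpy as np

def ablate_direction(h: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Remove the component of activation vector h along direction r.

    Implements h' = h - (h . r_hat) * r_hat, so the result has no
    component along r. In abliteration, r would be an estimated
    'refusal direction' derived from contrasting activations.
    """
    r_hat = r / np.linalg.norm(r)          # unit vector along r
    return h - np.dot(h, r_hat) * r_hat    # orthogonal projection

# Example: removing the y-component of an activation.
h = np.array([1.0, 2.0, 3.0])
r = np.array([0.0, 1.0, 0.0])
print(ablate_direction(h, r))  # → [1. 0. 3.]
```

This also illustrates the reply's point: projecting out a direction can suppress a learned behavior, but it cannot conjure up facts that were never in the training data.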