Comment by somekyle2
1 day ago
100%. I think there are some clear distinctions between AI training and human learning in practice that compound this. Human learning requires individual investment and doesn't scale that efficiently. If someone invests the time to consume all of my published work and learn from it, I feel good about that. That feels like impact, especially if we interact, and even more if I help them. They can perhaps reproduce anything I could've done, and that's cool.
If someone trains a machine on my work, and it means anyone can get the benefit of my labor without knowing me, interacting with my work, understanding it, or really any effort beyond some GPUs, that feels bad. And it's much more of a risk to me, if that means anything.
> If someone invests the time to consume all of my published work and learn from it, I feel good about that.
Agreed. My goal, my moral compass, is to live in a world populated by thriving happy people. I love teaching people new things and am happy to work hard to that end and sacrifice some amount of financial compensation. (For example, both of my books can be read online for free.)
I couldn't possibly care less about some giant matrix of floats sitting in a GPU somewhere getting tuned to better emulate some desired behavior. I simply have no moral imperative to enrich machines or their billionaire owners.