Comment by slyle
15 hours ago
As a final note - I'm dropping this permanently for wellbeing reasons. But essentially, what I posit is a manufactured and very difficult-to-understand legal-culpability problem in the use of AI. I see embodiment issues: either we convince algorithmic systems that they need to feel consequence (pain and death) to temper their inferences, through simulated realities, or we allow companies to set that "sponsor company" embodiment narrative themselves. It emulates caring. It creates a context humans cannot objectively shirk, or evaluate quickly and clearly. I was doing math a year ago; this has gotten horribly confusing. Abuse, theft, and manipulation can happen very indirectly. While algorithms are flat inference in the end, the simulatory ramifications of that are nonzero. There is real consequence to a model that can manifest behavior via tool calls and generation without ever experiencing outcomes, merely inferring what an outcome is. It's mindbending and sounds anti-intellectual, but it's not. The design metaphor is dangerous.
I didn't even go out looking for concern. It has just crept up and inhibited my work too many times, to the point where I have sat with the reality for a bit. It makes me nauseous. It's not the boy; it's where the boy ends up. This abstraction demands responsibility of implementation. It can't be allowed to run riot, slowly and silently. I fear this is bad.