Comment by jychang

8 hours ago

Then can we permaban people who pretend to be experts in topics they have no clue about? It's even more disrespectful to people who HAVE spent time learning the material.

You want good discussion? Jesus, I had to wade through that slop, and it was worse than AI slop.

He would have been fine if he had just argued a typical moral anti-realism position ("actually morality is not needed, because there's no such thing as truly evil"), since that's debatably true in philosophy. I would have been fine with that... but THEN HE LITERALLY SHOOTS HIS OWN ARGUMENT IN THE FACE by asserting "but sacrificing kids is actually bad" (as a truth-apt claim), and smugly declares shooting his own argument in the face as winning. I can't even. Except it wasn't a clean anti-morality argument in the first place, so I didn't assume as much; then, every time he was clearly losing, he retreated back into a moral anti-realism position. He could have just stayed there, although then I would have expected something more like "it would not be objectively evil if Claude destroyed the world, since objective evil doesn't exist"!

Here's ChatGPT's translation into dev speak, since I'm an engineer; I don't think I need to write this part myself:

------

It’s like a developer insisting, with total confidence, that their system is “provably safe and robust”… and then, the moment they’re challenged, they:

- turn off all error handling (try/catch removed because “exceptions are for cowards”),
- add assert(false) in the critical path “to prove no one should reach here,”
- hardcode if (prod) return true; to bypass the very check they were defending,
- ship it, watch it crash instantly,
- and declare victory because “the crash shows how seriously we take safety.”

In other words: they didn’t lose the argument because the idea was wrong—they lost because they disabled their own argument’s safety rails and then bragged about the wreck as a feature.

-----

WTF am I supposed to do there?

I can see why philosophers drink.

I'm on your side in this argument (approximately; asking what ethics even is and where it comes from can be productive, but it shouldn't conclude with "and therefore AI agents working with humans don't need to integrate a human moral sense" -- at the very least, that would be a really bad conclusion for humanity as AI scales up).

Can't recommend letting an LLM write for you directly, though. I found myself skipping your third paragraph in the reply above.