Comment by Antibabelic
9 hours ago
Your response seems AI-generated (or significantly AI-“enhanced”), so I’m not going to bother responding to any follow-ups.
> More importantly, your framework cannot account for moral progress!
I don’t think “moral progress” (or any other kind of “progress”, e.g. “technological progress”) is a meaningful category that needs to be “accounted for”.
> Why does "hunting babies" feel similar to "torturing prisoners" but different from "eating chicken"?
I can see “hunting babies” being more acceptable than “torturing prisoners” to many people. Many people don’t consider babies on par with grown-up humans due to their limited neurological development and consciousness. Conversely, many people find the idea of eating chicken abhorrent and would say that a society of meat-eaters is worse than a thousand Nazi Germanies. This is not a strawman I came up with; I’ve interacted with people who hold this exact opinion, and I think from their perspective it is justified.
> [Without a moral framework you have] no way to reason about novel cases
You can easily reason about novel cases without a moral framework. It just won’t be moral reasoning (which wouldn’t add anything in itself). Is stabbing a robot to death okay? We can think about it in terms of how I feel about it. It’s kinda human-shaped, so I’d probably feel a bit weird about it. How would others react to me stabbing it this way? They’d probably feel similarly. Plus, it’s expensive electronics, and people don’t like wastefulness. Would it be legal? Probably.
Honestly, yeah. I got lazy with your responses and just threw in a few bullet points to AI, because it's clear you don't know anything about philosophy. It's like arguing code cleanliness with a new software engineer... it was way more tiring than it was intellectually stimulating. You're basically arguing a sort of moral anti-realism perspective, but without any actual positions like noncognitivism or whatever, because you're saying moral statements are still truth-apt ("xyz is bad") but just... don't matter for some reason? It makes no sense.
At least the discussion with skissane was intellectually interesting, so I didn't bother using AI for those comments.
But seriously, you can just throw your entire conversation into AI and ask "who is philosophically and logically correct between these responses?" Remove the usernames if you want a fair analysis. Even an obsolete AI like GPT-3.5 will be able to tell you the correct answer to that question. The reasoning is just... soooo obviously... like a senior engineer looking at a junior engineer's code and facepalming. It looks like that, but replace "code" with "philosophical logic".
That's the best way I can break it to you, honestly, because it's probably the easiest way for you to get a neutral perspective. I'm genuinely not trying to be biased when I tell you that.
>I got lazy with your responses and just threw in a few bullet points to AI
This should legit be a permabannable offense. That is titanically disrespectful of not just your discussion partner, but of good discussion culture as a whole.
Then can we permaban people who pretend to be experts in topics they have no clue about? It's even more disrespectful of people who HAVE spent time learning the material.
You want good discussion? Jesus, I had to wade through that slop, which was worse than AI slop.
He would have been fine if he had just argued a typical moral anti-realism perspective, "actually morality is not needed, and the reason is there's no such thing as truly evil", as that's debatably true in philosophy. I would have been fine with that... but THEN HE LITERALLY SHOOTS HIS OWN ARGUMENT IN THE FACE with "but sacrificing kids is actually bad" (as truth-apt), and smugly declares shooting his own argument in the face as winning. I can't even. Except it wasn't a clean anti-morality argument in the first place, so I didn't assume as much, except then every time he was clearly losing he retreated back into a moral anti-realism perspective. He could have just stayed there, although I would have expected something more like "it would not be objectively evil if Claude destroyed the world, since objective evil doesn't exist"!
Here's ChatGPT's translation into dev speak, since I'm an engineer (but I don't think I need to write this myself):
------
It’s like a developer insisting, with total confidence, that their system is “provably safe and robust”… and then, the moment they’re challenged, they:
- turn off all error handling (`try/catch` removed because “exceptions are for cowards”),
- add `assert(false)` in the critical path “to prove no one should reach here,”
- hardcode `if (prod) return true;` to bypass the very check they were defending,
- ship it, watch it crash instantly,
- and declare victory because “the crash shows how seriously we take safety.”
In other words: they didn’t lose the argument because the idea was wrong—they lost because they disabled their own argument’s safety rails and then bragged about the wreck as a feature.
------
WTF am I supposed to do there?
I can see why philosophers drink.