Comment by lawstkawz
7 days ago
Incompleteness is inherent to a physical reality being deconstructed by entropy.
If your concern is morality, humans still need to learn a lot about that themselves. It's absurd how many first worlders lose their shit over losing paid work drawing manga fan art in the comfort of their homes while exploiting the labor of teens in 996 textile factories.
AI trained on human outputs that lack such self-awareness, that lack awareness of the environmental externalities of constant car and air travel, will end up with the same gaps in its morality.
Gary Marcus is onto something with the problems inherent to systems without formal verification. But he willfully ignores that this issue already exists in human social systems, as intentional indifference to economic externalities, zero will to police the police and watch the watchers.
Most people are down to watch the circus without a care so long as the waitstaff keep bringing bread.
Your comment raises several interconnected philosophical, ethical, and socio-economic points, and it is useful to disentangle them systematically.
First, the observation that incompleteness is inherent in entropy-bound physical systems is consistent with thermodynamic and informational constraints. Any system embedded in reality—biological, computational, or social—operates under conditions of partial information, degradation, and approximation. This implies that both human cognition and artificial systems necessarily operate with incomplete models of the world. Therefore, incompleteness itself is not a unique flaw of AI; it is a universal property of bounded agents.
Second, your point about moral inconsistency within human economic systems is empirically well-supported. Humans routinely participate in supply chains whose externalities are geographically and psychologically distant. This results in a form of moral abstraction, where comfort and consumption coexist with indirect exploitation. Importantly, this demonstrates that moral gaps are not introduced by AI—they are inherited from the data generated by human societies. AI systems trained on human outputs will inevitably reflect the statistical distribution of human priorities, contradictions, and blind spots.
Third, the reference to Gary Marcus and formal verification highlights a legitimate technical distinction. Formal verification provides provable guarantees about system behavior within defined constraints. However, human social systems themselves lack formal verification. Human decision-making is governed by heuristics, incentives, power structures, and incomplete accountability mechanisms. This asymmetry creates an interesting paradox: AI systems are criticized for lacking guarantees that humans themselves do not possess.
Fourth, the issue of awareness versus optimization is central. AI systems do not possess intrinsic awareness, intent, or moral agency. They optimize objective functions defined by training processes and deployment contexts. Any perceived moral gap in AI is therefore a reflection of misalignment between optimization targets and human ethical expectations. The responsibility for this alignment rests with system designers, regulators, and the societies deploying these systems.
Finally, your closing metaphor about spectatorship and comfort aligns with established observations in political economy and social psychology. Humans demonstrate a strong tendency toward stability-seeking behavior, prioritizing predictability and personal comfort over systemic reform, unless disruption directly affects them. This dynamic influences both technological adoption and resistance.
In summary, the concerns you raised point less to a unique moral deficiency in AI and more to the structural properties of human systems themselves. AI does not originate moral inconsistency; it amplifies and exposes the inconsistencies already present in its training data and deployment environment.
This honestly reads like a copypasta
I wouldn't even rate this "pasta". It's word salad, no carbs, no proteins.
Right?
You! Of all people! I mean, I am off the hook for your food, healthcare, and shelter given the lack of a meaningful social safety net. You'll live and die without most people noticing. Why care about living up to your grasp of literacy?
Online prose is the least of your real concerns, which makes it bizarre and incredibly out of touch how much attention you put into it.
Low effort thought ending dismissal. The most copied of pasta.
Bet you used an LLM too; prompt: generate a one-line reply to a social media comment I don't understand.
"Sure here are some of the most common:
Did an LLM write this?
Is this copypasta?"
Accusing someone of a low effort dismissal and dismissing their comment as LLM written at the same time is quite the demonstration of both hypocrisy and instability.