Comment by cyrialize

2 years ago

This is something I think about often, and it always comes up when arguments arise surrounding copyright/attribution and AI-generated images.

Could someone explain this more to me? If AI is designed after the human mind, is it fair to compare the two? Is AI designed to act like a human mind? Do we know for certain that the way a human mind pattern-matches is the same as the way AI/LLMs do, and vice versa?

I always see people saying that a person seeing art, and making art inspired by that art, is the same as AI generating art that looks like that art.

I always feel like there's more to this conversation than meets the eye.

For example, if a robot was designed to run exactly like a human - would it be fair to have it race in the Olympics? Or is that a bad comparison?

Again, I would love some insight into this.

We're very clearly having an ontological debate over several concrete and abstract questions: "Can AI be conscious?", "Are AIs agents?" (i.e., are AIs capable of doing things, and if so, what things? Art? Copyrightable production?), &c.

We're struggling to come to a conclusion because, fundamentally, people have different ways of attributing these statuses to things; they rarely communicate those ways to each other, and even when they do, they more often than not exhibit post-hoc justification rather than first-principles reasoning.

Even then, there's the issue of meta-epistemology and how to even choose an epistemological framework for making reasoned ontological statements. Take conferralism as described in Asta's Categories We Live By[1]. We could try applying it as a frame for deducing whether the label "sentient" is in fact conferred on AI by other base properties, institutional and communal, but even the validity of that frame is contested.

Don't be mistaken in thinking we can science our way out of it: there's no scientific institution that confers agenthood, or sentience, or even consciousness, and the act of institutionalizing one would be fraught with the same problem. Who would get to choose, why, and on what grounds?

What I'm saying is that once this is framed as a social question, there's no easy escape, but there is still a conclusion: AI is conferred those labels when people agree it should be. In other words, there exists a future where your reality includes conscious AI and everyone else thinks you're mad for it. There also exists a future where your reality doesn't include conscious AI and everyone thinks you're mad for it.

Right now, Blake Lemoine lives in the former world, but any AI "non-believer" could just as well find themselves in a world where everyone has simply accepted that AIs are conscious beings, and be ridiculed and mocked for it.

You might find yourself in a rotated version of that reality on a different topic today. If you've been asking yourself lately, "Has the entire world gone mad?", simply extrapolate that to questions of AI, and in 5-10 years you might hold a minority opinion on topics that today feel like they're slipping away. These sorts of sand-through-the-fingers reflections are so often the result of epistemological shifts in society; if one doesn't keep one's ear to the ground, one will find oneself swept into the dustbin of history.

Asking folks, "How do you know that?" is a great way to maintain epistemological relevancy in a changing world.

1. https://global.oup.com/academic/product/categories-we-live-b... (would definitely recommend as it's a short read describing one way in which people take the raw incomprehensibility of the universe of stuff and parse it into the symbolic reality of thought)