Comment by hgoel
13 hours ago
I don't really get the backlash about Blender here. This isn't generative art; it's basically a natural-language way of scripting Blender.
This feels like the proper way to have AI act as a tool to make artists' jobs easier without taking away their creativity?
Edit: I guess they might want absolutely no AI of any sort in their tools (which seems like a strange line to draw), or is it about the data it's been trained on?
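For context on what "natural-language scripting of Blender" means in practice: the agent just emits ordinary Python against Blender's scripting API. A minimal sketch, assuming a toy intent-to-operator mapping — the `build_plan` helper and the plan format are invented for illustration, while `bpy.ops.mesh.primitive_cube_add` is Blender's real operator API:

```python
def build_plan(prompt: str):
    """Toy stand-in for the LLM step: map a request to bpy operator calls."""
    if "cube" in prompt:
        return [("bpy.ops.mesh.primitive_cube_add", {"size": 2.0})]
    return []

def run_plan(plan):
    """Execute the plan inside Blender; outside Blender, bpy is unavailable."""
    try:
        import bpy  # only importable inside Blender's embedded Python
    except ImportError:
        return False
    for dotted, kwargs in plan:
        # Resolve e.g. "bpy.ops.mesh.primitive_cube_add" to the operator.
        op = bpy.ops
        for part in dotted.split(".")[2:]:
            op = getattr(op, part)
        op(**kwargs)
    return True

plan = build_plan("add a cube to the scene")
```

The point being: the model's output is an auditable script an artist can read, tweak, and rerun, not an opaque generated image.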
It's really clear that businesses are hoping to replace people with AI. In an industry that is already very difficult to make a stable living in, and troubled with regular plagiarism, is it really that surprising that any encroachment of AI into that space would be met with backlash?
Even if you can see how individual circumstances could be beneficial to your workflow, it's a general direction I think many take issue with quite fairly.
> It's really clear that businesses are hoping to replace people with AI. In an industry that is already very difficult to make a stable living in, and troubled with regular plagiarism, is it really that surprising that any encroachment of AI into that space would be met with backlash?
But what's the plan, then? Prevent any third party from downloading Blender and integrating it with an agent?
An actual plan would involve regulation, otherwise we are just complaining loudly while things march on anyway.
I fully expect things to march on anyway. I have no idea how it plays out for creative industries, I am still thinking and observing in that regard.
At this stage there is just protest and reaction.
Businesses have already replaced several background artists, gambling on the uncopyrightable status of "AI" output being ignored. In a commercial setting, one can't sell what they never owned in the first place.
Without a constant stream of stolen training data, the "AI" piracy bleed-through and isomorphic plagiarism business model is unsustainable.
We look forward to liquidating the GPU data-centers at a heavy discount. =3
> Businesses have already replaced several background artists, gambling on the uncopyrightable status of "AI" output being ignored. In a commercial setting, one can't sell what they never owned in the first place.
I'm skeptical of this line of reasoning. Major content providers have no problem with copyright, even when content is completely produced by anonymous contributors. Is this supposed to become an issue when you eliminate some anonymous contributors?
3 replies →
Regardless of the purported upside, many people in the arts feel betrayed by the commercial interests that built this technology on their work without their consent and threatened by the explicit intent of these vendors to devalue their work by saturating the art and design market with cheap automated substitution.
A lot of artists who would love to be able to direct their professional software in natural language have to reconcile that with how this technology came to be and what the aims are of the company now delivering it to them.
I spent most of my career in the open source world, and it doesn't bother me that models are trained on my output. Should I feel differently? It seems there's a kind of ego or emotional attachment to the output that is more common among artists than devs? Perhaps abundance vs scarcity mindsets?
Regarding generative images, it's more of an issue because the effects are different.
Software tends to be a "living" project, so just vibe coding with 0 software knowledge is not yet fully sustainable for maintaining a project. But with art, the AI just spits out a completed image.
The generated images compete directly with the people the data was sourced from, and there have also been many cases of abuse, eg people using AI to impersonate a popular artist and selling commissions under that artist's name.
The copyright situation for generated imagery is also tricky, so people pretending to be artists only to be sharing work that isn't copyrightable can cause a ton of trouble and financial loss for customers.
Most of these issues don't apply to software in the same way. That's why I was surprised by the backlash to this, as it's just touching the software side; I don't see it as threatening artists' work.
When I was dabbling in image generation (~StyleGAN2 era), my vision for image generation models was as a support tool for artists (back then I was generating small character thumbnails to help me brainstorm ideas for drawing), believing that people valued art for the human effort. Even then I would have considered what Anthropic are trying to do here as the preferable way to use AI in art workflows.
2 replies →
I'm an artist turned CTO. My perspective is really simple: theft is theft. You (not you specifically per se) can sugar coat it however you like, but copying open source codebases/work is different from stealing proprietary/licensed work without permission. It would have been ok if stealing/sharing copyrighted work were heavily normalized, but no: a lot of people have gone to prison simply for pirating DVDs and CDs, and now you're telling me it's somehow ok when a corporation does it?
6 replies →
Yes?
For example, you could at least feel that the world is large enough to have people with other needs, drives and ownership levels of their work.
You could also consider that this is not an even trade; artists had all their works ingested and didn't get a commensurate stake in OpenAI.
You can consider that you had a choice to share when you contributed to open source. Then imagine how a counter culture artist, who despises corporate culture, must feel to have their work consumed by another rapacious tech entity.
Or you can be the filmmaker whose clients are now showing up with entire ad clips, and then decide they would rather not spend the money on CGI to complete the video - essentially demolishing work overnight.
This isn't to say that there are not artists who are excited by this, or artists who are happy to have their art ingested. Just that the way you phrased your question evoked this answer.
Speaking as someone who works in the industry, I haven't really heard this sentiment. Artists are predominantly hostile to diffusion models, but optimistic about LLMs and their ability to help them write tools and scripts even if they're non-technical.
So basically artists are cool with developers not getting paid so long as they do...
Yeah, I can understand being upset with their work being stolen to train these models. Anthropic doesn't seem to be working on image/video generation, but they are still training on text-based creative works of questionable sourcing.
Makes me think that there's some room in the model lineup for one that doesn't do as well on benchmarks, but is trained on "ethically sourced" data (though they'd need to somehow prove that they aren't "accidentally" including other data).
The funny thing is that Allegorithmic (now part of Adobe) was far more devastating to certain classes of game artist than stuff like this will be in its current form.
It almost totally automated vast swaths of texture generation by giving technical artists algorithmic, node-based systems for building textures.
Want a brick texture? Sure, you connect some nodes and set parameters and you have great looking bricks. Want the mortar to be a little more widely spaced? Done. Want some moss on the brick? Want some chipping on the brick? Want some color variation? Done, done, done.
It probably made iterating on textures more than 100x faster.
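The "set a parameter, regenerate" workflow described above can be sketched in a few lines. Everything here — the `brick_mask` function, its parameters, the 0/1 mask representation — is invented for illustration and is not Substance Designer's actual node graph, but it shows why widening the mortar is a one-parameter change rather than a repaint:

```python
def brick_mask(width, height, brick_w, brick_h, mortar):
    """Return a 2D grid: 1 = brick, 0 = mortar, with alternating row offsets."""
    mask = []
    for y in range(height):
        row_idx = y // brick_h
        # Offset every other course by half a brick, like real brickwork.
        offset = (brick_w // 2) if row_idx % 2 else 0
        row = []
        for x in range(width):
            xs = (x + offset) % brick_w
            ys = y % brick_h
            # Mortar lines run along the leading edges of each brick cell.
            is_mortar = xs < mortar or ys < mortar
            row.append(0 if is_mortar else 1)
        mask.append(row)
    return mask

# Want the mortar more widely spaced? Change one parameter and regenerate.
tight = brick_mask(16, 8, 8, 4, 1)
loose = brick_mask(16, 8, 8, 4, 2)
```

A real tool layers noise, moss, and chipping as further parametric stages on top of a base mask like this; the principle is the same.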
Now, talented technical artists make OK money because they are good at using these tools. Photoshop jockeys are gone.
LLM manipulation of Blender will be interesting, but it's very hard to see something like Claude having nearly as big an impact. It'll be helpful for automating some common tasks and building internal tooling. But Allegorithmic single-handedly changed the way 3D games look, because you could be so much more ambitious.
You didn't really hear about it, though, because it wasn't part of the cultural zeitgeist.
People who built a career on their mastery of Blender are going to lose their livelihoods. Why is this difficult to understand?
> Why is this difficult to understand?
It'll be way easier to understand for developers when it starts happening in earnest to our profession, which is coming soon.
It's already here to some extent, but so far mostly at the junior end, so it hasn't yet hit many people who are established in an industry that has offered relatively easy, stable livelihoods for the past 30+ years. That won't last much longer.
> coming soon
The developers are literally on the bleeding edge here, it might be the most developed of the AI use cases right now. The most advanced tooling for LLMs revolves around SWE work, there are multiple prolific benchmarks that the labs are actively targeting in this area, and new ones are being built, whole product categories being spawned, software companies bleeding money for tokens.
It's the other professions that will follow once the training data is in place to go after their livelihoods. SWEs got an early taste of what is coming. And the Blender news is exactly that.
I think it's mainly anti-AI sentiment in general.
I am a huge beneficiary of agentic dev tools. They completely changed my life and my income. However, I totally get the general anti-AI sentiment. The ultra-bear case is that it somehow kills all of us; the bull case is that those who own the inference get all the spoils.
Even myself, while I am currently extremely empowered by these tools... I could see my role (Founder/PM/builder) disappearing in the next couple years.
I respect you a lot, so if you have a moment, I would really like to get talked down from my take.
How and why did it change your income?
1 reply →
People are guzzling the amygdala control juice these days
Say that again in five years, when you can't find a job except cleaning mega-yacht toilets because Claude is distinguished-engineer level at one millionth of your cost and thousands of times faster, and can be instantly parallelized into tens or hundreds of thousands of instances, spun down arbitrarily as needed.
[flagged]
5 replies →
It's not artist replacement yet because the models don't have the necessary training or sophistication.
I doubt the current state shows the end of their ambitions.
There is no acceptable use of AI for most people in the artistic field. They see it as an extreme betrayal, and I understand. They're under incredible threat.
They are consciously trying to prevent momentum in a bad direction.
If they don't fight it hyper hard, a huge fraction of them will be out of a job instantly.
That's a strange position to take. I can understand not wanting models that have been trained on questionably sourced data, but otherwise they're opposing essentially a UX change, not based on UX concerns but on ideological fears.
Given how much software and other AI/computer vision improvements 3D content often relies on, it's weird to decide that the algorithm itself is unallowable.
Do you have any idea how hard it already is to make a living in a creative field?
This is a very first degree analysis.
AI is seen as an oppressor and a threat, and AI providers are seen as oppressors. It's understandable that people don't want to collaborate with their oppressors, either direct or by association. If you were a Jew, would you buy shoes from the Nazis just because you were individually safe from them at that moment? Or would you if you were of a minority they hadn't started exterminating yet? Or if they were not exactly the Nazis killing your people but some affiliated group?
This sounds extreme until you realize they are under threat of losing their livelihood for good.
They are right to not accept your inevitability point without a fight, this is a human thing that can be fought, revolutions have happened, and will continue to happen.
I don't necessarily agree with this but I do understand it.
> I can understand not wanting models that have been trained on questionably sourced data, but otherwise they're opposing essentially a UX change, not based on UX concerns but on ideological fears.
"If you ignore their biggest, their primary, concern, their other concerns seem almost trivial".
4 replies →
This is the best phrasing of the issue I've seen online anywhere.
You can find AI useful and still be against its introduction into your field for entirely understandable reasons.
Unfortunately this does create uphill friction for any good-intentioned people trying to use AI to improve art by empowering people to take on more ambitious projects. (This is a general statement and not related to the case of Anthropic. Of course Anthropic here is just trying to sell their product, which is a fair thing to do in isolation, but I also understand the opposition to it on the grounds of its downstream effects.)
Completely false and I hate this puritan gatekeeping. Artists who hate AI are the type to put more importance on the craft than the end product itself. Art is a means of communicating something personal. It’s not meant to show off skills in how well you can move a pencil or how many fricking tools you know in adobe.
AI removes all these hurdles and directly presents you with the end problem - communication. Artists hate that because most artists don’t have anything to communicate. These people deserve to be automated away. I don’t wanna see more derivative shit. Artists who have something special to communicate won’t feel threatened by AI but feel more freedom.
>AI removes all these hurdles and directly presents you with the end problem - communication.
Which is why 99.9% of AI art is worthless. There's literally nothing personal or interesting about getting grok to fart out some picture you thought about while sitting on the toilet in the morning.
AI art will never be good without actual artists embracing the medium.
[dead]