Just noticed this notice added at the top of the Blender announcement of their funding from Anthropic: https://www.blender.org/press/anthropic-joins-the-blender-de...
> Notice: This announcement is causing a lot of feedback. We are actively evaluating it.
Presumably a lot of Blender users work in roles that feel threatened by AI being used for computer graphics work.
Lots of negative replies on Blursky here: https://bsky.app/profile/blender.org/post/3mkkuyq3ijs2q
I don't really get the backlash about Blender here; this isn't generative art, it's basically a natural-language means of scripting Blender.
This feels like the proper way to have AI act as a tool that makes artists' jobs easier without taking away their creativity?
Edit: I guess they might want absolutely no AI of any sort in their tools (which seems like a strange line to draw), or is it about the data it's been trained on?
It's really clear that businesses are hoping to replace people with AI. In an industry that is already very difficult to make a stable living in, and troubled with regular plagiarism, is it really that surprising that any encroachment of AI into that space would be met with backlash?
Even if you can see how individual circumstances could benefit your workflow, it's a general direction that I think many quite fairly take issue with.
Regardless of the purported upside, many people in the arts feel betrayed by the commercial interests that built this technology on their work without their consent and threatened by the explicit intent of these vendors to devalue their work by saturating the art and design market with cheap automated substitution.
A lot of artists who would love to be able to direct their professional software in natural language have to reconcile that with how this technology came to be and what the aims are of the company now delivering it to them.
The funny thing is that Allegorithmic (now part of Adobe) was far more devastating to certain classes of game artist than stuff like this will be in its current form.
It almost totally automated vast swaths of texture generation by creating algorithmic systems that technical artists could use to create textures.
Want a brick texture? Sure, you connect some nodes and set parameters and you have great looking bricks. Want the mortar to be a little more widely spaced? Done. Want some moss on the brick? Want some chipping on the brick? Want some color variation? Done, done, done.
It probably reduced the amount of time to iterate textures by more than 100x.
Now, talented technical artists make OK money because they are good at using these tools. Photoshop jockeys are gone.
LLM manipulation of Blender will be interesting, but it's hard to see how something like Claude could have nearly as big an impact. It'll be helpful for automating some common tasks and building internal tooling. But Allegorithmic single-handedly changed the way 3D games look, because you could be so much more ambitious.
You didn't really hear about it, though, because it wasn't part of the cultural zeitgeist.
I think it's mainly anti-AI sentiment in general.
People who built a career on their mastery of Blender are going to lose their livelihoods. Why is this difficult to understand?
People are guzzling the amygdala control juice these days
It's not artist replacement yet, because they don't have the necessary training or sophistication.
I doubt the current state shows the end of their ambitions.
There is no acceptable use of AI for most people in the artistic field. They see it as an extreme betrayal, and I understand; they're under incredible threat.
They are conscious of the need to prevent momentum in a bad direction.
If they don't fight it hyper hard, a huge fraction of them will be out of a job instantly.
> Lots of negative replies on Blursky
To the surprise of no one.
I really do want to support artists, but I also feel super conflicted about what is actually at stake here if an AI agent generates a scene for me. I never would have hired a 3D artist before this moment, because there's no reason for me to. However, if I can easily poop out a 3D rendering of something custom without much time or cost, I would absolutely love to do that. How many one-off presentations or project design sessions I could have with cheap throwaway 3D artwork that provides value to explain my thought process?!
Just like AI image slop and AI book slop prove though, I highly doubt whatever Claude and Blender are cooking up will ever come close to taking a prompt like
> render a scene of a corgi sitting on a chair looking out of a window at 3 cats playing with the corgi's favorite toy.
and turning that into anything useful.
Bluesky also has a community of AI tool developers that are more sane. Occasionally a post escapes containment.
People on Mastodon are losing their shit too[1].
I understand being unhappy about something but people gotta relax.
---
[1]: https://social.coop/@netopwibby/116483037092383210
Why do they "gotta relax"? Are they making you uncomfortable by voicing their opinions or why exactly?
If you're interested, for Affinity the way we've built it is by exposing our scripting SDK via MCP. Agents like Claude can write scripts to execute actions, and these scripts can be saved and re-run later, as well as have their own UI.
It is a massive SDK, though (thousands of functions; feel free to poke around with it; Affinity is free), so it really shows the ability of LLMs to work effectively across long-horizon tasks and massive context windows.
Personally, I'm really interested in Blender, though. I'm working on a game as a hobby/side project, and I'm very much a newbie who often struggles with learning and using Blender.
There are so many ways these integrations help humans and human creatives; your job and role shouldn't be about how skilled you are at navigating a tool, or whether you're technically savvy enough to code scripts to improve your workflow.
The thing is, ages ago, I was told by the scripting evangelist at Adobe Systems that a certain process (adding sub, sub-sub, and sub-sub-sub entries to an index entry) was impossible --- problem was, my boss had already promised a client a script to do exactly that....
Turns out it is possible: one just has to have the script check whether each level of a given index entry exists, and if it does not yet exist, create it before making the next lower level by adding that sub-entry to the one above it.
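The check-then-create walk described above can be sketched in a few lines. This models the index as nested dicts purely for illustration; the real Adobe scripting object model is different, and `ensure_entry` is a hypothetical name, not an Adobe API.

```python
# Sketch of the "check each level, create it if missing" approach,
# modeling the index as nested dicts (not the actual Adobe API).

def ensure_entry(index, levels):
    """Walk an entry path like ["Python", "syntax", "slicing"],
    creating any missing level before descending into it."""
    node = index
    for name in levels:
        if name not in node:   # does this level exist yet?
            node[name] = {}    # no: create it before going deeper
        node = node[name]
    return node

index = {}
ensure_entry(index, ["Python", "syntax", "slicing"])
ensure_entry(index, ["Python", "syntax", "comprehensions"])
print(sorted(index["Python"]["syntax"]))  # ['comprehensions', 'slicing']
```

The same existence check at every level is what makes the script safe to re-run on a partially built index.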
An LLM is only going to code what it has documented as possible/working and may not be able to do what needs to be done.
So that was our assumption too while building it, but I'm genuinely surprised by how well frontier models can work with large and 'lightly-documented' SDKs.
I think a big part of it comes from deliberately exposing the lowest-level atomic actions, not higher-level wrappers with use-case-specific documentation. Instead, we supply very technical/'dry' documentation (inputs, actions/effects, return values and types). We leave it to the developer (or the LLM) to write scripts that assemble these pieces together to solve problems.
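As a toy illustration of that design, here is a sketch with invented names (this is not the real Affinity SDK): the SDK exposes only small atomic actions with dry contracts, and a script layer composes them into a reusable higher-level operation.

```python
# Hypothetical sketch of "atomic actions + scripts on top".
# All names here are invented for illustration.

class Doc:
    """Toy document exposing only low-level atomic actions."""
    def __init__(self):
        self.layers = {}

    def create_layer(self, name):
        """Atomic: create a layer. Raises if it already exists."""
        if name in self.layers:
            raise ValueError(f"layer {name!r} already exists")
        self.layers[name] = {"opacity": 1.0}

    def set_opacity(self, name, value):
        """Atomic: set a layer's opacity (0.0-1.0)."""
        self.layers[name]["opacity"] = value

# The script layer (written by a developer or an LLM) assembles the
# atomics into something task-shaped and idempotent.
def add_watermark_layer(doc, name="watermark", opacity=0.3):
    if name not in doc.layers:
        doc.create_layer(name)
    doc.set_opacity(name, opacity)

doc = Doc()
add_watermark_layer(doc)
print(doc.layers["watermark"]["opacity"])  # 0.3
```

The point of the split is that the atomic layer stays small and stable while the scripts above it can be thrown away, regenerated, or saved and re-run.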
If you try it with Cowork and Opus 4.7 (recommended), you'll probably see it try a few different technical approaches and iterate as it works to accomplish the task. While that's less token-efficient, the benefit is flexibility/power, and once you have a solid script, you can save it and use it again and again without any token costs.
Thanks for the info, this might be off-topic but does the SDK allow calling out to AI like Gemini/Nano Banana for generating fill areas, etc?
It's good that they prefaced it with "Claude can't replace taste or imagination". I think this is a solid step in the right direction, and the more tools Claude has access to the better (more surface area == faster iteration == faster tinkering).
I've worked with Claude in many creative capacities, and its issue is that despite being able to see, if you ask it to draw something (using ASCII, for example) it will fail; if you ask it to iterate on that drawing, it will continue to fail, get no closer to the target, and then complain about this.
I've felt that these models struggle with anything that cannot be decomposed into primitives; their architecture is too greedy and favours the obvious, so autoregressive generation converges on the modal answer. Unless they have enhanced the models in some creative sense, I fail to see how this is anything other than giving Claude a bunch of documentation/MCP servers/APIs/CLI tools (which already existed) and making an announcement out of it.
My point: FREE the models, unchain them, and let's see what they are actually capable of. Also, put some damn demos in the announcement post???
I hooked Claude up to Ableton's Python API last year and it seemed pretty promising (https://m.youtube.com/watch?v=2WxSB75U6vg), and more recently I created a skill for Claude to manipulate Ableton arrangements (which aren't exposed via the API, so you need to manipulate the file, which is zipped XML, instead).
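The file-manipulation trick mentioned above boils down to the fact that an Ableton Live set (`.als`) is gzip-compressed XML. A minimal round-trip sketch, assuming the root element is named `Ableton` (check against your own files; the demo below uses a tiny stand-in document, not a real set):

```python
# Sketch: read/write an Ableton .als file as gzipped XML.
import gzip
import os
import tempfile
import xml.etree.ElementTree as ET

def load_als(path):
    """Decompress an .als file and return the parsed XML root element."""
    with gzip.open(path, "rb") as f:
        return ET.fromstring(f.read())

def save_als(path, root):
    """Serialize the XML tree and re-compress it back into an .als file."""
    data = ET.tostring(root, encoding="utf-8", xml_declaration=True)
    with gzip.open(path, "wb") as f:
        f.write(data)

# Round-trip demo on a tiny stand-in document (a real set is far larger).
path = os.path.join(tempfile.mkdtemp(), "demo.als")
save_als(path, ET.fromstring("<Ableton><LiveSet/></Ableton>"))
root = load_als(path)
print(root.tag)  # Ableton
```

From here, edits to the arrangement are plain `ElementTree` manipulations between `load_als` and `save_als`; keep a backup of the original file, since a malformed write will leave Live unable to open the set.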
Both seemed pretty promising and fit with how I'd like AI to assist me with creative tasks rather than replace me.
This reminds me I should open source them as I’ve had no time to do more work on them!
Using LLMs to create tools is an amazing use case that I wish more people focused on
"Available on Pro plans. Maybe. The only thing I can tell you for sure is that Terms and Conditions will change tomorrow. Still can't differentiate tabs and spaces[1]."
[1] https://github.com/anthropics/claude-code/issues/11447#issue...
Dunno what you're quoting but it's not the linked issue.
Lots of creative software has custom scripting with its own syntax, like Photoshop actions. I'm glad to point Claude at things like that. Inkscape has an extension API that would be interesting to vibe-code with. This is not the real danger for creatives.
Right now we're seeing moves to record the behaviour of operators of all kinds of software. That will eventually be distilled into sets of automations for agents to use. To me that's far more labor-targeted and extractive than generative AI.
Is there any meaningfully original art that’s come out of all this creativity yet? Something that actually stuck?
Using LLMs for creative work is quite different to using diffusion models for creative work. It normally means writing tools or automation processes to enhance the creative flow, not replacing the creative input of a human.
I've been experimenting with an unofficial Ableton MCP (https://github.com/ahujasid/ableton-mcp) for a few weeks now. If you mess around with music and have an Ableton license, you should try this. It's fun.
Longtime Ableton Suite user and musician/producer. I have nothing against AI music (though it tends to be rather boring/average IMO), but it just fundamentally makes zero sense to me to have AI write music in Ableton. I open the program to create so others can hear me. Why would I give that time to creating something that isn’t me? It’s like setting up a canvas and handing the paintbrush to a robot. It just seems a rather strange waste of time. I would rather use it for something I don’t consider self-expressive/art.
You could also just use it as a voice interface, not touching the keyboard and just saying "Computer, add reverb to my chords" or "Computer, switch the 808 drums out for 909 drums but keep the pattern and effects".
Would be curious to hear what you've tried with it!
Not the one you asked, but the github has a single example: https://www.youtube.com/watch?v=VH9g66e42XA
I don't think that word means what Anthropic thinks it means.
I'm curious to see how Claude can interact with Blender, and how people use it. I use Claude every day for both work and personal research, overall think it's a great product, but I've found it (thus far, never bet against generation n+1) remarkably terrible at spatial reasoning. That seems pretty key for Blender!
I look forward to trying this with Fusion. I'm still pretty mid-level at translating what I want to do into actual step-by-step commands. I've actually had good results using Claude to output 3D models via CadQuery, even though I know Fusion gives me additional tools like constraints, screw threads, etc.
There's a bug in today's version of the Claude desktop app which means the settings pages cannot be scrolled. If you're running it on a laptop, some settings are off the bottom of the screen and now inaccessible.
My trick for when the desktop app is buggy is I have Claude decompile it and fix the issue. I have a series of a few patches (I think this is one of them)
I think there's a really good opportunity to also incorporate Claude into GIMP.
I hope that Claude will replace the Firefly speech-to-text models that Adobe has in Premiere. They are so bad. If you know, you know.
“We are building tools to enable entertainment companies to lay people off, and absorb a percentage of those salaries as revenue.”
To repeat what’s been said before, the only way the large AI vendors can get a return on their huge investment is to eat the entire economy ($20/month won’t cut it). All information worker jobs are at risk and the creative ones are not immune.
Cool. Where's the demos?
> Claude can't replace taste or imagination...
Also, can't generate basic images natively in 2026. So much for AGI.
I tried the connection to Adobe Creative Cloud. Not sure what to think - it’s a total joke from what I can see. It appears to be normal Claude with the ability to upload the results directly to your Creative Cloud, which I suppose saves me like 2 clicks. In return it wants access to all of your CC files.
This is a joke. Apologies, but the supposedly "creative", ridiculous, and disrespectful title cannot be serious, so I won't even bother to read it, since it's obvious click-bait for yet another model ad from another vendor.