Comment by klaussilveira
3 days ago
One thing no 3D AI tool has ever done is focus on enhancing or restyling the textures of existing, UV-unwrapped 3D models. I had to build my own pipeline with ComfyUI and Blender scripts, exporting ID maps and black/white masks from the model's UVs, in order to get Stable Diffusion to paint within the UV boundaries and treat them as a painting surface. Using cavity maps also helped the model respect boundaries. But now I am able to quickly apply, let's say, comic-book-style art to the textures of existing models.
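The black/white mask part of that pipeline is simple to sketch. This is just a toy illustration (the `uv_mask` helper is made up, and it rasterizes hand-fed UV triangles with PIL rather than pulling them from Blender), not my actual ComfyUI/Blender setup:

```python
from PIL import Image, ImageDraw

def uv_mask(uv_triangles, size=512):
    """Rasterize UV shells as white polygons on a black background.

    uv_triangles: list of triangles, each a list of (u, v) pairs in [0, 1].
    Returns a greyscale image usable as an SD inpainting/ControlNet mask.
    """
    img = Image.new("L", (size, size), 0)  # black background
    draw = ImageDraw.Draw(img)
    for tri in uv_triangles:
        # UV origin is bottom-left; image origin is top-left, so flip v.
        pts = [(u * (size - 1), (1 - v) * (size - 1)) for u, v in tri]
        draw.polygon(pts, fill=255)  # white UV shell
    return img

# toy example: one triangle covering part of the UV square
mask = uv_mask([[(0.1, 0.1), (0.9, 0.1), (0.5, 0.9)]])
mask.save("uv_mask.png")
```

In the real pipeline you'd pull the UV loops out of the mesh (e.g. via Blender's Python API) instead of hard-coding them.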
Have you considered providing built-in tools for mesh decimation and UV unwrapping? I know it can be done quickly with MeshLab, but I imagine not a lot of Adam users would even understand the need for decimation. Any possibility of also automating rigging?
I suspect that generating textures directly in UV space is a bit of a dead end because UV maps are object-specific, so flux/SD/etc are going to have a hard time understanding the inputs.
I've been experimenting with image-to-image (and video-to-video) for basic texture projection, which I think shows promise:
https://bsky.app/profile/nickfisherau.bsky.social/post/3lqrl...
I just started diving into this today too:
https://github.com/YixunLiang/UniTEX
which works in "volumetric space" (for lack of a better term), which I think makes a lot more sense.
It actually performs remarkably well with mask maps as ControlNets. You can use a simple black background with white UV shells, and that will give you OK results. But then you expand to curvature maps, just like you would with line-art ControlNets, and you have a controlled painting.
Even simple black/white shells work well, like here: https://x.com/_hackmans_/status/1644811607799738371
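If you don't have a baked curvature/cavity map handy, a rough stand-in is the Laplacian of a height map: concave areas go bright, convex edges go dark. A toy numpy sketch (the `cavity_map` function is made up for illustration; this is not how any particular ControlNet preprocessor does it):

```python
import numpy as np

def cavity_map(height, strength=1.0):
    """Approximate a cavity/curvature map from a height map.

    A positive Laplacian means concave (cavity), negative means convex
    (edge). Remapped to [0, 1] with 0.5 as flat, similar to the grey
    midtone of typical curvature conditioning images.
    """
    h = height.astype(np.float64)
    # 4-neighbour Laplacian via shifts (periodic boundary from np.roll)
    lap = (np.roll(h, 1, 0) + np.roll(h, -1, 0) +
           np.roll(h, 1, 1) + np.roll(h, -1, 1) - 4 * h)
    return np.clip(0.5 + strength * lap, 0.0, 1.0)

# toy height map: one raised pixel reads as convex with a concave rim
h = np.zeros((5, 5))
h[2, 2] = 1.0
cm = cavity_map(h)
```

Scale `strength` to taste before feeding the result in as a conditioning image.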
yeah you’re pretty much bang on. we haven’t exposed mesh decimation and similar functionality mostly because a lot of Adam users are newer to 3d, but that can change! the question is how to surface those kinds of features in a more user-friendly way.
we were thinking of rigging for the creative mode. we want to create more fun ways for our users to share their generations and animations could be a step towards that. would you be interested in that feature?
Oh, I don't think you guys should even expose it other than in the export feature: "Export Optimized" or "Export for Game Engines".
I think Mixamo nailed autorigging years ago. Anything similar to that is good enough. If you guys want to go fancy, check out Cascadeur and what they are doing. For a cool skinning algo, you might want to check out voxel heat diffuse skinning:
https://github.com/meshonline/Surface-Heat-Diffuse-Skinning
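The rough idea behind that approach: treat each bone as a heat source, diffuse heat over the mesh (or a voxelization of it), and use the normalized equilibrium temperatures as skin weights. A toy graph-based sketch of that idea, with a made-up `heat_diffuse_weights` helper and an added damping term so heat actually falls off with distance (the real repo works on voxels and is far more involved):

```python
import numpy as np

def heat_diffuse_weights(adjacency, bone_verts, iters=500, alpha=0.5, damp=0.7):
    """Toy heat-diffusion skinning on a vertex graph.

    adjacency: (n, n) 0/1 matrix of mesh-vertex connectivity.
    bone_verts: per-bone lists of vertex indices pinned at heat 1.0.
    Each bone diffuses heat over the graph; per-vertex skin weights
    are the normalized equilibrium temperatures.
    """
    n = adjacency.shape[0]
    deg = adjacency.sum(axis=1)
    deg[deg == 0] = 1  # avoid division by zero for isolated vertices
    heats = []
    for pinned in bone_verts:
        t = np.zeros(n)
        for _ in range(iters):
            # damped averaging step: heat leaks a bit at every hop
            t = (1 - alpha) * t + alpha * damp * (adjacency @ t) / deg
            t[pinned] = 1.0  # the bone's vertices stay hot
        heats.append(t)
    w = np.stack(heats, axis=1)
    return w / w.sum(axis=1, keepdims=True)  # weights sum to 1 per vertex

# chain of 5 vertices with one bone pinned at each end
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
W = heat_diffuse_weights(A, [[0], [4]])
```

Each end vertex ends up dominated by its own bone, and the middle vertex splits 50/50 by symmetry.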
What about painting/iterating on the model while respecting the UV constraints and boundaries of the original texture? Is that on the roadmap?
will check out the skinning thanks! right now all iterations are done through prompts to make it feel as conversational as possible. some forms of in-painting could be cool though.