The current workflow is to create an image with Stable Diffusion, vectorize it, convert it to Lottie format, and then use a function that creates the Lottie animation keyframes.
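That pipeline could be sketched roughly like this, assuming the `potrace` CLI for vectorization and python-lottie's `lottie_convert.py` script for the SVG-to-Lottie step (both are guesses at the tooling, not confirmed choices; the keyframe function is the custom part and is left out):

```python
import subprocess

def vectorize_cmd(bitmap_path, svg_path):
    # potrace traces a bitmap (PBM/PGM/PPM/BMP) into vectors; --svg picks the SVG backend
    return ["potrace", "--svg", bitmap_path, "-o", svg_path]

def to_lottie_cmd(svg_path, lottie_path):
    # lottie_convert.py ships with the python-lottie package (assumption)
    return ["lottie_convert.py", svg_path, lottie_path]

def run_pipeline(bitmap_path, svg_path="out.svg", lottie_path="out.json"):
    # Step 2 and 3 of the workflow; step 1 (Stable Diffusion) happens upstream,
    # and step 4 (keyframe generation) would consume out.json afterwards.
    subprocess.run(vectorize_cmd(bitmap_path, svg_path), check=True)
    subprocess.run(to_lottie_cmd(svg_path, lottie_path), check=True)
    return lottie_path
```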
How would you rig whatever is in the Stable Diffusion generated image automatically? You'd have to find a way to identify the objects in the image, segment them into their individual limbs/pieces, and then rig and animate them.
Maybe Meta's open-source segmentation tool (Segment Anything) would be handy. Or maybe you could rig the person/animal/object in the image using one of the object detection models on HuggingFace.
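For the segmentation step, a minimal sketch using Segment Anything through the HuggingFace `transformers` "mask-generation" pipeline might look like the following; the model id and the reduction of each mask to a bounding box (as a crude stand-in for a riggable "part") are my assumptions, not anything the project has committed to:

```python
import numpy as np

def mask_to_bbox(mask):
    """Bounding box (x0, y0, x1, y1) of a boolean HxW mask."""
    ys, xs = np.where(mask)
    return int(xs.min()), int(ys.min()), int(xs.max()) + 1, int(ys.max()) + 1

def segment_parts(image_path):
    # Assumes `transformers` and Pillow are installed and SAM weights can be
    # fetched; "facebook/sam-vit-base" is one of Meta's published checkpoints.
    from transformers import pipeline
    from PIL import Image
    generator = pipeline("mask-generation", model="facebook/sam-vit-base")
    out = generator(Image.open(image_path), points_per_batch=32)
    # One box per predicted mask -- candidate pieces to rig downstream.
    return [mask_to_bbox(np.array(m)) for m in out["masks"]]
```

Matching boxes to limbs (vs. background blobs) is the hard open part; a detection model from the HuggingFace hub could help label which segments are body parts.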
This seems like a really interesting project. I'd love to check it out if you plan on open sourcing it!