Comment by exz
5 months ago
The current workflow is: create an image with Stable Diffusion, vectorize it, convert it to Lottie format, and then use a function that can create the Lottie animation keyframes.
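The last step of that workflow can be illustrated with a small sketch. This is not the author's actual code, just a hedged example of what "a function that can create the Lottie animation keyframes" might look like, building the JSON fields from the Lottie schema (animated properties use `{"a": 1, "k": [keyframes]}`) by hand with plain dicts:

```python
import json

def lottie_position_keyframes(points, fps=30, size=512):
    """Build a minimal Lottie animation that moves one shape layer
    through the given (frame, x, y) keyframes.

    Illustrative sketch of the Lottie JSON schema only -- a real
    pipeline would also carry over the vectorized shape data.
    """
    keyframes = [
        {"t": frame, "s": [x, y]}           # "t": frame time, "s": value
        for frame, x, y in points
    ]
    layer = {
        "ty": 4,                            # 4 = shape layer
        "ks": {                             # layer transform
            "p": {"a": 1, "k": keyframes},  # animated position
            "o": {"a": 0, "k": 100},        # static opacity
        },
        "ip": 0,                            # layer in-point
        "op": points[-1][0],                # layer out-point
    }
    return {
        "v": "5.7.0",                       # Lottie schema version
        "fr": fps,                          # frame rate
        "ip": 0,
        "op": points[-1][0],
        "w": size, "h": size,
        "layers": [layer],
    }

# Move a layer out and back over 60 frames, then serialize.
anim = lottie_position_keyframes([(0, 0, 0), (30, 100, 50), (60, 0, 0)])
lottie_json = json.dumps(anim)
```

In practice you would generate such a file per animated layer and let a Lottie player (lottie-web, lottie-ios, etc.) interpolate between the keyframes.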
How would you automatically rig whatever is in the Stable Diffusion-generated image? You'd have to find a way to identify the objects in the image, segment them into their individual limbs/pieces, and then rig and animate them.
Maybe Meta's open-source segmentation tool (Segment Anything) would be handy. Maybe you could rig the person/animal/object in the image using one of the object-detection tools on Hugging Face.
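To sketch what "segment, then rig" could mean concretely: once a segmentation model has produced one binary mask per limb/piece, the minimal data a rig needs is a bounding box and a pivot point per part. The function below is a hedged, hypothetical example of that preparation step (the names `masks_to_rig_parts` and the centroid-as-pivot choice are my assumptions, not any existing tool's API):

```python
import numpy as np

def masks_to_rig_parts(masks):
    """Given per-part binary masks (e.g. from a segmentation model
    such as Segment Anything), compute a bounding box and a pivot
    point for each part -- the minimal data needed to attach a
    simple rotation rig.

    `masks` maps part name -> 2D boolean array over the image grid.
    """
    parts = {}
    for name, mask in masks.items():
        ys, xs = np.nonzero(mask)           # pixel coords of the part
        if len(xs) == 0:
            continue                        # skip empty masks
        bbox = (int(xs.min()), int(ys.min()),
                int(xs.max()), int(ys.max()))
        pivot = (float(xs.mean()), float(ys.mean()))  # centroid as pivot
        parts[name] = {"bbox": bbox, "pivot": pivot}
    return parts

# Toy example: a 4x6 image where an "arm" occupies columns 1..3 of row 2.
arm = np.zeros((4, 6), dtype=bool)
arm[2, 1:4] = True
rig = masks_to_rig_parts({"arm": arm})
```

A real rig would also need a joint hierarchy (which part attaches to which), which is the genuinely hard part to infer automatically.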
This seems like a really interesting project. I'd love to check it out if you plan on open sourcing it!
Actually, there are models for doing the layering:
1. https://github.com/SketchSeg/SketchSeg-Natural-Prior
2. https://github.com/IamCreateAI/LayerAnimate
3. https://github.com/hmrishavbandy/FlipSketch