Show HN: Kaedim – API for 3D User Generated Content
Hi HN, I am Konstantina from Kaedim (https://www.kaedim.com). Kaedim uses ML to transform 2D art, sketches, or photos into 3D content, and our API makes it easy to integrate 3D user-generated content into games or metaverses.
Creating digital 3D objects is difficult and expensive. The supply of skilled 3D artists is very limited, and training one is costly: it usually takes years of learning difficult 3D software. Yet more and more of the digital experiences around us are turning into 3D.
I needed this product myself. The idea for Kaedim was born from personal frustration when, 2 years ago, I was re-creating a cathedral in 3D software for my university degree. Before getting hands-on, the concept seemed straightforward to me: "the same way you draw on a piece of paper, you can also draw in 3D; how hard can it be?". The reality shocked me. Having completely underestimated the task, I found myself needing hours to model each 3D object (chairs, tables, walls) using complicated 3D software with a steep learning curve.
Every time I wanted to model something new, I had to start from a cube and perform all the necessary operations on it to achieve the desired shape. Over and over again. Many times I would bin my creation and start from scratch, hoping for better luck. The reality with 3D modelling software is that it's almost always easier to start from scratch than to try to fix a modelled object.
After experiencing the problem first-hand, I started thinking about game devs: "Game developers have this problem at scale; they need to build whole 3D worlds with millions of objects. How do they do it?". So we started talking to them, only to discover there is no secret. 3D asset production is a big bottleneck for them too, for exactly the same reasons: too few skilled artists, each taking years to train on difficult 3D software.
Our solution is an ML algorithm that creates 3D models out of 2D images. We are constantly training on more data points to improve accuracy, and we have added a Quality Control step to guarantee a consistent standard of quality. We then use the QC results to train our algorithms further.
So far, artists and game devs have used Kaedim to quickly and cost-effectively prototype, create, and iterate on their 3D art. However, talking to a lot of game developers, we realised something key: for the same reason games like Minecraft and Roblox are so popular, more and more people want the opportunity to customise and contribute 3D content inside their favourite games/metaverses.
This is why we created the Kaedim API: within your app, enable your players to upload their 2D inspiration and easily create their own 3D content for customising and populating the game.
Kaedim API Demo Video: https://www.youtube.com/watch?v=k976GJWQrKw
Documentation: https://app.archbee.io/public/m370vHO-M7WGXJQLRlIte/AU-DhH6mX0e1sb_FRXH3i#lk-useful-links
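To make the integration concrete, here is a minimal sketch of submitting a player's image from Python. All names in it (endpoint path, auth header, form fields, response field) are hypothetical placeholders; the documentation linked above defines the actual contract.

    # Hypothetical sketch of submitting a player's 2D image to the Kaedim API.
    # Endpoint, header, and field names are placeholders, not the real contract.
    import requests

    API_KEY = "your-api-key"  # issued during onboarding (assumption)

    def submit_image(image_path: str, webhook_url: str) -> str:
        """Upload a 2D image; the generated 3D model is later sent to webhook_url."""
        with open(image_path, "rb") as f:
            resp = requests.post(
                "https://api.kaedim.com/v1/process",  # hypothetical endpoint
                headers={"Authorization": f"Bearer {API_KEY}"},
                files={"image": f},
                data={"webhookUrl": webhook_url},
            )
        resp.raise_for_status()
        return resp.json()["requestId"]  # hypothetical response field

    request_id = submit_image("player_sketch.png", "https://yourgame.example/webhooks/kaedim")
    print("queued:", request_id)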
For sign-up and more information about onboarding, get in touch with us here.
Discord Server: https://discord.gg/4wN8NSUr
Thanks a lot for reading this! We are adding more and more features over time and would love to hear your feedback and ideas on what you’d like to see from the API.
If you have any cool app ideas that can be built by using Kaedim, drop them in the comments!
From the video I gather that texturing is still a manual step? I'm a little confused how your editor showed the model without a texture, yet you were then able to do a perfect color fill on the different parts of the model. One of the most difficult parts of modeling is the texturing (bump/normal map, albedo/lighting, color, etc.), with lots of trade-offs for how big your texture is and how much can be re-used, not to mention the actual mapping stage, which even the best "smart" auto-mapping tools do just OK.
I'm impressed by the model demo, but a lot of the time comes from determining the style (conceptual design), then implementing that style within the details (the baking and painting aspects of creating the model textures). You mention Roblox/Minecraft, and the demo uses a kind of low-poly metaverse social app that your model fits well, so I'm wondering who your target market is. I assume it's games/apps with high-volume, low-detail models at the moment; is this correct?
Thanks for the comment! Yes, texturing is a manual step using a small widget we've built. We do a monochrome fill for the different parts of the model. It's easy to do because the model is generated as separate parts. Then, as it enters the game environment, it's affected by lighting too.
Yes, that is mostly correct. The focus is games/apps with high-volume, low-detail models. However, we can also do higher detail/resolution; some of our customers are PC and console studios, and they use it for prototyping, blocking out scenes, and iteration.
Our website and this video feature some higher detail models: https://youtu.be/jSZ7RMq5EKA
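For readers curious how a monochrome fill can work when a model arrives as separate parts, here is a small sketch with the trimesh library. It is only an illustration of the idea, not Kaedim's actual widget, and it assumes the generated OBJ loads as a scene of distinct parts.

    # Illustration of per-part monochrome fills (not Kaedim's widget):
    # load a multi-part model and give each part one flat colour.
    import trimesh

    scene = trimesh.load("model.obj", force="scene")  # one geometry per part

    palette = [(200, 60, 60, 255), (60, 160, 80, 255), (70, 90, 200, 255)]
    for i, mesh in enumerate(scene.geometry.values()):
        # replace any existing material with a single uniform face colour
        mesh.visual = trimesh.visual.ColorVisuals(
            mesh=mesh, face_colors=palette[i % len(palette)]
        )

    scene.export("model_colored.glb")  # lighting is then applied in-engine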
> Transform your 2D art into 3D Content
One suggestion: The example on your home page, to me, looks fairly three-dimensional (as far as 2D sketches go), with shading and everything. I would suggest a better example might help with capturing potential customers.
Looks great though
Thanks a lot for the feedback and suggestion! Taking it on board!
Can we see more examples?
(Including some less successful ones and some failure cases ideally)
Yup, would love to see more examples of how this works!
Thanks both! Here is a video demonstrating the web app version, with 3 more examples (one sketch, one concept art, one photo): https://youtu.be/jSZ7RMq5EKA Let me know what you think!
Here are some inputs and outputs: https://i.imgur.com/52Vneuv.png
Haha thanks orliesaurus ;)
Is that AI-based, e.g. like https://www.arxiv-vanity.com/papers/1511.06702/ ?
BTW, great job improving over the public state of the art.
Thanks for the comment and kind words! Indeed it is!
What are the current limitations of the model which you're working on? Could you share some examples where the output is currently sub-optimal and what steps you're taking towards improving it?
Hi Alex, thanks for the question! Currently, we do not produce great results for realistic humans, animals, and vegetation; those requests are not served, as we haven't yet trained on these kinds of inputs. Once our geometry reconstruction has high accuracy for hard-surface objects, we'll move on to this category: collect data, train, and improve.
> Once our geometry reconstruction has high accuracy for hard-surface objects we'll move on to this category, collect data, train and improve.
Is that really possible, though? For complex objects there should be a multitude of 3D structures that fit a given 2D projection, and there's really no way to say which one is "the most correct one".
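The underdetermination is easy to see in code: an orthographic projection simply drops depth, so distinct 3D point sets can produce pixel-identical 2D views. A toy numpy demonstration:

    # Toy demo that 2D -> 3D is underdetermined: orthographic projection
    # along z discards depth, so different geometry can look identical.
    import numpy as np

    def project_xy(points: np.ndarray) -> np.ndarray:
        """Orthographic projection onto the image plane: drop z."""
        return points[:, :2]

    quad_near = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], float)
    quad_far = quad_near + np.array([0, 0, 5.0])  # same x/y, different depth

    assert np.allclose(project_xy(quad_near), project_xy(quad_far))
    print("Different 3D geometry, identical 2D projection.")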
New Discord link: https://discord.gg/ZY6wJwKDCa
How much prior work does the uploaded picture need? Is a simple background enough, or does further manual processing have to be done on the 2D picture before uploading it (e.g., adding metadata, etc.)?
Hello JaafarRammal, a simple background is enough as long as the main object is clearly distinguishable. No extra metadata is needed, just the image :)
Can this convert a 2d floorplan in DWG format into a 3d model?
Thanks for the question! Currently DWG is not a viable input. We once tried it with a photo of a maze-like floor plan, and it just created the "walls", if you like.
That could be interesting. Any way I could try it?
Sounds awesome! Can we provide back and front images of the object? How do you achieve realistic 3D representations?
Thanks for the question mariankh! The API version accepts only a single image for the time being. However, if you are using our Web App, there is also an option for uploading up to 6 images :) For realistic 3D representations, we train on a lot of real-life objects, and we also do a Quality Assurance pass to make sure that all our outputs meet our quality standard.
Interesting! Can you share any insight into how your ML algorithm works?
Hi 4thstreet, thanks for your question! Yes, we train on 2D-3D pairs. For example, we take 3D models and capture images of them, so the algorithm learns how objects look from different points of view :)
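For readers wondering what "capturing images of 3D models" might look like in practice, here is a hedged sketch of generating 2D-3D training pairs by rendering an asset from several viewpoints with trimesh and pyrender. Kaedim's actual data pipeline is not public; this only illustrates the idea, and it assumes the asset roughly fits in a unit box at the origin.

    # Sketch: build 2D-3D training pairs by rendering one asset from
    # several camera angles (illustration only, not Kaedim's pipeline).
    import numpy as np
    import trimesh
    import pyrender
    import imageio

    mesh = pyrender.Mesh.from_trimesh(trimesh.load("asset.obj", force="mesh"))
    renderer = pyrender.OffscreenRenderer(256, 256)

    for i, yaw in enumerate(np.linspace(0, 2 * np.pi, 8, endpoint=False)):
        scene = pyrender.Scene(ambient_light=[0.3, 0.3, 0.3])
        scene.add(mesh)
        # orbit the camera around the y-axis at radius 2, looking at the origin
        pose = trimesh.transformations.rotation_matrix(yaw, [0, 1, 0])
        pose[:3, 3] = pose[:3, :3] @ np.array([0.0, 0.0, 2.0])
        scene.add(pyrender.PerspectiveCamera(yfov=np.pi / 3.0), pose=pose)
        scene.add(pyrender.DirectionalLight(intensity=3.0), pose=pose)
        color, _ = renderer.render(scene)
        imageio.imwrite(f"asset_view_{i}.png", color)  # paired with asset.obj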
But doesn't this limit you? I mean, the work to collect that data is such a pain. While I'm glad you are doing this, is there any other method?
Also, what are your thoughts on systems like Apple's Object Capture?
Sounds awesome!
Thanks!
Can this be used to generate 3D print files? (stl)
Thanks Peter! Yes, we can add this download format as well (we currently have obj, fbx, glb, gltf)
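In the meantime, converting a downloaded OBJ to STL locally is a two-liner, for example with the trimesh library (a local workaround, not a Kaedim feature):

    # Local workaround: convert a downloaded OBJ to STL for 3D printing.
    import trimesh

    mesh = trimesh.load("kaedim_model.obj", force="mesh")
    mesh.export("kaedim_model.stl")  # format inferred from the extension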
Thank you!
How does the API work? Is it webhooks?
Hi kyriakosel, thanks for your question! Yes, we use webhooks to send the generated 3D model to our clients. Our documentation includes more detail: https://app.archbee.io/public/m370vHO-M7WGXJQLRlIte/AU-DhH6m...
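For anyone curious what the client side of that flow looks like, here is a minimal webhook receiver sketch using Flask. The payload field names ("requestId", "modelUrl") are assumptions for illustration; the linked documentation defines the actual schema.

    # Minimal sketch of a webhook endpoint that receives the finished model.
    # Payload field names are assumptions; see the docs for the real schema.
    from flask import Flask, request
    import requests

    app = Flask(__name__)

    @app.route("/webhooks/kaedim", methods=["POST"])
    def kaedim_webhook():
        payload = request.get_json()
        model_url = payload["modelUrl"]    # hypothetical field name
        request_id = payload["requestId"]  # hypothetical field name
        # download the generated 3D model and store it for the app to load
        with open(f"{request_id}.glb", "wb") as f:
            f.write(requests.get(model_url).content)
        return "", 204

    if __name__ == "__main__":
        app.run(port=8080)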
Is it a websocket API?
Hello Raoufyousfi, thank you for your question! We currently only use REST + webhooks.
Sounds Awesome!
Thanks gieun!
great idea!
:)