I was thinking about how difficult it is to make 3D characters look as good as characters in drawn art, on top of the effort of modeling a character from scratch in the first place. Although you can assign a background image to the various camera angles and model the initial mesh to fit its outlines, the result still isn’t always accurate and takes a lot of manual work.
So an idea came to mind: What if Blender could automatically generate character and object models from images? For drawn characters, reference sheets could be used. People could also take photos of themselves against a green screen, and use them to automatically scan themselves into Blender, without having to use expensive 3D scanning equipment. They could also scan objects or even whole rooms! My imagined solution is this:
The addon would require an image of the character or object from 6 angles: front, back, left, right, top and bottom… which it would then have to combine and interpret into a single 3D result. To make the task easier for Blender, the image(s) could first be modified in an image editing tool like Photoshop or GIMP to remove the background, so Blender only scans the character’s outline without other elements getting in the way.
The next tricky part is knowing which areas are flat and which are round. Even if cutting an alpha channel around the character makes the outline clear, that doesn’t say which parts are raised and which are flat. For this, the artist would need to edit the drawing once more to create a grayscale bump map. This should be easy to accomplish by converting the original image(s) to grayscale, then painting out any brightness differences caused by colors that don’t represent depth.
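To illustrate the grayscale conversion I mean, here’s a standalone Python sketch (not Blender API, and the function name is made up) that turns RGB pixels into a 0–1 depth map using the standard luminance weights — the same thing the artist would otherwise do by hand in GIMP:

```python
def to_heightmap(pixels):
    """Convert an RGB image (rows of (r, g, b) tuples, values 0-255)
    into a grayscale depth map in the 0.0-1.0 range using the
    standard luminance weights (ITU-R BT.601)."""
    return [[(0.299 * r + 0.587 * g + 0.114 * b) / 255.0
             for (r, g, b) in row]
            for row in pixels]

# A 1x2 test image: pure white reads as full depth, pure black as none.
heights = to_heightmap([[(255, 255, 255), (0, 0, 0)]])
```

Of course the artist would still need to repaint areas where a color is dark but not actually recessed — no formula can know that.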
Additionally, a good tool would also generate the UV map and texture automatically. The texture would obviously be created by cutting and stitching pieces of the drawing and mapping them to the areas of the model they represent. Since the image used to create the texture is the same as the image used to create the model, this should be achievable.
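As a rough sketch of the mapping step, here’s a hypothetical front-projection UV assignment in plain Python (the names are made up; real code would write into Blender’s UV layers). It just normalizes each vertex’s X/Y position into 0–1, which lines the front drawing up with the front of the model:

```python
def front_project_uv(vertices):
    """Assign UVs by projecting (x, y, z) vertices onto the front
    plane, normalized to the mesh's bounding box -- a simple planar
    projection from the front view."""
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    min_x, max_x = min(xs), max(xs)
    min_y, max_y = min(ys), max(ys)
    return [((x - min_x) / (max_x - min_x),
             (y - min_y) / (max_y - min_y))
            for (x, y, z) in vertices]

uvs = front_project_uv([(0, 0, 5), (2, 1, 3), (4, 2, 0)])
```

A full solution would do one such projection per view and blend them at the seams, which is where the real difficulty lies.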
Part of this should already be possible: make a plane, subdivide it enough times, set a front-view drawing of the character as a displacement map (texture influence on a Displace modifier), assign the same image as the texture and cut the background off, and you might get a somewhat acceptable result. The sad part is that this doesn’t create a 3D character from all angles… if anything, it can only be used to make your character look like an outline of itself pressing against a sheet and creating bulges.
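In plain Python, that displacement idea amounts to something like this — a toy sketch of what the modifier does internally, not actual Blender code:

```python
def displace_plane(heightmap, strength=1.0):
    """Build a subdivided plane with one vertex per heightmap sample,
    pushing each vertex along +Z by the sampled brightness -- roughly
    what a grayscale texture driving a Displace modifier does."""
    rows = len(heightmap)
    cols = len(heightmap[0])
    verts = []
    for j in range(rows):
        for i in range(cols):
            x = i / (cols - 1)   # plane coordinates in 0..1
            y = j / (rows - 1)
            z = heightmap[j][i] * strength
            verts.append((x, y, z))
    return verts

# 2x2 heightmap: one corner fully raised, one half raised.
verts = displace_plane([[0.0, 1.0], [0.0, 0.5]], strength=2.0)
```

Which makes the limitation obvious: every vertex only ever moves along one axis, so you get a relief, never a full character.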
Another approach might be to put all 6 directional textures on the faces of a cube in the right order, set the cube’s material to “volume”, then have each texture project itself inward… in such a way that each image contributes depth data accordingly. But even if an algorithm to do this were possible, can the density of a volume be solidified into a mesh? The nice part here is that this could produce a voxel model, rather than just a polygon mesh.
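What I’m imagining is basically a visual hull carved on a voxel grid. Here’s a toy sketch in plain Python (axis conventions and per-view flips are simplified; a real implementation would have to match Blender’s actual view orientations, and the surviving voxels could then be turned into a mesh with something like marching cubes):

```python
def carve_visual_hull(masks, n):
    """Carve an n x n x n voxel grid using six binary silhouettes.
    masks maps a view name to an n x n grid of 0/1 (1 = inside the
    character's outline). A voxel is kept only if every view's
    silhouette covers its projection -- the classic visual hull.
    Opposite views share a projection axis: front/back test (x, y),
    left/right test (z, y), top/bottom test (x, z)."""
    solid = set()
    for x in range(n):
        for y in range(n):
            for z in range(n):
                if (masks["front"][y][x] and masks["back"][y][x]
                        and masks["left"][y][z] and masks["right"][y][z]
                        and masks["top"][z][x] and masks["bottom"][z][x]):
                    solid.add((x, y, z))
    return solid

# Five full silhouettes plus a front view with one corner cut away:
full = [[1, 1], [1, 1]]
masks = {name: full for name in
         ("front", "back", "left", "right", "top", "bottom")}
masks["front"] = [[1, 0], [1, 1]]
hull = carve_visual_hull(masks, 2)
```

The known weakness of this construction is that concavities invisible in every silhouette (like the inside of a bowl) can never be carved out.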
Do you know of any similar but complete solution for Blender, or even any tips and tricks to consider?