Generate a 3D mesh from a 2D picture with machine learning

Paper here: http://bigvid.fudan.edu.cn/pixel2mesh/


(a) Input image; (b) Volume from 3D-R2N2 [1], converted using Marching Cubes [4]; (c) Point cloud from PSG [2], converted using ball pivoting [5]; (d) N3MR [3]; (e) Ours; (f) Ground truth.

(I’m not affiliated with the project)

Very cool, and certainly the end of low-poly modelers; in time, as the research progresses, it will be the end of many other jobs…


Before anyone else starts feeling the weight of the T-4 crushing down on their skull:
It's column (e), not (f).

So after the algorithm spits out the model, you would still have to retopologize it and apply all of the texturing and shading.

That means by the time you have the generated model cleaned up and refined (provided you don't have access to something like ZRemesher), you could've just done the whole thing from scratch. Still, it's impressive that a computer can get that close from a 2D input, but a bit more research is needed into the actual production of clean topology.
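For anyone who wants to poke at that cleanup step, here's a minimal sketch of an automated first pass using Blender's Python API (bpy), written against the 2.8x-era API. The filename `pixel2mesh_out.obj` is made up; it assumes you've exported the network's output as an OBJ first. A remesh-plus-decimate pass is no substitute for proper retopology, it just makes the raw output easier to work with:

```python
# Quick cleanup pass for a generated mesh; run inside Blender (2.8x-era bpy API).
# The input filename below is hypothetical -- export the network's output as OBJ first.
import bpy

# Import the generated mesh (newer Blender versions use bpy.ops.wm.obj_import instead)
bpy.ops.import_scene.obj(filepath="pixel2mesh_out.obj")
obj = bpy.context.selected_objects[0]
bpy.context.view_layer.objects.active = obj

# Remesh to even out the dense, irregular triangulation these networks produce
remesh = obj.modifiers.new(name="Remesh", type='REMESH')
remesh.mode = 'SMOOTH'
remesh.octree_depth = 6  # higher = more detail preserved, more polygons

# Decimate down toward a workable polycount
decimate = obj.modifiers.new(name="Decimate", type='DECIMATE')
decimate.ratio = 0.3  # keep roughly 30% of the faces

# Apply both modifiers in order
for mod_name in ("Remesh", "Decimate"):
    bpy.ops.object.modifier_apply(modifier=mod_name)
```

Texturing, shading, and any real retopology still have to happen by hand after this, of course.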

Um… Sorry, but yeah… there will be slight crushing of skulls due to a T-4.