Deepest Fryer

You did these??? They were the first scans I ever played with, and now I’m all obsessed with trying to photoscan people. Can I ask how you did them??

Oh, Ian - you don’t want to know how: https://xoio-air.de/2014/3d-people-scanning-a-brief-introduction/

The best part is that we had this weird stuffed guy so we could scan the body separately from the head. Agisoft PhotoScan was very sensitive to micro-movement, so we had to do the body on its own. These days things are much better. I would try Agisoft again, but last time we tried itSeez3D for iPad, which was used for these:

It was fairly tolerant, but hey, that was already 3-4 years ago. So enlighten me, how are things done today :slight_smile: !

OH! This is gold! Thanks for sharing that; I totally assumed you had one of those big camera/light spheres! It’s actually a little surreal reading about other people troubleshooting the same stuff I’ve been working on for the past few months, haha

Actually, your models were a big inspiration to scan folks on a per-pose basis, instead of just trying to get a T-posed person I can manipulate. Folds, wrinkles, and tension in clothing are so much more accurate when the person is actually in that position.

I have no idea how things are done today (I assume mostly those light spheres?). My solution isn’t any better, and I think it’s probably only possible because of technology advances (RealityCapture tends to work so much faster/better than the other programs I was using). I snag pictures on cloudy days (lots of those in Seattle!), and use two light stands that the models can just barely rest the tips of their fingers on in T-pose. This provides a point of reference for the upper body (it’s DEFINITELY the only way I’ve been able to preserve the hands at all, hahaha).

I actually shoot footage (4K if I’m picky, 1080p if not), walking around and trying to keep everything in frame as much as possible, then going in closer for detail shots. I think RealityCapture uses the previous/next frames to help place each camera position, because keeping them all continuous like that seems to help a lot. I was able to scan a dog that way, too: just kept walking around him until he ignored me for a few seconds, then used the short bit where he stayed still, haha.

Then I speed up the footage 4-10x, export a JPG sequence, and throw it through RealityCapture. I’ve been gradually getting better results! Then I import the result into Blender, clean stuff up a bit, and rig it with Mixamo (it’s been the easiest way for me to get it to gel with mocap data, which is my end goal).
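If it helps, the speed-up/export step is basically just keeping every Nth frame. I do it in a video editor, but here’s a rough sketch of the equivalent in Python with OpenCV (the file names and the keep-every-8th choice are made up, tune to taste):

```python
# Sketch: thin a walkaround video down to a JPG sequence for photogrammetry.
# "walkaround.mp4" and the 1-in-8 keep rate are hypothetical examples.
import os
import cv2

os.makedirs("frames", exist_ok=True)
cap = cv2.VideoCapture("walkaround.mp4")

frame_idx = 0
kept = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % 8 == 0:  # keep every 8th frame (~8x "speed-up")
        cv2.imwrite(f"frames/frame_{kept:05d}.jpg", frame)
        kept += 1
    frame_idx += 1
cap.release()
print(f"kept {kept} of {frame_idx} frames")
```

The resulting frames/ folder is then what goes into RealityCapture as the image input.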

A lot of the time I still have trouble getting the shoulders right, and I’ve been gradually getting more picky about how much I clean stuff up after the fact. A lot of the time I’ll jump into sculpting, and that’s AMAZING for smoothing out random artifacts, and texture painting fixes any weird glitches. Once it gets to that stage, cleaning stuff up is really fun.
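If you want a scripted starting point for that cleanup pass before sculpting, a merge-by-distance over the raw scan mesh is a common first step for photogrammetry output. A rough bpy sketch (Blender 2.8x API; the file name and threshold are hypothetical and scale-dependent):

```python
# Sketch: import a RealityCapture export and merge near-duplicate vertices.
# "scan_export.obj" and the 0.5 mm threshold are placeholder values.
import bpy

bpy.ops.import_scene.obj(filepath="scan_export.obj")
scan = bpy.context.selected_objects[0]
bpy.context.view_layer.objects.active = scan

bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.remove_doubles(threshold=0.0005)  # "Merge by Distance" in the UI
bpy.ops.object.mode_set(mode='OBJECT')
```

After that, sculpt mode’s smooth brush does the heavy lifting on artifacts, like I mentioned.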

Actually, the new Blender 2.83 splash screen has a bunch of different scans I’ve done over the past few weeks (though you don’t get a great view of any of them, possibly for uncanny valley reasons :stuck_out_tongue: ).

I made a little video about it here, if you’re interested (I’ve learned quite a bit since I made that video, though).

My dreeaaammm is to have a pack of a few dozen vaguely-sci-fi-ish background people I can stick in a scene, then just drag and drop mocap data onto them to populate a background. I have a Rokoko motion capture suit, which means I can animate a whole crowd of people ridiculously quickly (which is amazing, since I’m a pretty crap character animator, haha). I was able to animate all these guys in like 5 minutes; this is craaaazy exciting technology! It’s all coming together!
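The quick-and-dirty version of the crowd trick, by the way: since every scan gets the same Mixamo skeleton, one mocap Action can just be assigned to every armature in the scene. A hedged bpy sketch (the action name is a placeholder, and this assumes the bone names actually match across rigs):

```python
# Sketch: share a single mocap Action across all Mixamo-rigged armatures.
# "MocapTake01" is hypothetical; assumes identical Mixamo bone naming.
import bpy

mocap_action = bpy.data.actions.get("MocapTake01")
for ob in bpy.data.objects:
    if ob.type == 'ARMATURE' and mocap_action is not None:
        if ob.animation_data is None:
            ob.animation_data_create()
        ob.animation_data.action = mocap_action
```

Pushing each rig’s action into the NLA and offsetting the strip start frames keeps the crowd from moving in lockstep.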

Anyways! Thanks for posting that. It’s fun talking to someone who’s poked around at this stuff too!


Hi Ian,
Wow, thanks for sharing your process - I really appreciate it. I love the heavy welding on the Hamburger - it will be a real killer once it is operational!

You know, the whole idea behind this xoio-air.de page was to see how we could produce shareable content without all the fancy stuff (camera dome, laser scanning, … ) - like a poor man’s version of asset creation. That reminds me of how we once built that texture camera: something like a fishing rod with a camera in a box attached, so we could capture street textures from 3-4 m height. Because it looked so cheesy, we attached some fans and a broken mainboard to it to make it look more complex and high-tech, hehe. People actually fell for it and thought it was a very sophisticated device …

Coming back to your thing: a coworker of mine (Miguel) has been looking intensely into people scanning again, and his results were actually very neat. Maybe I’ll ask him next week how he achieved that and keep you in the loop.

And congrats on the splash screen. I just stumbled across your work again - so inspirational! We are mostly a 3ds Max studio so far, but I’m trying to push us towards Blender - it’s just so much fun.

In fact, due to Corona we are currently looking for things to keep us busy, so maybe we can help you out with the people scanning stuff …

Let me know,

Kindest regards, Peter