Hypothetical question here. I’ve been experimenting with various animation methods over the past few days and have had good results with clean full-body motion capture (here).
So I have four PlayStation Eyes and two Kinects at my disposal. Is there a way to do multi-camera, depth-based facial tracking (or a multi-video track, or even just a depth-based one) inside Blender’s existing tools, or is there a scriptable workaround?
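For what a scriptable workaround might involve: once each camera’s position and viewing ray toward a marker are known (from the 2D track plus intrinsics), a marker’s 3D position can be triangulated as the midpoint of the closest points on two rays. This is just a sketch of the underlying math, not Blender API code; all names here are illustrative.

```python
# Sketch: triangulating one marker seen by two calibrated cameras.
# Each camera contributes its world position p and a ray direction d
# toward the marker. The marker is estimated as the midpoint of the
# closest points on the two (possibly skew) rays.

def closest_points_on_rays(p1, d1, p2, d2):
    """Closest points on two 3D rays p + t*d (d need not be unit length)."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    r = [x - y for x, y in zip(p1, p2)]
    a = dot(d1, d1); b = dot(d1, d2); c = dot(d2, d2)
    d = dot(d1, r);  e = dot(d2, r)
    denom = a * c - b * b          # ~0 when the rays are parallel
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    q1 = [p + t1 * v for p, v in zip(p1, d1)]
    q2 = [p + t2 * v for p, v in zip(p2, d2)]
    return q1, q2

def triangulate(p1, d1, p2, d2):
    q1, q2 = closest_points_on_rays(p1, d1, p2, d2)
    return [(x + y) / 2 for x, y in zip(q1, q2)]  # midpoint of the gap

# Two cameras 1 m apart, both sighting a marker at (0.5, 2.0, 1.0):
marker = triangulate([0, 0, 1], [0.5, 2.0, 0.0],
                     [1, 0, 1], [-0.5, 2.0, 0.0])
print(marker)  # → [0.5, 2.0, 1.0]
```

A script could run this per frame per marker and key the results onto empties, which is roughly what Blender’s own multi-view solver does internally with bundle adjustment on top.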
I realize there are some problems inherent in a multi-camera setup, such as:
1: Ensuring the cameras in the software are properly aligned to the real-world setup. I’m aware Blender probably doesn’t solve that for you, but I’m okay with building a precisely measured rig and replicating the camera orientations in Blender.
2: Removing gross head movement from the video frames while retaining head rotation. I feel like this should be trackable out, though, if you define a root somewhere on the head.
3: Standard distortion/FOV inconsistencies between cameras.
4: Controller input. I know iPi Soft can only handle two Kinects or eight PS Eyes on a standard motherboard without external controllers (and even external controllers only bring that number to three without custom hardware). I’m also unsure about the compatibility of running Kinects and cameras/PS Eyes simultaneously.
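On point 2, the stabilization step can be sketched independently of any tracker: designate one marker (or a rigid cluster) as the head root, re-express every facial marker relative to it per frame, and keep the root’s own motion as the separate head/gimbal channel. This translation-only toy is my own illustration, not an existing tool; a real solve would also invert the root’s per-frame rotation.

```python
# Sketch: separating head motion from facial deformation.
# frames: one dict per frame mapping marker names to (x, y, z) tuples.
# The root marker's trajectory drives the head bone; the remaining
# markers, expressed relative to the root, drive the face shapes.

def stabilize(frames, root_name):
    local_frames, head_path = [], []
    for markers in frames:
        root = markers[root_name]
        head_path.append(root)  # gross head motion, kept separately
        local_frames.append({
            name: tuple(m - r for m, r in zip(pos, root))
            for name, pos in markers.items() if name != root_name
        })
    return local_frames, head_path

frames = [
    {"root": (0.0, 0.0, 0.0),    "jaw": (0.0, -0.125, -0.0625)},
    {"root": (0.25, 0.0, 0.125), "jaw": (0.25, -0.125, 0.0)},  # head moved, jaw opened
]
local, head = stabilize(frames, "root")
print(local[1]["jaw"])  # → (0.0, -0.125, -0.125)
```

After this subtraction, frame-to-frame changes in the local coordinates reflect only facial deformation, which is what you’d retarget onto shape keys or face bones.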
I’ve seen a few methods for deriving facial capture in Blender, either through Faceshift export or through Blazraidr’s method here: (here). Both are interesting, and I prefer Blazraidr’s, but I see problems with each. For one, I’m not averse to using marker-based tracking, and I feel Faceshift loses some quality because it doesn’t take advantage of markers on top of the depth sensor.
Anyway, I suppose it’s an open-ended discussion. Curious to see what people have to say. Thanks if you read all of this.