Disney SIGGRAPH: Sampling-Based Scene Space Video Processing - in Blender!!!

SIGGRAPH video: https://www.youtube.com/watch?v=o8tJozrtMvk

Paper: http://www.disneyresearch.com/wp-content/uploads/Sampling-Based-Scene-Space-Video-Processing-Paper.pdf

This is like a dream come true for anyone tracking and comping footage!!!
Keying without a green screen!!! Near-perfect 3D tracking, deblurring video, denoising better than Neat Video & co., and DOF in post from a normal video with depth data!!! LOOOOL

Guys, if you look closely, they use Blender in the video! Can somebody write to them, or implement that paper as a Blender node? I guess it's already done by Disney :slight_smile: Maybe it's already GPL? Should we ask Disney if they'd give the source away?

This is really cool stuff. I can see the number of paranormal ghost videos posted to YouTube going way up. :slight_smile:

Wow. Pretty impressive indeed.

Wow wow wow. Very impressive.

Sadly the depth analysis isn't robust enough for keying, and it probably requires a lot of motion to discriminate.

But WOW, that denoising is amazing!!!

Imagine low-sample Cycles animations with better results than the current blur-based solutions.

So what does it do?

Great! I can’t wait for this

Denoising is the most interesting aspect of the ones shown. The other things can be accomplished in Nuke or similar comp software that can generate pixel depth data from tracking info and manipulate point data in 3D space.

Similar comp software… like Blender? :slight_smile: Imagine this integrated with deep-pixel EXR, 3D Cycles point clouds, and tracking… OMG
I BET this is more accurate than any 3D tracker, because it generates scene-based analysis vectors and redundant z-buffers!

The only thing they use Blender for in this video is selecting part of the point cloud. Writing import and export for that is probably a five-minute task in Python (a minimal sketch of such an import script follows below), and it doesn't in any way indicate that they implemented the cool stuff in Blender.
Even if they did, the GPL wouldn't require them to publish the code unless they published builds of the modified Blender. As long as you keep it in-house, nobody has any claim on your modifications, even under the GPL.
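For anyone curious what such a quick import script might look like, here is a minimal, hypothetical bpy sketch that loads "x y z" lines from a plain text file as a vertex-only mesh (the file format and function name are made up, Blender 2.7x-era API, and this is obviously not Disney's code):

```python
import bpy

def import_point_cloud(filepath, name="PointCloud"):
    """Load 'x y z' lines from a plain text file as a vertex-only mesh."""
    verts = []
    with open(filepath) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 3:
                verts.append(tuple(float(p) for p in parts[:3]))

    mesh = bpy.data.meshes.new(name)
    mesh.from_pydata(verts, [], [])  # vertices only, no edges or faces
    mesh.update()

    obj = bpy.data.objects.new(name, mesh)
    # Blender 2.7x API (this thread's era); in 2.8+ it would be
    # bpy.context.scene.collection.objects.link(obj)
    bpy.context.scene.objects.link(obj)
    return obj
```

Exporting is the same idea in reverse: iterate over `mesh.vertices` and write the coordinates back out.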

At the end of the paper, in the "Implementation" section, it says: "Our method was implemented in Python with CUDA bindings…" So I would second lukasstockner97's suggestion of a bpy implementation/addon.
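For what it's worth, "Python with CUDA bindings" usually means something like PyCUDA, where small per-pixel kernels are written in CUDA C and launched from Python. A trivial, hypothetical example of that style (not code from the paper):

```python
import numpy as np
import pycuda.autoinit  # creates a CUDA context on import
import pycuda.gpuarray as gpuarray
from pycuda.elementwise import ElementwiseKernel

# A toy per-pixel kernel: multiply every pixel by a gain factor.
gain_kernel = ElementwiseKernel(
    "float *out, float *img, float gain",
    "out[i] = img[i] * gain",
    "gain_kernel")

frame = gpuarray.to_gpu(np.random.rand(1080, 1920).astype(np.float32))
result = gpuarray.empty_like(frame)
gain_kernel(result, frame, np.float32(0.5))
print(result.get().mean())
```

So an addon would not be far-fetched technically: the heavy lifting stays on the GPU and Python just orchestrates it.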

Guys, I only said we should ask whether they would kindly give something away for Blender, or have the BF devs turn that paper into Blender code. That's all. Yes, maybe it's Python and CUDA, but Blender uses Python and CUDA too…

@whichever BF dev reads this: this would be a killer feature, and yes, I know there are a LOT of more urgent projects :stuck_out_tongue:

Didn’t Disney develop and release Ptex as Open Source? http://www.disneyanimation.com/technology/opensource

And there's this older statement: http://www.thepowerbase.com/2012/08/walt-disneys-real-commitment-to-open-source/

SIGGRAPH 2015 is looking to be fantastic for amazing visual effects stuff (particularly processing). Really remarkable how far the field has come in a few short years.

Interesting. I'm coding 3D-camera-based software, which is how I stumbled on this somewhat older thread.
So they build a 3D environment based on a depth map. Then they probably use a combination of a cheap device like a Kinect, with the depth map and RGB contrast combined for surface detection
(since the Kinect's depth map is rather raw and not high resolution),
(and likely a Kinect, since they walked around with it and so probably didn't use a laser scanner).

The next thing to do is store the 3D pixels from a few frames back.
Retrieve the camera tracking (as Blender can do), overlay those images, and remove noise by averaging the overlays.
Well, the Kinect is fast at creating depth maps (30 fps), so no problem in that area; it takes a bit of coding to match them to its RGB camera (part of the SDK samples).
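Since this is basically "reproject and average", here is a very rough NumPy sketch of that idea, assuming you already have per-frame depth maps, a 3x3 intrinsic matrix K, and 4x4 world-to-camera matrices from the tracker (hypothetical helper names, no occlusion handling, and definitely not the paper's implementation):

```python
import numpy as np

def backproject(depth, K_inv):
    """Lift every pixel of a depth map to camera-space 3D points (3 x N)."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    return (K_inv @ pix) * depth.reshape(1, -1)

def reproject(points_cam, cam_to_world_src, world_to_cam_dst, K):
    """Project source-camera-space points into the destination frame's pixels."""
    pts_h = np.vstack([points_cam, np.ones((1, points_cam.shape[1]))])
    pts_dst = world_to_cam_dst @ (cam_to_world_src @ pts_h)
    proj = K @ pts_dst[:3]
    return proj[:2] / proj[2]

def denoise_frame(frames, depths, cams, K, t, radius=3):
    """Average frame t with samples gathered from neighbouring frames.

    frames: list of (h, w, 3) images; depths: list of (h, w) depth maps;
    cams: list of 4x4 world-to-camera matrices (e.g. from the tracker).
    """
    h, w, _ = frames[t].shape
    K_inv = np.linalg.inv(K)
    acc = frames[t].astype(np.float64)
    count = np.ones((h, w, 1))
    pts = backproject(depths[t], K_inv)
    for s in range(max(0, t - radius), min(len(frames), t + radius + 1)):
        if s == t:
            continue
        # Where does each pixel of frame t land in frame s?
        uv = reproject(pts, np.linalg.inv(cams[t]), cams[s], K)
        u = np.round(uv[0]).astype(int).reshape(h, w)
        v = np.round(uv[1]).astype(int).reshape(h, w)
        valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        acc[valid] += frames[s][v[valid], u[valid]]
        count[valid] += 1
    return (acc / count).astype(frames[t].dtype)
```

The paper's filtering is of course much smarter than a plain average, but that's the basic overlay idea.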

While Blender was used for the 3D tracking (I think), using the Fusion SDK examples (the MS Kinect 3D framework) it might be possible to do it without Blender, but it's nice to see it done with Blender :slight_smile:

Hmm, something not shown there: it would be very nice to use this tech to combine footage with CG models.
E.g. have a fake car drive down a busy street. All kinds of crazy car stunts could get a lot easier to film this way.

It could even mean no more green screens required.

It's not top-of-the-line tech; I once saw something a bit more advanced doing the same tricks without 3D input
(building 3D from a 2D movie, then applying similar effects). But I think it's affordable, buildable tech for any enthusiast with some C# or C++ coding time. And because of the 3D input, near real time might be possible (though it would still require FAST CPU horsepower).

It looks like they're using standard stereophotogrammetry for the depth map. Requiring a separate device for capturing depth would be rather pointless, since the goal of the project is denoising footage from low-quality cameras.

Furthermore, Kinect and similar cameras only capture up to about 3 m. If you watch the video you'll see they have depth data for faraway objects such as the background buildings, which are hundreds of meters away. That cannot be done with cheap depth hardware, so it has to be 3D data tracked from the video motion. Quite impressive, btw.

No, there is NO Kinect or TOF camera involved. It's simply a CMOS camera, and a low-quality one at that. This is the whole point of the algorithm.

Too bad it will never get into Blender trunk, or to any Blender users outside of Disney's departments. :slight_smile:

Ptex is still not in Blender, btw - it will take the Foundation years to get that in.

The whole point of the algorithm is the "scene space" processing. The algorithm requires a depth map as input, which could come from various sources. In the paper they mention that they used both multi-view stereo reconstruction and a Kinect to acquire the footage used for the demonstrations. None of what you have seen was achieved with just a single camera.

There are algorithms to estimate depth from camera motion alone, but they probably don't give good enough results for this method; otherwise you'd expect them to demonstrate that as well.
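For anyone curious, a depth map in the "multi-view stereo" spirit can be approximated very crudely from just two rectified views; here is a minimal OpenCV sketch (placeholder filenames and made-up camera numbers; the paper's reconstruction is certainly more sophisticated):

```python
import cv2
import numpy as np

# Left/right views must be rectified; the filenames are placeholders.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching with fairly generic parameters.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed point -> pixels

# With focal length f (pixels) and baseline b (meters): depth = f * b / disparity.
f, b = 1000.0, 0.1  # example values only
depth = np.where(disparity > 0, f * b / disparity, 0.0)
```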