slit-scan in Blender

Hello,

I’m trying to create slit-scan images using Blender but I have no idea how to go about that.

Slit-scan means that each pixel column of the final image comes from the same column of the camera frame, but from a different point in time during the scene.

Is this possible in any way? Nodes? Python? Something else?

Cheers
– Peter

Use the rolling shutter render option with long enough duration.
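
In Cycles it lives under the motion blur settings; something like this in Python should set it up (a rough sketch, property names may vary a bit between Blender versions):

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.render.use_motion_blur = True          # rolling shutter is part of motion blur
scene.cycles.rolling_shutter_type = 'TOP'    # sweep scanlines from top to bottom
scene.cycles.rolling_shutter_duration = 1.0  # fraction of the frame the sweep spans
```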


Is there a way that offers me much more flexibility than the rolling shutter render option?

I’m looking for durations as long as 2000 frames and also for a direction left-to-right. The results I’m looking for (which at the moment I have to calculate separately from normally rendered frames) should look like this: https://www.youtube.com/watch?v=TqfvuAlCINY

What are you trying to do?
I used that with a lens inverter and bellows on a 240 camera many years ago to increase the “perceived” depth of field,

by moving the “subject” through a very thin slit of light during a long exposure.

Essentially my goal is to swap the x-axis and the time axis. Slit-scan would be a first step that saves me additional processing after a normal render, if it can be done at render time.

The expected result can be seen in the YouTube link above. The result is as many frames long as the camera has horizontal pixels, and the resulting video is as many pixels wide as there are frames in the animation.
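
Conceptually, the post-processing I currently do is just a transpose of the stacked frames; a rough Python/NumPy sketch of the idea (the paths, frame count and imageio usage are just placeholders):

```python
import numpy as np
import imageio

# Load the normally rendered frames (assumed 1920 of them): shape (T, H, W, 3)
frames = np.stack([imageio.imread(f"render/{i:04d}.png")[:, :, :3]
                   for i in range(1, 1921)])

# Swap the x axis and the time axis: the result has W frames,
# each H pixels high and T pixels wide.
swapped = frames.transpose(2, 1, 0, 3)  # (W, H, T, 3)

for x, frame in enumerate(swapped):
    imageio.imwrite(f"swapped/{x:04d}.png", frame)
```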

In a generated digital environment there is really no hardware limitation like there is with a REAL lens and camera with almost zero depth of field.

Blender is really NOT like a real-world camera; there is no need to simulate moving the object through a very thin band of light.

You can set the “virtual” depth of field to cover the whole object and not just a tiny thin sliver of it.

I don’t think you’re understanding what I’m trying to do. Have you watched the video? Have you read what I’ve written as an answer to your question?
What I’m trying to do has nothing to do with depth of field.

Do you have a better vid, other than an LSD trip?

That’s kind of the point of what I’m trying to achieve. I’m not looking for realism or even plausibility.

If anyone else has any ideas on how I can slit-scan across long animation periods (Python would be fine if necessary), please let me know.

You can do this in Processing. I think this is really more of a problem for post.

I’m doing it in Processing right now, but it adds time to my overall process since I have to render the scene at full resolution and frame rate just to see how the result would look. If I had a way to render only a few frames directly in Blender, preparing a scene that works would be so much quicker, as I wouldn’t have to sit through several render sessions for tweaks.

(Rendering 1920 frames of a scene every time just to see the result at the correct width after running it through Processing can be pretty time-consuming, even with everything set to basic materials and low samples.)

Hi.
I did not know about this technique and I have no idea how to achieve it in Blender. Does it seem possible in post-processing from images (using other software)?
Anyway, there are interesting and funny videos on YouTube. This is a good explanation:

Although I’m not sure if any of that is the same thing you’d like to get.

Edit:
More information about this technique. Thanks to the ‘Wayback Machine’ service, otherwise some information and scripts would have been lost:
https://web.archive.org/web/20170131030615/http://www.discoverdigitalphotography.com:80/2012/slit-scan-object-photography-how-to

http://www.flong.com/texts/lists/slit_scan/

http://www.slitcam.com/download.php

I’m not sure if any of this is useful for getting video as output

Just to clarify: I know the ins and outs of getting the effect I’m after in post. The problem I’m trying to solve is getting the same results directly in Blender, so that I can quickly judge whether the scene I set up produces the result I’m after, without having to render 1920 frames at full HD resolution just so my post-processing has all the data it needs to show me even part of a result.

Hum, how is it going to work? I don’t see a way of doing this at render time. But by looking at the original slit-scan effect (in 2001: A Space Odyssey), maybe it’s possible to re-do the effect inside Blender by recreating the same rig/setup they used at the time. I guess that’s tough too, because there may be a trick involving exposing the camera film in a non-standard way, in which case it won’t work.

The big issue is that when rendering image 100, you can’t access geometry / render-data that is at frame 1.

In the compositor there is a bit of the same problem: you can’t access the same image sequence at different frames without loading n image sequences, each offset by 1. That’s doable but very, very inefficient.

In the VSE I think something is doable too, but it has the same issue as the compositor: you’ll end up loading 1920 image sequences and offsetting them.

With Python you’re screwed too, because it won’t help with anything related to “doing it at render time”; you’ll end up writing a slit-scan plugin that loads each image and pastes it onto another one.
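
Something along these lines, just to illustrate what such a post-processing script would do (Pillow, the resolution and the paths are assumptions, not an existing add-on):

```python
from PIL import Image

WIDTH, HEIGHT = 1920, 1080  # assumed render resolution

out = Image.new("RGB", (WIDTH, HEIGHT))
for x in range(WIDTH):
    # Column x of the result comes from column x of frame x + 1
    frame = Image.open(f"render/{x + 1:04d}.png").convert("RGB")
    column = frame.crop((x, 0, x + 1, HEIGHT))
    out.paste(column, (x, 0))
out.save("slitscan.png")
```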

One thing that should be possible is to look into lattice deformers and try to mimic a similar deformation. As long as you’re working with abstract images, maybe that can do something that works, but it won’t be slit-scan anymore, just deforming geometry to get a similar effect.

I’d stick to other software: Natron has a slit-scan plugin too and is a good compositing application. You could maybe render your images to OpenGL sequences for a quick preview and limit the final exports, though I can understand why you want to do everything in the same application…

I was really hoping to perhaps be able to do something at rendertime, but as you said it seems to be impossible. Too bad.

My vision was that, using a script, I could render 1 pixel column from animation frame 1, then the next pixel column from animation frame 2 … and so on, all into the same actual image frame. If I understand you correctly, the Python API doesn’t go deep enough to allow that.
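
Just to make that idea concrete, the script I had in mind would have looked roughly like this, using border render (Render Region) to restrict each frame to a single column; the columns would still need to be pasted together afterwards, and Blender still evaluates the whole scene for every frame, so it wouldn’t actually be fast:

```python
import bpy

scene = bpy.context.scene
render = scene.render
width = render.resolution_x  # e.g. 1920

render.use_border = True
render.use_crop_to_border = True  # write out only the 1-pixel-wide column

for column in range(width):
    # Column N of the final image comes from animation frame N
    scene.frame_set(scene.frame_start + column)
    render.border_min_x = column / width
    render.border_max_x = (column + 1) / width
    render.border_min_y = 0.0
    render.border_max_y = 1.0
    render.filepath = f"//slit/{column:04d}.png"
    bpy.ops.render.render(write_still=True)
```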

Yes, the issue is more about how 3D rendering works in general. Before the actual rendering there is a conversion/precomputation step that turns the objects in the scene, lights etc. into something the renderer can use to calculate light and shading interactions. To render the effect you’d need either to store each of these 1920 “conversions”, or generate them 1920 times per image.
That will lead at best to something either memory intensive or time consuming. Maybe both…

As this approach is very uncommon, there are no tools that allow you to do that (I guess it’s the same in other 3D software/renderers), and Python, which lets you drive Blender with code instead of the mouse, indeed can’t help.
You’d have to make a special 3D software/renderer :S

All that said, I’m not a specialist in how 3D rendering works; that’s just how I understand it…

I guess my only chance then is to do it in post and get good at modifying my scene to render very fast for previews.
At least now I know that doing it in Blender directly is a path I have to abandon.

Remember that for very fast rendering, for preview and test purposes only, you have OpenGL render.
That is, you can get fast renders in Blender and then do tests applying the effect in other software.
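
For example, a viewport/OpenGL render of the whole animation can be triggered from the View menu, or with a couple of lines of Python run from a 3D Viewport context (the output path is just an example):

```python
import bpy

scene = bpy.context.scene
scene.render.filepath = "//preview/"  # example output path
# Fast OpenGL (viewport) render of the whole animation, for previews only
bpy.ops.render.opengl(animation=True)
```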

OpenGL render makes things so much easier for me. Thanks a bunch!