I just started using Blender (yesterday) as video editing software. And wow, what an amazing application; I was awestruck by the UI and range of capabilities.
Anyway, I really need to create a splitscreen using two video files. The split screen should be the full left video abutted to the full right video (not half of each). Is there a way to do this in Blender? If not, does anyone have any recommendations for how to do it using another app?
Please realize I am not that experienced with video, so PapaSmurf may come along and contradict my advice.
In the composite nodes, you can use an Image input node to add video; duplicate that to load the second video. Link each to a Scale node to scale the video down to fit, and a Translate node to move the video around the screen. You should also have a background image loaded as an additional input, maybe generated in the UV image editor.
I would use an Alpha Over node to combine the two outputs, and another to overlay them on the background. Add a Viewer output node to be able to see the progress at each interval in the chain, and you can click the header mark 'backdrop' to see the active node in the node window.
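To make the numbers concrete, here's roughly the layout arithmetic that node chain works out to, sketched in Python. The clip sizes are hypothetical (neither post states them), and I'm assuming the Translate node measures offsets in pixels from the frame center:

```python
# Hypothetical sanity check of the side-by-side layout: two 360x480 clips
# abutted (full left + full right, no squeezing) into one 720x480 frame.
# Translate offsets are taken as pixels from frame center.

def abut_layout(clip_w, clip_h):
    out_w, out_h = clip_w * 2, clip_h   # output frame holds both clips whole
    left_x = -clip_w // 2               # Translate X for the left clip
    right_x = clip_w // 2               # Translate X for the right clip
    return (out_w, out_h), left_x, right_x

print(abut_layout(360, 480))            # ((720, 480), -180, 180)
```

So for these assumed sizes you'd render at double the clip width and shift each clip by half a clip width in opposite directions.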
If you haven’t used nodes yet, check the wiki in my sig for a lot of details, also use the search function here - lots of really talented people using the nodes for compositing here, as well as material work.
edit: Secundar’s pic shows EXACTLY how to do it, so set it up that way
Whoo, I think I understand it, but I know I may be pleasantly surprised by the responses you get after this!
Hey,
Just wanted to follow up and let you know how my pursuit of splitscreen video ended. I implemented the Craigomatic/secundar solution and it was just what I needed. Until render time. The videos I am trying to combine are both very large (about 2 GB each), and when I begin the render (15 fps), it starts well enough. However, after 14 hours or so, the process has slowed to a crawl (and isn’t anywhere near completion). Each frame render takes close to a minute, whereas the first frames took less than half a second. I haven’t watched the process, but I’m assuming that it gradually slows down until it becomes so slow that it would take years to finish. I don’t remember exact render times, but I can run it again and record those numbers if that’s helpful at all.
I can’t see any reason why this would happen, and I have played with settings to avoid this condition, but no luck. Any ideas as to what’s happening would be appreciated.
I’m not sure what format you are working in, but I think you might be able to cut some of the crawl out by prerendering the individual videos into frames of PNG or TGA, then using the node setup to combine the two in sections of usable size, and putting the sections together in the sequencer to output the finished video.
Because you’re dealing with the 2 gig limit, the computer has to write to disk, read it back, and overwrite it. I think the Windows OS limits file size usage, and that’s why there is a button to split the video into 2 gig sections in the FFMPEG buttons.
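Some back-of-the-envelope Python to show why an image sequence sidesteps that limit (the resolution is just an illustration, not taken from your files):

```python
# An uncompressed 720x480 24-bit RGB frame is about 1 MB, so a single-file
# format hits a 2 GiB limit after roughly 2000 frames, while in an image
# sequence no single file ever gets anywhere near the limit.

FRAME_BYTES = 720 * 480 * 3          # one uncompressed 24-bit RGB frame
LIMIT = 2 * 1024 ** 3                # 2 GiB single-file size limit

frames_until_limit = LIMIT // FRAME_BYTES
print(frames_until_limit)            # 2071 frames, under 2.5 min at 15 fps
```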
Hope that helps,
craigo
Edit: I thought video is usually set to 29.97 fps, or 30 in Blender. Maybe that is also causing part of your problem.
hi. two thoughts to add to the fray.
First, CraigO is spot on regarding the use of image sequences, especially with big files that are approaching the OS limit on file size. Have pity on your poor PC, and save yourself time by going to image sequences. The fastest way to do this is an Image node into a Composite node with output format JPG and Do Composite, OR the VSE, where you add the movie as a strip and Do Sequence: http://wiki.blender.org/index.php/Manual/Using_VSE#Rendering_a_Video_to_an_Image_Sequence_Set
My second thought is that what you want to do can be done in the VSE using the Transform effect, and the VSE is 10 times faster at this sort of thing, which will save you time. Simply load both strips, Transform one, and AlphaUnder them together.
Regardless, once you have your frames, use the VSE and the FFMPEG options to make a single movie file playing with Codecs and compression to keep the result under 2 gigs.
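If it helps with that last step, here's a rough Python sketch of how you might pick an average bitrate that keeps the output under 2 gigs (the 30-minute clip length is a made-up example; plug in your own duration):

```python
# Pick the highest average bitrate (kbit/s) that keeps a movie under a
# given file-size limit, with a small safety margin for container overhead.
# Illustrative only -- the safety factor and example duration are mine.

def max_bitrate_kbps(size_limit_bytes, duration_s, safety=0.95):
    bits = size_limit_bytes * 8 * safety
    return int(bits / duration_s / 1000)

# e.g. a hypothetical 30-minute clip under the 2 GiB limit:
print(max_bitrate_kbps(2 * 1024 ** 3, 30 * 60))   # about 9000 kbit/s
```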
First, PapaSmurf & craigomatic are right, file sizes are going to be a problem here…
A few more details would be nice, like render size (both the combined and the input sizes), # frames, input format, etc.
Actually, I have done quite a bit of stereo pairs, both images and animations. I agree with papasmurf, I think that the seq editor is the best way to go here…after all, that’s what it’s for, putting video clips together ;).
But it’s a bit more complicated. You will have to put a Transform effect on each of your input strips, both to properly place the strips and to rescale them back to their original size. In my case, I had two views for my stereo AVI, both at 360x480, that I wanted to combine into a single 720x480 stereo pair. When you load a 360x480 strip into the SE when the render size is 720x480, Blender automatically resizes the input strip to fit the render size. This means that the first thing I do is set xScale End = 0.5 to put the strip back to its original resolution, centered in the frame. If I want the strip to be on the right side, I set the x End to 1/4 the render x (180 in my case). If I want the clip on the left, I set it to -180. Then I do an Alpha Over to combine them.
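Those Transform settings boil down to a little arithmetic; here it is as a Python sketch (the function and its general form are mine, and only the 0.5 / ±180 values come from my actual setup):

```python
# Transform values for one half of a stereo pair, assuming (as in my case)
# that Blender has already auto-stretched the half-width strip to the full
# render width, so the Transform has to undo that.

def transform_for_half(render_w, side):
    x_scale = 0.5                    # xScale End: undo the automatic 2x stretch
    x_end = render_w // 4            # x End: 1/4 the render width
    return x_scale, x_end if side == "right" else -x_end

print(transform_for_half(720, "right"))  # (0.5, 180)
print(transform_for_half(720, "left"))   # (0.5, -180)
```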
Also, if you watch the console while you process your new video, you’ll see any error messages that print out if Blender has any problems.
If it is very slow and complains about a low memcache limit, open the info window and go to the System & OpenGL tab and increase the memcache limit.