I’m sure this has been covered on BA but searching has found nothing so far. I’m working on a video with a lot of small shots that have reached 32 channels. Now blender says there’s no more room. But the channel numbers go way above 32? I’ve never had this problem previously. Blender 2.49b
Hi, yellow. I don’t use 2.49 yet (want to let it get completely stable first), so I can’t check the issue you asked about, but have a question: Are you assigning only one strip to each channel? I usually stagger shots along two to four channels, A/B roll style (I learned film & editing with 16mm) and so get many shots on only a few channels, with plenty of room for the FX strips like Cross and such.
Yes, one shot per channel, staggered upwards.
I did consider A/B roll way as you suggest but could see there were more channels numbered than I had strips, didn’t realise that only 32 are usable.
I’ll compact it a bit for the moment.
16mm, good luck with that.
I have a Super 8mm Canon 514 with a converter lens I’d love to try out, but process-paid film digitised to HD res is about £25.00 per 3-minute roll.
Would be worth it for the right job though.
Don’t use it any more, that was back in the bad ol’ pre-digital days, when video was all analog (the great VHS/BetaMax wars!) and personal computers were closer to toys than tools (max 65kb RAM, f’rinstance). But the same principles still apply, just newer toys and tools.
65k!? I dreamed of that extra kb on my Commodore 64! Yellow, why on earth do you use a track per clip? I have been cutting nonlinear for 15 years now and rarely use more than a couple of tracks. This is of course dependent on the task, but even really intense layering struggles to get that high. At which point I would be going to the Node Compositor.
Any chance of a screen grab of the timeline to see what you are up to with effect stacking etc?
Mostly I cut material all on the bottom track with simple cuts between then stack for a mix or key but return as soon as possible to that bottom track.
The other issue is that other edit apps allow a separate complement of audio tracks too. Blender lets you put them anywhere, but that soaks up vision tracks as well.
The wrong approach I guess from your response.
I’m working on a project that has hours of footage but in small chunks, the idea being no more than 1-2 mins of footage per shot, which I need to condense into a 5 minute final edit according to my brief. I also have a 2 day deadline. Although I have tape logs that give me rough timings, they don’t account for less-than-decent footage and I don’t have time to view it all, so I’m editing on the fly, nondestructively, so I can go back and forth, sliding the chunks over each other to get smooth cuts. There are no real transitions, fades etc.
So doing the A/B roll type thing means that I’m forever getting shots tripping over each other; layering upwards allows me to slide chunks back and forth until they jump cleanly between shots.
I’ve not been vid editing for long, so I’m all ears to a better way. Other easier projects I have done the A/B roll.
Ah yes, multicam live event without timecode or decent sync. Not my favourite way to spend hours of my life. That is indeed the way everyone else solves it: lay up the sources, try to get them in sync, then proceed to put cuts on the appropriate shots, promoting the favoured ones. I often go through the first camera and select the favoured shots and remove the duds, but leave the good stuff in sync with the audio (just with black holes). Then I sync up the next angle and look for filler for the black holes of the first one. Anyway, I feel your pain, good luck.
Let me know how you render out, codec wise for muxing audio etc. It seems a bit limited in old ffmpeg at the moment.
Ok, so was my first route not the best way to go?
Rendering out I’m using HCEnc 0.24 (Freeware) very good mpeg2 encoder and X264 (VideoLan) CLI for web. Although FFMpeg has packaged the X264 stuff with it I’m using it direct from the authors.
I’m not impressed with the DVD/mpeg2 defaults in Blender via FFMpeg and no time to go through hundreds of options in the menu, when HCEnc produces excellent results with little effort.
Hopefully I’m not going to have the problems you are having with 264 though.
This job I’ve just thrown the shots into the VSE, no conversion via AVISynth to .png first as it’s just a straight edit, no compositing and short of time. Then frameserving straight into HCEnc rather than .png master or HuffyUV lossless intermediate first.
Can I ask what you mean by frameserving to HCEnc? Do you have a CODEC set available (in Blender) due to the install of HCEnc on your PC?
I have re-read your scenario and as I understand it you are not syncing multiple shots (sorry), but I believe that you aren’t trimming your footage either. Certainly for slipping contents and sliding in-points that is the only way to go in Blender (a separate track for the clip in question). But you should still be able to checkerboard the shots with just a few tracks.
Select a shot from the 2min clip in question, use the cut tool to slice it, move that bit away, then trim the remainder to the length that you want. The real issue for you is the lack of a source bin that you can go to for tails of shots or to hold onto preferred bits. Even the ability to drag from a second sequencer timeline would be helpful there. But I’m not sure you can access stuff from other scenes’ timelines.
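To see why a few tracks go a long way: checkerboarding is really just interval packing, since two shots only need separate channels while they actually overlap in time. A minimal sketch of the idea (the shot names and frame ranges below are made up for illustration, not from any real project):

```python
# Checkerboarding as interval packing: put each shot on the lowest channel
# that is free again by the time the shot starts.

def pack_channels(shots):
    """shots: list of (name, start, end) frame ranges; returns {name: channel}."""
    channel_end = []   # channel_end[i] = last frame occupied on channel i + 1
    assignment = {}
    for name, start, end in sorted(shots, key=lambda s: s[1]):
        for i, last in enumerate(channel_end):
            if last <= start:              # this channel is free again
                channel_end[i] = end
                assignment[name] = i + 1
                break
        else:                              # every channel is busy: open one more
            channel_end.append(end)
            assignment[name] = len(channel_end)
    return assignment

# Four shots whose tails overlap for the cut: they alternate between 2 channels.
shots = [("A", 0, 100), ("B", 90, 180), ("C", 170, 260), ("D", 250, 340)]
print(pack_channels(shots))  # {'A': 1, 'B': 2, 'C': 1, 'D': 2}
```

The point being: even with every cut overlapped for slipping, a whole programme of staggered shots only ever needs as many channels as your deepest simultaneous overlap, which is usually two or three, well inside the 32-channel limit.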
Works on Linux (Wine) or Windows, not sure of Mac.
Not at my machine but it goes something like this:
You will need this Freeware and Opensource software:
Blender sources + mingw
VFAPIConv.exe and VFCodec.dll
HCEnc 0.23 or Beta 0.24
Separate sound file (a Blender ‘MIXDOWN’ of the audio from your sources in the VSE)
Compile the VFAPI tool in Blender’s source folders as per the link below (bottom of the page, the TMPGEnc section).
If on Linux you’ll probably need to install mingw.
Gives you a blenderserver.vfp file. Put this in your HCEnc encoder folder.
Create a .blu file in your favourite text editor as described in the Blender link above. Although it says it’s for TMPGEnc, it’s also needed in a later step for many standalone encoders.
Put ReadAVS.dll and VFCodec.dll into the HCEnc encoder folder. You can put these in system32 but I prefer to keep all this stuff contained in an easily accessible and deletable place.
Open your blender project you want to render. Do a mixdown of your sound if needed. When done choose frameserver render option. Hit Anim.
To create an .AVI signpost file, run VFAPIConv.exe and ‘Add’ the .blu file and choose the location to save your .AVI signpost to. I generally drop it into the encoder folder.
Blender’s frameserver must be running when you create the signpost file, obviously.
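As an aside, Blender’s frameserver is (as I understand it) just a tiny HTTP server on port 8080, so you can also pull frames from it directly without the VFAPI layer. A rough sketch, assuming the documented 2.4x URL scheme (info.txt for the frame range, images/ppm/<n>.ppm for frames, close.txt to stop) — worth double-checking against the Blender docs before relying on it:

```python
# Rough sketch of talking to Blender's frameserver directly over HTTP.
# The URL scheme (info.txt, images/ppm/<n>.ppm, close.txt) is an assumption
# based on the 2.4x frameserver docs; port 8080 is the default.
import urllib.request

BASE = "http://localhost:8080"

def parse_info(text):
    """Parse the 'key value' pairs served at /info.txt (e.g. 'start 1')."""
    info = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 2:
            info[parts[0]] = int(parts[1])
    return info

def fetch_frames():
    """Yield (frame number, raw PPM bytes) for every frame Blender serves."""
    info = parse_info(urllib.request.urlopen(BASE + "/info.txt").read().decode())
    for n in range(info["start"], info["end"] + 1):
        ppm = urllib.request.urlopen("%s/images/ppm/%d.ppm" % (BASE, n)).read()
        yield n, ppm  # ready to pipe into an encoder
    urllib.request.urlopen(BASE + "/close.txt")  # tell Blender we're done

# Offline check of the parser on a sample info.txt body:
sample = "start 1\nend 250\nwidth 720\nheight 576\nrate 25\nratescale 1\n"
print(parse_info(sample)["end"])  # 250
```

Not a replacement for the signpost route if your encoder wants an AVI, but handy if you’d rather script the hand-off yourself.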
Open text editor again and create a simple .AVS file, including the following:
video = AviSource("yoursignpost.avi")
audio = WavSource("your.wav")
clip = AudioDub(video, audio)
return ConvertToYV12(clip) # HCEnc doesn't accept RGB, which is what comes out of Blender
Save the .AVS with a suitable project name. You may want to add sharpening to your video. Suggest msharpen.dll for that. But leave it for now.
Run HCEncGUI, load the .AVS file, and make your minor config choices for export. Check the preview tab to see if Blender has sent a frame to HCEnc. Having Blender’s console visible is useful too. Hit the ‘All Frames’ button.
The window at the bottom of HCEncGUI shows whether AVISynth is happy with your .AVS file. If so, hit the encode button. Blender should advance frames.
Using the above setup I’m successfully frameserving to HCEnc, Cinema Craft Encoder SP2, TMPGEnc and VirtualDub & Mod (handy for encoding to other system codecs you may have installed but that aren’t available via the Blender interface, a Linux thing). All on Linux with Wine.
May sound like a lot of effort, but once it’s set up it’s easy, and both VirtualDub and AVISynth are excellent all-round video processing tools. There are real benefits to getting your head round AVISynth and the fantastic range of tools available for it, many opensource. Check out www.Doom9.org for masses of information.
You can add AVISynth processing options to your .AVS file, like sharpening, colourspace conversion for encoders that don’t take RGB (like HCEnc), and interlacing/deinterlacing, all in the frameserving process.
Loads of AVISynth filters http://avisynth.org/warpenterprises/
Thanks for your suggestions on editing. I’d been a bit daft staggering up in one go, so now I stagger up as far as I need to, then start again at Channel 1 and stagger up again. Much easier.
Since Blender does lack a source bin, I keep a small second window open to my timeline, but have it set so that it is seeing a section that is far beyond what I’m cutting. For example, if I’m doing a five minute cut, the second window is centered on about 10 minutes. By using the snap function (shift s) you can easily transfer stuff in and out of the “bin”.
Mawilson, cool idea, I was playing with other timelines and that didn’t occur to me.
Yellow, oh I see, is that a watch folder for the encoder? Hmmm, many steps to work around a basic function, but at least it works.