Want to make a script to talk to ffmpeg for rendering with other parameters/codecs. Please help me

Hi, I'm searching for a way to render to a different format, with different settings than Blender can provide through its Output Properties UI. I guess re-coding the UI is too much work, but maybe we can at least create a fixed render script or similar, which users can define once and then use for their productions…
Rendering to images is a pretty hefty logistic load for a home project; I'd rather use a video codec to keep the number of folders low.

That's why I want a usable production codec, which isn't possible from Blender's render UI at this point.

What format and codec are you after here? I mean, you can output to FFmpeg and get any of these:

[image]

Come to think of it, Blender will output images of each frame into a temp directory anyway before encoding them to a video, so… I'm not sure what you're expecting.

In the Blender UI, only a subset of FFmpeg's available options is exposed:
[image]

An example from another open-source program that uses FFmpeg:


[image]

So… what you're saying is you want every possible thing FFmpeg can do in Blender's options? Do you have any idea what that would entail? There's a reason those other programs are separate things that basically just slap a GUI on FFmpeg.

Most production pipelines render to images and do what they need from there. Blender doesn't have everything under the sun because it's near impossible to account for everything anyone might want, and then build all of it.

DNxHD doesn't even render on my machine, and the settings in the UI can't configure it. I'd like to use DNxHR 444 8-bit for my stuff, or maybe Lagarith, or maybe, if it works, even HEVC's lossless setting plus alpha. I want to make anime and need to render different objects separately for post-processing, so I have a quite heavy After Effects comp. I still render some stuff I want to overpaint as PNGs, but like I said, the logistics of handling so many folders is just jarring and time-consuming. I'm just one man, not a studio; if I could render single files for many things, it'd take a lot of strain off my back.

Is it really true that Blender exports images to a folder?? I'd have guessed it directly feeds the uncompressed frame to the encoder. Back in 2.79, Blender still had the option to render to a frameserver, too.

Yes. It uses the default OS temp directory to render each frame to an image, or one you set in User Preferences.

If you want a script to do this, you can rebuild your own UI for it in Blender (or just have a text field where you can put your favorite FFMPEG voodoo incantation). Then your script calls FFMPEG as a subprocess that’s triggered when rendering is complete. The only downside of this approach is that you have to ensure that the FFMPEG binary (and the right version of the binary for your script) is installed on your machine… which is not so hard.
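That "favorite FFMPEG voodoo incantation" text field could be sketched roughly like this. This is only a sketch: the `ffmpeg_command` scene property and the splitting helper are names made up for illustration, not Blender built-ins, and the guard around `import bpy` just lets the helper run outside Blender too.

```python
import shlex
import subprocess

def split_ffmpeg_command(command_string):
    # shlex respects quoting, so paths with spaces survive intact,
    # e.g. -i "my frames/f_%04d.png"
    return shlex.split(command_string)

try:
    import bpy  # only available when running inside Blender

    # A user-editable string property holding the encode command
    # (hypothetical property name, not a Blender built-in)
    bpy.types.Scene.ffmpeg_command = bpy.props.StringProperty(
        name="FFmpeg command",
        default='ffmpeg -framerate 24 -i "frame_%04d.png" out.mov',
    )

    def run_scene_ffmpeg(scene):
        # Hand the tokenized command to the external ffmpeg binary
        subprocess.run(split_ffmpeg_command(scene.ffmpeg_command), check=True)
except ImportError:
    pass  # running outside Blender; nothing to register
```

The idea is that you type the full command once per project and the script just replays it after rendering.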

Anyway, I highly suggest you NEVER render animations directly to a video file. A crash, a power loss, and hours of renders… poof! Gone.
You can always encode the image sequence to a video file once it's fully rendered, to save folders and file count.


Much in favor of this approach for production: you get max quality in your master output, can composite and re-render any buggy frame individually, and then encode in whatever sizes/quality/format/etc. you might need afterwards.

Yeah, I wouldn't do it for 3D projects, but my scenes consist 90% of just image planes and Grease Pencil objects. They render in under a minute, sometimes just 20 seconds for the whole camera shot.

I just checked all temp folders, including the Blender render cache, Windows temp, Blender temp… I couldn't see any images created during rendering.

Edit: I now checked it with "Everything"; no new files are created, only the video file at 0 bytes, which later switches to the output size.

Any pointers? I'm a noob; I have made small scripts for Blender, but I only know a bit about Grease Pencil, and I still don't know much of the API.

I would say that the first step is learning to do things the slow way. That is, render all your frames to images and then learn how to use FFMPEG at the command line to encode those frames to a video file. That will familiarize you with the syntax of the FFMPEG command and what you want that application to do. Once you have that, then you can start thinking about ways to automate it with a script.
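Once the command-line version works, that same invocation translates directly into an argument list you can reuse from a script later. A minimal sketch, assuming FFV1 (a lossless open codec) and a `frame_%04d.png` naming pattern, both of which are just example choices:

```python
import subprocess

def encode_sequence_cmd(pattern, output, fps=24):
    # Build the ffmpeg argument list; one list element per shell token,
    # so there are no quoting headaches with paths containing spaces.
    return [
        "ffmpeg", "-y",            # -y: overwrite output without asking
        "-framerate", str(fps),
        "-i", pattern,             # e.g. "frame_%04d.png"
        "-c:v", "ffv1",            # lossless; swap in your codec of choice
        output,
    ]

# To actually encode (needs ffmpeg on your PATH and rendered frames):
# subprocess.run(encode_sequence_cmd("frame_%04d.png", "master.mkv"), check=True)
```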

That's the easy part, though. I just don't know where to set these values through the Blender API (if possible); if it's not possible, I'll have to call the command line from Python. How do I listen for render finish?

The first option is preferred, so I don't have to mux and write the data twice (for the images and the video).

Right. First of all, it’s absolutely in your best interest to pay attention to the recommendations in this thread to render to an image sequence prior to encoding. It’s the only certain way to allow for resuming render jobs. Also, thanks to FFMPEG’s multithreaded encoding (for most codecs), encoding those frames to a video file takes a fraction of the time that actually rendering takes.

Now, to make this work, you’re going to have to call FFMPEG (installed separately from Blender) using Python’s subprocess module; as far as I know, Blender’s Python API doesn’t have any access to its internal FFMPEG library.

As for triggering the event on render completion, that’s done with the render_complete application handler.
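Put together, a rough sketch of that approach might look like this. The paths, frame pattern, and DNxHR profile flag are placeholders (check `ffmpeg -h encoder=dnxhd` for the profiles your build actually supports), and the try/except lets the file be loaded outside Blender without error:

```python
import subprocess

# Placeholder encode command; adjust paths, fps, and codec flags to taste.
FFMPEG_ARGS = [
    "ffmpeg", "-y",
    "-framerate", "24",
    "-i", "/tmp/frames/frame_%04d.png",
    "-c:v", "dnxhd", "-profile:v", "dnxhr_hq",
    "/tmp/frames/shot.mov",
]

try:
    import bpy  # only present inside Blender

    def encode_after_render(scene, *args):
        # Called by Blender once the whole render job has finished
        subprocess.run(FFMPEG_ARGS, check=True)

    # Avoid stacking duplicate handlers when the script is re-run
    names = [getattr(h, "__name__", "") for h in bpy.app.handlers.render_complete]
    if "encode_after_render" not in names:
        bpy.app.handlers.render_complete.append(encode_after_render)
except ImportError:
    pass  # running outside Blender
```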


OK, done. This is pretty perfect for my case.

It's just a simple script, not an addon, and it's hardcoded to my needs, but everyone should be able to adapt it to their own needs.

Maybe my reasoning wasn't explained before, but since I'm making anime/cartoons, I need to render at least the background and foreground separately, oftentimes more than that, with some specific character on its own layer in After Effects too. I use Eevee because it has realtime feedback and much better animation tools than video software, but I need some layer-based post-processing nonetheless.

Now I'm tackling a bigger project with many camera cuts, resulting in several folders per cut. As a lone home animator, the logistics of that many folders is a pretty heavy burden and has messed me up in the past, even with smaller projects. Having just a few video files in a folder per cut is so much easier to handle.

Anyway, here's the code for anybody interested:
