How do you handle huge (literally) projects? Rendering, image processing, compositing.

Not my first gig of that caliber; I just completed another project, 14 m long and 3 m high at 72 dpi.
A funky 39685x8504 pixels in Blender Internal.

Usually it's not too tough to render: ReallyBigRender did it in 2.49. In 2.5 I tried to oversize with a 1000% render size, which resulted in a crash sometime during the night on all 3 machines I had rendering in parallel.
So I tried the Render Tiles script, which delivers the desired result, but only for the plain render, not for the compositing nodes.
I had lens distortion, chromatic aberration, DOF, glare and a fake HDRI pass, and it looked really nice, but the render script can only apply them to the single tiles, which for blur and lens distortion nodes is no good at all.

So I had to disable all nodes and figured I'd do it in post. I screwed up the z-pass render because I had a normalize node in it, and guess what: it normalizes each tile, not the whole z-pass. Well, no DOF.
However, Photoshop is painfully slow with a 300+ megapixel image, and Blender's node editor… well, you can set it up, but as soon as you press F12 the whole show crashes.

So, what renderers (preferably biased) do you use for such huge renders that can handle them in a moderate time? Is there one that can distribute the render tiles to render grunts without needing a gazillion expensive licenses?
And how do you composite such huge renders? Any tools that are particularly good for that kind of work?

As id Software rented Cray servers to work on their 381+ GPx textures for Rage, I am not too sure there are any end-user ways to do this painlessly :slight_smile:

In my job I mostly use BI, Octane and Indigo; that's really all I need for the most part. I've got Luxray running too, but I only know my way around it well enough for small projects, no in-depth knowledge of whether it could handle something like this.

I'd also appreciate input from Thea and V-Ray users, as one of those will be the next addition to my toolbox. Personally, Thea is the better tool to me, while V-Ray is the de facto industry standard. How do those perform with such huge render tasks?
I just recently looked into Thea and V-Ray together with Blender and I like both.

For once I wish I had a normal project, not a render the size of a house, or a tiny mesh that needs 10 million polys, or a 10-minute animation that has to render within a day… :slight_smile:
But I guess without the challenge I'd lose interest in my work…

All in all, I'd like to have a bit of a discussion about production with other “professional” Blender users; it's something this forum lacks (!"§!whats wrong with my render i got black things in it HALP!!!11), but I am sure there are many things each of us does in his production pipeline that someone else has a better or more efficient solution for :slight_smile:

The largest I've had to go is about 8000 x 4000 px, and I cheated heavily on it that first time due to time constraints. I rendered it with Indigo at 6000 x 3000 px and upscaled the damn thing to the final resolution. Post effects and lens distortion masked the upscaling, so it looked fine. I was on an extremely tight deadline and had to cut corners wherever I could. If I recall correctly, I tried rendering the same scene at 8000 px later with V-Ray and I think it made it through fine, so V-Ray could be a good shot. Maybe I'll give it a try again, perhaps with something bigger. As you've probably guessed, I haven't rendered anything quite as big as what you're going through (!!!). Compositing was done in Photoshop, which took a while, and I had to forgo my usual 16-bit workflow to prevent chronic freezing and sluggishness.

Out of curiosity, how long did it take you to render that?

Interesting question!

I've never had to go quite that large… from what I've done personally, and from what I've seen agencies do, large formats are almost never a straight render… and I've only ever gone up to poster size (A0).

Render at a reasonable size (as big as you can go), broken into passes whenever possible… render one thing at a time!

Then enlarge to the target resolution and patch it up in Photoshop with a lot of hi-res elements… texture not holding up? Patch it up!

ReallyBigRender would be great to have back… don't forget it just uses shift on a camera set to a narrower FOV, so you can do that manually yourself… you could probably automate it now that everything is animatable, by animating the shift values!

Honestly though, this sounds TOUGH at that res… unless your computer is amazing, it's probably best to nail the overall look at a smaller res and to break it into tiles even for the post touch-up… I'd presume something that large wouldn't even be printed on one sheet! Though I guess it's possible to have a 3 m roll printer… the largest I've seen is half that!

PS: the z-pass can be output in the 0-1 range using the "Map Value" node… this is animation- and tile-safe, unlike the "Normalize" node (the z-pass is in Blender units by default).

These are the settings for the Map Value node:
offset = -1 * camera near clip plane
size = 1 / (camera far clip plane - camera near clip plane)
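As a tiny sketch, those two values can also be computed from the camera's clip range in Python; the `clip_start`/`clip_end` property names assume the 2.5+ API and an active scene camera, and you'd plug the printed numbers into the Map Value node by hand.

```python
# Sketch: compute the Map Value node settings from the camera clip range,
# following the formulas above. Assumes the scene has an active camera.
import bpy

def map_value_settings(clip_start, clip_end):
    """Return (offset, size) that map the z-pass into the 0-1 range."""
    offset = -clip_start                      # offset = -1 * near clip plane
    size = 1.0 / (clip_end - clip_start)      # size = 1 / (far - near)
    return offset, size

cam = bpy.context.scene.camera.data           # data block of the active camera
print(map_value_settings(cam.clip_start, cam.clip_end))
```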

Sorry I can't help on external renderers… I have only used Blender or LightWave for this sort of thing and try to avoid large-format work whenever possible :stuck_out_tongue:

Arexma, regarding the render licences I would go with V-Ray; it's robust and costs no extra licences.
I don't know how good the 2.5 implementation is now, but I would try a V-Ray demo… though the demo has a resolution limit, I guess. :wink:
One point that interests me is the use of proxies. In the video clip editor in Blender there is a proxy setting for 25%, 50% and 75% of the resolution. I wish there were something similar in the Blender compositor too, just like in After Effects. This proxy stuff speeds up the workflow enormously… I know this has nothing to do with extreme resolutions, but for the workflow it's great…

Well, I'm far from ever having used Blender professionally, but the biggest render I ever did had a resolution of 12500x17254 pixels. It should've been 2.46 or somethin' back then, and I used ReallyBigRender of course. Stitching it all together in GIMP afterwards already almost overwhelmed my PC's RAM. But it worked, after all.
So ehm… now I start wondering why I'm even posting here. I'm afraid my one-time experience is of no use to you, sorry. Maybe take it as moral support^^.

greetings, Kologe

I've gone billboard size and the size of gas tankers in other applications, and the key is to do it in chunks. If it will be printed (which it usually is), print the pieces and combine them externally. The difficult part is rendering it out so all the pieces line up.

@Tom:
As mentioned, I used BI, so I cut corners where possible. The whole thing was a backdrop for an event in a shopping mall, with the usual deadline of "we want to see something yesterday, and by the beginning of the week it has to be in print". It rendered really fast. I broke it into 100 tiles; a tile took around 3-5 min depending on the content. The parts with grass (mesh grass) took longer; for everything else I used high-res textures rather than geometry. Originally I was aiming for photorealism, but the client was happy well before that goal was met.
So, around 5-7 h on a quad-core i7 with 12 GB RAM, plus some time for the stitching. The result: a 480 MB TIFF :slight_smile:
I'll ask for the exact render time; the job ran in the studio. Once it renders, it's my time to lean back until the phone rings… :smiley:
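(For reference, the stitching itself is easy to script. A minimal Pillow sketch below; the tile naming, the 10x10 grid and equal tile sizes are assumptions, and a full canvas at this resolution needs a few GB of RAM.)

```python
# Minimal stitching sketch: paste a regular grid of equally sized tiles back
# into one canvas. Tile names and the 10x10 grid are placeholders.
from PIL import Image

COLS, ROWS = 10, 10
first = Image.open("tile_0_0.png")
tw, th = first.size

canvas = Image.new("RGB", (COLS * tw, ROWS * th))
for row in range(ROWS):
    for col in range(COLS):
        tile = Image.open("tile_%d_%d.png" % (row, col))
        canvas.paste(tile, (col * tw, row * th))   # place tile at its grid offset
canvas.save("stitched.tif")
```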

@Mike:
Map Value… where were you yesterday when I brainfarted with Normalize? Ah well. The backdrop will not be set up straight; it'll be curved, so I get my lens distortion naturally, and the DOF will come from the sheer size alone :wink:
As for the print, I've got no idea; our media director handles that. I get told "we need this 5-man, 2-week project done by you in 2 days, it has to show this and fulfil those requirements". :wink: The usual.

@Tungee:
I already tried the V-Ray demo with Blender 2.5; so far I really like it, I just haven't had time to really get into it. V-Ray is not a renderer where you quickly get a result out of it; you have to get to know it to use it properly.
Then again, Thea offers various render kernels and a grand material studio, but requires node licenses.
I need an objective comparison between V-Ray and Thea :smiley: CLAAS, where are you? I know you use both :slight_smile: Ah, yeah… Michigan. He's still sleeping :slight_smile:
And you can actually use "proxies" for compositing. I had a .blend ready with just file inputs for the z-pass and the normal render, all set up and ready to tweak, working with 1/10th-size images for "realtime" results; Blender just decided to crash on me as soon as I tried to feed it a production-size image.

I've got something ringing in the back of my head that Blender can't load images bigger than 20k px in lateral length. Anyone?

@Nichod: I did it in chunks, there is no other way anyway. The problem is the process of combining them, especially if you want to use post-pro compositing effects that only work across the whole image. And I don't find it too hard to produce the tiles; with a calculator and Blender's lens shift you wouldn't even need a script :wink:
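(For anyone who'd rather script it anyway, here's a rough sketch driving the tiles from Python. It uses the render border instead of lens shift, which is simpler math for the same result; the property names assume the 2.5x API, and the output path is a placeholder.)

```python
# Sketch: render the frame as a grid of tiles via the render border instead of
# lens shift. Property names are from the 2.5x Python API.
import bpy

def render_tiles(cols=10, rows=10):
    rend = bpy.context.scene.render
    rend.use_border = True
    rend.use_crop_to_border = True            # save only the bordered region
    for row in range(rows):
        for col in range(cols):
            rend.border_min_x = col / cols
            rend.border_max_x = (col + 1) / cols
            rend.border_min_y = row / rows
            rend.border_max_y = (row + 1) / rows
            rend.filepath = "//tiles/tile_%d_%d" % (row, col)
            bpy.ops.render.render(write_still=True)

render_tiles()
```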

All in all, mine was a rather extreme example, though I don't find the pixel size too unusual.
If the requirement is a poster print at A0 and 300 dpi, you're already at 14043x9933 pixels. Sure, you could lower the dpi, but for A0 I would still use 300; people also like HDTV, so I guess it's a valid topic to discuss :wink:

Especially as you have to do things in chunks, it would be great to have a smart system. Let's say you have to blur the image because you want to use it for a soft-light overlay; you can't do that on a single chunk. The tool would have to know the 8 neighbouring tiles, or more if the blur radius is bigger than one tile, so the calculations come out right. Well, if I can come up with that idea, someone smarter must have too; I just don't know what he named his tool :wink:
Then you could do post-pro on tiles as well; otherwise you have to do it by hand, and when working with (in my case) 100 tiles, your render is technically outdated by the time you finish :slight_smile:
I think we need to make smarter tools instead of relying on better hardware. :wink:
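(To make the idea concrete, here's a minimal sketch of such a "neighbour-aware" blur with Pillow: crop the tile plus an apron of surrounding pixels, blur, then trim the apron again so the seams between tiles match. For simplicity the apron is read from a full canvas here; a real tool would assemble it from the 8 neighbouring tile files.)

```python
# Sketch of a seam-safe tile blur: crop the tile plus an apron of surrounding
# pixels, blur, then trim the apron so adjacent tiles still line up.
from PIL import Image, ImageFilter

def blur_tile(canvas, box, radius):
    """box = (left, top, right, bottom) of the tile inside the canvas."""
    apron = 3 * radius                        # roughly the Gaussian support
    l, t, r, b = box
    w, h = canvas.size
    big = canvas.crop((max(l - apron, 0), max(t - apron, 0),
                       min(r + apron, w), min(b + apron, h)))
    big = big.filter(ImageFilter.GaussianBlur(radius))
    # trim the apron again, relative to the enlarged crop
    ox, oy = l - max(l - apron, 0), t - max(t - apron, 0)
    return big.crop((ox, oy, ox + (r - l), oy + (b - t)))
```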

For large-format printing of my artwork I always render at about 1/3 of the final size; anything more than that and you barely see a difference. I upres in Photoshop with bicubic. Basically, I have never found a good reason to render at full res for print work; the difference is negligible in most situations (especially for banners and posters you see from a distance). It works just fine to render smaller and upres in post. Pulling my hair out (ah, I don't have any) trying to get the resources together for a full-res render has little if any payoff in my experience.

Yep. Usually we would paint on any effects. On one we airbrushed a subtle DOF with an off-white paint; it worked surprisingly well. On another we painted black outlines, since the software we were rendering with didn't produce outlines that lined up across images. Another option is to render the "effect" by itself and print it on transparent sheets that are overlaid onto the final image. We did a lens flare that way.

You've got me curious. What's your line of work exactly? You render images at real size and then apply them, with paint, onto whatever the customer pays for?

Just shows that the standard 1024x768 is not the only size people render at.

As RichardCuliver said, I also just upres the files in Photoshop.

There's a good trick: set up an action and do the resizing in small increments. I set the document size to "Percent" and change the size to 105%, resampling with Bicubic Smoother. I record that as an action and run it until I'm at the size I need. I've done images like that for some huge tradeshow banners as well as a couple of vehicle wraps. It works like a dream and the quality stays great.
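(For what it's worth, the same stepped enlargement can be scripted outside Photoshop. A rough Pillow sketch of the idea; plain bicubic, so only an approximation of Bicubic Smoother, and the file names are placeholders.)

```python
# Rough, scriptable version of the 105%-steps trick using Pillow's bicubic
# filter (only an approximation of Photoshop's "Bicubic Smoother").
from PIL import Image

def stepwise_upscale(img, target_width, step=1.05):
    while img.size[0] * step < target_width:
        w, h = img.size
        img = img.resize((int(w * step), int(h * step)), Image.BICUBIC)
    scale = target_width / float(img.size[0])             # final exact step
    return img.resize((target_width, int(img.size[1] * scale)), Image.BICUBIC)

big = stepwise_upscale(Image.open("render.png"), 14043)
big.save("render_print.tif")
```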

Hi arexma,

My response to the post-pro part of your question: hire a coder to create what might be called a multi-res image processing workflow, using ImageMagick and a custom GUI. The idea is to assemble and process the picture at screen size, making the job trivial that way; the routines you run are then applied at full res as a script.

I have such a tool, programmed in Python, with a GUI designed in Glade (using GTK+ 2 for the curves); unfortunately it isn't releasable and may never be. However, I'm not a programmer and I managed to create it in about a month; a pro coder should do it quicker given a solid specification.
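(To illustrate the shape of that approach, a very small sketch of the "tune on a proxy, replay at full res" idea: each recorded step is an ImageMagick operation, and pixel-sized parameters are multiplied by the proxy-to-full scale factor on replay. The recipe format is invented for the example; only the `convert` options are real.)

```python
# Sketch of the multi-res replay idea: operations are tuned on a screen-size
# proxy, then replayed on the full-res file with pixel parameters scaled up.
import subprocess

def replay(recipe, src, dst, scale):
    args = ["convert", src]
    for op, value, in_pixels in recipe:
        v = value * scale if in_pixels else value
        if op == "blur":
            args += ["-gaussian-blur", "0x%g" % v]        # sigma in pixels
        elif op == "brightness":
            args += ["-modulate", "%g" % v]               # percent, unscaled
    subprocess.check_call(args + [dst])

# tuned on a 1000 px wide proxy, replayed on the 10000 px wide original
replay([("blur", 2.0, True), ("brightness", 105, False)],
       "full_res.tif", "graded.tif", scale=10)
```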

@ RayT:
I'd actually appreciate it if Blender could handle large images so I could do the post/compositing there.
I am quite capable of coding (C, C++, ASM) and it would indeed be no real problem to make a smart tile blur (I'd use Adobe's boost::gil, or ImageMagick as well); however, coding the various filters, or even making it node-based, would be quite a handful.

It should be doable within Blender and with Python to split a render into tiles, distribute the tiles throughout the network to render nodes, and once all tiles are done, patch them back together. Then create a low-res proxy for compositing and post-pro, and once you render your setup, all the values, like blur, are kept relative in the background (say the proxy image is 1000 px and you blur by 20 px, so the blur for the real 10000 px image will be 200 px), and wherever possible the post-pro is done on the tiles: allocate memory for a tile, handle it, save it, free the memory, next tile. And in the end it's stitched together.
Filters should work smarter too. Blender's compositor should support something like a "layer" and a "canvas".
That way you could apply, for instance, lens distortion to a tile: the effect would be calculated relative to the "canvas", while only being processed on the "layer", i.e. the tile's dimensions.
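(As a rough illustration of that bookkeeping, under obvious assumptions such as placeholder file names and a 10x10 grid: the blur is tuned on the proxy, scaled to the real resolution, and applied tile by tile so only one tile sits in memory at a time. Filters that reach across tile borders would additionally need the apron trick from earlier.)

```python
# Sketch of the proxy-relative tile loop: tune the blur on the proxy, scale it
# to the full resolution, then process one tile at a time to keep memory low.
from PIL import Image, ImageFilter

PROXY_WIDTH, FULL_WIDTH = 1000, 10000
proxy_blur = 20                                        # looks right on the proxy
full_blur = proxy_blur * FULL_WIDTH // PROXY_WIDTH     # 200 px on the real image

for row in range(10):
    for col in range(10):
        name = "tile_%d_%d.png" % (row, col)
        tile = Image.open(name)                        # allocate one tile
        tile = tile.filter(ImageFilter.GaussianBlur(full_blur))
        tile.save("post_" + name)                      # write the result
        del tile                                       # free before the next one
```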

Photoshop is good, and a 3rd-party tool would be nice, but the power of nodes and proxy images is just superior: you set up your node system, get instant feedback, and deploy at production quality in the end, just like Adobe AE can do, for instance.
So if there's a Python hacker out there without an idea of what to code… :slight_smile: I am "just" capable of C, C++ and ASM, no Python (yet), and the Blender API is alien to me.

Maybe it is also something for Project Mango.

Arexma, have you tested YafaRay?

Is it even possible to run filters such as blur on tiles? I thought it wasn't, but I might be wrong.

Edit: Sorry, I misread your post. You’ve got the right idea.

Yes… though the last time was around Blender 2.43, IIRC.
There are only so many raytracers one can handle. As stated, I use BI, Luxray, Octane and Indigo, and have demos of Thea and V-Ray.
If it has no biased kernel it is not really of interest to me; I've got plenty of unbiased solutions at hand :wink:
I might look into it again, just for the sake of it, because Blender is YafaRay's host application.

What do you need such a huge image for, if I may ask? You do realize that the bigger the print, the less DPI you need? Just last week we printed a 3x2 m wall that was viewed from a close distance (max. 5 meters), and the image on the wall only had 80 ppi, and it still looked great at that low resolution.

I really have no idea why you ask; did you read the thread?
As I wrote in the first post, it was 14 m by 3 m at 72 dpi, and a few posts later, that it was the stage background for an event in a shopping mall.
And the 72 dpi were the printer's requirement, and I guess they know what they're doing if they can print such an abomination.