I’ve been wondering something (a question for the Orange people). Was “Elephants Dream” rendered using Yafray or Blender Internal (reworked for the upcoming release)? If so, were the textures and lighting baked into the walls for quick render times, or was the thing rendering for a year? Also, what kind of lighting was used to produce the Yafray look (GI, built-in radiosity, AO, etc.) if the renders were in fact internal?
Even better than answers to these questions would be a place where the Orange techniques have been or are being discussed.
“Rendering such a movie on film resolution is no small task: thankfully Bowie State University donated access to XSeed, their cluster of 240 Dual-core Xserves to render the movie. XSeed took over 125 days to render the movie, using up to 2.8GB of memory to render a single frame. With each high-resolution image consisting of up to nine different layers, terabytes of data were flying across the globe and pouring into the Amsterdam studio.”
If so, were the textures and lighting baked into the walls for quick render times, or was the thing rendering for a year?
There was no baking (unfortunately). We did generally render characters on separate render layers from the backgrounds, with different lighting setups, but everything was re-rendered each frame. We had the render farm, which helped, and used matte paintings on a few occasions.
Also, what kind of lighting was used to produce the Yafray look (GI, built-in radiosity, AO, etc.) if the renders were in fact internal?
No GI, radiosity, AO, or even raytraced shadows were used. All lights were generally either shadow buffer spotlights or spherical lamps. AO was just far too slow to be useful for us - the shots were mostly rendering at about 10-25 minutes per frame (1920x1080). Anything more than that gets too time-consuming, not only in render time but in test renders too. Your best bet is to inspect the .blend files.
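If it helps to see that kind of lamp in script form, here's a minimal sketch using the current bpy API (the production files were made with the 2.4x builds, whose Python API was completely different; the lamp name and values are just made up for illustration):

```python
import bpy

# Create a spotlight that uses mapped (buffered) shadows rather than raytraced ones.
light_data = bpy.data.lights.new(name="KeySpot", type='SPOT')
light_data.energy = 1000.0        # illustrative value; 2.4x lamps used a unitless energy
light_data.spot_size = 0.8        # cone angle in radians
light_data.use_shadow = True      # shadow-mapped shadows, no raytracing involved

# Put the lamp into the scene at an arbitrary position.
light_obj = bpy.data.objects.new(name="KeySpot", object_data=light_data)
light_obj.location = (4.0, -4.0, 6.0)
bpy.context.collection.objects.link(light_obj)
```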
Even better than answers to these questions would be a place where the Orange techniques have been or are being discussed.
You’re right. Hopefully we can do some blog posts on these sorts of things in the coming weeks and months.
That’s amazing. Of course the render farm makes all the difference. I have had some success rendering high-quality frames using clever lighting and radiosity, but I’m really impressed with the quality of the film with only sphere lights and buffered shadows. Were the textures made using GIMP or rendered from something else?
Also, besides the characters, are the models high-poly? I ask because I’m making a mostly-CG film and I’m looking for clever ways to produce quality with lower render times.
I haven’t had a huge amount of experience using GI (Yafray) and radiosity (Blender), but each time that I have, I’ve been frustrated by the lack of control and the slow feedback. It’s almost impossible to tweak and do test renders when you’re waiting an hour for the result to appear. I much prefer a workflow of using lots of smaller lights that can be moved around to get just the right appearance, almost like ‘painting with light’.
I also used a lot of negative lights. If you open up some of the sets in production/lib/machine/ … in some of the sets that I lit, like the tile room, there are a lot of them. I didn’t use them much before this project, but now I find them really useful just to tweak things, hide flaws, and get nice effects. Using them with Sphere enabled and a small radius is also a great way to get contact shadows where things connect (like pipes to a wall). Normally those areas would look hard-edged, harsh, and very CG-ish unless you use things like AO or GI.
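To make the idea concrete, here's a rough conceptual sketch in Python (not Blender's actual falloff model, and the lamp positions and energies are invented) showing how a small negative sphere lamp simply subtracts from the shading sum and darkens a contact area:

```python
import math

def lambert(point, normal, lamp_pos, energy, radius):
    """Very rough diffuse contribution of one sphere lamp at a surface point."""
    to_lamp = [l - p for l, p in zip(lamp_pos, point)]
    dist = math.sqrt(sum(c * c for c in to_lamp))
    if dist > radius:                      # sphere lamps only reach within their radius
        return 0.0
    n_dot_l = max(0.0, sum(n * c / dist for n, c in zip(normal, to_lamp)))
    falloff = 1.0 - dist / radius
    return energy * n_dot_l * falloff      # negative energy gives a negative contribution

lamps = [
    ((2.0, 0.0, 3.0),  1.0, 10.0),         # ordinary fill lamp
    ((0.1, 0.0, 0.2), -0.6,  0.5),         # small negative sphere lamp near a pipe/wall join
]
point, normal = (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)
shade = sum(lambert(point, normal, pos, energy, radius) for pos, energy, radius in lamps)
print(max(0.0, shade))                     # clamp so the surface never goes below black
```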
The textures all went through Gimp at some stage. Some were painted from scratch, some started off as photographs and were heavily layered with grunge maps, corrected, painted on top of, etc. They were generally quite high-res, since we were rendering in HD.
I’m not sure what you mean about high poly models or what your definition is. You’re best to look at the production files themselves.
Generally, a good ballpark approach is for the texture to be double the size at which it appears on screen. For example, if an object is going to be close up, full screen in a shot, then the texture size should be about double the resolution of the frame (so for DVD, around 1500x1000). It needs to be double to take interpolation into account. Things will be moving around and deforming on screen, so if you only give the texture as many pixels as are seen in the final render, you’ll see nasty distortion and pixellation.
Of course, if you’ve got things that aren’t seen in close-ups they don’t need to be such high res - e.g. if something is in the background, taking up 1/4 the width of the frame, then it only needs to be 720 x 1/4 x 2 = 360, so around 400 pixels wide. It’s important to be aware of this, rather than just throwing high-res textures at everything, because it can really send the memory usage (and render times) through the roof. There are lower-res versions of all the Proog and Emo textures (1k width instead of 3k or 5k) that we used in long shots, to save memory. On the closer shots, with sets and everything, sometimes we’d hit 2.5 or 3GB of memory usage… not fun.
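Here's that rule of thumb as a tiny Python helper (the function name is mine; the frame widths and fractions are the ones from the examples above):

```python
def texture_width(frame_width, screen_fraction, interpolation_factor=2):
    """Rough texture width in pixels for an object covering screen_fraction of the frame."""
    return round(frame_width * screen_fraction * interpolation_factor)

print(texture_width(720, 1.0))    # full-screen object at DVD res -> 1440 (around 1500 wide)
print(texture_width(720, 0.25))   # background object at 1/4 frame width -> 360 (around 400 wide)
print(texture_width(1920, 1.0))   # full-screen object at HD res -> 3840
```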
Yeah, I think I hit around the same memory usage the first time I tried to render one frame of the opening scene … except the memory was for sure “virtual” (disk swapping). I rendered the first close-up of Proog … actually I stopped it after ~40 minutes, when it was about 3/4 done :eek: … and that was only at 640x480 res too!
In a somewhat related “textures and lighting” question:
I see that many of the scenes have multiple render-result nodes in the node editor, each using different render layers/scenes. To render the final image, is the procedure to render each of the named render layers with “Do Composite” turned OFF, and then render the final image with “Do Composite” turned ON?
And one other question (for now anyway).
How does the “linked objects” (groups?) mechanism work? E.g. 04_17.blend outputs these messages:
How do you add/remove these or other “linked objects” to a scene?
I tried to remove them, to make the scene “standalone”, with the idea of then appending those objects so that the entire file would be self-contained. The reason is that I was trying to render a file on render-planet and it complained about not finding the linked files. I haven’t tried uploading the linked files (yet), but I wonder if it’s going to work without the “production\lib …” etc. paths existing on the render farm?
Yep, most of the time it’s the memory it needs to actually render. I was rendering some frames for my reel here on a PowerBook 1GHz G4 with 1GB of RAM. It took me about 2-3 hours per frame at 1024x576. A PC I have with a faster CPU but 512MB of RAM wouldn’t render at all.
I see that many of the scenes have multiple render-result nodes in the node editor, each using different render layers/scenes. To render the final image, is the procedure to render each of the named render layers with “Do Composite” turned OFF, and then render the final image with “Do Composite” turned ON?
All “Do Composite” does is send the render pipeline through the compositing stage when you press render; otherwise you just get a straight render of what’s in the 3D View. It’s similar to using the sequencer. As far as I know, it only needs to be switched on in the scene that you’re doing the final render from (i.e. the one saving the frames through the render pipeline to disk). If “Do Composite” is on, you should only have to press render once - when it does the compositing, it goes through the nodes and renders whatever it needs to compose the image.
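For anyone reading this with a current Blender, here's roughly the same setup scripted with today's bpy API ("Do Composite" corresponds to scene.render.use_compositing now; the view layer names below are made up and would need to exist in your file):

```python
import bpy

scene = bpy.context.scene
scene.render.use_compositing = True   # only needed in the scene the final render is saved from
scene.use_nodes = True

tree = scene.node_tree
tree.nodes.clear()

chars = tree.nodes.new('CompositorNodeRLayers')    # render layer with the characters
chars.layer = "Characters"                         # assumes a view layer with this name exists
bg = tree.nodes.new('CompositorNodeRLayers')       # render layer with the set/background
bg.layer = "Background"

over = tree.nodes.new('CompositorNodeAlphaOver')   # stack characters over the background
comp = tree.nodes.new('CompositorNodeComposite')

tree.links.new(bg.outputs['Image'], over.inputs[1])
tree.links.new(chars.outputs['Image'], over.inputs[2])
tree.links.new(over.outputs['Image'], comp.inputs['Image'])
```

With that in place, pressing render once renders both layers and pushes the result through the node tree, which is the behaviour described above.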
How does the “linked objects” (groups?) mechanism work? E.g. 04_17.blend outputs these messages:
Unfortunately the wiki documentation lacks anything to do with Blender’s library/datablock system, which is strange and really bad, since it’s fundamental stuff that every Blender user should know. This article may help anyway:
How do you add/remove these or other “linked objects” to a scene?
I’m not sure, but you can try opening up an Outliner, changing the display menu to Groups, selecting the groups by clicking/dragging with the left mouse button outside the name display, then right-clicking and choosing “Make Local”.
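If you'd rather script it in a current Blender, something like this should do the same thing (a rough sketch with today's bpy operators, which didn't exist in the 2.4x builds used on Orange):

```python
import bpy

# Select everything, then turn linked library data into local copies so the
# .blend no longer depends on the production/lib/... paths.
bpy.ops.object.select_all(action='SELECT')
bpy.ops.object.make_local(type='ALL')

# Optionally pack external files (textures etc.) so the file is fully
# self-contained before sending it off to a render farm.
bpy.ops.file.pack_all()
```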