Project Widow Reboot "SILK"


Some of you may remember a short film project called “Project Widow”. This was a short film that began in 2009 as a technological showcase between Blender and RenderMan; in our case we were using Blender 2.49 and Aqsis. At the time such technology was experimental, but it functioned well. This was before Matt Ebb’s addon was picked up by Pixar, so we wanted to show that such a feat was possible for small and large scale environments.

This was a worldwide collaboration between artists and software developers, and it had been worked on for a couple of years before it shut down in 2012 due to lack of updates and software issues.

Last year I secretly began working on it again, this time using Cycles as the render engine. The reasoning was that I was bored in between moving from one house to another, and to pass the time I messed with the assets we had created, using Blender 2.79. I was amazed at not only how good it looked but how easy it would be to convert everything to Cycles. So one thing led to another and here we are.

I have decided to start posting on BlenderArtists to show some of the progress, as well as its history, here and there.

Here are some renders I have done over the past year; the quality is not great, as these are more test renders than final frames.

This is a composite test between a still background and the animated spider

Test render of the environment

2.79 Compositing test.
I have since upgraded my renderfarm to Blender 2.80, and I am going to 2.81 soon to take advantage of new features recently added.

Composite test video

Another composite test done in Blender 2.79, this one was to bring out the spider webbing more.

I do a LOT of writing about this short on the blog, which is still here 10 years later, but I will try to keep the project up to date here on BlenderArtists as well.

This is the storyboard video

It has been some time since I worked on this, but it has not been forgotten! It just got put on the back burner for a bit, but once again I have been doing more and even have some final frames rendered.

So to begin with, I decided that EEVEE would have to do as the render engine of choice, primarily due to the HOURS-long render times. I simply do not have the resources, time, or money to build out my own private renderfarm, nor do I want to rely on a commercial renderfarm either ($$$). I have spent YEARS working on this and simply want to complete it before I am old.

I made this change after seeing some advances in EEVEE since the 2.80 versions; now I am using 2.93.

Off to some renders and screenshots!

This screenshot is from a shot about midway through the whole short. I am working on the easier shots first, while saving some of the harder animation for later.
Here the train simply passes in front of the camera, and I applied a little noise to the camera to simulate camera shake.

Here you can see the end result.

This shot is in the beginning of the short, to establish the general mood, atmosphere, and environment that the whole thing will play out in.
I made use of the compositor and applied some lens flare, distortion and even film grain.

Here again is an opening shot, with more lens effects. I did try to get some depth of field going, and used the compositor to bring out the lighting a bit more. Again, all of these renders were done in EEVEE.

The train tracks have finally been shaded to an acceptable degree. It took me forever to get them right, considering that I had completed them in RSL (RenderMan Shading Language) and had to replicate them in Blender. For years they stayed the same, but a couple of weeks ago I finally got them to look right.

Here is the shader node layout.

Most of the shaders are procedural; there are actually very few objects that are texture mapped, and the ones that are use a mix of both. I think the spider, our main character of the short, is the only thing fully texture mapped. Back in 2009, when this was a Blender/RenderMan project, we wanted to showcase the power of RSL at the time, and using procedural shaders was the way to go. Time marches on, and as explained in my first post (2 years ago…) this turned into a pure Blender project. Too many people laid their hands on this to let it go to waste, so I am determined to finish it.

Anyways it is late, just wanted to post an update long overdue…

So today I placed the first “trailer” for “Project Widow” on Vimeo. I tried my best to piece together the footage I do have rendered and composited so far; not all of it, of course, just some of it. Hopefully this is of interest to some people.

I am here to give a small update on “Silk”. It has not been forgotten and I have not given up on it; actually, quite the opposite. I have been very busy on it and have gotten quite a list of things done.

I wanted to share some renders (before and after compositing), some screenshots, and a test animation of a particularly difficult shot that I had to finish.

One thing I have done is installed Kitsu on my old system that is now more of a server, for those that don’t know Kitsu is a production tracker for things like this…

This has helped me out IMMENSELY! Not only do I not have to remember what shots I have done and what stage each one is at, I have a quick web page that I just go and look at. Before I installed Kitsu I had to dig through the files and see what stage they were in to remember “Oh yeah! That is what I was doing!”… very ineffective, time consuming and not very productive. Now all I have to do is look at the page, do some clicks, and I get an update on my progress, or I set an update on my progress.

I could have gone one better and installed the Blender Studio Kitsu addon for the production, but I would have had to set up an SVN server, get THAT configured, get the assets configured in such a way that the addon would work… and so on and so forth. No, I decided that I was already knee-deep in the mire and just wanted to keep my directory structure as it is and work out the shots, not worry about assets and even more server-related technicalities. It was a day’s work just to get Kitsu running.

I had to fix the opening shot because of the small bit of geometry that you can see in the top right corner… this bit is actually way below the shot area in terms of physical space, and is lit because of the sun light I have in the scene. I am not sure what I changed in the camera area, but as I was reviewing the scene this popped up in view… either way, I worked on the camera animation a bit and now it is gone.

The shots for Seq 004 had to be rendered again because of a small technical glitch that I simply forgot about. The cables that you see drooping off the sides of the walls? The history of these is that they were built for RenderMan, where each object can have a subdivision surface applied to it without having to do it in Blender, at least back in the RIBMosaic / Blender days of 2009… well, I never did this to them when I rebooted the project a few years ago, and they stayed that way until I had the bright idea to apply the SubD modifier to them, and lo and behold they look 100% better. Because I have switched to EEVEE for the final render, I did not have to wait a few days to see the results; at best it was an hour combined. I hit render and did other things for a bit while my computer chugged out the frames.

Here you can see the spider in action, sort of, as it traverses its webbing. I have a better image somewhere, but I could not for the life of me find it… I think… (I have a LOT of images) anyways…

Here is another shot of the same sequence, just at another angle. I had to take liberties with the lighting to get the spider silk strands to look better than they did with just the default lighting setup. In this shot I have a light off to the right side of the camera to highlight the spider, and another light off to the left to do the same but also highlight the strands… otherwise they look black and not very “silky”… on those lights I just turned the Diffuse and Volume influence down to 0.
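For anyone wanting to do the same, those per-light influence multipliers live on the light’s data properties and are scriptable too. A minimal sketch (the light name here is hypothetical; this only runs inside Blender):

```python
import bpy

# Keep the rim light's specular highlight on the silk strands, but stop it
# from adding flat diffuse light or fogging up the volumetrics:
rim = bpy.data.lights["StrandRim"]   # hypothetical light name
rim.diffuse_factor = 0.0             # no diffuse contribution
rim.volume_factor = 0.0              # no contribution to volumetrics
# rim.specular_factor stays at 1.0 so the strands still catch the highlight
```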

This is an early render (it has since been rendered again for technical reasons) of Seq 009, Shot 001… this is a 2-second shot of the spider in the path of the train, about to be hit… The reason for rendering it again (and of course I don’t have a PNG of that render) is that the silk could barely be seen, and the camera in ALL other shots except this one had 6 aperture blades, so the depth of field looked very different from the rest.

This was actually taken tonight during testing; this webbing is actually a cloth simulation. The train comes and rips it apart. I didn’t have to do anything other than make a series of boxes (invisible to the render) as collision objects, parent them to the train objects, and then animate the train moving at high speed; the simulation itself tore the webbing apart. It was pretty cool.
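The setup is simple enough to sketch in a few lines of Python (object names here are hypothetical; in practice I did all of this through the UI, and this only runs inside Blender):

```python
import bpy

# The webbing itself gets a cloth simulation:
web = bpy.data.objects["Webbing"]            # hypothetical object name
web.modifiers.new("Cloth", type='CLOTH')

# Each invisible box collides with the cloth and rides along with the train:
box = bpy.data.objects["TearBox"]            # hypothetical object name
box.modifiers.new("Collision", type='COLLISION')
box.hide_render = True                       # collides, but never renders
box.parent = bpy.data.objects["Train"]       # animated at high speed
```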

This shot of course took a lot of work to get done. The first version didn’t have the oomph I needed to show that these things move a lot of air and are heavy. Of course this is all just for visuals; in all reality I have seen plenty of trains not blow dust around like this (snow, on the other hand, is different I think)… I had to create a cone emitter object, parent it to the train object, then through trial and error build a dust simulation by creating large icospheres and using an animated volume shader to create the “dust”.

Here I have a quick video of that shot.

The shader for the above shot.

Another thing I wanted to bring up, which might anger some Blender users, is that for the longest time I had been using Blender’s Compositor for compositing my shots. Well, I decided that I wanted to try Natron again for this short. One reason is that I wanted to get my hands dirty on a dedicated compositor, and I felt that while Blender’s is powerful enough for what I am doing, it lacked in certain areas. I am sure some of you will be offended, and I am sorry if that is the case… and yes, I know that had the compositor not been added to Blender, then certain things like MultiLayer OpenEXR files wouldn’t exist. Trust me, I know; I was there many, many years ago when Blender first GOT the compositor, and it was a glorious thing to play with. Plus I am not the only one who uses other compositing software for the final product; I see many of you guys using other software. Anyways, off my soap box lol…

I unfortunately do not have any screenshots of my composites yet. I will in the future, I think, considering how LONG this film is taking; but you have to remember I am just a one-man band working on this now.

Well, things have changed. I am not satisfied with the results I am getting with EEVEE, mainly with volume lighting being visible behind mesh objects; no matter what I try, I just cannot seem to get rid of it. Either way, I have reverted back to Cycles and realized that I had been doing renders all wrong, which is why I was getting fireflies and noise.

So, to explain what I was going through: here I have a light that is behind a cylinder mesh; it should not be visible, but it is.

Here too there is a light that is behind the mesh and it is visible through the mesh. Not sure why this is happening.

So that made me switch back to Cycles and I had done some testing to see if I still would get some noisy renders. I also read up on it a lot, trying to make sense of what I was doing wrong. Some of the images in my previous posts are examples of noisy images.

Below is a test render I made with motion blur. I had clamped the indirect lighting a bit and set Filter Glossy as well. I also had some volumetrics in there, but I think in the end I will be removing the fog just to shave off some render time. There is also denoising applied to the render.
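For reference, the settings I mean live in the Render Properties; expressed as a Python sketch (the values here are just what I have been testing with, not a recommendation, and this only runs inside Blender):

```python
import bpy

scene = bpy.context.scene
scene.cycles.sample_clamp_indirect = 10.0   # clamp bright indirect samples (firefly killer)
scene.cycles.blur_glossy = 1.0              # Filter Glossy: blurs noisy sharp glossy paths
scene.cycles.use_denoising = True
scene.cycles.denoiser = 'OPENIMAGEDENOISE'  # OIDN denoiser
scene.render.use_motion_blur = True
```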

I also did some more tests that I saved as EXRs, which for the life of me I cannot seem to open in Blender to resave as PNGs, but I am sure it is just me. Anyways, off to the next subject - the renderfarm.

The Widow farm (as I call it) is the next version, based on the experience I had a couple of years ago when I rebooted this short film and was trying to save some time rendering. It took a couple of weeks to nail down, and unfortunately I did not document my process, but I still remember much of it. The renderfarm is based on CGRU / Afanasy, which has been around for some years now and works well. It installs nicely and configures fairly easily, so getting THAT part done is pretty painless. The bulk of the work, though, is getting the farm to use a network file share and making sure the right permissions are in place. In the first version of this renderfarm I used Samba, because I was thinking of putting some Windows laptops I had laying around to use (bad idea, as they aren’t designed for such work), and getting that to work was a pain. I also had just 2 Ubuntu systems. Now I have 3, hah! If I can afford it I will try to get more, but at the moment I have 3 systems available: my workstation and 2 older systems, one being the Kitsu server. Not an ideal situation, using a server as a render node, but I have to make do with what I have. My other system is an audio workstation running a variant of Ubuntu 16.04; I haven’t really made use of it in over a year, so I THINK it can be wiped and redone as a render node at this point.

I will be using NFS as the network file share; it is said to be easier to set up than Samba. I have yet to actually install and configure it. The previous version of the farm took a couple of weeks to work out the kinks due to the Samba configuration and file permission drama. Hopefully I won’t have the same issues here, but I am prepared for them.
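For anyone following along, a minimal NFS setup on Ubuntu looks roughly like the sketch below; the path, hostname, and subnet are placeholders for whatever the farm actually uses, and the key point for a renderfarm is that every node mounts the share at the same path so the paths baked into the .blend files resolve identically everywhere.

```shell
# On the machine holding the project files (hypothetical path /srv/widow):
sudo apt install nfs-kernel-server
echo '/srv/widow 192.168.1.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra

# On each render node, mount the share at the SAME path:
sudo apt install nfs-common
sudo mkdir -p /srv/widow
sudo mount -t nfs fileserver:/srv/widow /srv/widow
```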

In all, I think I am making the right choice in going back to Cycles for the rendering. I like EEVEE and it’s fast, but I just did not fully like the renders I was getting from it, and I was making sacrifices for the sake of speed. Now that I know a lot more about Cycles, due to actually reading and testing my shots, I think the end result will look a LOT better.

Oh, and also in my testing I shaved off a LOT of render time. I also started to use OpenImageDenoise, which makes my renders look a lot better than the NLM denoiser I was testing a while back, which looked awful in my opinion.

I think I will stop there; I want to try to get some more done before I go to bed, as it is late.

I have been showing some newer renders recently, just testing each shot so I can see what to improve, whether for better performance, lighting, or any other reason. Below is a newer render of a shot that did not look so great in EEVEE, as seen in a previous post. With Cycles, however, it looks much better.

In one such render I noticed some “broken” geometry; it is actually where the tunnel section repeats, and the geometry for the wires didn’t connect.

You can see it on the left side of the frame; it very clearly didn’t match up.
So I took the original model, deleted the offending geometry, and simply made new wires.

Another test

Others I have not been so successful with… here I have a lot of green fireflies that I am trying to figure out how to reduce or get rid of.

I think I need to tweak the lighting? I am not sure but it’s going to be addressed either way.

In the meantime I have been checking out eBay for cheap servers to act as render nodes; I think by winter I could have a relatively cheap farm that would at least help render frames out. My current render times on the small-scale farm have been at least a day and a half for a few hundred frames per scene, so not bad per se, but I would still like the boost, or at least to free up my workstation. I have also found 16-24 port switches for like $30, so networking would be cheap. Maybe not the electric bill, but that’s another problem for another day.


Another update.

Did some work on the main character, the spider. Mainly in surfacing: I added an SSS map to the shader.

Nothing too complicated in the spider shader because much of it is texture maps. Back in the Blender 2.49 days it was much the same as well. Of course back in 2009 we were merging Renderman and Blender together so it was up to the RSL surface shader to make use of the texture maps.

Since this is using Cycles on Blender 2.93 now, I decided to go with the Principled Shader rather than try to roll my own. I tried, but the results were not as good; in other words, it looked OK, but to me it looked off. Maybe I have just spent so many hours looking at the same thing that anything different just won’t match up. I don’t know.

Anyways, I am still chugging away at this, trying to get shots either finished or started. I am planning for this to take longer than the end of the year, yet again. I am not losing hope; in fact, the opposite. Now that I have a pipeline of sorts built for animation, rendering and compositing, what remains is artistically driven. Now I gotta work on the actual hard part. Recently I was working on a physics simulation that just wasn’t working right, so the solution is to just animate it by hand. Luckily it is just one object. I have rendered some shots again, this time in Cycles on my still 3-system renderfarm. Life, though, gets in the way, so I have had less time to work on it than I would like.

At least I did the last bit of work on the spider shader now, rather than changing it halfway through the rendering.


So recently I was informed of the RenderMan 25 release by Pixar, and I decided to try my hand at installing it again. Mind you, I use Ubuntu and RenderMan ships as a RedHat package, so getting it to install correctly prior to this was not working for me; either I followed the wrong tutorial steps or something wasn’t going right on my end. This time I followed a tutorial I found on YouTube (did not bookmark it, of course) and to my amazement it actually worked! I got past the problems I had before when trying to install, and had some issues getting a license (I guess the servers were being bombarded with attempts from everyone else doing the same), but once I got the license the installation process began, and now I have RenderMan 25 on my system.
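I can’t say this is exactly the route the tutorial took, but the generic trick for unpacking an RPM-only package on a Debian/Ubuntu system, for anyone stuck at the same point, is something like this (the .rpm filename is a placeholder, not the actual installer name):

```shell
# rpm2cpio ships in the 'rpm' package on Debian/Ubuntu:
sudo apt install rpm cpio

# Extract the RPM payload into the current directory without rpm/yum:
rpm2cpio RenderMan-Installer-25.x.rpm | cpio -idmv

# Alternatively, 'alien' can convert the .rpm into an installable .deb:
sudo apt install alien
sudo alien -i RenderMan-Installer-25.x.rpm
```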

This changes EVERYTHING! Project Widow started as an attempt to bring Blender and RenderMan together before it was a thing, long before Pixar got involved with RenderMan for Blender. This was the earliest attempt to do such a thing, and when I rebooted the project many years later I started to use Cycles, again because, as above, I had problems installing RenderMan on Ubuntu. It has come full circle: from RenderMan to Cycles, to EEVEE briefly, back to Cycles, and now finally home to RenderMan once more.

This of course also means I need to change everything and build for RenderMan again, though this time a bit differently, since RenderMan isn’t like it was back then. There have been substantial improvements, changes and so forth. Plus, the RenderMan for Blender addon is so much better than the addon we used for the project back in 2009; hell, everything is improved. This also means more hair-pulling moments and frustration, but I am adapting to the new system.

This also means that my renderfarm is pretty much dead. Not that it was a huge thing; I only had 2 other OLD systems connected, and one of them died recently too. They did not offer improvements in render times; if anything, they extended them due to their lack of power and speed. I am not upset; it was more of an experiment if anything, and I call it a win either way. It worked, just not very fast or efficiently.

One thing I do like is that Renderman 25 is faster than Cycles in the final image rendering tests I have done. What used to take 30-45 min now takes 15-20.

This shot above, while early in its testing and development, took 15 minutes; with Cycles it would take at least 25 to 30 minutes. This is not the final image that will appear in the short; this is a test and should be considered as such, so please don’t criticize the shading, lighting or anything else. This was just a quick render to test out certain things. I had a hell of a time getting everything redone for RenderMan in this shot, and it is not complete.

Here is the power of not only RenderMan but Blender as well. Since I have mostly everything linked, it is easy to replace geometry from one file to another by Relocating the source file. I have spent all week converting files from Cycles to RenderMan, and in some cases I am almost done, as much of the shading is simple. Making a RenderMan material in Blender is very similar to Cycles, and many of the nodes are very similar as well.

This shot in particular will have to be painfully redone, because long ago I had Appended the train object into the scene rather than linking it in. At the time I had not worried too much about it, since I THOUGHT that Cycles was going to be the final engine, so I really made a bad choice in that regard. Not too much worry, though, as it is a simple animation scene, but still a pain; I am still learning things as I go through the files.

And then of course there is the spider model. This one is a HUGE pain to get right, and I made a lot of sacrifices to the final result.

One thing I do not like with RenderMan for Blender is its hair interpretation; it simply does not work well on this model due to its scale (it’s really small), and no matter what I did to try to fix it, it didn’t work out. The displacement map I created back in 2009 also doesn’t look good, so that went. The shader itself was a pain in the ass to get dialed in, and I am still not satisfied. Maybe I am doing something wrong, not sure, but either way more work needs to be done to it.

Either way, I am on my way to getting this done, slowly, but now that I have a working instance of RenderMan on my system it’s like a breath of fresh air! I feel like it has purpose again, and once again I feel that energy I had back in 2009 when it all started. There have been moments of doubt throughout the years, and “why even try”, but now it’s like the electricity has been jolted back into me and I am running with it. Maybe now it will finally get completed!


Above is the newest version of the opening shot. I had to use a different ground texture than the one I created many, many years ago, it just wasn’t working for the scene anymore, so I went to PolyHaven and got a new one that looked similar to the one I edited long ago. I added an HDRI map to the PxrDomeLight shader so I could get that realistic lighting. After futzing around with some of the shaders I made a render using RenderMan 25.

I have also been working on the spider shader, as I was not satisfied with what it looked like. So I started over and basically stripped it of everything and started fresh.

Not exactly the end result of what I wanted but it is getting closer.

I want to add the bump map back, work on the SSS and try to get it closer to the Cycles version I had a few posts above, just without the hair, because that just doesn’t want to work at the scale the spider is at. Not sure if it’s me or a bug.

Either way this is exciting to work with RenderMan technology once more!


I’m getting nightmares just looking at this so you’re clearly doing something right :smiley:

I made mistakes with Scene 09, where I wasn’t using linked models, OR I was using obsolete models in another directory, OR, as explained above, I was using Appended models. Bad move. I pretty much have to start over from scratch for each of these shots.

The renders below were done in EEVEE or Cycles, before the RenderMan 25 switch. Just wanted to show what was exactly being redone.

It normally wouldn’t take too long to recreate these shots; I am just discouraged that I actually have to do it. I suppose in the industry these things happen, so at least that is a somewhat less soul-crushing realization.

And in my Kitsu server I had to set the status from Done or WIP to Retake. So that kind of makes me cringe a little.

All in all I am very happy though that things are going well, despite setbacks of Scene 09.

Here, in frame 0126 of the first shot, I noticed an error in the geometry: the triangular black hole in the concrete. It is now fixed, but I never noticed it before while using Cycles. I guess the difference in rendering engines makes it really clear what the modeling errors are and where.

Here is the scene in Blender, using RenderMan for previewing. I do notice repetition in the texturing; however, since there is not a lot of it on camera and this is the ONLY shot “outside”, it should not be an issue.


So I tackled the spider model and shaders once more with RenderMan 25 and got closer to the results that I wanted. I had to really toy around with the displacement to get it to look right, and I am still on the fence about it, but it is closer now. I guess I had to think a little harder about the settings.

I also got the hair to render correctly!

Before, when I tried to render the hairs, I was getting large clumps and not understanding why, as seen here

Later on I realized that I had to go to the Hair Shape settings and edit the Diameter Root from its default of 1.0 to something smaller…
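In Python terms the change is just this (the object and particle system names are hypothetical, and this only runs inside Blender):

```python
import bpy

hair = bpy.data.objects["Spider"].particle_systems["Hair"].settings
hair.root_radius = 0.02   # the default of 1.0 is enormous at this model's scale
hair.tip_radius = 0.0     # taper each strand to a point
```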

Here you can barely see them but they are in fact there, which is exactly how I wanted it to be.

I then got excited again and started to work on the rest of the shaders as well.

Here I changed the BXDF from PxrDisney to PxrSurface, plugged in the texture maps, and messed around with the settings before coming to a result I liked. I decided to add some Iridescence so that the spider has the look seen in nature.

Hopefully this will be closer to my vision of this spider I had long ago.


Recently I have been doing some test renders using motion blur, and to my surprise and dismay I was getting some really awful results and could not figure out why. I read the RenderMan docs three times over, changed settings, and still could not get a decent render.

Here I was trying to render one of the animation shots of the spider swinging on its silk; of course, you can’t even tell what is what in the render. I did a quick render sequence test at reduced resolution and it was even worse. The motion blur jumped all over the place.

At least here you can kinda tell what is going on but still the motion blur is just ruining the shot.

So this got me thinking: how could world-class, production-proven rendering software really not work as intended, with motion blur this awful? It made no sense at all; I mean, really, RenderMan having a terrible bug like that? This would have to be a translation problem between Blender and RenderMan… maybe there was a bug somewhere there? Highly unlikely, as the addon just sets the scene up for exporting; and besides, each time I did a render the same result would come up no matter what setting I changed, such as Shutter Angle.

So I opened up an old R&D file I did back in 2009 or 2010, which had some cloth physics in it, and I again set up motion blur but kept the settings at default, and the render turned out just fine. What was going on? Why would a render in one scene file be OK but not another?

Then it dawned on me that I was using an addon called Camera Shakify, and as it turned out, when I opened a shot without it, the motion blur rendered just as expected, and wonderfully so. This was a problem with the addon, Blender and RenderMan not translating correctly, and I for one have no clue why. All I know is that it was causing issues and needed to go.

Since discovering this issue I have completely removed the addon from Blender 2.93 and have been setting up the camera shake manually, using a Noise modifier on the X and Y axes of the camera and then playing around with the settings to achieve the same effect. All Camera Shakify did was simplify the process, but in my case it turned out to be a huge problem.

Now that I have a clear idea what to do, it is relatively simple to work with; it’s just manual instead of a quick, easy setup… however, I do have more control over the actual motion of the shake, as opposed to the addon, which I believe has a preset set of movements programmed in. Using the modifier I can make it look more natural and as random as I want to.
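The idea behind the Noise modifier is just smooth, deterministic jitter layered on the camera channel. This is not Blender’s actual implementation, only a standalone sketch of the same concept: a few incommensurate sines summed per frame, phase-shifted so X and Y don’t move in lockstep.

```python
import math

def shake_offsets(frame, strength=0.05, scale=10.0, seed=0.0):
    """Return a (dx, dy) camera offset for a frame.

    strength -- maximum offset amplitude (like the modifier's Strength)
    scale    -- how many frames one 'wobble' spans (like Scale)
    seed     -- shifts the whole pattern (like Phase)
    """
    def noise(t, phase):
        # Sum of incommensurate sines: smooth, band-limited, bounded to [-1, 1]
        return (math.sin(t + phase)
                + 0.5 * math.sin(2.3 * t + 1.7 * phase)
                + 0.25 * math.sin(4.9 * t + 3.1 * phase)) / 1.75

    t = frame / scale
    # Different phase per axis so X and Y wander independently
    return strength * noise(t, seed), strength * noise(t, seed + 100.0)
```

Baking values like these onto the camera’s X/Y location per frame gives the same hand-tunable shake without any addon in the export path.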

I have been working on the shaders for the train model. When I originally did the shaders in RSL, I tried to achieve a certain look for them: a sort of metallic with a Voronoi / fractal pattern, mixing two colors for each of the main colors.

Below is the original test render in Aqsis back in 2009 / 2010.

Then during that year I modified an RSL shader editor called Shrimp, for the purpose of tightening the integration of the tools we had at the time. We of course had Blender 2.49, but we were using a plugin called RIBMosaic, and back then what this did was read the RSL shader and make a “fragment”, a sort of internal file that could interface between the shader and Blender, if I recall correctly; it has been a number of years. We had a vast include library to make our shaders work well, so what I did was modify the source code a bit to read these. I also had to modify the include files so that Blender’s text editor could read them, because the titles were so long that Blender would get confused, so I had to shorten them a little. I called it the RSL Library. Also added was the ability to use AOV data. Either way, a LOT of work went into this, and I even uploaded it to GitHub and renamed Shrimp to Prawn, because it was a fork so extreme that it deserved a different name. Or I was just being an ass, I don’t know.

Above is an attempt to create a new version of the shader in Prawn, using Aqsis to render the preview. Not exactly the original RSL shader, but ehh, close enough.

Long story coming to an end, I promise.

Now that RenderMan for Blender is out, there is no need for these tools; everything can be done internally in Blender via nodes, which is very convenient. So in that regard I have undertaken the task of recreating the shader as best I can the new way. I MIGHT want to try something in OSL, but honestly the standard RenderMan shaders work well enough that I don’t see a reason to do so.

Again, not exactly the original RSL version, but close enough that I can consider it done.

AND now to try this new image comparison slider thingy…

Now mind you, there are several shaders on this model: two for the main metal color scheme, the glass, the other black metal, the wheels and their supporting structure, and also some emission shaders for the lights. I have rigged 4 area lights to the train as well, so that it actually produces more of the needed lighting for the scenes the model is in. So far I think it is good enough, and any more detail will get lost, as this thing will be moving fast anyway.


I had an issue come up recently with a particular shot, something I could not figure out for a couple of days, and I almost had to scrap the shot because of it. Basically, it’s a bug that I was able to reproduce, but I was not sure if it was from Blender or the Blender-to-RenderMan addon.

So in this shot I actually have a duplicate of the spider in a different folder. Yes, I know having duplicates is a bad move, and this example is why… Well, I tried to “Relocate” the file from the duplicate to the original, which in Cycles would actually be fine, but as soon as I set the renderer to RenderMan, saved the file, and then opened it again, a particle system would be created out of the blue: a bunch of balls falling towards the floor, like a default particle system. This drove me mad for like 2 days until I just gave up and set up the duplicate file to be an almost exact copy of the original, AND I deleted the hair, just in case. I also did not save a video of the process because I was really not thinking about it; I was more concerned with saving a shot than saving a screencast. Now that I think about it, I should have; maybe someday I will reproduce the problem and record it, just with a copy of the file so I don’t kill my shot, lol.

Yes, I am a little bummed about the hair, but in all reality this would be the closest shot of the spider in the whole short, so the hair would not really be seen anyway.

I also upgraded Blender 2.93.10 to 2.93.18 thinking that maybe this was a bug in that particular version, but this did not solve anything. I am not sure why this happens, but it does. I also checked whether this would be a problem in any other file I had set up already, but luckily I did not have the duplicate in any other file, just this one for some reason. And now I cannot delete the duplicate, because this is the only shot that has it in there. Aggravating, but it was my own doing, so it is what it is, and at least I do not have to reproduce the entire animation again from scratch like I originally thought I did.

Now I just have to get the lighting a bit better and this one is pretty much considered ready for rendering!

I have some issues regarding Blender 2.93, and it looks like I will be upgrading versions, except in one shot where I have no choice but to stick with Blender 2.93. It’s been a difficult road going all the way back from Blender 2.49 to now, but I think I can work out the details as I come across them.

The main problem with upgrading the entire pipeline from Blender 2.93 to something higher is that the one shot Ere Santos had done cannot be opened by newer versions, because it uses the old Proxy system while the newer versions use Library Overrides. According to the docs, conversion happens on opening a blend file, but in this case it just crashes Blender 3.0 and above.

I do NOT want to discard this animation file just because it won’t work in Blender 3.0+. It is a moment in time that captures an aspiring artist’s rise from a student to a now successful animator at Disney; I cannot and will not do that. Aside from the fact that it is well done and probably the best animation of the spider in the whole short film, I just cannot get rid of it.

Here comes another issue. In Blender 2.93 I have Seq 04 Shot 02, where linked-in geometry will not place correctly in the file. It hangs off to the side of the rest of the set, and no matter what I do to correct it, including deleting the data from the file and adding it back in, it still goes to these locations.

Normally I wouldn’t care, but there are definite continuity issues. So to test, I opened the SAME file in Blender 3.0, and there it is positioned where it is supposed to be.

Now here comes another tricky problem: in Blender 3.6 I open the SAME file, and suddenly a good portion of the tunnel ends just do not render in RenderMan. Oh, and some of the rails as well.

OMG what the hell!!!

I cannot win! No matter what I do I have to use several different versions of Blender before I get to finish the short.

So for now I have to render Seq 02 Shot 03 in Blender 2.93 exclusively.

I have to render the rest of it in Blender 3.0.

Hopefully this problem doesn’t keep happening otherwise I will go crazy.

(EDIT) I did a test while writing this post… I went into Blender 3.0 and Appended the collection that the spider was in, and it actually did not cause a crash! Animation and all! After adding the environment just fine, I saved the file so I can continue to work in 3.0 without having to go back and forth between versions.

(EDIT) - After working on this file in 3.0 I found out that the texture maps are not getting their data like they are supposed to and render pink… I gave up. I will just resort to rendering out that shot in 2.93, be done with it, and move on to 3.0 for everything else. :angry:

So after dealing with my frustrations with Blender version hell… I decided to not worry about it and just deal with what I have in front of me, so to speak. After all, in the end it doesn’t matter; it is really the short that I want to complete.

I revisited a concept that I had MANY years ago (2010) that kind of brings the story together visually. However, due to code changes I wasn’t able to use the same file from 2010, so I decided last night to recreate it from scratch… kind of. I had an R+D file from a while back made in version 2.83 or something, and decided to try my hand at cloth simulation again.

I had collision objects already in place, so all I had to do was bring in the RenderMan ready assets and the scene was ready to go. I then created my cloth, in this case paper, and placed it in the scene. I already had the forces laid out as well, so all I really needed to do was tweak the paper cloth object into position, and eventually it landed exactly in a great spot.

(Note: not sure if this video will play on all browsers, I actually didn’t realize that you could upload videos till just now lol)

So in this playblast video you will note that the cloth object is NOT looking good in terms of subdivision… BUT RenderMan has the ability to subdivide at render time, so I don’t need to choke up resources for a good-looking playblast when the final result will be a nicely subdivided piece of paper floating about in the wind.

I also added a curve object running from one of the broken grate rods to the pipe below it. You really can’t see it unless it’s rendered, because the shader brings it out a lot more. This also ties into the story visually, I think. I could be wrong, but at least I tried to add more to the elements than what was originally laid out.

I also had light leaks inside the shaft, because in the RenderMan world, unless you specifically make a shader double sided… well, it won’t be, and in this case the normals were pointed inward (as they should be).

Because I have the PxrEnvDayLight added to the scene, the lighting leaked through the geometry and caused a defect in the render. So instead of fixing the geometry itself, I added a bunch of blocks around the shaft object and called it good. No more light leaks!


Another update on the progress of remaking the problem animation files piece by piece. I have worked on Scene 009 Shot 001 over the past couple of days. It’s not complete yet, because I have yet to animate the actual spider, but it’s getting there.

I changed the camera angle slightly from the previous version. I linked in everything, set the spider to be a Library Override (as explained above, the Proxy system was rendered obsolete and mixing the two caused frustration), and then quickly set up the silk strand. I learned how to make silk strands for this project a while ago: make a bezier curve and add two empties as hooks, and this way I can manipulate them rather easily. Sometimes it has caused some very interesting bending, but considering I only use them in closeups I can get away with the effect.
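Under the hood, that hook setup is just a cubic Bezier segment whose end points and handles follow the empties, which is why moving one empty bends the whole strand smoothly. A quick sketch of the math (the coordinates here are made up for illustration, not from the actual shot):

```python
# The silk strand is a Bezier curve whose control points are driven by the
# hook empties. Evaluating one cubic segment shows the smooth bending.

def cubic_bezier(p0, p1, p2, p3, t):
    """Closed-form evaluation of one cubic Bezier segment at parameter t."""
    u = 1.0 - t
    return tuple(
        u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

# Hook empties at each end; handles hang a little lower to give the sag.
start, end = (0.0, 0.0, 2.0), (1.0, 0.0, 2.0)
h1, h2 = (0.3, 0.0, 1.5), (0.7, 0.0, 1.5)
mid = cubic_bezier(start, h1, h2, end, 0.5)
print(mid)  # the midpoint sags below the two hooks
```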

I also learned very recently that I have been making my curves all wrong! Before, I would extrude them by 0.0001 (or so, depending on the camera angle), but while reading the RenderMan docs I learned that for my particular case I should have been setting the Radius to something equally small. So I tried it on some R+D file I had laying around, and while it’s very difficult to actually see a difference, it’s more accurate.

So I went about my files and changed the curves from extruded to the more accurate radius.

Anyways off to do more work on this!


Recently I have been trying to optimize my exported files, because by default, if you have a set, animated characters or whatever, the RenderMan for Blender addon exports ALL of this into a single RIB per frame, which at times will be significantly larger than the actual rendered EXR. That says a lot, considering EXRs are not small files by any means. It literally exports everything into a single RIB per frame, and this is not optimal. It will quickly eat up disk space, and even if you compress the RIB using gzip you still end up with large files. At 90-240 frames per shot… this is not good! Even with a lot of space, it takes up a large chunk of it.

In the days of RIBMosaic and Blender 2.49 we had thought of this in the first place: any geometry that was to be static (the sets) would have some kind of flag that deemed it a DelayedReadArchive, which is not read until its BoundingBox is reached, and if the bounding box is off camera it is not read at all, which reduces parsing times. RIBMosaic would write a file or series of frames calling RIBArchives instead of writing the same static geometry inside each frame. RIBMosaic was designed for animation, and at the time nobody went as far as Eric did with this plugin. RenderMan for Blender has this ability as well; it’s just a bit more clunky and manual, and depending on what you are doing, buggy. Nonetheless I realized I’ve been using this addon all wrong, for MY production needs, and your experience may vary.

I started to experiment with RIB Archives, which are basically instances of geometry (or lights, in my case) specifically for RenderMan. The idea is to reduce disk space and processing with a single call to the geometry or lights, instead of each frame writing the SAME data over and over again. Think of it like programming: your RIB Archives are sort of like header files, and instead of writing the same function again and again you simply refer to it. In my case my sets really don’t have any moving geometry at all, so why am I wasting disk space when I can simply call for it as needed?
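The header-file idea above can be sketched in a few lines of Python. This is not what the addon emits; it is a minimal hand-rolled illustration, with a made-up archive path and bounds, of what a per-frame RIB looks like when it references the static set via `Procedural "DelayedReadArchive"` (which is standard RIB syntax) instead of inlining all the geometry:

```python
# Sketch: each per-frame RIB refers to the static set archive instead of
# repeating the same geometry every frame. Paths and bounds are invented.

def frame_rib(frame, set_archive="archives/tunnel_set.rib",
              bounds=(-50, 50, -10, 10, 0, 200)):
    """Build the text of one frame's RIB, referencing the static set."""
    xmin, xmax, ymin, ymax, zmin, zmax = bounds
    lines = [
        "FrameBegin %d" % frame,
        "WorldBegin",
        # Delayed read: the archive is only parsed when its bounding box
        # is reached, and skipped entirely if the box is off camera.
        'Procedural "DelayedReadArchive" ["%s"] [%g %g %g %g %g %g]'
        % (set_archive, xmin, xmax, ymin, ymax, zmin, zmax),
        "# ...animated geometry (the spider, cloth sims) still goes here...",
        "WorldEnd",
        "FrameEnd",
    ]
    return "\n".join(lines)

print(frame_rib(1))
```

The per-frame file stays tiny no matter how heavy the set is, because the set geometry lives once in `tunnel_set.rib`.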

So I did just that. First I regrouped all my geometry into Collections so I could easily organize the mess I had. I mean, I DID have collections already and everything was grouped up to some degree or another, but it was not organized as it should have been; it took roughly an hour to do this task. Then I went into each Collection, selected all of its objects, and exported them as an RIB Archive into a subdirectory so that I could easily find them later on.

In my first testing I thought that MAYBE the RIB Archives would become more manageable as smaller pieces of geometry, which was a yes and a no, because the box size in Blender correlates to the Bound size in RenderMan. I also was able to export the instance points that the walls, sidewalk and railroad ties have been laid out on since the days of Blender 2.49. Surprising that after many years the data survives to this day, a good chunk of it at least. It also saves the shaders, which in this case are simply the shader node input and output data.

After doing that I thought “Well hell, might as well make an RIB Archive of the whole set, this way I don’t have to import an RIB Archive each and every time I want to show the entire thing”, so I made an RIB Archive of RIB Archives… ha! In practice though I would probably construct sets manually depending on the shot.

By the way, this only really matters in the Lighting and Rendering stage of the shots, as there is a small issue: the RIB Archive imports are empty boxes once added to Blender. This is because data from RIB is not directly transferable back to Blender. It is easy to write an RIB scene (if you know how; I know very basic RIB syntax, hence why I use an addon), but it’s not so easy to read it into anything other than RenderMan. So at the layout stage (there are still plenty of shots in layout), animation and FX (such as cloth sims), the geometry is all Blender. After that point it is a mix between RenderMan and Blender, until the lighting stage ends and the shot goes to prep for rendering.

The reason for the RIB Archives in the Lighting and Rendering stage is simply that once everything is done in the animation stage, it should be final when it gets placed into Lighting. There is no reason for the sets to be there anymore, taking up space in the scene. Prior to this stage, though, it is crucial to have the actual set, because with just the RIB Archives they are literally empty cubes in 3D space and the only way to see them is to render. One added benefit is that this method makes rendering in 3.6 possible. However, it is also buggy: once you save the file, exit Blender and come back to it later, you will encounter a bug where the RIB will not be added; even though you are literally trying to add it, the addon thinks nothing is there. Until that gets fixed, I had to build the entire set from RIB Archives in one go and then export it as a single RIB Archive.

While in the Lighting stage I am keeping the dynamic stuff, such as animation and FX, as Blender geometry, because having a visual cue as to where things are is kind of essential. As the stage progresses, more RIB Archives get written, until the entire set is an RIB Archive; at that point I can accept the largish remaining files per frame.

So it has been some time since the last post, so I figured I would give a small update. In my last posts I was having issues with different Blender versions and RenderMan 25. Well, it turns out that other people were experiencing the same thing: certain objects wouldn’t render using Blender 3.1 and above for some reason.

For example here the railings are showing up in odd locations in addition to their normal placements. Missing train tracks. Missing walls. Missing support columns etc…

Turns out there IS a reason, and it is a combination of the SubD modifier and GPU Subdivision. If you have an object with the SubD modifier on and GPU Subdivision enabled, this will cause a render error by not rendering the object at all, AND placing objects in nonsense locations, for some reason that is beyond me. This was found on the Pixar Discord channel and confirmed not only by myself but by others as well.

So I went through my set file and found out that not only do I have the SubD modifier on certain objects, but in the RenderMan addon I also set Subdivision surfaces to Catmull-Clark. I guess in my noobishness at the time I didn’t realize what this would do, and it is excessive to say the least. So I went about deleting the modifiers on all the affected models, did a quick render, and the problem is solved. YAY! I can work on Project Widow in Blender 3.6 now and be OK with this!

However, this does not mean that I won’t be using RIB Archives. After all, this does in fact reduce the exported RIB file sizes down to KB, rather than having a 20-50 MB file PER FRAME, not including the EXR renders, which run 20-30 MB per frame. At 100 to several hundred frames per shot, this will eat up disk space faster than I can imagine.
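To put rough numbers on that, using the per-frame sizes quoted above (the 100 KB archived-RIB figure is my assumption for illustration, not a measured value):

```python
# Rough disk-space math for one 240-frame shot, using the sizes above.
frames = 240
inline_rib_mb = 50        # worst-case inlined RIB per frame
archived_rib_kb = 100     # assumed per-frame RIB once the set is an archive
exr_mb = 30               # rendered EXR per frame

inline_total_gb = frames * inline_rib_mb / 1024
archived_total_mb = frames * archived_rib_kb / 1024
exr_total_gb = frames * exr_mb / 1024

print(f"Inlined RIBs:  ~{inline_total_gb:.1f} GB per shot")   # ~11.7 GB
print(f"Archived RIBs: ~{archived_total_mb:.1f} MB per shot") # ~23.4 MB
print(f"EXR renders:   ~{exr_total_gb:.1f} GB per shot")      # ~7.0 GB
```

So the RIBs alone can outweigh the actual renders until the set is archived, at which point they become a rounding error next to the EXRs.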

This does mean though that I should re-export the RIB Archives and see what kind of output I get.

I have also been messing around with volumetric fog in RenderMan. In the past I had tried but had gotten less than satisfactory renders. Now with a little more practice and experimenting, I have gotten some decent results.

This is one example frame of the second shot of the short. It seems a little grainy, so I want to work on it a bit more; it does add some render time to the frame, but not a ton.

I also have been working on getting some lens flare effects into the shots. The idea is that this short would be “filmed” by a real camera, which explains some of the camera shake and the depth of field in select shots.
This involves importing the rendered elements into Blender and using the Emit pass from RenderMan as the basis. I will be using Blender’s compositor to generate the lens flare effect without having to keyframe Natron’s lens flare plugins.

So for example, the shot above has several lights that have an emission shader on them. I render out each pass as a separate EXR, since RenderMan sadly does not support the OpenEXR multipass format (as far as I know). No big deal; it would be nice to have, but I press on.

So above I have the first frame of the second shot.

And the emit pass from that same frame.

Now to import that Emit pass to Blender!

Had to do some work to get the right effect which I can dial down later in Natron.

Here I have combined the elements in Natron and did some work on them to get the desired effect.

After adding some more nodes in Natron to give the glow a little more oomph, I rendered a composite of the frame, and in general it looks decent. Not perfect, but a lot easier than manually keyframing Natron’s lens flare nodes for hundreds of frames, per light.
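The glow step above boils down to blurring the isolated Emit pass and adding it back over the beauty render. A toy sketch of that arithmetic on a 1-D row of pixel values (not real EXR data, just the idea the compositor nodes implement):

```python
# Toy 1-D sketch of the glow idea: blur the emit pass, then add it back
# over the beauty pass. Real shots use compositor nodes on EXR images.

def box_blur(row, radius=1):
    """Simple box blur, clamped at the edges."""
    out = []
    for i in range(len(row)):
        lo, hi = max(0, i - radius), min(len(row), i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

beauty = [0.10, 0.10, 0.10, 0.10, 0.10]   # flat base render
emit   = [0.00, 0.00, 1.00, 0.00, 0.00]   # one bright light in the middle
glow   = box_blur(emit, radius=1)
comp   = [b + g for b, g in zip(beauty, glow)]
print(comp)  # the light's energy now bleeds into its neighbors
```

Because only the Emit pass is blurred, the rest of the frame keeps its sharpness while the lights appear to bloom.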

So yes, certain shots are closer to being done while others are still being worked on. I was hoping to finish this by Christmas this year, but alas… it may take even longer. Then again, it’s been this long already, with a TON of work involved, that I can’t quit now. It’s just that I have so little time to work on it daily, due to a full-time non-CG related job, raising teenagers, housework and other daily duties, so my time to work on it tends to be a couple of hours late at night while listening to soundtracks in the dark.
