Why Renderman?

After writing this I figured it sounded a bit harsh, so I thought I'd preface it by saying I really like RenderMan. :slight_smile:

The more I hear "Why doesn't Blender export to RenderMan?", the more I wonder why.
Is this request based on an informed understanding, or is it just a desire to support Pixar's flagship rendering tool?

I don't see people wishing for whatever rendering engine DreamWorks used for Shrek and Antz.

What keeps baffling me is that Pixar makes cartoons (only cartoons?). Their humans have never looked realistic. I'm sure people can do realistic work in RenderMan, as you can with most engines, but as far as I know that's not the main goal of PRMan.

Here's a list of reasons not to use RenderMan/PRMan:
*** PRMan costs money; other implementations either cost money or are developed by a small group of people, and projects may die, who knows.

*** RenderMan is a standard, just like HTML and C++; not all renderers interpret it the same way, resulting in different images depending on the implementation you use. Instead of only worrying about supporting RenderMan, you have to support a number of implementations, maybe two: PRMan, so as to live up to the dream of being as good as Pixar, and Aqsis for people who don't want to pay cash.

*** RenderMan can't be totally controlled by a user interface, since it relies on programmable shaders and such.
I remember playing with an old version of K3D (a RenderMan-compliant 3D app); I opened the material editor and got a blank text window.
On the other hand, you could code all of Blender's existing shaders in RenderMan and just use those, but then you're missing the power of programmable shaders. Both could be done at once, but it would be no small task.

*** Most people don't have the time or knowledge to write, or even properly use, programmable shaders. It's all right for Pixar, who have a team of coders, but most people have to subsist on less and would rather twiddle a few sliders than learn a new language.
I really wonder if the newbies who come on the forum and want RenderMan support would take the time to use it once it was there.

*** Is RenderMan that much better anyway?
Ayam: an open source app that's been around for a long time.
http://www.informatik.uni-rostock.de/~rschultz/ayam/gallery.html

K3D: another open source app, getting quite nice, with GTK2 and Python support.
http://k3d.sourceforge.net/cgi-bin/wiki/GalleryArt

So… I think I have made my point.

But… but … I just want Renderman!

Well, put some effort into adding access to light/material/render settings in Python, so it's possible to write good render exporters.

Cheers-

In short, because a properly implemented renderman renderer is very powerful and generally very fast.

I don't see people wishing for whatever rendering engine DreamWorks used for Shrek and Antz.

This renderer, like many others, is not available to the public. RenderMan is, and in various implementations.

What keeps baffling me is that Pixar makes cartoons (only cartoons?). Their humans have never looked realistic. I'm sure people can do realistic work in RenderMan, as you can with most engines, but as far as I know that's not the main goal of PRMan.

Pixar Animation is not in the business of making “realistic” animation.
Pixar R+D is in the business of furthering a world class rendering standard.
Pixar’s Software department is in the business of creating their implementation of the renderman standard; PRMan.

PRMan is more than capable of high-quality "realistic" output; see The Lord of the Rings trilogy, where RenderMan was one of the main renderers, plus many, many other productions. Pixar and RenderMan usage are not, and never have been, exclusive.

Here's a list of reasons not to use RenderMan/PRMan:
*** PRMan costs money; other implementations either cost money or are developed by a small group of people, and projects may die, who knows.

There are some very capable open source RenderMan renderers around. As for the projects dying… Well, so could Blender… I really don't see any validity to that argument.

*** RenderMan is a standard, just like HTML and C++; not all renderers interpret it the same way…

There are certainly differences; however, I've never found them to be an issue. Between render tests and iteration, one should already know what the outcome will look like. Having different outputs from different renderers is also a big bonus. Can we say frankenrender?

As for supporting different implementations… Erm… it's a couple of different variables, which are generally within the environment anyway. I can't see how this could be a problem.

*** RenderMan can't be totally controlled by a user interface, since it relies on programmable shaders and such.
I remember playing with an old version of K3D (a RenderMan-compliant 3D app); I opened the material editor and got a blank text window.
On the other hand, you could code all of Blender's existing shaders in RenderMan and just use those, but then you're missing the power of programmable shaders. Both could be done at once, but it would be no small task.

There are various tools around that can facilitate the creation of SL code; see Shaderman for one.
Your second point is moot: if someone writes a wrapper to convert Blender shaders into RenderMan shaders, you can edit the SL directly after conversion, before the shader is compiled…

*** Most people don't have the time or knowledge to write, or even properly use, programmable shaders. It's all right for Pixar, who have a team of coders, but most people have to subsist on less and would rather twiddle a few sliders than learn a new language.

Yes, a lot of people don’t. However, at the same time, a lot of people do.

I really wonder if the newbies who come on the forum and want RenderMan support would take the time to use it once it was there.

I think it depends on their background. The techies with programming knowledge won't have an issue with it, and would enjoy the freedom that SL brings.
People with an artistic background most likely wouldn't look twice at it; however, RenderMan isn't for them anyway.
CG newbies who want to use what they hear is the best will probably run a mile once they see what it actually is…
Then there are those somewhere in between, such as myself. I've been to art school, never technical school, yet I've still taught myself a little programming: initially MEL because I was training in Maya, then SL so I could write RenderMan shaders, and more recently I've been learning Python so that I gain more control over Blender and Gimp.

There are many different kinds of people using the software. Some will use a feature, some won't. I for one have never fired up the game engine, and doubt I ever will, save a case of severe boredom… :wink:

*** Is RenderMan that much better anyway?

I take it you mean better than Blender's renderer? Well… "better" is a subjective term.
It is certainly more powerful and extensible, for those who know how. However, it does not have the user interface for beginners, as you noted earlier.

I don't look at it as a case of better or worse anyway; in any case, more options is always better.
There is also that whole argument about bringing features into Blender to compete with the "big boys", and RenderMan support would certainly be one of those kinds of features. Personally, I don't particularly care who Blender is competing with. I use it because I like open source, and Houdini is waaaay out of my price range… :wink:

Regards,
Jac

Fair point, I can't deny that RenderMan is a very good standard.

Okay, you cited a case where RenderMan was used for realistic work; still, it hasn't been as heavily developed for that purpose, e.g. raytracing was only recently added to PRMan, just as with Blender.

A large project like Blender, with many users and developers, should not rely on a renderer developed by one person; it's just precarious. Blender won't die because there are enough developers; Aqsis/Pixie could, since each seems to have one main developer working on it.

It's just an extra level of stuff to deal with; most of the open source RenderMan renderers don't su

About users being able to edit shaders: 1) most users won't, and 2) it's not a viable solution/workflow. I'd have a shader I'd copy and paste every time I wanted to see what my test scene looked like, maybe 5 or 6 copy-and-pastes if I wanted more than one programmable shader.

Yeah, what you're saying is true: people like yourself could probably make use of it, but I wonder if that's a large enough percentage to justify adding RenderMan support. BTW, I too use Python for Blender and Gimp :slight_smile:

The "I want to compete with the big boys" argument is shaky; if a studio wanted to use Blender for RenderMan output, I'm sure they could write their own RenderMan exporter in a couple of weeks.

Cheers

cambo

I think you do not know what you're talking about. RenderMan and Mental Ray are at the moment two of the most advanced render engines, and they offer possibilities Blender's toy render engine cannot match. Blender cannot even render metallic reflections/highlights. This is no offense to the coders, just to point out the difference.

If you ever worked on big animations you would realize that raytracing is hardly used; even Electric Image produced great results without raytracing for a long time.

You don't need raytracing for everything.

Your argument that most people won't use it is pretty bad. By that logic we shouldn't put in advanced character animation tools or other advanced tools either, because most people here don't even know how to do good animations or models, since they lack the skill or training. So should we stop developing the new animation rework or soft body animation? No, because among the geeks, newbies, and hobbyists there are also professional users, and all of them could use the advanced systems.

RenderMan support will not bring Blender into ILM or Pixar, but it will enable users to create renderings with even better results than you can see here at the moment.

In my opinion, RenderMan/RIB support will improve Blender just as YafRay did.

claas

3D is hard… 3D modeling is very hard… It's great to be able to hit the render button and see what you've got, but Blender's internal renderer is never going to be an "amazing" renderer.

It's too much work to do both. RenderMan is cheap ($1000 for 3Delight and $3500 for PRMan) and gives great quality. There are now a few RenderMan renderers to choose from. Also, there are render farms (really big ones) that have RenderMan renderers set up on them… It's a good idea…

So if the internal renderer is good enough, then fine… no problem… But if it's not, supporting RenderMan is a great idea. Personally, I think it's the direction Blender should move in (with plugin shaders, i.e. someone else provides a tool to compile RenderMan shaders into Blender plugin shaders… way cool).

If you try to do top-end renderings, some of the Blender internal renderer's shortcomings become pretty clear…

It's a great tool, but let's not try to do "everything" when there is no need to… Blender could do very well as a great modeling tool… It does not need to be a top-end renderer as well, and trying to do both will probably mean that neither is achieved.

delt0r

Damn you guys, I was trying to gather all my willpower NOT to open up the BlenderMan Python script and work on it this weekend.

But It looks like I’ll have to. :slight_smile:

A point, though: the Aqsis developer almost finished a RenderMan exporter but ended up quitting; here's his rationale.

On 13 May, 2005, at 7:31, Jonathan Merritt wrote:

>> Hi Everyone,
>>
>> Following recent queries, I thought I would provide a summary of “what
>> happened to the RenderMan exporter in Tuhopuu 2”.
>>
>> Paul Gregory began work on this exporter, using the original Yafray
>> exporter as a template. I started work on the project a couple of
>> months after Paul.
>>
>> All of the basic elements of RenderMan export were very easy to
>> implement. We had a system which could do as much as any of the current
>> Python-based exporters and more. The main difficulties lay in the more
>> advanced features of the exporter; particularly automatic shadow map
>> generation, motion blur and shading. (Note that while many of the
>> Python efforts have not addressed these features, they are really
>> absolute requirements for any reasonable RenderMan exporter.)
>>
>> We had automatic shadow map generation working well by the end of
>> Tuhopuu2, so I won’t go into that.
>>
>>
>> === Motion Blur ===
>>
>> In RenderMan, motion blur is done by providing “snapshots” of the
>> blurred parameters. The renderer then automatically blurs using the
>> snapshots as keys. Motion blur comes in two varieties: transformation
>> motion blur and deformation motion blur. In transformation blur, the
>> object being blurred is rigid between the key frames (ie: only its
>> transformation is “blurred”), and in deformation blur, the object can
>> deform between the key frames.
>>
>> In order to implement motion blur, it was necessary to invoke the
>> exporter code at least twice, incrementing Blender’s internal time
>> between the two invocations. The exporter was then required to do some
>> clever tricks to collect all of the required information from the
>> keyframes, and associate that information with the correct meshes (ie:
>> MUCH more work than the Yafray exporter!! :slight_smile: ). It also relied on the
>> assumption of “coherence”, in other words: that render faces (VlakRen)
>> would be presented to the exporter in the same order each time. This
>> worked most of the time, but I could of course find cases that broke it.
>>
>> As far as I can tell, deformation motion blur presents a logical impasse
>> for the use of VlakRen for export: it simply can’t be done without
>> kludgey assumptions! What is required is access to the underlying mesh
>> data in Blender, and a way to enforce coherence for a particular mesh if
>> it is to be deformation-blurred. The requirement for this extra
>> information means that VlakRen are not an “attractive” short-cut for
>> RenderMan in the same way as they are for Yafray. I definitely
>> wouldn’t recommend using VlakRen again! :slight_smile:
>>
>> === Shading ===
>>
>> We originally took the approach of developing a single surface shader
>> and a single light source shader. The idea was that we could pass all
>> of the required parameters to the shaders to have them behave as any
>> generic Blender material or light. However, unfortunately the number of
>> parameters that must be passed to the shader is enormous, because
>> RenderMan shaders cannot have optional parameters (actually the most
>> recent release of PRMan from Pixar does allow optional parameters, but I
>> don’t know of any other RenderMan implementations that do).
>>
>> Just think, for example, that for each texture channel, all possible
>> parameters from all possible texture types had to be passed down to the
>> shader! In addition to this large number of parameters at the top
>> level, each smaller level of the shader required a large number of
>> parameters as well. For example, consider color ramps and all of their
>> possible parameters, which in RenderMan shaders must be incorporated
>> into the innermost illuminance loop of the shader.
>>
>> To overcome the problem of massive numbers of shader parameters (and
>> massive numbers of parameters going to each shader function), I came up
>> with a system called “Dobby”, which saved the texture parameters to a
>> text file. The RenderMan shader was then passed the name of the file,
>> and it could (via a DSO) arbitrarily query the presence and value of any
>> of the parameters. It also allowed many parts of the shading (for
>> example, pattern generation) to be done in C/C++, thereby allowing the
>> same, identical code to be used in both the RenderMan shaders and the
>> original Blender pattern generation.
>>
>> The Dobby system was about 80% complete (it could read and write
>> parameters and do some other stuff) when Tuhopuu2 was abandoned.
>>
>>
>> === Future Development ===
>>
>> I would recommend using Python for future RenderMan export. Despite all
>> of my efforts on the Tuhopuu2 exporter, my own use of the exporter has
>> been very minimal. Instead, I have developed my own Python scripts that
>> have been used to produce real content. I have rendered several
>> animations this way that have been used in the BVSc degree here in
>> Veterinary Science.
>>
>> However, I would advise against the use of a “packaged” exporter like
>> BlenderMan, without a published API that allows elements of the exporter
>> to be used independently. The reason being that a “packaged” exporter
>> misses much of the point of wanting to use RenderMan. Depending on the
>> particular project, you might want to set up customized shadow map
>> passes, ambient occlusion baking passes, incorporate procedural elements
>> into the render, merge RenderMan elements from Blender with those
>> created by some 3rd party tool, etc. I’ve done all of these things, so
>> they are realistic requirements!
>>
>> What I would like to see (and have started working on myself), are a set
>> of RenderMan-specific Python classes that can be instantiated from the
>> existing BPy classes. These wrappers can then provide utilities that
>> exporters will need. For example, a RenderMan Mesh class would provide
>> utilities to export the mesh, export a UV-coordinate mesh for baking
>> purposes, etc. A RenderMan Light class would provide utilities for
>> setting up a camera with the same transformation as the light,
>> inserting the light into a RenderMan scene, etc. A “packaged” exporter
>> could then be built up using these classes, but the more important
>> contribution would be a nice set of independent classes that can be used
>> to build a custom RenderMan solution for a given project.
>>
>> I would also add that the use of a Python mapping for the RenderMan API
>> is something I would consider essential for this kind of work. cgkit (
>> http://cgkit.sourceforge.net ) provides just such a mapping. Trying to
>> do without a mapping for the API and using things like print statements
>> or direct file access is going to bite you one day when you want binary
>> RIB export!! :slight_smile:
>> –
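For reference, the "snapshots" the email describes map directly onto RIB's MotionBegin/MotionEnd blocks. Here is a minimal sketch in plain Python, hand-rolling the RIB text rather than using the cgkit binding recommended above; the function name is made up for illustration:

```python
def motion_blurred_transform(times, translations):
    """Emit a RIB transformation motion blur block: one Translate
    per time sample; the renderer interpolates between the snapshots."""
    lines = ["MotionBegin [%s]" % " ".join("%g" % t for t in times)]
    for x, y, z in translations:
        lines.append("  Translate %g %g %g" % (x, y, z))
    lines.append("MotionEnd")
    return "\n".join(lines)

# Two snapshots over a shutter interval of [0, 1]:
print(motion_blurred_transform([0.0, 1.0], [(0, 0, 0), (0.5, 0, 0)]))
```

Deformation blur works the same way, except each snapshot is a whole geometry call instead of a transform, which is exactly why the exporter needed coherent mesh data between invocations.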

I prefer hard-coded systems, because I have often encountered time-consuming Python problems.

But I think a Python-based system will work as well. eeshelo mentioned that
with a Python-based one, new additions in YafRay could become accessible
within Blender faster, for example.

I think at least a RIB exporter (geometry) would be very useful to get the 3D data into a REYES render engine, and a basic texture exporter as well. Professional shaders are hand-coded anyway, but if specular, diffuse, etc. settings were transferred, it would help inexperienced artists use the power REYES provides.

And in most cases you need UV mapping as well, with some basic procedural shaders, like noise/cloud-based ones for stones, for example.
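To make the geometry-export idea concrete: a bare-bones RIB mesh export is little more than flattening vertex and face lists into a PointsPolygons call. A sketch (the helper and the sample quad are illustrative, not Blender API code):

```python
def export_points_polygons(verts, faces):
    """Emit a mesh as one RIB PointsPolygons call: vertex counts per
    face, flattened vertex indices, and the "P" position array."""
    nverts = " ".join(str(len(f)) for f in faces)
    indices = " ".join(str(i) for f in faces for i in f)
    points = " ".join("%g" % c for v in verts for c in v)
    return 'PointsPolygons [%s] [%s] "P" [%s]' % (nverts, indices, points)

# A single unit quad in the XY plane:
rib = export_points_polygons(
    [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)],
    [(0, 1, 2, 3)],
)
print(rib)  # → PointsPolygons [4] [0 1 2 3] "P" [0 0 0 1 0 0 1 1 0 0 1 0]
```

A real exporter adds transforms, materials, and per-frame output on top, but this is the core of getting 3D data to a REYES renderer.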

Maya will support RenderMan rendering alongside Mental Ray now.
I can't wait until we can test this.

Cool, it’s great to see you working on this! Too bad that the exporter didn’t get finished earlier.

Tiggs

Maya has pretty good RenderMan support with LiquidMaya. I'd certainly like to see something similar in Blender, but LiquidMaya was a big project according to the developer. It might be easier to try to port it across using Blender's data structures instead of Maya's, because I believe LiquidMaya was used on high-end productions, so the functionality should all be there.

The good thing about RenderMan is the flexibility it provides, as well as access to good renderers, like people were saying. People have spent decades optimising their RMan renderers to be very efficient, supporting dual processors and CPU optimisations.

Blender's internal renderer falls short on a lot of things (displacement maps, motion/deformation blur, AA), and they are mostly the things that people need. With an extensive shader structure like RMan's, we could have micropolygon displacement mapping to do terrain like Terragen. We could have had SSS years ago.

The point about shaders and having to copy them is wrong, because you keep a directory of shaders that the RIB files just point to. All the standard Blender shaders would be there, and you would just send different values to them, just like you do when you move the sliders.
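As a rough illustration (the helper is made up; "plastic" and its Ks/roughness parameters are from the standard RenderMan shader set): the RIB only names the compiled shader and passes values, exactly like moving a slider.

```python
def surface_call(shader, **params):
    """Reference a compiled shader by name in RIB and pass parameter
    values; the shader code itself lives in a shader directory."""
    parts = ['Surface "%s"' % shader]
    for name, value in sorted(params.items()):
        vals = value if isinstance(value, (tuple, list)) else (value,)
        parts.append('"%s" [%s]' % (name, " ".join("%g" % v for v in vals)))
    return " ".join(parts)

print(surface_call("plastic", Ks=0.6, roughness=0.1))
# → Surface "plastic" "Ks" [0.6] "roughness" [0.1]
```

No copy-and-paste of shader source is involved: re-rendering with new slider values only rewrites this one line of RIB.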

If you write your own or someone who knows how to write them does so, you can share shaders around the web. That’s much harder to do with Blender.

Also bear in mind that shaders can generate things like fur and hair at render time. This means you could render on the order of millions of hairs.
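To sketch how render-time hair generation looks in practice (everything here is illustrative; a real setup would grow hairs from a mesh surface, often inside a procedural): each hair is just a pair of control points in a single RIB Curves call, so the geometry is generated rather than stored.

```python
import random

def hair_patch(n, seed=1):
    """Emit n straight hairs as one RIB Curves call: linear curves,
    two control points each (root on the ground plane, tip above it)."""
    rng = random.Random(seed)
    pts = []
    for _ in range(n):
        x, y = rng.random(), rng.random()
        pts += [x, y, 0.0, x, y, 0.2]  # root and tip of one hair
    nvertices = " ".join("2" for _ in range(n))
    p = " ".join("%g" % c for c in pts)
    return ('Curves "linear" [%s] "nonperiodic" "P" [%s] '
            '"constantwidth" [0.005]' % (nvertices, p))

# 100000 hairs is a one-liner; the RIB stays a single call:
rib = hair_patch(100000)
```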

RenderMan is a great system to have. The only issue is having 3rd parties control the renderer. The open source RenderMan renderers are bad; I haven't got any of them working right. The only two I would consider viable options are PRMan and 3Delight. Out of those, I'd choose 3Delight, because you can download it and use it for free, but you can't use it for commercial renders.

Pixie was looking good because it supported a lot of features, but reports said it was unstable, and I haven't been able to run it at all. Also, it has one developer, as someone said.

I realise that you have backed down on this already, but RenderMan's position in the industry (and contribution to it!) is massive. Regarding realism: LOTR should have you convinced, but there are countless films that have used RenderMan for realistic post effects. Caustics, fur, displacement, refraction, SSS, volumetrics, etc. were all done with RenderMan a long time ago. The rendering model is so good it almost allows for anything! Inherently extendable!

I am lucky enough to have been using BMRT when it was free. Resources like the following have always been altruistic.
http://www.rendermanacademy.com
http://www.rendermanacademy.com/docs/SSS_depthmaps.htm

But… I kind of agree with you. It is the extensibility that people are looking for. I would like to see more native support for 'shader' plugins within the Blender toolset. I have recently come up against the limits of the support Blender supplies for extensions (texture and sequence), which is good but very limiting. YafRay is helping, but has a long way to go. This is currently what Blender users have to play with. Can they do everything that is required? Not on the bleeding edge, but it gets better all the time.

If there were a realistic situation where Blender could fit into an existing pipeline that included a RenderMan shader within the professional arena, I would say put the development effort into the exporter. But… I don't think that is realistic. As such, I think that Blender works better in its own context, and with better shader and effects support (via plugins or YafRay) it will become a better professional tool in its own right. My vote is to put the development effort into extending Blender's functionality/extensibility.

I use the word professional in the context that the product (animation and images) generated by Blender is earning people money or is very much in the limelight (on the cinema screen!).

There is a lot to learn from the RenderMan rendering model. Imagine the coup of supporting RenderMan shaders in Blender!!!

I have started to look at the Blender code to see how shader plugin support might best be implemented.

Pixar will ship RenderMan for Maya soon, and it features one-to-one integration with shader network creation in Maya. Way to go.

3Delight is quite strong. I worked with it, and for what it offers the renderer is great. I never touched Pixie or Aqsis; 3Delight was always better, IMHO.

RenderMan RIB support in Blender would also be a great idea for those looking to break into the professional graphics industry. Since RenderMan is so pervasive, having this dot on your résumé will at least give you a bit of an edge. I know a few brilliant people at my school who dedicated some time to learning RenderMan and all of its publications, graduated, and now work for Pixar and Sony Imageworks.