QDUNE aka RenderMan in Blender!!!!

Well, after installing a newer CImg version than Feisty features, it actually compiled… though some of those RIBs don’t seem to do anything apart from printing some scene stats… but I like the hair ball :eyebrowlift:

Well, yes, not all RIBs are for direct rendering (gumbo/vase/teapot/suzanne/suzanne_pp.rib); they are ‘include’ files for some of the others. The ‘smokesm.rib’ file renders a shadowmap for ‘fig12.12.rib’.

btw, I have of course at some point hoped there was maybe some way that it could benefit yafaray too. But I guess they are way too different now.

It would be so cool if Yaf(a)ray would get some hairy primitives as well :smiley:

If only someone would get it to work on Mac (PPC).

Question for you eeshlo: if Blender ends up not using the nice RenderMan interface you have made for it, do you think you will continue to work on the RenderMan implementation? Also, what license have you released it under?

I put it through some tests the other night and it really is wicked fast; it seems to excel at displacements and RiCurves. I ran a test with over 100 thousand curves, individually shaded (which usually chokes RenderMan renderers); on my 1.5 GHz Celeron laptop it rendered in 97 seconds, versus 196 seconds for Pixie.

Not to mention QDUNE was only using 22% of my 512 MB of RAM, while Pixie was hovering around 50 to 60%.

Well, I can’t really say yet, I’m not sure. I really just did this because I wanted to learn about the inner workings of REYES renderers; it is something different at least from the raytracers most graphics coders will try at one time or another. I’m not really interested in ‘competing’ with other RenderMan-compliant renderers. I don’t think I would be able to improve on anything that current well-established renderers already do very well.
But in any case, I don’t really know yet what is going to happen exactly. I’m kind of out of the loop currently, but I wouldn’t be surprised if really only the bare-bones renderer ends up in Blender, say about half to 2/3 of QD will be thrown out. Like the shading engine, which is kind of a pity; I’m really most proud of that part, since I didn’t do anything like that before, and so far it seems to have worked out quite well. The goal is not to make Blender a RenderMan-compliant renderer, after all.
But I really don’t know yet, I guess we’ll know more after the conference this weekend.

As for the license, well, I don’t really think about that sort of stuff at all; as far as I’m concerned it might as well be public domain :slight_smile: But since I use some other external libraries and code, I guess I don’t really have that choice. I really don’t know much about that stuff.
But besides that, I basically agreed to turn the whole thing over to the Blender Foundation, so it really is up to them; they are more or less the owners now, so I guess that would be the GPL then.

Btw, if you are going to compare QD and Pixie, use the ‘zbuffer’ hider in Pixie. The only hider currently working in QD is a simple z-buffer based one, and in Pixie that is also often considerably faster than the standard ‘hidden’ hider. Pixie might then be the faster one, and use less memory as well.
Some of the ‘speed’ of QD might simply be due to it being incomplete; for instance, it doesn’t fully initialize all standard variables yet for all shader types.
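For anyone rerunning such comparisons, the hider switch is a one-line change in the RIB options block; a minimal fragment (the ‘zbuffer’ hider name is Pixie’s, the rest is generic RIB):

```
# before WorldBegin, in the options section of the RIB:
Hider "zbuffer"     # instead of the default: Hider "hidden"
```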

I had a hunch eeshlo would be cooking up something big :slight_smile:

This could be “The” thing in Blender.
Great stuff!

Hm… you could’ve told me what’s up with you like 1.5 years ago :wink:
Only after I asked Ton often enough whether you were finally lost did he give me some hint…

But yeah, you’re probably right, just one of those typical raytracers again… I’ve had many sleepless nights over MTD and hair/fur, but no matter how you put it, REYES will run circles around it.

Can we agree to blame the whole situation on Jandro? :stuck_out_tongue:
Though I’ve heard rumors he may come to BConf too… damn, I really should’ve planned a trip. Next time, I hope…

I’m a bit confused, eeshlo. You said that the shading engine will not be used. Does this mean that some internal conversion will be done with the Blender shading system, hardcoded?

As I said, I really don’t know; as surprising as it may seem, I don’t have any ‘inside’ information. I have opted to let the Peach team do their own thing, which I won’t interfere with; otherwise I would have been there with them all in Amsterdam.
But judging from past general comments from Ton, it is something completely incompatible with Blender philosophy/design.
Although in an email from long ago he seemed interested in seeing the shader engine as a node-based system, but he probably already forgot about that, and he might have changed his mind by now.

I’m not sure how you think shading works in the micropolygon renderer, but there really is no compatibility problem with Blender’s current shading system. Micropolygons are shaded per vertex, and it is a ‘simple’ matter of just calling Blender’s shading routines for every point on the grid, “that’s all”.
But I thought that is what you wanted anyway: you want to keep using Blender’s current shading system, don’t you?
So does everyone else, from what I’ve read so far, so everyone gets what they want, just not me… :frowning:
:wink:
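To make the “just call the shading routines for every point on the grid” idea concrete, here is a minimal C sketch; the `GridPoint` struct and `shade_point` function are invented stand-ins for illustration, not Blender’s or QDUNE’s actual code:

```c
#include <stddef.h>

/* Hypothetical per-vertex shading data; Blender's real structures differ. */
typedef struct {
    float P[3];   /* position (unused by this toy shader) */
    float N[3];   /* unit normal, camera space            */
    float Ci[3];  /* resulting color                      */
} GridPoint;

/* Stand-in for a call into the existing shading system:
   a trivial facing-ratio shade, brightness = max(0, N.z). */
static void shade_point(GridPoint *gp)
{
    float f = gp->N[2] > 0.0f ? gp->N[2] : 0.0f;
    gp->Ci[0] = gp->Ci[1] = gp->Ci[2] = f;
}

/* Shade an nu x nv micropolygon grid, one call per grid vertex. */
void shade_grid(GridPoint *grid, size_t nu, size_t nv)
{
    for (size_t v = 0; v < nv; ++v)
        for (size_t u = 0; u < nu; ++u)
            shade_point(&grid[v * nu + u]);
}
```

The point of the sketch is only that shading a grid reduces to a loop over its vertices; whatever shading function Blender exposes gets called once per point.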

Not that I wanted to replace Blender’s shading system, of course, but I would really like to see programmable shaders as a possible option at least. And that does NOT mean that everyone needs to learn programming or something. It could be kind of similar to how texture plugins currently work: people who like to do this sort of thing would write a shader, and users would just use the shader as a single node, simply controlling the parameters, just like any other current node.
Even the ‘programming’ part could be completely node based, just connecting simple ‘micronodes’.
One difference from texture plugins would be that the shaders would work on any platform without changes.

Anyway, I’m just speculating :slight_smile: I really know absolutely nothing.
I’m sure more info will be available after the conference.
Let’s just wait and see what Brecht comes up with; I’m sure it will be good :slight_smile:

Ackh, from the start of your post I thought this was some random emo text, but NOW I’m SURE you’re hiding something… :evilgrin: Something extreEemly awesome, that is…

Eeshlo, do you know if there is still someone working on the Render API? This would really mean a step closer to more “freedom” of choice.

Thanks for the reply, Eeshlo :o
To explain my point of view better: what I really would like to see is node-based shading, completely integrated (for me, integrated means that the micropoly renderer is able to do the same stuff, more or less, as the Blender one; here I have a problem understanding how the two things can be fully compatible, since as far as I know REYES renderers make the global visibility problem more difficult, as they are “local”).
Node systems are terrifically powerful, and easy, without the workflow involved in programming shaders; even so, for some difficult stuff having the possibility to write a shader is really cool. The more power we have, the better.
Btw, I’ll wait to see what happens. Thank you for this, and Brecht too; when he finishes I’ll try to render some huge character with it, and I’ll be very happy :)

I guess that is probably what most people think whenever they see one of my posts before skipping to the next one, just ‘random emo text’ :stuck_out_tongue:
But no, really, all I know is that Brecht is working on the code, and that’s it…

Well, I was talking about the one I quoted… :stuck_out_tongue: you know, like: “so everyone gets what they want, just not me…” A little more confidence, come on! :wink:

renderdemon: I really don’t know how to explain it better.
All I can say is that as far as I can see there is no compatibility problem, so long as raytracing is not involved; at least not yet, anyway. Raytracing micropolygons is somewhat problematic to do efficiently, but not at all impossible. It just takes a ‘bit’ more work.

I think we are saying the same stuff, eeshlo :), I was speaking mainly about raytracing.
Do you think it is possible to use the micropoly renderer with multires meshes? (I really don’t know how they are used in rendering; at the time I studied the Blender code, multires meshes still had to be created.)
As we know (having tested it), it isn’t easy to bake the details correctly with only scalar displacement if the low cage mesh doesn’t have the right topology; the micropoly renderer could make multires meshes more useful.

renderdemon: Well, not being an artist, I have never used it, so I don’t know much about it, but from what I can see it is some sort of subdivision system, so rendering shouldn’t be a problem, I guess.
I’m not sure; I think I said this somewhere before, but I suppose if the sculpting process could be stored directly as some sort of displacement vector in an image, instead of doing it as a postprocess, that would probably be about ideal.
But I really don’t know if such a thing is possible at all; it may very well all be complete nonsense… :wink:

Sorry, I very rarely even attend the regular sunday dev meetings, so I really have no idea…

It exists (exactly as you describe) and is called vector displacement :slight_smile:
(Modo 301 has it)
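For what it’s worth, the core of vector displacement is small. A hedged C sketch, assuming one object-space XYZ offset per texel and a nearest-texel lookup (all names invented):

```c
/* Vector displacement sketch: each texel stores a full (x,y,z) offset
   instead of a scalar height along the normal. */
typedef struct { float x, y, z; } Vec3;

/* disp: width*height texels, 3 floats each, object-space offsets.
   u, v in [0,1]. Nearest-texel lookup for simplicity; a real
   implementation would filter. */
Vec3 displace_vertex(Vec3 P, const float *disp,
                     int width, int height, float u, float v)
{
    int tx = (int)(u * (float)(width  - 1) + 0.5f);
    int ty = (int)(v * (float)(height - 1) + 0.5f);
    const float *d = &disp[3 * (ty * width + tx)];
    P.x += d[0];
    P.y += d[1];
    P.z += d[2];
    return P;
}
```

The contrast with scalar displacement is just the three additions: a height map can only push a point along its normal, while a vector map can move it sideways as well, which is what lets it capture sculpted detail the base topology doesn’t line up with.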

I think that what people want is the same thing, judging by the constant requests for things like blurry reflections, SSS, etc., but as soon as you say anything RenderMan-related it’s: whoa, no, I meant can Blender get it; I don’t want to use anything that might be considered even remotely commercial or whatever, because Blender is ‘open source’, which means it’s based on completely innovative and never-before-used software developments. :rolleyes:

I don’t like the node-based idea of coding, but I do think that programmable shading needs to come into Blender at some point, and the best way is with an established system. People don’t seem to realise how important programmable shading is: it means people like Brecht, Broken and many others can write shaders that the community can use immediately, with no need to get a new Blender build at all. The number of developers willing to write programmable shaders will greatly exceed the number of people who will make changes to the Blender code, and it’s the best kind of open source code because it’s powerful code that comes in small, easy-to-manage fragments. It’s not so much a case of “here’s Blender, it’s half a million lines of code or more, have fun doing what you want with it”. It’s more a case of “here are 20-30 lines of code that maybe do something with AO; modify it as you want if you need to optimize it, or just understand the code that specifically handles that feature”.

If the shaders are used and refined enough, they could easily be optimized and put directly into the renderer.

One way of making customizable nodes could be to simply have a C-based function, where the node inputs would be represented by the function parameters. Then there would be a set of declared output variables, and these would be the node’s outputs, like RenderMan’s AOVs. This is kind of how Shake’s nodes work, except they have one output: they are placed inside .h files and loaded at program launch time, and that’s where they are checked for errors. They are different in that they are really just macros. Blender’s nodes would have to be compiled if they were actual shaders, but that shouldn’t be a problem even if there weren’t a bytecode compiler and it just used gcc. Blender could even have a feature somewhere that asks you to select a shader source file and does this step for you.

Using gcc instead of a custom compiler means that you don’t have to worry about optimization so much. You’d likely still need declarations to handle things like uniform and varying variables, though, or maybe things like that could be handled by Blender when invoking the code fragments. Maybe that wouldn’t be able to comply with RSL syntax, but that gives some degree of separation from RenderMan, if it makes people feel any better.
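As a hedged illustration of that node-as-C-function idea (every name here is invented, nothing from Blender’s or Shake’s actual APIs):

```c
/* Hypothetical shader node: one C function per node. Inputs are the
   function parameters, outputs are the fields of a declared struct
   (comparable to multiple AOVs). */
typedef struct {
    float color[3];  /* primary output   */
    float alpha;     /* secondary output */
} NodeOutputs;

/* A trivial 'tint by occlusion' node: scales an input color by an
   ambient-occlusion factor and also exports the factor itself. */
void ao_tint_node(const float color_in[3], float occlusion,
                  NodeOutputs *out)
{
    for (int i = 0; i < 3; ++i)
        out->color[i] = color_in[i] * occlusion;
    out->alpha = occlusion;
}
```

Built with something like `gcc -shared -fPIC ao_tint.c -o ao_tint.so`, such a node could be loaded at launch via `dlopen()`, much as texture plugins are; compile errors would then surface at that load step rather than at render time.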

RSL has a few flaws that such a system could overcome, like not being able to execute a piece of shader code only once per frame.