I haven’t been able to find any tutorials on this so far. How do you make realistic smoke in the Cycles render engine? The standard Blender Internal technique doesn’t work, as far as I can tell.
Cycles doesn’t support volumetrics at the moment. Why isn’t BI working for you?
It is working, but I am creating the rest of the scene in Cycles because I am going for a photorealistic look. I will probably end up adding the smoke in Photoshop if it isn’t supported by Cycles.
Why not render the smoke in BI, the rest in cycles, and then combine the render layers in the compositor?
Is there any tutorial or page about how to combine renders from BI and Cycles?
I learned how to do it from this one: http://cgcookie.com/blender/2012/02/16/compositing-camera-tracking-blender-hidden-safe-part-02/
The fact is that, in reality, smoke casts shadows, and the same should be true in a raytracer that supports smoke. If you composite it in, you can’t get that effect; you can see the missing shadows in eppo’s example above, too. At that point it’s easier to add the smoke in Photoshop, where you have more control.
For instance, take an explosion in which the smoke is lit by the light inside it, but occludes that light at the same time.
What’s to stop you from multiplying the shadow from the smoke onto the scene? Nearly all smoke is composited in from an external source in a production pipeline anyway.
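That shadow-multiply step is just a per-pixel product of the beauty render and a shadow pass rendered from duplicate geometry in BI (in practice you’d use a Mix node set to Multiply in the compositor). A minimal sketch in plain Python, with invented pixel values:

```python
# Minimal sketch of multiplying a shadow pass onto a beauty render.
# Pixel values are invented for illustration, not from a real render.

def multiply_shadow(beauty, shadow):
    """Darken each RGB pixel of `beauty` by the matching shadow-pass
    value (1.0 = fully lit, 0.0 = fully shadowed)."""
    return [
        tuple(channel * s for channel in pixel)
        for pixel, s in zip(beauty, shadow)
    ]

# A 2x1 "image": one lit pixel, one pixel under the smoke's shadow.
beauty = [(0.8, 0.7, 0.6), (0.8, 0.7, 0.6)]
shadow = [1.0, 0.25]  # smoke blocks 75% of the light on the second pixel

print(multiply_shadow(beauty, shadow))
```

The same multiply works on whole passes in the compositor, which is why a BI-only shadow pass can darken a Cycles render without Cycles ever knowing about the smoke.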
You have two fundamental rendering processes currently available in Blender, as well as the capability to interface with various external tools. Therefore, instead of regarding any one of them as being “the one source from which my ‘finished image’ must come, if only I try enough times and wait for many hours each time” … which, if I may say, is complete nonsense … instead regard them as being pieces of the puzzle, with the compositor being the tool for stitching all of their various work-products together.
You don’t need Cycles to be the one that produces “smoke.” Instead, you need Cycles to be good at what it’s good at (for now… stay tuned), and “you need smoke.” Finally, you need a way to combine all of these components together.
Consider how any music that you hear today is the product of a “mix-down” process. Even if several musicians were all in the studio at the same time, they probably recorded their parts in separate rooms so that each of their contributions would be on a separate, isolated “track.” The actual “finished song that you heard” was produced by multi-track recording. (You want a “country” version and a “pop” version and an “extended dance-mix?” Par for the course. Consider it done.) Exactly the same principle applies in CG … and for exactly the same reasons. The compositor is your “ultimate mixing board.”
Maybe I’m explaining it badly, but what I mean is actually simple: the Cycles scene is not ‘aware’ of the smoke, so objects are lit uniformly as if the smoke were not there, which, in fact, it isn’t.
I don’t rule out some possible workaround, such as adding a mask on the light source to compensate, but it would have to be in the Cycles scene anyway.
Think of clouds casting shadows on the ground: you can’t composite smoke this way and get that effect.
Musical instruments don’t interfere with each other, at least at audible levels; each one plays its own notes. In 3D, or rather in the visible reality it emulates, lights illuminate objects and objects cast shadows, all in three dimensions, so you cannot simply sum the layers together.
I completely agree that the Cycles scene has to ‘know’ about smoke generated in BI to get the lights and shadows right. And it is true the other way around, too: if there are lights in the Cycles scene, they should affect the smoke. Plus, smoke scatters some light, which effectively adds extra light source(s) to the Cycles scene.
I just can’t imagine that looking realistic, painted in Photoshop.
Wow, that was a lot of answers. Thanks, guys; you definitely helped me learn a lot about Blender I didn’t know before. But I think the best option is still to render the smoke in Blender Render, clip out the backdrop, and then add it into the scene using Photoshop.
I’m still not seeing why you can’t have duplicate geometry in the BI scene and grab just a shadow pass from that using tweaked spotlights with deep shadows, and then composite the shadow pass onto the Cycles render. Sure, it won’t be entirely physically accurate, but it will be close enough that most people won’t be able to tell, especially in a moving shot. It’s not ideal, but it’s a workaround for now.
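Laying the smoke layer over the Cycles frame is the standard “alpha over” operation: the BI smoke is rendered against a transparent background and composited on top. A minimal sketch with made-up pixel values, assuming premultiplied RGBA (Blender works with premultiplied alpha internally):

```python
# Minimal sketch of the "alpha over" step: a BI smoke layer
# (premultiplied RGBA) laid over a Cycles background pixel.
# All values are invented for illustration.

def alpha_over(fg, bg):
    """Premultiplied-alpha over: out = fg + bg * (1 - fg_alpha)."""
    r, g, b, a = fg
    return tuple(f + c * (1.0 - a) for f, c in zip((r, g, b), bg))

# Background pixel from Cycles; smoke pixel from BI at 60% coverage
# (premultiplied, so RGB has already been scaled by alpha).
background = (0.2, 0.4, 0.9)
smoke = (0.3, 0.3, 0.3, 0.6)

print(alpha_over(smoke, background))  # roughly (0.38, 0.46, 0.66)
```

Combined with a multiplied-in shadow pass, this is everything the workaround needs; the compositor’s Alpha Over node does this per pixel across the whole frame.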
Yeah, for now…
Try creating a new scene and linking the camera from the Blender Render scene into the Cycles scene. Then composite the two together for the output; render nodes will achieve this for you. In fact, there’s a setup on the Blender site with a .blend download to help you figure it out. Here’s the link to the page: Blender Composite Nodes
“Where there’s a node-network, there’s a way.” I certainly did not mean to imply that you “sum them together!” That might be the limits of audio, but of course not of video.
The only way that my analogy was meant to be taken is simply as a reference to the idea that many independently prepared and isolated channels of information are “subsequently combined in some useful way” to produce the final deliverable(s) … and that the combining process is trivially cheap by comparison to the processes that generated those various inputs.
You can use Cycles and Blender (and Lux and …) in synchronicity with one another because everything can share the same underlying 3D-model information on the input side, and because many cleanly separated channels of information can be drawn from the output side … and stored, in (e.g.) MultiLayer-format files. You have at your disposal not only the output pixels themselves, but a great deal of contextual information about exactly how (and why…) those pixel-by-pixel values were produced. All of this information becomes “grist for the mill” in the compositing process … and any way that you can figure out how to employ it is fair game.
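As a concrete illustration of what those extra channels buy you: Cycles’ light passes are designed so that the combined image can be rebuilt from them, which is why each pass can be adjusted independently before recombining. A toy per-pixel sketch with invented values; only a few of the real passes are included here:

```python
# Toy sketch of rebuilding a "beauty" pixel from isolated render passes.
# Pass names loosely follow Cycles' conventions; the numbers are invented.

def recombine(diff_col, diff_direct, diff_indirect, glossy, emission):
    """Rebuild a combined pixel, per channel, from separate passes:
    diffuse color * (direct + indirect light) + glossy + emission."""
    return tuple(
        c * (d + i) + g + e
        for c, d, i, g, e in zip(diff_col, diff_direct, diff_indirect,
                                 glossy, emission)
    )

pixel = recombine(
    diff_col=(0.5, 0.5, 0.5),       # surface albedo
    diff_direct=(0.6, 0.6, 0.6),    # direct diffuse light
    diff_indirect=(0.2, 0.2, 0.2),  # bounced diffuse light
    glossy=(0.1, 0.1, 0.1),         # specular contribution
    emission=(0.0, 0.0, 0.0),
)
print(pixel)  # roughly (0.5, 0.5, 0.5): 0.5 * (0.6 + 0.2) + 0.1
```

Because each term is stored separately in a MultiLayer file, you can brighten only the bounce light, or tint only the specular, long after the render has finished.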
Hence … “don’t waste your time trying to get Cycles to do whatever it doesn’t do easily right-now,” if you know that you can get BI (say…) to do it for you. Simply let each renderer do what it does best, and capture not only the raw pixel-values but also the ancillary information. Now, put your thinking-cap on, and start dreaming of ways to use “all that information” to solve your problem.
Are you “cheating?” Heck no, you’re actually programming. Pushing aside all of the “this is the real world” analogies, the fact is that none of this is the real world at all … this is a digital computer. And you are, quite literally, the programmer. You’re figuring out … creatively(!) … how to get this cantankerous machine, as efficiently as possible, to deliver what you want to be able to “eat popcorn and look at,” and any way that gets you there … gets you there.
Just use Storm’s patch. Period. Problemo solved.