Page 843 of 854
Results 16,841 to 16,860 of 17,064
  1. #16841
    Member
    Join Date
    May 2013
    Posts
    1,100
    Originally Posted by m9105826 View Post
    Procedurals are pretty garbage in production. Depending on their usage they can increase render times anywhere from 2x to 100x or more. I doubt there's a single production today that doesn't bake their procedurals to textures for final renders.
    Care to elaborate on that math, because it doesn't add up here. Although I tend to mix bitmaps with procedurals rather than rely completely on them, I can't bake them out, because often the purpose is to add randomness. The resulting bakes would have to be so big I'd run out of (GPU) memory - and I'd happily trade a little time for getting it done at all. And I really mean "a little", not "2x to 100x or more". Also, ideas that start out complex may end up very simple in the end. And even if you do bake them out, decent generators are key to creating the bake efficiently in the first place - not everyone uses external programs. Trying to mimic things like scratches using tons of generators will slow things down for sure - a single generator designed for the job would be a lot faster. Do you want to see bricks gone? Because it can probably be set up using nodes. Personally, I'd like to see other Voronoi metrics and distance controls, as well as 4D inputs and UV repetitions (4D inputs can be used to create 3D repetitions if time is then excluded). Another thing would be auxiliary coordinate mappings (sphere, tube, disk, cone, box, etc.) available anywhere, not locked to the Image Texture node.
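    The "other Voronoi metrics" idea above can be sketched outside Blender. A minimal 2D cell noise with swappable distance metrics, assuming nothing about Cycles internals - the hash and all names here are illustrative, not Cycles code:

    ```python
    import math

    # Minimal 2D Worley/Voronoi cell noise with swappable distance
    # metrics, illustrating the "other voronoi metrics" idea.

    def _feature_point(ix, iy):
        # Deterministic pseudo-random feature point inside cell (ix, iy).
        h = ((ix * 73856093) ^ (iy * 19349663)) & 0xFFFFFFFF
        fx = ((h * 2654435761) & 0xFFFFFFFF) / 0xFFFFFFFF
        fy = ((h * 40503) & 0xFFFFFFFF) / 0xFFFFFFFF
        return ix + fx, iy + fy

    METRICS = {
        "euclidean": lambda dx, dy: math.hypot(dx, dy),
        "manhattan": lambda dx, dy: abs(dx) + abs(dy),
        "chebyshev": lambda dx, dy: max(abs(dx), abs(dy)),
    }

    def voronoi_distance(x, y, metric="euclidean"):
        """Distance from (x, y) to the nearest feature point under the
        chosen metric; swapping the metric changes the cell shapes."""
        dist = METRICS[metric]
        ix, iy = math.floor(x), math.floor(y)
        best = float("inf")
        # Check the 3x3 neighborhood of cells around the sample point.
        for ox in (-1, 0, 1):
            for oy in (-1, 0, 1):
                px, py = _feature_point(ix + ox, iy + oy)
                best = min(best, dist(x - px, y - py))
        return best
    ```

    Manhattan cells come out diamond-shaped and Chebyshev cells square-ish, which is the kind of look the metric control would buy.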



  2. #16842
    Originally Posted by CarlG View Post
    Care to elaborate on that math, because it doesn't add up here. Although I tend to mix bitmaps with procedurals rather than rely completely on them, I can't bake them out, because often the purpose is to add randomness. The resulting bakes would have to be so big I'd run out of (GPU) memory - and I'd happily trade a little time for getting it done at all. And I really mean "a little", not "2x to 100x or more". Also, ideas that start out complex may end up very simple in the end. And even if you do bake them out, decent generators are key to creating the bake efficiently in the first place - not everyone uses external programs. Trying to mimic things like scratches using tons of generators will slow things down for sure - a single generator designed for the job would be a lot faster. Do you want to see bricks gone? Because it can probably be set up using nodes. Personally, I'd like to see other Voronoi metrics and distance controls, as well as 4D inputs and UV repetitions (4D inputs can be used to create 3D repetitions if time is then excluded). Another thing would be auxiliary coordinate mappings (sphere, tube, disk, cone, box, etc.) available anywhere, not locked to the Image Texture node.
    I use procedurals in production all the time and don't notice significant slowdowns either. In fact, I don't remember when I last used a bitmap. Even if there is some slowdown when rendering, artist time is more valuable than CPU time, so it doesn't really matter, imo.



  3. #16843
    Originally Posted by BeerBaron View Post
    The OptiX license is not compatible with Blender's GPL license, but it should be possible to create a separate program that denoises using passes coming out of Cycles (and other renderers).
    When OptiX 5 eventually gets released. At the moment, all we have is an announcement. If and how it will work with Cycles is something we can answer then.



  4. #16844
    Cycles: Re-enabled motion blur for object scaling
    https://developer.blender.org/D2937

    I hope that makes it into master quickly.



  5. #16845
    Member Ace Dragon's Avatar
    Join Date
    Feb 2006
    Location
    Wichita Kansas (USA)
    Posts
    28,710
    In case people are wondering about recent Cycles development, Brecht has just added support that allows GPU CUDA rendering to make use of system RAM (if the data can't fit on the card itself).
    https://lists.blender.org/pipermail/...ry/103260.html

    Unfortunately, you're looking at a staggering performance hit if it's more than just image textures not fitting into VRAM (so don't expect magic if you want to render a scene requiring 30 GB of memory on a GTX 1080 Ti).
    Sweet Dragon dreams, lovely Dragon kisses, gorgeous Dragon hugs. How sweet would life be to romp with Dragons, teasing you with their fire and you being in their games, perhaps they can even turn you into one as well.
    Adventures in Cycles; My official sketchbook



  6. #16846
    I was really looking forward to this patch (Stefan Werner submitted it a while ago), especially as my graphics card at home is a puny GTX 780 with only 3 GB.
    In my first tests on Linux Mint, both the buildbot build and my own build either throw an "out of memory" message (which, after this patch, should be gone?!) or the tiles render with completely different shading/lighting depending on whether they were rendered on GPU or CPU.
    So far this only happens with scenes that I wasn't able to render on GPU before the patch.
    I'm still trying to figure out how to reproduce the errors, but I'm getting weird results so far.
    The Gooseberry production benchmark scene, for example (I just load it and change nothing except switching rendering from CPU to GPU), sometimes throws the "out of memory" error, and after restarting Blender and doing the exact same steps it starts rendering, but with bright and dark tiles mixed.



  7. #16847
    Originally Posted by Ace Dragon View Post
    In case people are wondering about recent Cycles development, Brecht has just added support that allows GPU CUDA rendering to make use of system RAM (if the data can't fit on the card itself).
    https://lists.blender.org/pipermail/...ry/103260.html

    Unfortunately, you're looking at a staggering performance hit if it's more than just image textures not fitting into VRAM (so don't expect magic if you want to render a scene requiring 30 GB of memory on a GTX 1080 Ti).

    Is this loss of performance measurable?

    I mean, could we know whether we will lose 80% of performance or 20% using this?

    Cheers.



  8. #16848
    Member
    Join Date
    May 2016
    Location
    New Jersey, USA
    Posts
    282
    The note says a 20-30% slowdown for image textures. If other data also has to spill, it can be up to 10x slower.



  9. #16849
    Member m9105826's Avatar
    Join Date
    Dec 2007
    Location
    Fairfax, VA
    Posts
    4,270
    It is also severely limited in the amount of system memory it can access, due to OS stability issues. Out-of-core memory access on the GPU shouldn't surprise anyone with its poor performance, though. If it were that easy to get production scenes working on the GPU, Hollywood rendering would be a very different ballgame. GPU production animation rendering without serious caveats is still a ways off.
    Long time 3D artist and member of the official Cycles Artists Module
    https://www.youtube.com/user/m9105826 - Training, other stuff. Like and subscribe for more!
    Follow me on Twitter: @mattheimlich or on my blog



  10. #16850
    Member mib2berlin's Avatar
    Join Date
    May 2008
    Location
    Germany
    Posts
    3,515
    Hm, I was able to test the patch some months ago and saw only a 10-20% performance loss, but only with textures.
    The Octane engine has had this feature for years and also sees 10-20%; it is limited to textures only anyway.
    It may not be suitable for Hollywood, but I know of many productions from the Octane forum, especially for advertising, ArchViz and so forth.

    Cheers, mib
    OpenSUSE Leap 42.1/64 i5-3570K 16 GB Blender 2.7 Octane 3.03
    GTX 760 4 GB, GTX 670 2 GB Driver 375.26 | Blender for Octane



  11. #16851
    Originally Posted by mib2berlin View Post
    Hm, I was able to test the patch some months ago and saw only a 10-20% performance loss, but only with textures.
    The Octane engine has had this feature for years and also sees 10-20%; it is limited to textures only anyway.
    It may not be suitable for Hollywood, but I know of many productions from the Octane forum, especially for advertising, ArchViz and so forth.

    Cheers, mib
    The patch is in master, but I already found a corner case where it fails:

    With multiple GPUs with uneven VRAM, if one GPU still fits the render's memory requirement but the other does not, the "too small" GPU fails to contribute intact tiles.

    I have not yet tested whether the same occurs if the GPUs are identical but one has less VRAM available because it drives the screen.

    Jens
    Last edited by jensverwiebe; 04-Jan-18 at 12:30.



  12. #16852
    Member Indy_logic's Avatar
    Join Date
    Nov 2008
    Location
    Portland, Oregon
    Posts
    1,018
    Originally Posted by skw View Post
    When OptiX 5 eventually gets released. At the moment, all we have is an announcement. If and how it will work with Cycles is something we can answer then.
    So actually, V-Ray, Isotropix Clarisse, and Redshift all claim to support OptiX 5 now. They must have been seeded with dev builds ahead of release.



  13. #16853
    Member
    Join Date
    Nov 2013
    Location
    Local Coordinates 0,0,0
    Posts
    298
    Originally Posted by Ace Dragon View Post
    Brecht has just added support that allows GPU CUDA rendering to make use of system RAM (if the data can't fit on the card itself).
    https://lists.blender.org/pipermail/...ry/103260.html
    Does this apply to microdisplacement as well?



  14. #16854
    Originally Posted by Indy_logic View Post
    So actually, V-Ray, Isotropix Clarisse, and Redshift all claim to support OptiX 5 now. They must have been seeded with dev builds ahead of release.
    Looks like it came out last month, according to this.



  15. #16855
    Member m9105826's Avatar
    Join Date
    Dec 2007
    Location
    Fairfax, VA
    Posts
    4,270
    Optix 5 is out and their AI denoising tools are freely available for anyone to download.

    https://developer.nvidia.com/optix-denoiser



  16. #16856
    So I kinda wanted to do some kitbashing with the new Bevel node today and noticed that it takes its radius in world space instead of object space, meaning the bevels come out at different sizes relative to the object at different scales.

    Naturally, I wanted to fix that by multiplying the value by the object scale in the shader, but, uh, the Object Info node doesn't expose that.

    While this can be worked around by comparing texture spaces across different objects (eww, messy), I can't be the first to notice that object scale is kind of an important thing to have in the Object Info node, and it's missing.

    Can we have Scale in the Object Info node pretty please?
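    The compensation described above amounts to a one-line scaling. A minimal sketch, assuming a uniform object scale - the function name is illustrative, not Blender API:

    ```python
    # The Bevel node takes its radius in world space, so to keep the
    # bevel a constant size relative to the object, scale the desired
    # object-space radius by the object's (uniform) scale factor.

    def world_bevel_radius(object_space_radius, object_scale):
        """Radius to feed a world-space bevel so the bevel stays at
        `object_space_radius` in the object's own coordinates."""
        return object_space_radius * object_scale

    # An object scaled to 2x needs twice the world-space radius for the
    # same apparent bevel; at 0.5x it needs half.
    ```

    This is exactly why the post wants scale exposed in the Object Info node: the multiplication is trivial, but the scale value has nowhere to come from inside the shader.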



  17. #16857
    Member Hadriscus's Avatar
    Join Date
    Apr 2012
    Location
    Massilia, France
    Posts
    1,416
    It would be nice to have. In the meantime you can always use drivers.



  18. #16858
    Originally Posted by jensverwiebe View Post
    The patch is in master, but I already found a corner case where it fails:

    With multiple GPUs with uneven VRAM, if one GPU still fits the render's memory requirement but the other does not, the "too small" GPU fails to contribute intact tiles.

    I have not yet tested whether the same occurs if the GPUs are identical but one has less VRAM available because it drives the screen.

    Jens
    Quoting myself here: I can now happily report that Brecht fixed the issue in master. As a test, I rendered the Victor scene on a 6/8/8 GB GPU config and got only a slowdown from 7:15 to 7:17 from pinned-out memory, which is negligible.

    Cheers ... Jens



  19. #16859
    Member pitiwazou's Avatar
    Join Date
    Jul 2013
    Location
    France
    Posts
    2,957
    Originally Posted by m9105826 View Post
    Optix 5 is out and their AI denoising tools are freely available for anyone to download.

    https://developer.nvidia.com/optix-denoiser
    I hope Cycles will use it.



  20. #16860
    The problem with OptiX is the license: we can't ship closed-source software with Blender. There may be some way around it, shipping it separately or having it as part of the driver, but it's unclear what we'll do at this point. We've talked to NVIDIA about this problem, but there's no conclusion yet.

    Great to hear the CUDA rendering using system RAM appears to be working for everyone now. Note that in practice the restriction on the amount of system RAM that can be used is not necessarily that bad. We always leave 4 GB to the system, but if you've got a big scene, in practice that 4 GB was likely already needed by the operating system, Blender scene data, or something else.
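    The headroom arithmetic above can be sketched as follows. The 4 GB reserve is from the post; the function names and the fit check are illustrative, not Cycles code:

    ```python
    # Rough sketch of the headroom rule: Cycles leaves a fixed reserve
    # of system RAM for the OS, Blender scene data, etc., and the rest
    # is available for out-of-core spilling.

    SYSTEM_RESERVE_GB = 4.0

    def host_memory_available_for_spill(total_system_ram_gb):
        """System RAM usable for out-of-core data after the reserve."""
        return max(0.0, total_system_ram_gb - SYSTEM_RESERVE_GB)

    def fits_out_of_core(scene_gb, vram_gb, total_system_ram_gb):
        """Whether a scene could fit in VRAM plus spillable host RAM."""
        spill = host_memory_available_for_spill(total_system_ram_gb)
        return scene_gb <= vram_gb + spill
    ```

    So a 30 GB scene on an 11 GB card would need at least 23 GB of system RAM to spill into, i.e. roughly a 27 GB machine once the reserve is counted - which is why the reserve rarely bites in practice.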


