Hi all!
I’m new to the community and relatively new to Blender, and I haven’t used it in quite a while.
I was following a tutorial by Andrew Price and needed to use GPU computing.
I remember that in the old days it was always troublesome to get the GPU to render, but now I must say I got it working with
little effort. And the results and timing are awesome!
The renders were made with different sample counts, but the results are always the same: even with 1000 samples and different tile sizes, I always get those darker areas and artifacts with GPU rendering.
Recalculating normals (Ctrl+N) in 2.70a does nothing. If I open the project file in an older version, say 2.62, recalculate the normals using the buttons in Edit Mode, save it, then reopen it in 2.70a, it renders just the first frame with fewer artifacts, and the others like the image above.
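For reference, the recalculation can also be run from the Python console; here is a minimal sketch using the 2.7x operator API, assuming the problem mesh is the selected, active object:

```python
import bpy

# With the problem mesh selected and active, switch to Edit Mode,
# select everything, and recalculate normals to point outside
# (the same operation as Ctrl+N in the 3D View).
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.normals_make_consistent(inside=False)
bpy.ops.object.mode_set(mode='OBJECT')
```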
Any advice would be much appreciated, thanks in advance!
I am very sorry to tell you that you bought the wrong GPU. You would have to buy an Nvidia card for this type of rendering. I recommend you read the thread “A good news for AMD / ATI Graphic cards owners”. That thread has a 1500-comment discussion and the issue is still unresolved.
If I get it right, you’re telling me I bought the wrong GPU, right? And to go see that thread, as only Nvidia cards work with Blender. Wrong. I got it working. And my topic is about a totally different matter.
I am already rendering with the GPU; see the image in the first post.
Anyways.
I discovered after some tests that those darker areas are caused by the tile size.
As you can see, with different tile sizes the darker areas appear in different places and at different sizes, but always in the same region: the side of the text, never the front (the more illuminated and textured part).
It seems they appear at the intersection of tiles, but I just can’t understand why.
Maybe it’s something related to the lighting? I will try some more tests with direct light on the side and report back here.
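To make that test repeatable, here is a minimal sketch (2.7x API) that renders the same frame at a few tile sizes; the sizes and the output name are placeholders for the experiment:

```python
import bpy

scene = bpy.context.scene

# Render the current frame once per tile size so the position of the
# dark patches can be compared across runs.
for size in (16, 64, 128, 256):
    scene.render.tile_x = size
    scene.render.tile_y = size
    scene.render.filepath = "//tile_test_%d" % size  # hypothetical output name
    bpy.ops.render.render(write_still=True)
```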
Btw, to enable GPU rendering I did the OpenCL trick from the command line inside the Blender folder, activated GPU rendering in User Preferences, and rendered with Cycles, choosing of course GPU in the Cycles properties.
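For anyone who wants to reproduce it, here is a minimal launcher sketch, assuming the trick in question is the CYCLES_OPENCL_TEST environment variable that 2.7x-era builds checked before exposing OpenCL devices; the executable path is a placeholder:

```python
import os
import subprocess

# Copy the current environment and ask Cycles to expose all OpenCL devices.
# CYCLES_OPENCL_TEST is assumed to be the variable the 2.7x "trick" relies on.
env = os.environ.copy()
env["CYCLES_OPENCL_TEST"] = "all"

# Placeholder path: point this at your own Blender 2.70 executable.
subprocess.call(["/path/to/blender/blender"], env=env)
```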
Yeah, I know and it sucks! I just wanted to see if there was some kind of solution or trick to get rid of those artifacts.
As you can see it does render with the GPU; the problem is something about how Blender renders the frame, tied to the tile sizes.
But changing the tile size only kind of groups the “stains” together when using bigger tile sizes.
Yeah, and the strangest thing is that they only appear on the side of the text. I can render the text with the camera at any angle and the front part of it always renders correctly.
It seems that the settings in Blender’s “User Preferences” and in the “Cycles Render” config panel were conflicting.
The Cycles setting was always set to GPU Compute.
User Preferences -> System -> Compute Device -> OpenCL
I then have 3 options when I click to choose which device to use:
The stains/artifacts only appear if I have any of the options with “Pitcairn” in the name enabled, which means setting OpenCL to use only the GPU or GPU + CPU.
THEN, if I set OpenCL to use only the CPU (the option with no “Pitcairn” in it) but leave the Cycles setting on “GPU Compute”, it renders perfectly using the GPU!!!
The downside of course is not being able to use both CPU and GPU, but f**k it! The upside is that I’m able to render with my Sapphire R9 270x!!! Gaining something like a 10× speedup in rendering! (from 50 secs to 5 secs)
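In script form, the combination described above would look something like this minimal sketch (2.7x API); the enum introspection is an assumption about how the device list is exposed, and picking the first non-Pitcairn entry mirrors the choice made in the UI:

```python
import bpy

system = bpy.context.user_preferences.system
system.compute_device_type = 'OPENCL'

# Assumed introspection: list the available OpenCL devices and select the
# first one without "Pitcairn" in its name, i.e. the CPU-only device.
for item in system.bl_rna.properties['compute_device'].enum_items:
    if 'Pitcairn' not in item.name:
        system.compute_device = item.identifier
        break

# "GPU Compute" in the Cycles render panel.
bpy.context.scene.cycles.device = 'GPU'
```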
To moderators:
I think my achievement is quite valuable: not only did I get Blender 2.70 to render with my Sapphire (AMD) video card, I also got rid of the artifacts/errors created by that same GPU rendering.
Do what you believe is best with this information; if needed, I could make a proper tutorial.
Best regards,
Someone who got little help in the “Technical Support” section but made it work anyways.
If you don’t have ‘Pitcairn’ in the selected option, then the GPU isn’t being used. What you selected is the OpenCL device that AMD’s driver creates for your CPU, which renders Cycles OpenCL fine (and at times faster than the normal CPU path).
I have four OpenCL devices on my PC: the Intel GPU, the AMD GPU, the Intel OpenCL CPU and the AMD OpenCL CPU (two different OpenCL drivers for the same CPU), and I have tested every combination with SLG. You’re not rendering with a GPU if your GPU’s name is not among the selected options. Hence why it renders perfectly: you’re not using the GPU, which is what creates the errors.
Selecting ‘GPU Compute’ simply means Cycles is using the selected (and in this case CPU) OpenCL device.
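An easy way to double-check which devices your drivers actually expose, independent of Blender, is to list them with the third-party pyopencl package from a regular Python install; a minimal sketch:

```python
import pyopencl as cl

# Print every OpenCL platform/device pair the drivers expose. If "Pitcairn"
# is not in this list (or not in the option you selected), the AMD GPU is
# not the device doing the rendering.
for platform in cl.get_platforms():
    for device in platform.get_devices():
        print(platform.name, "->", device.name,
              "(%s)" % cl.device_type.to_string(device.type))
```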
I’m sorry if I appear harsh, but it would be best to ask or double-check your results in one of the AMD Cycles OpenCL threads; Cycles/AMD issues are well known, and not something we want any false positives about.
Test made with the same settings, apart from the compute device, which is set in the Cycles panel to GPU or CPU.
User Preferences -> System -> Compute Device -> OpenCL -> CPU
+
Cycles render panel set to: GPU
Rendering Time -> 15:14
User Preferences -> System -> Compute Device -> OpenCL -> CPU
OR
User Preferences -> System -> Compute Device -> None -> CPU
+
Cycles render panel set to: CPU
Rendering Time -> 2:03:43
Soooo, not only does it render about 8× faster, it also has greater rendering quality with the exact same settings, apart from the render device.
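For anyone repeating the comparison, the timings can be taken straight from the Python console; a minimal sketch that wraps a render of the current frame:

```python
import time
import bpy

# Render the current frame with whatever compute settings are active
# and report the wall-clock time, so the two configurations can be
# compared on the exact same scene.
start = time.time()
bpy.ops.render.render()
print("Render took %.1f seconds" % (time.time() - start))
```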