Topaz AI Gigapixel

(Thesonofhendrix) #1

It’s the best thing since sliced bread, I think! I even purchased it after my trial ended.
Not only does it produce very nice upscaling at 150%–200% when I set it to 16-bit TIFF output, but I find I can take any of my final render images, run it through a 150% or 200% upscale, and then downscale it back to the original size in a photo editor, and it brings out detail, clarity and sharpness that wasn’t there before, without it looking oversharpened.
It’s great. It’s also great for speeding up low-res renders with some noise if you select the strong reduce-noise-and-blur option. It takes about five seconds to get a 200% larger render. Absolutely crazy. I wish this tech could be built into the Cycles engine, a bit like the denoiser, and much like NVIDIA DLSS.
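The round trip described above (upscale, then downscale back to the original size) is easy to script once the AI-upscaled image has been exported. Here is a minimal NumPy sketch of the shapes involved, using a nearest-neighbour repeat as a stand-in for the Gigapixel step (the real tool is a standalone GUI application, and would add learned detail rather than plain copies):

```python
import numpy as np

def upscale_2x(img):
    # Stand-in for the AI upscale step: nearest-neighbour repeat.
    # Gigapixel would synthesize detail here instead of copying pixels.
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def downscale_2x(img):
    # Box-filter average back to the original size; a photo editor's
    # bicubic or Lanczos resample would be the sharper choice.
    h, w, c = img.shape
    return img.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

render = np.random.rand(256, 256, 3)         # pretend this is a final render
restored = downscale_2x(upscale_2x(render))  # the 200% round trip
assert restored.shape == render.shape
```

With the naive nearest-neighbour stand-in the round trip returns the image unchanged; any extra detail in the real workflow comes entirely from what the AI model adds in the upscale step.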

(Andres7777) #2

Keep in mind that those upscalers can’t add information to the image that’s not there to begin with. It may work okay for simple renders, but it will fall apart whenever you need to upscale something that needs more information to look good than the original render contains.

Also, DLSS is used to get more samples in areas that traditional anti-aliasing doesn’t affect. Cycles, by its very nature, already takes multiple samples per pixel.

I think a good denoiser will be a lot better for final renders than any upscaler ever could be.

(Ace Dragon) #3

I agree that denoising should be a focus. DeepBlender is already getting some amazing results with his denoising solution, and he hopes to see it bundled into Blender itself in the future.

The one concern about upscaling algorithms (even if you do avoid a blurry image) is how they would handle details that are the size of a single pixel or half a pixel in the final result. Surely that couldn’t be done cleanly unless the algorithm had access to the original scene data?

(Thesonofhendrix) #4

On the contrary, it absolutely adds detail and information that wasn’t there to begin with, and even more dynamic range when you upscale a JPG into a 16-bit TIFF. Try the free demo and scrutinize it for yourself. The deep-learning AI is trained to add detail that it thinks should belong there; that’s how the program functions.

(Thesonofhendrix) #5

Well, I’ve tried upscaling a final render image with AI Gigapixel and then downscaling it again, with excellent results. So Blender’s denoiser has a lot of catching up to do. I’m not sure which denoiser you are thinking of, but it must be seriously powerful compared to Blender’s.

(Andres7777) #6

Then I think that developing and training such an algorithm to produce consistent results is far too complex a feature to be incorporated into Blender, and I would advise you to stick with the workflow that already works for you.

Of course one specific feature in free software developed by one person will deliver worse results than a commercial application with a budget behind it, intended to do one thing and one thing only.

(Ace Dragon) #7

I got some insight into the technology from Topaz’s website.

It looks like a more extreme upscale (>2x) will end up giving an impressionist look to details that are only a few pixels in size in the source image. Since this is based on deep learning, it will likely be highly dependent on what the algorithm was trained on.

So for a 200 percent resize it could be enough, but don’t expect this to be a magic bullet that will let you blow through multiple renders an hour.

(Thesonofhendrix) #8

Yeah, there is that, but I wouldn’t be that cynical about it. I do think it will be integrated into Blender eventually, as people are toying with AI/deep learning all the time now, even just as side projects. And as Ace Dragon said, DeepBlender is already making great progress, and that’s a similar method; it probably uses generative adversarial networks.

(Thesonofhendrix) #9

I agree it’s never going to be 95 or 100% accurate to ground truth, but as we know, using lower samples and Blender’s denoiser will also give inaccurate or impressionist-looking results compared to just brute-forcing 10k samples. But I find myself using far fewer samples than I used to, now that we have the denoiser.

(Ace Dragon) #10

You can actually get pretty good results with the detail setting at 0.4, the radius at 10, and the direct lighting passes skipped. You might get some untouched fireflies, but no details that matter will be blurred out or butchered if enough samples are used.
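For anyone who prefers to set this up once per scene file, the settings above can also be applied from Blender's Python console. This is only a sketch assuming Blender 2.79's per-render-layer denoising properties, and I'm guessing that the "detail" setting corresponds to the Feature Strength slider — check the property names against your Blender version:

```python
import bpy  # only available inside Blender

layer = bpy.context.scene.render.layers.active.cycles
layer.use_denoising = True
layer.denoising_feature_strength = 0.4  # assumed to be the "detail" setting
layer.denoising_radius = 10
# Skip the direct lighting passes, as suggested above:
layer.denoising_diffuse_direct = False
layer.denoising_glossy_direct = False
layer.denoising_transmission_direct = False
layer.denoising_subsurface_direct = False
```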

DeepBlender’s work, though, is obviously higher quality and seems to scale up. I can’t wait to see it as a compositing node, or even an automatic feature that kicks in when rendering is done.

(Thesonofhendrix) #11

Thanks for the suggestion. You see, most casual users wouldn’t know this; they’d just turn denoising on and leave it at that, rendering at maybe 200–500 samples.
Also, I recommend you try AI Gigapixel; v2.00 may really surprise you. It keeps blowing my mind.


What makes you believe that only commercial applications are capable of producing consistent results for this kind of task?