Topaz AI Gigapixel

It’s the best thing since sliced bread, I think! I even purchased it after my trial ended.
Not only does it produce very nice upscaling at 150%–200% when I set it to 16-bit TIFF output, but I find I can take any of my final render images, run it through a 150% or 200% upscale, and then downscale it back to the original size in a photo editor, and it brings out detail, clarity and sharpness that wasn’t there before, without looking oversharpened.
It’s great, and also great for speeding up low-res renders with some noise if you select the strong reduce-noise-and-blur option. It takes about 5 seconds to get a 200% larger render. Absolutely crazy. I wish this tech could be built into the Cycles engine, a bit like the denoiser, and much like Nvidia DLSS.


Keep in mind that those upscalers can’t add information to the image that isn’t there to begin with. It may work okay for simple renders, but it will fall apart whenever you need to upscale something that needs more information to look good than the original render contains.
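To make that point concrete, here is a small self-contained sketch (plain Python, no image libraries) showing why sub-pixel detail is unrecoverable from pixels alone: a one-pixel checkerboard and a flat grey image box-downscale to the *exact same* low-res image, so any upscaler has to guess which one it is looking at.

```python
# Demonstration: averaging-downscale destroys sub-pixel detail.
# A 1-pixel checkerboard and a flat grey image produce identical
# downscaled results, so no upscaler can tell them apart from the
# pixels alone -- it can only hallucinate a plausible answer.

def box_downscale(img, factor):
    """Average each factor x factor block of pixels into one pixel."""
    out = []
    for y in range(0, len(img), factor):
        row = []
        for x in range(0, len(img[0]), factor):
            block = [img[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# 4x4 checkerboard alternating 255/0, and a flat mid-grey image.
checker = [[255 if (x + y) % 2 == 0 else 0 for x in range(4)] for y in range(4)]
flat = [[127.5] * 4 for _ in range(4)]

print(box_downscale(checker, 2))  # [[127.5, 127.5], [127.5, 127.5]]
print(box_downscale(flat, 2))     # identical output
```

Both inputs collapse to the same 2×2 grey image, which is exactly why a deep-learning upscaler has to be *trained* to guess what detail "should" be there.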

Also, DLSS is used to get more samples in areas that traditional antialiasing doesn’t affect. Cycles, by its very nature, already takes multiple samples per pixel.

I think a good denoiser will be a lot better for final renders than any upscaler ever could be.


I agree that denoising should be the focus. DeepBlender is already getting some amazing results with his denoising solution and hopes to see it bundled into Blender itself in the future.

My one concern about upscaling algorithms (even if you do avoid a blurry image) is how they would handle details that are the size of a single pixel or half a pixel in the final result. Surely that couldn’t be done cleanly unless the algorithm had access to the original scene data?

On the contrary, it absolutely adds detail and information that wasn’t there to begin with, and even more dynamic range when you upscale a JPG into a 16-bit TIFF. Try the free demo and scrutinize it for yourself. The deep-learning AI is trained to add detail that it thinks should belong there; that’s how the program functions.

Well, I’ve tried upscaling a final render with AI Gigapixel and then downscaling it again, with excellent results. So Blender’s denoiser has a lot of catching up to do. I’m not sure which denoiser you are thinking of, but it must be seriously powerful compared to Blender’s.

Then I think that developing and training such an algorithm to produce consistent results is far too complex a feature to be incorporated into Blender, and I would advise you to stick to the workflow that already works for you.

Of course, one specific feature in free software developed by one person will deliver worse results than a commercial application with a budget behind it, intended to do one thing and one thing only.


I got some insight into the technology from Topaz’s website.

It looks like a more extreme upscale (>2x) will end up giving an impressionist look to details that are only a few pixels in size in the source image. Since this is based on deep learning, it will likely be highly dependent on what the algorithm was trained on.

So for a 200 percent resize it could be enough, but don’t expect this to be a magic bullet that will let you blow through multiple renders an hour.

Yeah, there is that, but I wouldn’t be that cynical about it. I do think it will be integrated into Blender eventually, as people are toying with AI/deep learning all the time now, just as their own side projects. And as Ace Dragon said, DeepBlender is already making great progress, and that’s a similar method; it probably uses generative adversarial networks.

I agree it’s never going to be 95 or 100% accurate to the ground truth, but as we know, using lower samples and Blender’s denoiser will also give inaccurate or impressionist-looking results compared to just brute-forcing 10k samples. But I find myself using far fewer samples than I used to, now that we have the denoiser.

You can actually get pretty good results with the detail setting at 0.4, the radius at 10, and the direct lighting passes skipped. You might get fireflies that are left untouched, but no details that matter will be blurred out or butchered if enough samples are used.
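For anyone who wants to script that setup rather than click through the UI, here is a hedged sketch of roughly those settings via Blender’s Python API. This assumes the 2.79-era per-render-layer Cycles denoiser; the exact property names and their location may differ in other Blender versions, so treat it as a starting point, not gospel.

```python
# Hypothetical sketch of the denoiser recipe described above,
# assuming Blender 2.79's per-render-layer Cycles denoiser API.
# Property names may differ in newer Blender versions.
import bpy

layer = bpy.context.scene.render.layers.active.cycles

layer.use_denoising = True
layer.denoising_feature_strength = 0.4  # the "detail" (feature) strength
layer.denoising_radius = 10             # neighbourhood radius in pixels

# Leave the direct lighting passes un-denoised, as suggested above:
layer.denoising_diffuse_direct = False
layer.denoising_glossy_direct = False
layer.denoising_transmission_direct = False
layer.denoising_subsurface_direct = False
```

This only runs inside Blender’s own Python interpreter, of course, since it depends on the `bpy` module and an open scene.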

DeepBlender’s work, though, is obviously higher quality and seems to scale up. I can’t wait to see it as a compositing node, or even an automatic feature that kicks in when rendering is done.

Thanks for the suggestion. You see, most casual users wouldn’t know this; they would just switch denoising on, and that’s it, rendering at maybe 200–500 samples.
Also, I recommend you try AI Gigapixel. V2.00 may really surprise you; it keeps blowing my mind.

What makes you believe only commercial applications are capable of producing consistent results for this kind of task?

I know it’s an old thread, but I just found it now. Does this AI Gigapixel work on animations?

Yes, it’s easy to batch-process many image files in AI Gigapixel.


I’m also an AI Gigapixel aficionado. I used to use PhotoZoom Pro, and I also own Blow Up, but AI Gigapixel is the upscale king at the moment. It saves my ass when a client asks for a realistic rendering of 6000 x 6000 pixels: upscaling a 2000 x 2000 pixel rendering with AI Gigapixel works fine.


And you don’t see artifacts or flickering around edges?

For example, when using the OptiX denoiser on animations you see ‘clouds’ moving, because the AI doesn’t take the previous and next frames of the animation into account. I suppose it must be the same with AI enlargement, or is this one more temporally stable?

None of the AI-based denoisers currently usable in Blender have animation support. However, there are denoisers that are capable of dealing with that.
The same technique could also be used for super resolution, at least for rendered animations. That way, the frames should be temporally smooth, and the quality of the individual frames might even improve too.