Texture caching and Tiled EXR - Can we get these features?

Hi guys,

I’m mainly a vfx guy, so I use high resolution textures all the time. Tiled EXR has been around for a while now and it is a pretty important part of vfx and animation. I generate 8K textures all the time and I need an efficient way to handle them without going through a bunch of downsizing optimizations. This is where tiled EXR comes in handy. I sent a report a while ago about tiled EXR, but I haven’t heard anything since. I’m going to assume it is not being worked on, which I think is a mistake. So I’m pleading that we get this in 2.8. Anyone agree?

By “tiled exr” are you referring to what Nuke calls a “Layer Contact Sheet”? Basically it shows an array of images for all the layers in an exr.

Which is one reason I use Nuke instead of Blender for compositing…

This is basically what I mean.

This allows the renderer to load just parts of a given texture during rendering in order to save memory. When you are using tiled OpenEXR files with mipmaps, V-Ray automatically recognizes this and loads the appropriate tiles during rendering instead of loading the whole texture.

So anything you don’t see doesn’t get loaded. I can fill up a city with tons of buildings and textures and the render engine will load only the portion of each texture that is needed.

I’m working on a big planetary zoom and downloaded a NASA texture of about 80k x 40k. I mapped this onto a sphere. Obviously, I don’t need the render engine to load the portion of the texture on the other side of the planet, and I don’t need most of the texture when the camera is zoomed in on a small part of the map.
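For reference, preparing a texture like that is mostly a matter of converting it to a tiled, mipmapped EXR up front. Here is a minimal sketch with the OpenImageIO C++ API (file names, tile size, and data format are just placeholders; OpenImageIO’s maketx command-line tool does the same job):

```cpp
// Sketch only: convert a huge source map into a tiled, mipmapped EXR
// with OpenImageIO, so a texture cache can later read it tile by tile.
// File names, tile size and data format are placeholders.
#include <OpenImageIO/imagebuf.h>
#include <OpenImageIO/imagebufalgo.h>

int main()
{
    using namespace OIIO;

    ImageBuf source("earth_80k.tif");  // e.g. an 80k x 40k source image

    ImageSpec config;
    config.tile_width  = 64;           // output tile size
    config.tile_height = 64;
    config.set_format(TypeDesc::HALF); // half float keeps the file smaller

    // make_texture writes the tiled output together with a full mipmap chain.
    bool ok = ImageBufAlgo::make_texture(ImageBufAlgo::MakeTxTexture,
                                         source, "earth_80k_tiled.exr", config);
    return ok ? 0 : 1;
}
```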

It was mentioned at the last Blender Conference.

The fact that a topic does not make the news does not mean that it is not on the to-do list.


Yes, but where is it on the priority list? I believe it’s always been on the list, but I posted a report on this 1.5 years ago.

https://developer.blender.org/T49417

This is one of those things (like PTex) that are fairly easy in a CPU renderer, where you can do anything (e.g. file system or even network access) at any point during rendering (e.g. shader evaluation).

On the GPU, this requires an entire new system for suspending ray evaluation and loading data outside of the kernel invocation. A simpler (and less robust) pre-processing step could also be used to just load data that is in the view (similar to how adaptive subdivision works).

Since neither solution is both straightforward and appealing, I wouldn’t bank on getting such features in the near future.
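To make the CPU-side picture a bit more concrete, here is a rough sketch (not actual Cycles code) of what a shading-time lookup through OpenImageIO’s texture cache looks like; the cache only reads the tiles and mip levels that the lookup’s coordinates and derivatives actually touch:

```cpp
// Rough sketch, not Cycles code: a shading-time texture lookup going
// through OpenImageIO's texture cache. Tiles are read from disk on demand
// and kept in a bounded in-memory cache.
#include <OpenImageIO/texture.h>
#include <cstdio>

int main()
{
    using namespace OIIO;

    TextureSystem *texsys = TextureSystem::create();

    ustring file("earth_80k_tiled.exr");  // tiled, mipmapped EXR (placeholder)
    TextureOpt opt;                       // wrap mode, filtering, MIP mode, ...
    float rgb[3];

    // One lookup, as a shader evaluation would issue it. The derivatives
    // describe the ray footprint and drive the mip level selection.
    bool ok = texsys->texture(file, opt,
                              /*s=*/0.25f, /*t=*/0.75f,
                              /*dsdx=*/1e-4f, /*dtdx=*/0.0f,
                              /*dsdy=*/0.0f,  /*dtdy=*/1e-4f,
                              /*nchannels=*/3, rgb);
    if (ok)
        std::printf("%g %g %g\n", rgb[0], rgb[1], rgb[2]);

    TextureSystem::destroy(texsys);
    return ok ? 0 : 1;
}
```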

So it’s kind of like Sparse Textures. Not sure that would be of use for the CUDA/OpenCL renderers, but it could potentially be interesting for the viewport/EEVEE.

I am still hoping for SVG vector textures for Blender Cycles and Eevee. It seems CPU renderers allow for more flexibility texture-wise.

Sounds like GPU rendering isn’t for large final render output.

It isn’t. Yet.

The work I presented at BlenderConf 2017 is in the code repository here:
https://git.blender.org/gitweb/gitweb.cgi/blender.git/shortlog/refs/heads/cycles_texture_cache

Since it’s relying on OpenImageIO’s texture cache, this feature works only with CPU rendering for now. There are a few known issues preventing it from going to master, but I hope to tackle them one day.
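For anyone wondering how the memory stays bounded with huge texture sets: OpenImageIO’s texture cache keeps only a fixed budget of decoded tiles resident and evicts the rest, and that budget is tunable. A tiny sketch (the values are examples, not what the branch uses):

```cpp
// Tiny sketch: capping the texture cache's footprint through OpenImageIO's
// TextureSystem attributes. The values are examples, not the branch's defaults.
#include <OpenImageIO/texture.h>

int main()
{
    using namespace OIIO;

    TextureSystem *texsys = TextureSystem::create();

    // Keep roughly 1 GB of decoded tiles in RAM; older tiles get evicted
    // and are re-read from disk if they are needed again.
    texsys->attribute("max_memory_MB", 1024.0f);

    // Limit how many texture files are kept open at once.
    texsys->attribute("max_open_files", 100);

    // ... CPU render threads issue texture() lookups against this cache ...

    TextureSystem::destroy(texsys);
    return 0;
}
```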

Could it work with the current GPU+CPU solution?

I can see how something like Texture caching and Tiled EXR would be really handy for stuff like that; even when not using a GPU to render, it would still save a lot of system RAM. In some ways it’s stuff like this, which, while maybe not of as much interest to the usual hobby user, goes a lot more towards some of that workflow/pipeline discussion in the other thread: things that can be done that would make Blender more likely to be used in a studio environment.

PS. On looking at your website, you have a spelling error on the initial page. Under 3D and CG Services, “enginering” is missing an “e”.

Not at the moment, no. CPU render threads are able to call any 3rd-party API at any time and are free to read from disk as they need, which makes adding texture caching to CPU rendering comparatively easy. The GPU, on the other hand, does not have that capability: GPU threads can’t fetch data from disk; they can only access what is already in VRAM or, with certain limitations, RAM. Logic to feed arbitrary data from disk to the GPU through the CPU inevitably becomes more complex and, in many cases, much slower too.

Thanks for pointing that out. I’ll take care of that now.

I’ll give this a shot, but it looks like I need to learn how to compile this code. Thanks.

Anyway, it looks like CPU render farms are still the way to go. Multiple GPUs in a workstation probably make more sense for asset creation.

The world relies on bigger and bigger textures coming out of Substance Painter, 3D Coat, and Mari. If we use the Principled shader, each object will require at least a set of four textures, all of them painted at a minimum of 4K.
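To put rough numbers on that (assuming uncompressed 8-bit RGBA and no mipmaps): a single 4K map is 4096 × 4096 × 4 bytes ≈ 64 MB, so a four-map set is already around 256 MB per object, and an 8K set is roughly four times that, about 1 GB. Multiply that across a scene and a tile cache pays for itself very quickly.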

Would it be possible to have a pre-process that divides the render between render tiles for the GPU and render tiles for the CPU?
The user could choose to have a certain percentage of the render done by the GPU and the rest by the CPU.
The portion of the render done by the GPU could correspond to a big, heavy texture tile sent to VRAM. The memory gain would then only concern the tiles rendered by the CPU.
Actually, if you are using the hybrid GPU/CPU solution, you are already trading some speed for a memory gain.
It could become an adjustable balance. But it is just an idea thrown in the air; I don’t know if it is feasible.

Edit: OK! I just realized why it does not make sense for an animation. It would mean a different cache per frame.

No. You’d have to know in advance what texture data are needed for a certain tile. But you don’t know that until you have traced all the rays. Path tracing is a random walk through the entire scene.

PM me if you need help with that.

Thanks. Appreciate that.