Help: Procedural Texture to 2D Image with Generated/Local Tex Coords (No Baking)

Summary: How do I sample/evaluate a procedural texture over Generated (orco) texture coordinates, so that the resulting image affects the object identically to the procedural texture when used for texturing (BI) with Generated coordinates and for a Displace modifier with Local texture coordinates?

Long Version w/ Code: I’m trying to save a procedural texture, mapped with Generated texture coordinates, as a 2D image, such that when the image is used with Generated texture coordinates it gives exactly the same result as the procedural texture.

Before continuing, I should clarify that I’m not interested in “baking” solutions or anything involving UV mapping; I’m looking for a Python alternative that can properly map the Generated texture coordinates onto the image’s x,y grid. Everything below is done in Blender Internal.

I have two objects with identical geometry. The first object, RIGHT, is displaced by a Displace modifier using a procedural texture with the Local texture coordinate system. For the material texture I use Generated mapping, which corresponds exactly to the Local coordinates of the Displace modifier.

The second object, LEFT, is deformed by a Displace modifier using our saved image texture (sampled from the procedural one), and similarly has a material texture using the Generated coordinate system, corresponding to the Local coordinates of the modifier.

The image texture used for LEFT is the sampled version of the procedural texture used on RIGHT.
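
For reference, here is roughly how that setup looks when created through the API. This is only a minimal sketch; the object name "RIGHT", the texture name "Wood", and the assumption that the object already has a material are mine:

import bpy

obj = bpy.data.objects["RIGHT"]          # assumed object name
tex = bpy.data.textures["Wood"]          # the procedural texture

# Displace modifier driven by the procedural texture in Local coordinates
mod = obj.modifiers.new(name="Displace", type='DISPLACE')
mod.texture = tex
mod.texture_coords = 'LOCAL'

# Material texture slot using Generated (orco) mapping, so the material
# pattern lines up with the displacement (assumes a material already exists)
mat = obj.active_material
slot = mat.texture_slots.add()
slot.texture = tex
slot.texture_coords = 'ORCO'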

I’ve been able to generate this image from the procedural texture using the evaluate function from the API. The code generating the image is below:


import bpy
import numpy as np

D = bpy.data

image_object = D.images["Untitled"]
texture = D.textures["Wood"]
rows = image_object.generated_height
columns = image_object.generated_width

# Flat RGBA pixel buffer -> (rows, columns, 4) array
data = np.asarray(image_object.pixels, dtype=float)

print("Texture Copy Started")
print(data.shape)
data3d = data.reshape(rows, columns, 4)

# Values below are arbitrary, tuned by eye to get some reasonable result;
# there is probably a formula for the proper values
scaleI = 70
scaleJ = 70

for i in range(rows):         # i = row index (image y)
    for j in range(columns):  # j = column index (image x)
        x, y = i / scaleI, j / scaleJ
        # texture.evaluate takes a 3D coordinate. How do I get the correct one?
        intensity = texture.evaluate((x, y, 0)).w
        data3d[i, j, 0] = intensity
        data3d[i, j, 1] = intensity
        data3d[i, j, 2] = intensity

# Marker: pixel (0, 0) is the bottom-left corner -> blue
data3d[0, 0, 0] = 0
data3d[0, 0, 1] = 0
data3d[0, 0, 2] = 1

# Marker: pixel (width/2, height/2) is the middle -> green
row = rows // 2
col = columns // 2
data3d[row, col, 0] = 0
data3d[row, col, 1] = 1
data3d[row, col, 2] = 0

# Marker: pixel (width-1, height-1) is the top-right corner -> red
row = rows - 1
col = columns - 1
data3d[row, col, 0] = 1
data3d[row, col, 1] = 0
data3d[row, col, 2] = 0

image_object.pixels = data3d.reshape(rows * columns * 4)
print("Copy texture completed")
print(data3d.reshape(rows * columns * 4))

This results in the following for a 256x256 image:
https://i.stack.imgur.com/NDMR2.png

My main question: how can I sample the procedural texture, taking the Generated (orco) texture coordinates into account, such that when I use the image texture on LEFT I get exactly the same result as on RIGHT, i.e. the two objects look identical?

I’ve read a lot about this, but the material is quite obscure. From here, I’ve read that Orco(v) = 2*(v - center)/size for a vertex v, but I’m not sure how to use this knowledge when computing the image.

On the other hand, I’ve read that for Generated coordinates (minX, minY) --> (0,0) and (maxX, maxY) --> (1,1), but I’m not sure how this correlates with the formula for Orco(v) above.
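
To make it concrete, here is how I currently read those two statements as code. The remap from the 0..1 Generated range to the -1..1 Orco range, and the fixed z slice, are my own assumptions and may well be where I’m going wrong:

import bpy

texture = bpy.data.textures["Wood"]
image = bpy.data.images["Untitled"]
W, H = image.generated_width, image.generated_height

pixels = list(image.pixels)  # flat RGBA list, bottom-left pixel first

# Assumption: Generated coords run 0..1 across the bounding box, and
# Orco(v) = 2*(v - center)/size remaps that to -1..1, which would be the
# space texture.evaluate() expects. The fixed z slice (flat object) is
# also an assumption.
z = 0.0
for row in range(H):
    for col in range(W):
        u = col / (W - 1)      # Generated x in 0..1
        v = row / (H - 1)      # Generated y in 0..1
        x = 2.0 * u - 1.0      # Orco x in -1..1
        y = 2.0 * v - 1.0      # Orco y in -1..1
        intensity = texture.evaluate((x, y, z))[3]
        idx = (row * W + col) * 4
        pixels[idx:idx + 3] = (intensity, intensity, intensity)

image.pixels = pixels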

I greatly appreciate any help in advance.

Here are some resources that I’ve already looked at:



Procedural textures are 3D and of limitless resolution. You’ll have to sample multiple slices of the applicable portion of the procedural texture at a fixed resolution, then find a way to read and map those slices (a stacked 2D image sequence) volumetrically.

DICOM and fractals have examples of such setups.
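
As a rough sketch of what I mean (the slice count, z range, and output path are placeholders, not something specific to your scene):

import bpy

texture = bpy.data.textures["Wood"]
size = 256       # resolution of each slice (placeholder)
slices = 16      # number of z slices (placeholder)

for k in range(slices):
    z = 2.0 * k / (slices - 1) - 1.0  # z slice in -1..1
    img = bpy.data.images.new("slice_%02d" % k, width=size, height=size)
    px = [0.0] * (size * size * 4)
    for row in range(size):
        for col in range(size):
            x = 2.0 * col / (size - 1) - 1.0
            y = 2.0 * row / (size - 1) - 1.0
            val = texture.evaluate((x, y, z))[3]
            i = (row * size + col) * 4
            px[i:i + 4] = (val, val, val, 1.0)
    img.pixels = px
    img.filepath_raw = "//slice_%02d.png" % k  # placeholder path
    img.file_format = 'PNG'
    img.save()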

I hear that you are uninterested in baking solutions, but why?

It sounds like you are trying to reinvent the wheel here.