Ripples, Waves, Normalmaps and Textures with numpy, scipy and matplotlib

I am just doodling around to learn about image/texture generation and manipulation using python. Numpy is useful since it does fast array manipulation, scipy has an extensive image manipulation library, and PIL I'll be using just to save and read image files. As I become familiar with the potential of this combination I'll post examples. This is the first example, a short program that takes an idea from http://fragsworth.com/video_functions/ and expands it to use colours.


from math import *
import numpy as np
from PIL import Image


def render(res):
    h,w = res
    img = np.zeros((h,w,4), np.uint8) # an hxwx4 array with the 4 part holding the RGBA data
    for i in range(h):
        for j in range(w):
            x, y = (i-h/2)*4*2*pi/h, (j-w/2)*4*2*pi/w  # chosen to have some repeats
            # may need to normalise these some other way if using functions other than sin/cos
            r = sin(x*y*y)  # red
            b = cos(x*y+pi) # blue
            g = cos(y)*cos(x*y) # green
            col = [c*128+127 for c in (r,g,b,1)] # map [-1,1] to [0,255] and add an alpha channel
            img[i,j] = col  # assign the [r,g,b,a] values to the 4-element pixel at (i,j)
    
    pilimg = Image.fromarray(img, 'RGBA')
    pilimg.save('result.png') # JPEG cannot store RGBA, so save as PNG

if __name__ == '__main__':    
    render(res=(500, 500))

This is the trippy result

http://farm3.static.flickr.com/2559/3791168247_a0042e63cb_o.jpg

If you have anything more trippy please post. Next up will be some filter examples.

After posting the above I realised I hadn’t actually used any array methods. The following code does the same thing as above, but about 16 times faster.


import numpy as np
from PIL import Image

def render(res):
    w,h = res
    x,y = np.arange(-w//2, w//2)*4*2*np.pi/w, np.arange(-h//2, h//2)*4*2*np.pi/h  # chosen to have some repeats
    r = np.sin(np.outer(x,y)*y)  # np.outer generates the complete matrix for the red channel
    g = np.cos(y)*np.cos(np.outer(x,y)) # green
    b = np.cos(np.outer(x,y)+np.pi) # blue
    img = (np.array((r, g, b))*128+127).transpose(1,2,0).astype(np.uint8) # no alpha this time
    
    pilimg = Image.fromarray(img, 'RGB')
    pilimg.save('result.jpg')

if __name__ == '__main__':    
    render(res=(500, 500))


Each of the images r, g, b is an individual grey scale. Combining them like this, img = np.array((r,g,b)), creates an array of shape (3,w,h). But the shape of the rgb data in an image file is (w,h,3). What is wanted is that img[j,k] indexes a sequence containing (r,g,b). transpose(1,2,0) accomplishes this by moving the axis 0 data, which was (r,g,b), so that it is indexed last instead of first.
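To see the reshuffle concretely, here is a minimal sketch (the array contents are made up purely for illustration):

```python
import numpy as np

# three single-channel (grey scale) images of the same size
h, w = 4, 6
r = np.zeros((h, w))
g = np.ones((h, w))
b = np.full((h, w), 2.0)

stacked = np.array((r, g, b))      # shape (3, h, w): channel axis first
img = stacked.transpose(1, 2, 0)   # shape (h, w, 3): channel axis last

# each pixel now indexes its own (r, g, b) triple
assert img.shape == (h, w, 3)
assert list(img[0, 0]) == [0.0, 1.0, 2.0]
```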

I gave an explanation in the previous thread as to what transpose is doing.

There are a number of filters defined in scipy for image manipulation. generic_filter1d will filter along an axis direction. For images stored as numpy arrays the axis can be x, y or the colour (r,g,b) axis.

Below, def fnc is a callback function which accepts a line of data, iline, from the image array and from it produces the altered line, oline. It can also accept extra_arguments, in this case a and b, but could have more or fewer arguments as is your wont. It slices the lines in various ways and adds them together to produce an oline. How the lines are sliced is determined by filter_size, and the slicing must match the filter size. Additional documentation is available here: http://docs.scipy.org/doc/scipy/reference/tutorial/ndimage.html. The added slices are normalised by dividing by the sum of the weights, (3+a+b). This produces a blur over 9 pixels along the y axis:


import numpy as np
from PIL import Image
from scipy.ndimage import filters

def fnc(iline, oline, a, b):
    oline[...] = (iline[:-8]+a*iline[2:-6] + iline[4:-4] + b * iline[6:-2]+iline[8:])/(3+a+b)
    
if __name__ == '__main__':

    iclan = Image.open('xilitla.jpg')
    img = np.asarray(iclan)
    fimg = filters.generic_filter1d(img, fnc, filter_size = 9, axis = 0, extra_arguments = (3,2))
    pilImage = Image.fromarray(fimg, 'RGB')
    pilImage.save('result.jpg')

Original image
http://farm3.static.flickr.com/2431/3797791216_1f0cd30699_o.jpg

Filter image (vertical blur):
http://farm4.static.flickr.com/3465/3797791276_bb34b68fd6_o.jpg

The rgb data can also be filtered with axis=2. Since the colour axis is being mixed (blurred) this tends to desaturate the image colour:


import numpy as np
from PIL import Image
from scipy.ndimage import filters

def fnc(iline, oline, a, b):
    oline[...] = (a*iline[:-2] + iline[1:-1] + b*iline[2:])/(1+a+b)
    
if __name__ == '__main__':

    iclan = Image.open('xilitla.jpg')
    img = np.asarray(iclan)
    fimg = filters.generic_filter1d(img, fnc, filter_size = 3, axis = 2, extra_arguments = (3,2))
    pilImage = Image.fromarray(fimg, 'RGB')
    pilImage.save('result2.jpg')

http://farm4.static.flickr.com/3497/3796995079_fbc83e5acc_o.jpg

hi sornen.
very nice work indeed.
:slight_smile:

Thanks, I hope that as the thread progresses, much better things will come. The doodling at the moment is just for learning. The real aim of the thread is to generate useful content.

Incidentally, this is a good way of learning stuff: start simple, play around, see where it leads. Kind of the experimental approach to learning.

All these programs are tested with my IDE of choice; eclipse with the pydev plugin.


http://pydev.sourceforge.net/

Convolution

Firstly, I’ve decided to change from PIL to matplotlib, which gives far more flexibility in plotting the results. matplotlib handles png natively but can use PIL to load jpg and other formats, so if you load jpg files using matplotlib make sure PIL is installed as well. That said, imsave from matplotlib does not seem to be included in my version, so I have to use PIL for saving. Further, imread from matplotlib appears to read the image upside down; it can be turned the right way up with np.flipud. On Linux at least, PIL is usually packaged as “python-imaging”.
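As a quick illustration of the np.flipud fix (a minimal sketch, independent of any particular matplotlib version):

```python
import numpy as np

a = np.array([[1, 2],
              [3, 4]])

flipped = np.flipud(a)  # reverses the vertical (row) axis only

assert flipped.tolist() == [[3, 4], [1, 2]]
```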

Convolution is probably the most useful imaging filter, capable of a wide variety of effects. In scipy.ndimage a basic convolution is performed by taking an image ‘img’ and convolving it with another array of weights, ‘wgt’. The mode parameter decides what happens to the convolution at the edges of the image.


filters.convolve(img, wgt, mode='reflect')

  1. The wgt array has to have the same arrangement of axes as the img array. In numpy image.shape = (h,w,3) for ‘rgb’ images and (h,w,4) for ‘rgba’ images.

  2. For use with images, wgt should be normalised so that the sum of all its elements is between -1.0 and 1.0.

I have not seen convolution of the colour axis before, so this looks like fun to examine first.

With ‘wgt’ we are not restricted to just 3 channels such as ‘rgb’; we can have as many colour channels as we desire. Below we use a convenience function, def colourwgt(colours, scale), which takes a list and converts it into a normalised array (sum = 1.0) with the correct shape for use in the convolution. The scale parameter can further be used to scale the results and should have a value between -1.0 and 1.0.

Since we are only interested in the colour axis, the height and width in the shape can both be 1.


from PIL import Image
import numpy as np
from scipy.ndimage import filters
import matplotlib.pyplot as plt

def colourwgt(colours, scale = 1):
    c = np.array(colours)
    return (c*float(scale)/c.sum()).reshape(1,1,len(c))

   
if __name__ == '__main__':

    pmg = plt.imread('xilitla.jpg')
    img = np.asarray(pmg)
    
    wgt = colourwgt([5,10,-5,0,0,0,0,5],-0.8)
    
    convimg = filters.convolve(img, wgt, mode='reflect')
    
    plt.imshow(convimg, origin = 'lower')

    """ If you want to save the image uncomment these below """
    # pilimg = Image.fromarray(np.flipud(convimg), 'RGB')
    # pilimg.save('img0.jpg')
    # or this may work
    #plt.imsave('img0', convimg, format='png')
       
    plt.show()

plt.imshow will pop up a window with the resulting image which can be saved, as well as viewed in various ways. This is the result:
http://farm3.static.flickr.com/2545/3798861369_0181c3b51b_o.jpg

Although the wgt has been normalised there is still saturation in one of the colour channels, the blue channel. Adjusting the scale to 0.8 will fix this.
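As an aside on why the scaling matters: arithmetic on uint8 pixel arrays in numpy wraps around modulo 256 rather than saturating, so it is usually safer to compute in floats and clip before casting back. A small sketch (the values here are arbitrary):

```python
import numpy as np

a = np.array([200, 250], dtype=np.uint8)
b = np.array([100, 100], dtype=np.uint8)

wrapped = a + b  # uint8 arithmetic wraps modulo 256

# compute in float, clip to the displayable range, then cast back
safe = np.clip(a.astype(float) + b.astype(float), 0, 255).astype(np.uint8)

assert wrapped.tolist() == [44, 94]
assert safe.tolist() == [255, 255]
```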


wgt = colourwgt([5,10,-5,0,5],scale = 0.8)

http://farm3.static.flickr.com/2603/3799678722_9e75cdbd98_o.jpg

What happens if we add more zero colour channels to our weight? This appears to be similar to adding a colour correction: the overall image is less blue, but blue and red have also been swapped.


wgt = colourwgt([5,10,-5,0,0,0,0,5],scale = 0.8)

http://farm3.static.flickr.com/2577/3799678774_d33261d6c8_o.jpg

Finally with a negative scale the image has a more mysterious, softer quality, and is the colour negative of the second image.


wgt = colourwgt([5,10,-5,0,5],scale = -0.8)

http://farm3.static.flickr.com/2604/3799678752_9e75cdbd98_o.jpg

The weights chosen are just random choices; they are the only ones I have had time to examine, and yet they seem to produce fairly interesting and pleasing changes in the images with only a few lines of code.

Correction of a correction: matrices are indexed as (height, width), so it is natural to assign coordinates as (y, x) rather than (x, y). Also, the code fragment in the post above


img = np.asarray(pmg)

is redundant and left over from using PIL.

numpy has several different types of vector multiplication.
a) vector * scalar = vector
b) vector * vector = vector (elementwise)
c) row-vector * col-vector = matrix


a = np.array([1,2])
b = np.array([3,4])

# a)
a*5
>>> array([ 5, 10])

# b)
a*b
>>> array([3, 8])

# c)
b.reshape(2,1)
>>>array([[3],
       [4]])

a*b.reshape(2,1)
>>>array([[3, 6],
       [4, 8]])

Where above we used np.outer(x,y), the same effect can be had by reshaping y into a column vector.
Generally we are interested in vectors x and y representing coordinate axes, like this:


x = np.arange(6)
y = np.arange(8).reshape(8,1)
x
>>> array([0, 1, 2, 3, 4, 5])
y
>>> array([[0],
       [1],
       [2],
       [3],
       [4],
       [5],
       [6],
       [7]])

# more succinctly
h,w = (6,8)
y,x = np.ogrid[0:w, 0:h]
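The equivalence between np.outer and broadcasting against a column vector can be checked directly:

```python
import numpy as np

x = np.arange(1, 4)      # [1, 2, 3]
y = np.arange(1, 5)      # [1, 2, 3, 4]

via_outer = np.outer(x, y)        # shape (3, 4)
via_bcast = x.reshape(-1, 1) * y  # column vector times row vector

assert via_outer.shape == (3, 4)
assert np.array_equal(via_outer, via_bcast)
```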

A 2-D gaussian can be readily produced by multiplying two 1-D gaussians, and here we use vectors as coordinates to create our image:


from PIL import Image
import numpy as np
from scipy.ndimage import filters
import matplotlib.pyplot as plt
import matplotlib.cm as cm

def gauss(size, std=(1.5,1.5)):
    """ size = (h,w) 
        std - (x,y) standard deviation """
    stdx,stdy = std
    h,w = size
    a,b = h/2.0, w/2.0
    y, x = np.ogrid[0:w, 0:h]
    # scaled so that it is centered and independent of size  
    gauss = np.exp(-np.square(stdx*(x-a)/a))*np.exp(-np.square(stdy*(y-b)/b))
    return gauss/gauss.sum() # normalise
   
if __name__ == '__main__':
    h,w = 20,20
    wgt = gauss((h,w))
    plt.imshow(wgt, cmap = cm.gray, origin = 'lower')

    """ If you want to save the image uncomment these below """
    #pilimg = Image.fromarray(np.flipud(wgt*255/np.max(wgt)).astype(np.uint8), 'L')
    #pilimg.save('gauss.jpg')
    # or this may work
    #plt.imsave('gauss', wgt, format='png')
       
    plt.show()

The result:
http://farm3.static.flickr.com/2562/3807723592_fe4665bfcf_o.jpg

def gauss can be used to produce a gaussian blur


from PIL import Image
import numpy as np
from scipy.ndimage import filters
import matplotlib.pyplot as plt
import matplotlib.cm as cm

def gauss(size, std=(1.5,1.5)):
    """ size = (h,w) 
        std - (x,y) standard deviation """
    stdx,stdy = std
    h,w = size
    a,b = h/2.0, w/2.0
    y, x = np.ogrid[0:w, 0:h]
    # scaled so that it is centered and independent of size  
    gauss = np.exp(-np.square(stdx*(x-a)/a))*np.exp(-np.square(stdy*(y-b)/b))
    return gauss/gauss.sum() # normalise
   
if __name__ == '__main__':
    h,w = 20,20
    wgt = gauss((h,w))
    img = plt.imread('xilitla.jpg')   
    convimg = filters.convolve(img, wgt.reshape(h,w,1), mode='reflect')
    plt.imshow(convimg, cmap = cm.gray, origin = 'lower')

    """ If you want to save the image uncomment these below """
    #pilimg = Image.fromarray(np.flipud(convimg).astype(np.uint8), 'RGB')
    #pilimg.save('gblur.jpg')
    # or this may work
    #plt.imsave('img0', wgt, format='png')
       
    plt.show()

Blurred:
http://farm3.static.flickr.com/2562/3806907319_50a614e9cf_o.jpg

Next up, something useful??

Normal maps

There are many types of filters that can be applied by convolution, and used for such things as edge detection and smoothing. The common ones are typically seen in graphic editors such as the gimp and photoshop. Other weights or filters for the common effects can be found by searching google eg.
http://www.ozone3d.net/tutorials/image_filtering.php

However, there seems to be no information on how to produce a normal map from an image, although Nvidia does produce a photoshop plugin that does this. With numpy and scipy it is possible to produce normal maps with a minimum of code, the core routine being less than 10 lines.

A little theory

A field in 3 dimensions is a function of the x,y,z coordinates, g(x,y,z). The gradient of a field, symbolised by ∇, is defined as the vector (dg/dx, dg/dy, dg/dz), where dg/dx, dg/dy, dg/dz are the partial derivatives. For an image treated as a surface z = f(x,y), take g(x,y,z) = f(x,y) - z; the gradient of g is then in the same direction as the surface normal. The x derivative of the image can be found by applying a simple filter (1,-1), the y derivative is the column-vector form of (1,-1), and the z derivative of g is the constant -1. Since the z derivative is going to be the blue component of the normal map, its value is made positive. Once the gradient is obtained it can be normalised and the three components combined into an rgb image.
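The derivative filters can be sanity-checked on a synthetic ramp image. Here I use the symmetric central-difference kernel [0.5, 0, -0.5] rather than the bare (1,-1) pair, to sidestep the centring ambiguity of even-sized kernels; a sketch only, not the exact filters used in the class below:

```python
import numpy as np
from scipy.ndimage import convolve

# a horizontal ramp f(x, y) = x, whose x derivative is 1 everywhere
img = np.tile(np.arange(6, dtype=float), (4, 1))

gradx = np.array([[0.5, 0.0, -0.5]])  # central-difference x-derivative kernel
dx = convolve(img, gradx, mode='nearest')

# away from the left/right borders the estimated derivative is exactly 1
assert np.allclose(dx[:, 1:-1], 1.0)
```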

More complex gradient filters can be used that also partly smooth the image. class Normalmap has, to date, two gradient filters called smooth and fine. The class method def get() does all the work of calculating the normal map.

Images are read in png format, converted to 64-bit floats, and averaged over the RGB axis to give a grey scale matrix. Little or no compression should be used in producing the png file. Images formatted as jpg have compression artefacts, should not be used, and may produce the wrong normal map.


'''
Name: normap.py
Author: K. Sornen
Copyright: K. Sornen 2009
License: GPL, THIS SOFTWARE IS PROVIDED FREE "AS IS" WITHOUT 
ANY EXPRESSED OR IMPLIED WARRANTIES
'''
from PIL import Image
import numpy as np
from scipy.ndimage import filters
import matplotlib.pyplot as plt

class Normalmap(object):
    def __init__(self, img):
        """ img is a gray scale image as a float64 matrix 
            to be turned into a normalmap """
        self.img = img.astype(np.float64) # convert to float64 just in case
        self.shape = self.img.shape
        
    def get(self, type = 'smooth', mode='reflect'):
        """ type can be 'smooth' or 'fine'
            mode can be 'reflect','constant','nearest','mirror', 'wrap' for handling borders """ 
        gradfn = {'smooth':self.prewittgrad, 'fine':self.basicgrad}[type]
        gradx, grady = gradfn()
        # x, y and z below are now the gradient matrices,
        # each entry from x,y,z is a gradient vector at an image point
        x = filters.convolve(self.img, gradx, mode=mode)
        y = filters.convolve(self.img, grady, mode=mode)
        # norm is the magnitude of the x,y,z vectors, 
        # each entry is the magnitude of the gradient at an image point and z*z = 1
        norm = np.sqrt(x*x+y*y+1)  
        # divide by the magnitude to normalise
        # as well scale to an image: negative 0-127, positive 127-255
        x,y = [a/norm*127.0+128.0 for a in (x,y)] 
        z = np.ones(self.shape)/norm # generate z, matrix of ones, then normalise
        z = z*255.0 # all positive
        # x, -y gives blender form 
        # convert to int, transpose to rgb and return the normal map        
        return np.array([x, -y, z]).transpose(1,2,0).astype(np.uint8) 
        
    def prewittgrad(self):
        """ prewitt gradient """
        gradx = np.array([1.0,1.0,1.0]).reshape(3,1)*np.array([1.0,0.0,-1.0])/3.0
        grady = np.array([1.0,0.0,-1.0]).reshape(3,1)*np.array([1.0,1.0,1.0])/3.0
        return gradx, grady
    
    def basicgrad(self):
        """ basic gradient """
        gradx = np.array([1,-1]).reshape(1,2)
        grady = np.array([1,-1]).reshape(2,1)
        return gradx, grady
   
if __name__ == '__main__':
    
    pmgc = plt.imread('sc.png').astype(np.float64)
    pmgc = pmgc.sum(axis=2)  # sum RGB to give gray scale 
    pmgf = plt.imread('sf.png').astype(np.float64)
    pmgf = pmgf.sum(axis=2)  # sum RGB to give gray scale 
    pmg = pmgc - pmgf/50.0 # subtract and scale images
    nmap = Normalmap(pmg)
    nimg = nmap.get('smooth')
    plt.imshow(nimg, origin = 'upper')
    plt.show()
    # save the normal map
    pilimg = Image.fromarray(nimg, 'RGB')
    pilimg.save('ns.png')

Two images were used, one scaled by a factor of 50, and inverted by subtracting from the other.
sc.png
http://farm3.static.flickr.com/2518/3810400357_7a6e635f1e_o.png
sf.png
http://farm3.static.flickr.com/2618/3810400309_02ceedda83_o.png
normalmap.png
http://farm4.static.flickr.com/3117/3810400407_4518c3e275_o.png
blender render (as well as usual settings, have to enable uv texture under editing to get this to work):

Just a couple of followups on the previous post:

Found a normalmap plugin for the gimp, which no doubt some of you knew about. It claims to have more functionality than the nvidia one for photoshop.
http://nifelheim.dyndns.org/~cocidius/normalmap/

Here is an example of using a jpg file, which even without any compression gives a normal map with artefacts.
http://farm3.static.flickr.com/2616/3812854381_474fb23ff5_o.jpg

Animation

Of the limited effects discussed above, most can be done readily in Blender without the trouble of writing a script. However, numpy comes into its own for generating procedural and animation content.

Taking the S image used above it only requires a few lines of code to generate a short animation. The code given below generates a series of normal maps of increasing height, together with a greyscale image with the S part increasing in value. The greyscale images will be used for setting a raymirror value.

I have altered the code slightly from that above. With png the 4th channel is the alpha channel and, in general, we probably do not want to use it to generate our images, so it is sliced out with pmgc[…,:-1]. The ellipsis is specific to numpy and is equivalent to pmgc[:,:,:-1]. Further, the sum has been normalised by dividing by 3. It should be noted that the image initialising the Normalmap class has values between 0 and 1. However, images can have much larger values than this for Normalmap, practically up to about 255 times larger, to give normal maps with larger dynamic ranges. So, for instance, Normalmap(pmgg*255) would produce a normal map with a much larger dynamic range.
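The ellipsis/slice equivalence is easy to verify (the shapes here are arbitrary):

```python
import numpy as np

rgba = np.zeros((4, 5, 4))  # a dummy h x w x rgba array

a = rgba[..., :-1]          # the ellipsis expands to ':' for all leading axes
b = rgba[:, :, :-1]

assert a.shape == (4, 5, 3)  # the alpha channel has been sliced off
assert np.array_equal(a, b)
```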

‘tmp/ms%03d.png’ % (i) will produce filenames that blender will recognise as a sequence, that is, the filenames ms000.png, ms001.png etc. I keep all the generated textures in a separate directory called ‘tmp’.
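The %03d format pads the frame number with zeros to three digits:

```python
names = ['tmp/ms%03d.png' % i for i in range(3)]

assert names == ['tmp/ms000.png', 'tmp/ms001.png', 'tmp/ms002.png']
assert 'tmp/ms%03d.png' % 42 == 'tmp/ms042.png'
```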


from PIL import Image
import numpy as np
from scipy.ndimage import filters
import matplotlib.pyplot as plt

class Normalmap(object):
    def __init__(self, img):
        """ img is a gray scale image as a float64 matrix 
            to be turned into a normalmap """
        self.img = img.astype(np.float64) # convert to float64 just in case
        self.shape = self.img.shape
        
    def get(self, type = 'smooth', mode='reflect'):
        """ type can be 'smooth' or 'fine'
            mode can be 'reflect','constant','nearest','mirror', 'wrap' for handling borders """ 
        gradfn = {'smooth':self.prewittgrad, 'fine':self.basicgrad}[type]
        gradx, grady = gradfn()
        # x, y and z below are now the gradient matrices,
        # each entry from x,y,z is a gradient vector at an image pixel
        x = filters.convolve(self.img, gradx, mode=mode)
        y = filters.convolve(self.img, grady, mode=mode)
        # norm is the magnitude of the x,y,z vectors, 
        # each entry is the magnitude of the gradient at an image pixel and z*z = 1
        norm = np.sqrt(x*x+y*y+1)  
        # divide by the magnitude to normalise
        # as well scale to an image: negative 0-127, positive 127-255
        x,y = [a/norm*127.0+128.0 for a in (x,y)] 
        z = np.ones(self.shape)/norm    # generate z, matrix of ones, then normalise
        z = z*255.0 # all positive
        # x, -y gives blender form 
        # convert to int, transpose to rgb and return the normal map        
        return np.array([x, -y, z]).transpose(1,2,0).astype(np.uint8) 
        
        
    def prewittgrad(self):
        """ prewitt gradient """
        gradx = np.array([1.0,1.0,1.0]).reshape(3,1)*np.array([1.0,0.0,-1.0])/3.0
        grady = np.array([1.0,0.0,-1.0]).reshape(3,1)*np.array([1.0,1.0,1.0])/3.0
        return gradx, grady
    
    def basicgrad(self):
        """ basic gradient """
        gradx = np.array([1,-1]).reshape(1,2)
        grady = np.array([1,-1]).reshape(2,1)
        return gradx, grady
    
  
if __name__ == '__main__':
    
    pmgc = plt.imread('sc.png').astype(np.float64) # seems to be read in as 0 to 1 float
    pmgg = pmgc[...,:-1].sum(axis=2)/3  # sum RGB to give gray scale and normalise to 1
    pmgc = 1.0-pmgc[...,:-1]   # invert slice off alpha
    for i in range(255):
        nmap = Normalmap(pmgg*i/255.0)
        nimg = nmap.get('fine')
        img1 = Image.fromarray(nimg, 'RGB')
        img1.save('tmp/ns%03d.png' % (i))
        img2 = Image.fromarray(((pmgc)*[i,i,i]).astype(np.uint8), 'RGB')
        img2.save(('tmp/ms%03d.png' % (i)))


The texture settings (F6) for the normal map images are given below. Note the ‘Normal Map’ button is selected as is ‘Sequence’. For the raymirror images exactly the same settings, except the ‘Normal Map’ button is NOT selected.
http://farm3.static.flickr.com/2643/3855859502_ab6f0ba38e_o.png
The material settings (F5) for the normal map are shown below.
http://farm3.static.flickr.com/2507/3855070193_214ba6ceff_o.png
And note a new UV Texture has also to be created (is this a bug?)
http://farm3.static.flickr.com/2393/3855859560_b93daeaa90_o.png
Finally, the raymirror settings for the raymirror images. ‘Ray Mirror’ is selected in the Mirror Transparency tab and the image is mapped to ‘RayMir’. ‘No RGB’ also has to be selected, since the png image is in fact an RGB image and not the grey scale image required.

I’ve also used an hdr image in the world buttons to bring up the reflections on the S.

The resulting movie:

Tutorials from the recent scipy conference can be freely downloaded here:

http://www.archive.org/search.php?query=scipy

Waves

Some Theory:

Wave heights in water can be approximated by a plane wave:

z = A·cos(k·x + ω·t + ϕ)

k is the 2-D wave vector eg (1,2), with magnitude inversely related to wavelength and with the vector direction perpendicular to the wave front.
x is a point or coordinate (x,y) in the ocean
ω is the wave frequency in radians per second
ϕ is the phase of the wave
t is the time

The above equation produces a single wave, but ocean waves are best described by groups of waves having differing k,ω, and ϕ parameters, which are summed to generate a height image.

The matrix x contains coordinate information; each matrix element is a coordinate point (x,y). For a 2x2 image it would look like [[[0,0],[0,1]],[[1,0],[1,1]]]. x can therefore be dotted with the wave vector k. For realistic waves the amplitude, frequency and wave vector are randomly generated to follow a normal distribution, which is user adjustable. After a wave group has traversed some distance across the ocean each individual wave should no longer be in phase with any other wave, so the phase ϕ is randomly generated between 0 and 2π radians. In def gen() the wave parameters come in pairs: the value itself, say ‘a’ for amplitude, and its standard deviation, ‘sa’.
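The shape bookkeeping in the dot product can be sketched on a tiny map (the numbers are placeholders, not realistic wave parameters):

```python
import numpy as np

h, w, n = 4, 5, 3  # a tiny 'ocean' and 3 plane waves

ys, xs = np.mgrid[0:h, 0:w]
coords = np.dstack([xs / float(w), ys / float(h)])  # shape (h, w, 2): an (x, y) pair per pixel

k = np.ones((2, n))        # one 2-D wave vector per column
phase = np.dot(coords, k)  # k . x evaluated at every pixel for every wave

assert coords.shape == (h, w, 2)
assert phase.shape == (h, w, n)  # one phase map per wave, to be summed over the last axis
```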

With numpy it is possible to generate random variation of each of the waves at the same time, as numpy random number functions take a size parameter. Thus in def gen(), once all the random parameters are generated, the wave equation is used to build all n wave maps together, one map for each wave. Finally the n waves are summed together to generate a height map. The height map is converted and saved as a normal map. Once the parameters are generated it is only a couple of lines to do everything else, again showing the compact programming possible when using numpy.

def gen() is a generator function: it returns a generator. Every time it is invoked with next() it yields the next value in the for loop, with the range of the loop set sufficiently large that it will never terminate. xrange is used instead of range as the former yields the next value without storing a long list of values. In the for loop, ‘i’ is the incremental time parameter.

Again the normal maps are stored as a sequence of images.


import numpy as np
import numpy.random as random
from PIL import Image  # Image is used below to save the maps
from normap import Normalmap

class Wave(object):
    def __init__(self, w, h):
        """ w, h --- width and height of image """
        self.map = np.array([[float(x)/w,float(y)/h] for x in range(w) 
                      for y in range(h)]).reshape(w,h,2) # array of indices for each point on the image
        
    def gen(self, n, a, sa, k, sk, w, sw):
        """ n --- number of plane wave in group
            w --- wave velocity
            a --- amplitude, sa --- standard deviation
            k --- wave vector, tuple (x,y), sk --- standard deviation, tuple (sx,sy)"""
        amp = random.normal(a,sa, size = n)
        amp = amp/amp.sum() # normalise
        k = np.array([random.normal(k[0],sk[0],size = n), random.normal(k[1],sk[1],size = n)])
        w = random.normal(w,sw,size = n)
        phi = random.uniform(0,2*np.pi, size = n)
        for i in xrange(100000):
            z = amp*np.cos(np.dot(self.map,k)+phi+w*i)  # the wave equation
            yield z.sum(2) # summing individual waves which is the last dimension
             
if __name__ == '__main__':
    w,h = 1024,1024
    wv = Wave(w,h)
    g = wv.gen(10, 1, 0.5, (20,50), (10,30), 0.05, 0.02)
    for i in range(900):
        z = g.next()
        nmap = Normalmap(z*63) # 63 since values span -1 to 1, could be something else
        nimg = nmap.get('fine')
        img2 = Image.fromarray(nimg.astype(np.uint8), 'RGB')
        img2.save('tmp/wave%03d.png' % (i+1))

Looking at static waves, that is with t = 0, ocean waves often appear to be made from two or more components: intense waves with a long wavelength, generated by winds from afar, and less intense shorter waves due to local wind conditions. So it may be better to generate two groups of waves, by invoking def gen() twice with different sets of parameters.

Here is a quick test using maps generated by running the above code, leaving out every second map for speed of rendering. The normal maps are added to a blender disp modifier of a plane that has been subdivided 64 times and subsurfed. As well a blender normal texture node is used.

http://www.youtube.com/watch?v=YZRAS--qYyw&fmt=18

Next up ripples, which are readily generated using a cellular automation.

I think your learning approach is excellent. Love the results too.
Keep it up.

Ripples

Thanks for the comments cotejrp1

It is fairly easy to generate ripples by using a type of cellular automaton, where each generation of the ripple depends on mixing adjacent pixels of the preceding generation. It is not a completely physically accurate model but suffices to generate the correct visual illusion of a ripple. I got the idea from Hugo Elias http://freespace.virgin.net/hugo.elias/graphics/x_water.htm, who got the idea from someone else, so the original author of the idea at this time remains uncredited. Hugo explains how the ripples work on his site.

So in the numpy code, class Ripple does some initialisation and then uses a generator method to produce each subsequent state of the cellular automaton. It is simply an adaptation of Hugo’s code to numpy. Perhaps the only explanation required is that with numpy the whole map is updated at once. This is done by taking different slices of the image; basically we take a block of the image, say one pixel down, and add it to the image. Since we are taking slices one pixel away we can run into problems at the boundary, so the image is sliced with a guard around the perimeter (z2[1:-1,1:-1]). That way the slicing never runs out of image data. The operation is actually on an image which is smaller in width and height by 2 pixels.
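A single automaton step on a tiny grid shows the guarded slicing in action (a minimal sketch with one disturbance; damping is omitted here):

```python
import numpy as np

z1 = np.zeros((5, 5))
z1[2, 2] = 1.0          # one disturbance in the middle
z2 = np.zeros_like(z1)

# update only the interior: each cell becomes the half-sum of its four
# neighbours minus its own previous value, so the slices never leave the array
z2[1:-1, 1:-1] = (z1[:-2, 1:-1] + z1[2:, 1:-1] +
                  z1[1:-1, :-2] + z1[1:-1, 2:]) / 2 - z2[1:-1, 1:-1]

assert z2[1, 2] == 0.5  # the disturbance has spread to the neighbours
assert z2[2, 2] == 0.0  # and left the centre
```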

In this sort of simulation there have to be boundary conditions. As Paul Nylander states http://nylander.wordpress.com/category/physics/waves/, the correct boundary conditions are dynamic, but it is easier to use static ones, and again this looks visually realistic. The def static method applies the boundary conditions and is very simple.

My own contribution to generating ripples is the realisation that you can start with a static coloured image map to define the boundary conditions and initial ripple shape. This means that ripples can be mapped to any UV image data, not necessarily a flat plane. Such an image map is shown below:

http://farm3.static.flickr.com/2540/3953723213_8b74004f1e_o.png

Here red areas are boundaries. The boundaries can be any shade of red, but if the red has less than the maximum value, 255, then the boundaries act as a diffuser and the waves lose energy as they move across them. Values of 255 cause maximum reflection and no movement of the wave across the boundary. Green areas are positive wave generators and blue negative. By superimposing wave generators over boundaries, ie by having RGB coloured shapes, boundaries can also act as wave generators. For unusually shaped generators it is generally advisable to apply a small amount of gaussian blur to the perimeter, to avoid pixelated ripples which look more like standing wave vibrations than ripples. In the image above you can see faint blue and green generator dots which have gaussian blur applied; this would be a reasonable model for, say, raindrops hitting a puddle.

The background should be black, otherwise shades of red will make the wave lose energy and shades of green or blue will act as generators, which, nevertheless, may be an effect that is wanted.

The results are remarkable for such simple code.

This would be quite nice as a blender plugin, but there is a problem with this: Blender is moving to python 3.0, whereas it will be some time, perhaps 6 months to a year, before numpy makes the move. So in the meantime I have turned this into a script.

class Gnorm does all the messy stuff of generating the maps, which are the same size as the original boundary/generator image. Maps can be sequences for use in animation, static images, or GLSL realtime maps. A damping parameter is also required. Finally, not every generation needs to be mapped, otherwise the ripples will appear to move too slowly; s determines how often to save maps.

I was going to use this first in my own project but never seem to have much time, so here is the code. Note, it also requires the Normalmap code from one of the posts above, which should be saved as normap.py. Save the code below as ripple.py.


#!/usr/bin/python2.6
'''
Created on 3/09/2009

@author: sornen
'''
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
import normap
import optparse

class Ripple(object):
    def __init__(self, img, damping = 0.97, btype='static'):
        """ img contains the initials starting conditions and borders = 1.0
                these need to be stripped into separate files
            damping of ripple
            bytpe defines the type of boundary conditions """
        self.bound = 1.0 - img[...,0]  # red channel is boundary invert to set zero
        self.z1 = img[...,1]-img[...,2] # green +ve, blue -ve initial conditions
        self.z2 = np.zeros(self.z1.shape, self.z1.dtype)
        self.bfn = {'static':self.static}[btype]
        self.damping = damping
        
    def gen(self):
        while(1):
            # need to slice off a guard around the img
            self.z2[1:-1,1:-1] = (self.z1[:-2,1:-1] + self.z1[2:,1:-1] + 
                              self.z1[1:-1,:-2] + self.z1[1:-1,2:])/2 - self.z2[1:-1,1:-1]
            self.z2 = self.z2*self.damping    
            self.z1,self.z2 = self.z2,self.z1
            self.z1 = self.bfn(self.z1)    # set boundary conditions
            yield self.z1
        
    def static(self, z1):
        """ static boundary conditions easier to calculate """
        return z1 * self.bound  # inverted so where red is will be zero

class Gnorm(object):
    """ class to generate normal maps of ripples """
    def __init__(self, img, damping = 0.985):
        """ image is the starting coloured map to generate the ripples, 
                red are obstacles
                green is +ve displacement with blue being negative displacement
            damping of the ripple """
        self.size = np.array(img.shape[:-1]) # size without rgb
        self.dtype = img.dtype
        rp = Ripple(img, damping)
        self.gen = rp.gen()
        
    def make(self, s = 5, type = 'sequence', n = 100, rtshape = (10,10), filename = 'map.jpg'):
        """ s saves every sth generated ripple, which is really is the speed of the ripples
            type can be 'sequence' for sequence of images, 'glsl' for realtime map 
                or 'static' for the just a single image
            n is the number of sequence or glsl maps to generate or the nth static map
            rtshape is the shape of the array of glsls images for the realtime maps """           
        self.s = s
        self.n = n
        self.rtshape = rtshape
        self.filename = filename
        {'sequence':self.sequence, 'glsl':self.glsl, 'static':self.static}[type]()
       
    def nmap(self, img, seq):
        nmap = normap.Normalmap(img*255)
        nimg = nmap.get('fine')
        img = Image.fromarray(nimg.astype(np.uint8), 'RGB')
        if seq == -1:
            img.save(self.filename)
        else:
            name,ext = self.filename.rsplit('.', 1)  # split off the extension only
            name += "%03d" % (seq)
            img.save(name+'.'+ext)        
            
    def glsl(self):
        mh,mw = self.rtshape
        glsl = np.zeros(self.size*(mh,mw), self.dtype)
        for i in range(self.n*self.s):    
            bump = self.gen.next()
            if not i%self.s: # save every s'th result
                j = i/self.s       
                n0,m0 = self.size*np.array([j/mw,j%mw])
                ih,iw = self.size
                # image coordinates for blender bottom left, not top left as in numpy
                glsl[n0:n0+ih,m0:m0+iw] = np.fliplr(bump)
        self.nmap(np.flipud(glsl), -1) # flipud for blender    
                
    def sequence(self):
        for i in range(self.n*self.s):
            bump = self.gen.next()
            if not i%self.s: # save every s'th result
                self.nmap(bump, i/self.s + 1)  # consecutive frame numbers
                
    def static(self):
        for i in range(self.n):
            bump = self.gen.next()
        self.nmap(bump, -1)

if __name__ == '__main__':
    parser = optparse.OptionParser()
    parser.add_option("-i", "--image", dest="img", type = "string", 
        help="starting image map for ripple")
    parser.add_option("-d", "--damping", dest="damping", type = "float", 
        help="damping of ripple")
    parser.add_option("-s", "--sth", dest="s", type = "int", 
        help="save every sth ripple")
    parser.add_option("-t", "--type", dest="type", type = "string", 
        help="type of output can be glsl, sequence or static")
    parser.add_option("-n", "--nth", dest="n", type = "int", 
        help="number of images to generate, or nth image for static")
    parser.add_option("-r", "--rtshape", dest="rtsahpe", nargs = 2, type = "int", 
        help="realtime image map shape")
    parser.add_option("-o", "--ofile", dest="ofile", type = "string", 
        help="output filename")
    parser.set_defaults(img = "start.png", damping=0.985, s = 5, type = 'glsl', n = 100, 
        rtshape = (10,10), ofile = "map.jpg")
    (options, args) = parser.parse_args()
    img = plt.imread(options.img).astype(np.float64)
    gn = Gnorm(img, options.damping)
    gn.make(options.s, options.type, options.n, options.rtshape, options.ofile)


The default options for the script will generate a 10x10 glsl realtime map “map.jpg” from an initial image called “start.png”. Input and output files can be jpg or png, and perhaps other formats as well.
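
The glsl map packs every sth frame into one big tiled image, frame j going to grid cell (j/mw, j%mw), plus the flips for Blender's image origin. A small sketch of that layout and how you would slice a frame back out, with illustrative sizes and the flips omitted:


import numpy as np

ih, iw = 4, 4   # size of one frame (illustrative)
mh, mw = 3, 3   # rtshape: a 3x3 grid of frames
# build a fake tile set where tile (r, c) is filled with the value r*mw + c
tiles = np.arange(mh * mw).repeat(ih * iw).reshape(mh, mw, ih, iw)
# assemble into one big map, row-major, as glsl() does
big = tiles.transpose(0, 2, 1, 3).reshape(mh * ih, mw * iw)

def frame(big, j, shape=(ih, iw), grid=(mh, mw)):
    # frame j lives at grid cell (j // mw, j % mw) of the tiled map
    n0, m0 = shape[0] * (j // grid[1]), shape[1] * (j % grid[1])
    return big[n0:n0 + shape[0], m0:m0 + shape[1]]

# frame 4 sits in the middle cell of the 3x3 grid
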

To run the script on Linux, make it executable and run it directly


chmod u+x ripple.py
./ripple.py

otherwise, for Windows and Linux users


# default options
python ripple.py
# generate help for the script
python ripple.py --help
# 5 images output to filenames tmp/tmp001.jpg ... as a sequence, default input file is still start.png
python ripple.py -n 5 -t sequence -o tmp/tmp.jpg

There are some examples and blends in this thread
http://blenderartists.org/forum/showthread.php?t=166258
