Rock generator script

Yes, I have just been updating the links in the first post and noting the changes with a new post.

Great, I've been looking forward to a better version of this script ;D Thanks a lot.

Glad you find it useful! And let me know if you have any problems.

Hi all,
I'm pretty new to Blender and I'm working on a Mac with the 64-bit version of Blender.
I see the add-on in the Add-ons section, but it's highlighted and says it's in development.
I can't seem to select it so it turns active. I did add the numpy folder to the modules.
Though I did everything manually concerning the adding part, do I need to grab the x86 version of numpy???

restart your computer, and then try again

rombout:

If you are using the 64-bit version of Blender, then you need the 64-bit NumPy build. It sounds like you got the right one. It may be an issue that the package I posted was built on Windows (I don't have a Mac, so I can't test it), but since they are just Python files it should not matter.

Now, two quick questions: did you give it a little time to add? I have noticed that the first time I enable a new script it takes a moment before it shows as enabled (but it should not take more than a few seconds). Second, have you verified that Blender is recognizing NumPy? If it is not being enabled then there is some form of error with the addon. I know the version of the script I uploaded works, so my guess is NumPy. What I would check first is whether Blender is recognizing NumPy. You can do this by bringing up a scripting panel in Blender (the easiest way is to set the screen layout from “Default” to “Scripting”) and typing “import numpy” in the Python console. If it gives a traceback error then NumPy is definitely the problem. We’ll go from there.
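In the console, the check is literally just the lines below (the version print is an extra line of mine to confirm which build actually got picked up, not something the script needs):

import numpy
print(numpy.__version__)

If the import comes back without a traceback, NumPy is installed where Blender can see it.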

Welcome to BlenderArtists, BTW.

Somewhat on topic, is there someone watching this thread who knows how to do a build on Mac, just in case?

OK, I'll try that with the scripting window! Thanks for the fast answer.

OK, you were right about the NumPy recognition:

Traceback (most recent call last):
File "<blender_console>", line 1, in <module>
File "/Applications/blender/blender.app/Contents/MacOS/2.57/scripts/modules/numpy/__init__.py", line 136, in <module>
from . import add_newdocs
File "/Applications/blender/blender.app/Contents/MacOS/2.57/scripts/modules/numpy/add_newdocs.py", line 9, in <module>
from numpy.lib import add_newdoc
File "/Applications/blender/blender.app/Contents/MacOS/2.57/scripts/modules/numpy/lib/__init__.py", line 4, in <module>
from .type_check import *
File "/Applications/blender/blender.app/Contents/MacOS/2.57/scripts/modules/numpy/lib/type_check.py", line 8, in <module>
import numpy.core.numeric as _nx
File "/Applications/blender/blender.app/Contents/MacOS/2.57/scripts/modules/numpy/core/__init__.py", line 5, in <module>
from . import multiarray
ImportError: cannot import name multiarray

Blame Outlook and instant e-mail notification . . . :stuck_out_tongue:

More seriously though, it looks like it is having issues with a file called “multiarray.pyd”. Could you verify that it is present by going to the numpy folder, and then looking for it inside the folder called “core”?

Oh, and from the script window, could you just verify that the Python really is 64-bit? It should say right on the top line (PYTHON INTERACTIVE . . . 64 bit . . .).
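If it is easier to do both checks from the console, something like this works (the path below is just copied from your traceback, so adjust it as needed):

import os, sys

# Does a compiled multiarray file actually exist in numpy/core?
core = "/Applications/blender/blender.app/Contents/MacOS/2.57/scripts/modules/numpy/core"
print([f for f in os.listdir(core) if f.startswith("multiarray")])

# Is this Python build 64-bit?
print(sys.maxsize > 2**32)   # True means 64-bit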

Hi,
try putting numpy in the addons/modules folder.
That fixed similar errors for me.

Hi,
I spoke to ideasman about this script, and the discussion was whether there is anything in particular this script needs from NumPy that mathutils cannot do.
The problem is the compiled pyc files required to run NumPy.
It would still be possible to get this script into Contrib, I think, with NumPy included, but it has been a policy for a long time not to include precompiled Python files.
What are your thoughts?

Meta,

Right now I am only using the beta, normal, integer, and Weibull random methods from NumPy, all of which are found in the standard Python random module, so yes, I could create a NumPy-less version. The tradeoff would be a hit to performance, since NumPy's random is much quicker. I would be willing to create a NumPy-less version, since it is simply a find-and-replace operation. Also, I am working on getting it uploaded. I have been really busy with this being finals week and having to pack to go back home. I am also working on some (readable) documentation/description that I would like done first. All that to say that it might still be a couple of days before I have things ready on my end to take that jump.
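For anyone curious, the find-and-replace amounts to roughly this mapping (a sketch, not a diff of the actual script):

import random

# numpy.random.beta(a, b)          ->  random.betavariate(a, b)
# numpy.random.normal(mu, sigma)   ->  random.gauss(mu, sigma)
# numpy.random.randint(lo, hi + 1) ->  random.randint(lo, hi)      # randint's upper bound is inclusive
# numpy.random.weibull(shape)      ->  random.weibullvariate(1.0, shape)

value = random.betavariate(2.0, 5.0)   # one beta-distributed draw, pure Python
print(value)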

With that, I had some inspiration yesterday on a dead simple way to deal with the beta monster. I was hitting a brick wall trying to figure out how to verify a correct μ and σ from which to calculate the needed α and β. Anyway, I realized that I was making it way harder than it was and that I just needed a μ between 0 and 1 . . . which I already had from the skew values. So taking the skew and multiplying by 10 gives an α value larger than 1. Now the base equation is μ = α / (α + β), and since I now know μ and α, then β = 10 - α. So I was able to very quickly put together a beta-based skewed distribution (which is faster in some cases and has a cleaner distribution than the skewed Gaussian I wrote) and integrated it where it would help. Also, I have done a little more work on the material generation. It now makes the new materials and assigns them, but does not yet modify values or textures. I'll post the updated script tonight.
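In code the whole trick is just something like this (a minimal sketch, assuming the skew value is already in the 0 to 1 range; the function name here is illustrative, not what is in the script):

import random

def beta_skewed(skew):
    # Fix alpha + beta = 10, so the mean alpha / (alpha + beta) is simply the skew.
    # skew must be strictly between 0 and 1 so that both parameters stay positive.
    alpha = 10.0 * skew
    beta = 10.0 - alpha
    return random.betavariate(alpha, beta)   # value in (0, 1) with mean = skew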

Ok, so an 11th-hour bug has cropped up. At the moment the links in the first post are pointing to the same files as earlier, but they have been renamed (you will need to rename the .py file and take the date off). The reason for the renaming is that I have uploaded the latest code, but note that there is a bug in it and I expect that it will not run properly. You have been warned. The specific bug seems to be a division operation on a list, so I need to figure out what's up there. If you would like to look at it anyway, the appropriate links are below:


http://www.mediafire.com/file/d43e4obxwivflbg/add_rock_mesh.zip

I do not understand: NumPy without compiled C or C++ DLLs or the like, how can it be faster?

I downloaded the NumPy from the first post and tried this code:


import bpy      # not used below; just confirms this was run from Blender's console
import numpy
import time
import random

# time 10,000,000 single draws from Python's random
tstart = time.time()
for i in range(10000000):
    tmp = random.random()
tend = time.time()
print("random's random 10000000 times =", tend-tstart)

# time 10,000,000 single draws from NumPy's random
tstart = time.time()
for i in range(10000000):
    tmp = numpy.random.random()
tend = time.time()
print("numpy's random 10000000 times =", tend-tstart)

#result
#random's random 10000000 times = 6.0959999561309814
#numpy's random 10000000 times = 9.498999834060669


THAT confirms my idea, doesn't it? And NumPy is ca. 26 MB of code, or does one have to remove all the stuff that is not needed???!

NumPy in itself is, for numerical work, really rather good!

I changed the source of *rock.py (the only remaining problem, the creation of textures, is not yet solved) and it worked with the standard random module (and others).

If I were using the basic random, that would be true. However, I am not using the basic random but am instead working with distributions. That changes the outcome. Using your same code with a slight modification:

import numpy
import random
import time

def betaTrial():
    alpha = random.random()
    beta = random.random()

    # 10,000,000 beta draws, one at a time, from Python's random
    tstart = time.time()
    for i in range(10000000):
        tmp = random.betavariate(alpha, beta)
    tend = time.time()
    print("random's random 10000000 times =", tend-tstart)

    # 10,000,000 beta draws, one at a time, from NumPy
    tstart = time.time()
    for i in range(10000000):
        tmp = numpy.random.beta(alpha, beta)
    tend = time.time()
    print("numpy's random 10000000 times =", tend-tstart)

    # 10,000,000 beta draws in a single vectorized NumPy call
    tstart = time.time()
    tmp = numpy.random.beta(alpha, beta, 10000000)
    tend = time.time()
    print("numpy's random 10000000 times =", tend-tstart)

Resulted in:

>>> betaTrial()
random's random 10000000 times = 66.8309998512268
numpy's random 10000000 times = 7.21399998664856
numpy's random 10000000 times = 3.5169999599456787
>>> betaTrial()
random's random 10000000 times = 67.44799995422363
numpy's random 10000000 times = 7.434000015258789
numpy's random 10000000 times = 4.177000045776367
>>> betaTrial()
random's random 10000000 times = 66.98799991607666
numpy's random 10000000 times = 7.799999952316284
numpy's random 10000000 times = 4.166000127792358

That is not an insignificant difference, which is why I started using NumPy. Can I do everything without NumPy? Yes. Will it be slower? Yes. The tradeoff I am fighting with right now is a noticeable speed gain vs. ease of distribution. If it is best that I stop using NumPy, I can do so, and in that case I would drop the beta and stick with a Gaussian/Weibull only, because they do not see the tremendous speedup the beta does (they both get about 2x from NumPy instead of the 10x+ of the beta). Going back to your first question:

I have a one-word answer: algorithms. NumPy's algorithms have been highly optimized because it is used extensively for complex scientific work where performance and time complexity are issues. I don't think the basic random has been optimized as much in NumPy because you don't normally see it in nature. If you were to take a bunch of random samples and add them up, the end result (there are two mathematical theorems that back this) will approach a normal distribution. The catch to what I am doing is that I am allowing the user to skew the resulting distribution curve, which a normal distribution does not allow. The best alternative is a skew-normal, but neither Python nor NumPy has one (and I am not writing one :eek:). That leaves the second-best choice of a beta distribution, which is what I have been focusing on.
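As a throwaway illustration of that "sums of random samples look normal" point (just a toy example, nothing from the rock script):

import random

# Each value below is the mean of 50 uniform draws; by the central limit theorem
# these means pile up around 0.5 in a roughly bell-shaped (normal-looking) way.
samples = [sum(random.random() for _ in range(50)) / 50 for _ in range(10000)]
mean = sum(samples) / len(samples)
std = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
print(mean, std)   # mean close to 0.5, standard deviation around 0.04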

Hopefully that helps explain the logic I have been following in deciding to use NumPy. If NumPy really is a problem, then OK, I'm not dead set on using it. If I do stop using it, then I will drop the beta (because Python's is so ridiculously slow) and stick with an artificially skewed normal.

Will try to understand 


PKHG,

Some reading that might help (or make things way worse):
http://mathworld.wolfram.com/Probability.html
http://mathworld.wolfram.com/NormalDistribution.html
http://mathworld.wolfram.com/CentralLimitTheorem.html
http://mathworld.wolfram.com/Skewness.html
http://mathworld.wolfram.com/BetaDistribution.html

Be warned: there is a lot of math in those links, almost all of it is calculus. It also might help to point out that the simple .random() of both Python and NumPy is called a uniform distribution: http://mathworld.wolfram.com/UniformDistribution.html. It is very different from a normal distribution.

OH OH, though I am a mathematician, I am not fond of that probability stuff

In which way does that 'beta' (and skewness) make the rock(s) better?
That seems (if I understood well) to make the difference. Maybe you could show
us some pictures?
OK, I will stay with a NumPy-free version because of the several MB of extra code that NumPy needs.
Is there a way to extract 'only' what you need from NumPy (from your link)?
E.g. the tests do not run because of missing 'nose' code,
so that directory could (I think) be removed, as well as
??

Skewness can be used to control how the groupings and outliers fall. By skewing the curve in one direction, you are increasing the probability of an outlier in the opposite direction (skew down and you are more likely to have a large outlier). For example, in the attached non-rendered picture I have two groups of rocks. The top group was skewed up and the bottom was skewed down. You'll note that most of the rocks in each group are about the same size as the others within the group. Unfortunately these groups did not generate very many outliers, which makes it harder to demonstrate, but the bottom group had one that was about the same size as those in the upper group. While for the most part I would not expect a user to use the skew, it is available. An example here is the project I was working on that inspired this (sorry, no renders though). For the most part I wanted rocks about the same size, but I wanted the ability to have abnormally large rocks generated without an equal number of abnormally small rocks. This is the perfect scenario for a lower-skewed distribution (see the sketch at the end of this post).
One other place where I am using a bunch of skewed distributions is in the texture generation. I am using combinations of beta distributions and Weibull distributions for better texture type selection. You just don't see that part because it is buried inside the code and I am not allowing the user to play with those values.
With that, the reason for the beta is twofold. One, it is similar to a normal distribution, and two, it skews both positively and negatively very well (most distributions only like to skew one way).
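To make the outlier behaviour concrete, something like this toy snippet captures what the two groups in the picture are doing (illustrative names and numbers, not the generator's actual code):

import random

def rock_sizes(count, skew):
    # Same alpha + beta = 10 trick as above; skew close to 1 biases sizes high,
    # skew close to 0 biases them low but leaves a long tail of rare large outliers.
    alpha = 10.0 * skew
    return [2.0 * random.betavariate(alpha, 10.0 - alpha) for _ in range(count)]

top_group = rock_sizes(20, 0.8)      # skewed up: mostly near the maximum size
bottom_group = rock_sizes(20, 0.2)   # skewed down: mostly small, occasional big rock
print(max(bottom_group), min(top_group))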

For a custom NumPy with only the random part included . . . I have no idea, and if I did know then I would jump on it. I tried (though I think tried is much too strong a word :rolleyes:) to remove what was not needed, pretty unsuccessfully. It might be possible to use just the “random” folder from inside it, but I have not tried, and I have a feeling it would create an error nightmare. This end of things is moving out of my Python knowledge base (if it were Java I could give you an answer).

Attachments: [image: the two groups of generated rocks described above; top group skewed up, bottom group skewed down]

Yes, thanks a lot, now I see what you are trying to accomplish (and have succeeded at, with NumPy and in reasonable time, I think).

Maybe there is a Python/NumPy specialist around who could reduce the size of the stuff that is used? (I will have a [small :eyebrowlift2:] look.)