A Feature That Will Easily Revolutionize Blender Materials for Games, Comp, & Cycles.

Atarivandios here. I’ve been texturing for a long time and have always had the feeling that there was an easier way. Procedural textures are amazing, but difficult to master, especially to use efficiently. I’ve been noticing the growing number of software packages built to deal with this particular problem.

Number one for me is Allegorithmic’s MapZone.
Number two would be NeoTextureEdit.
Number three is Perfect Resize 7.
Number four amazes me: the Genuine Fractals 6 plugin for Photoshop.

The features I am talking about are:

  1. Loading procedurals as plugins.
  2. Automatic conversion of an image into a combination of procedurals, either independent of resolution or at the best quality for a specified size (for game-engine loading times and memory efficiency, if line-efficient coding is chosen, forcing conversion to pixel images).
  3. Storing the procedural image in a format usable in any section of Blender for materials (including realtime).

The preferred saving method would be the same plugin format mentioned above, to make future map production more time efficient and to reduce memory use further: basically how procedurals are stored now, just as a separate module like a script. Python scriptability would be really nice for this, if not already present; a minimal sketch of what such a plugin module might look like follows below.
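To make the plugin idea concrete, here is a minimal sketch of a procedural stored as a separate Python module. None of these names (`register_procedural`, `bake`, and so on) are an existing Blender API; they are assumptions for illustration only.

```python
# Hypothetical sketch only: a procedural texture as a self-contained plugin.
import math

REGISTRY = {}

def register_procedural(name):
    """Decorator that files an evaluate(u, v, params) function under a name."""
    def wrap(fn):
        REGISTRY[name] = fn
        return fn
    return wrap

@register_procedural("radial_gradient")
def radial_gradient(u, v, params):
    # Distance from a centre point, pushed through a falloff exponent.
    cx, cy = params.get("center", (0.5, 0.5))
    d = math.hypot(u - cx, v - cy)
    t = max(0.0, 1.0 - d / params.get("radius", 0.5))
    return t ** params.get("falloff", 2.0)

def bake(name, params, size):
    """Evaluate a registered procedural into pixels at any resolution."""
    fn = REGISTRY[name]
    return [[fn(x / (size - 1), y / (size - 1), params) for x in range(size)]
            for y in range(size)]

# The stored "texture" is just a name plus a few parameters -- kilobytes at
# most -- yet it can be baked at 32x32 for a game or 2048x2048 for a render.
params = {"center": (0.5, 0.5), "radius": 0.4, "falloff": 3.0}
preview = bake("radial_gradient", params, 32)
final = bake("radial_gradient", params, 2048)
```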

If the above were done, converting high-res models to low-res models would be far more efficient in both time and memory. For example, one could compute AO bakes procedurally without having to rely on samples and wait for baked renders (another useful AO approximation method); shadows would work the same way; and if an image were detected not to be procedural to begin with, it could be converted first and then used to produce the desired texture bakes. I’ve even heard of packages that do automatic material completion for image-based models by highlighting faces that should share the same material, automatically filling in the material holes created by camera projection. That problem could be solved by choosing to auto-complete when baking a texture whenever an image hole is detected by a simple test of the UVs (based on selecting UVs and assigning separate materials, as is already possible). A default key colour (like the green used by green screens) could simplify the coding by allowing multiple passes over the UV map, if that turns out to be easier than the material-assignment method.
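As a rough sketch of that key-colour hole test (the colour, tolerance, and plain nested-list image layout below are all assumptions, not Blender’s actual bake pipeline):

```python
# Sketch: find "holes" in a baked texture by scanning for a reserved key
# colour -- the green-screen trick described above. The image is assumed
# to be a list of rows, each pixel an (r, g, b) tuple in 0..255.
KEY = (0, 255, 0)   # reserved fill colour no real material should use
TOLERANCE = 8       # allow for slight filtering or compression error

def is_key(pixel):
    return all(abs(c - k) <= TOLERANCE for c, k in zip(pixel, KEY))

def find_holes(image):
    """Return the coordinates of every pixel still holding the key colour."""
    return [(x, y)
            for y, row in enumerate(image)
            for x, pixel in enumerate(row)
            if is_key(pixel)]

# A second bake pass (or a fill from neighbouring faces) would then only
# need to touch the coordinates returned here.
```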

I’ve programmed before, so I will address the coding issues here. Basic correlation graphing and approximation could handle this efficiently enough, at the sacrifice of time: step through each setting in hundredths until one correlates without going over (similar to a homemade square-root function), then do the same further past the decimal point; repeat for each procedural plugin available, then for every available mix method, until the best result is found (math nodes with formulas and the individual RGB channels must be considered as well). If one doesn’t want to do any of this procedurally, the same can be achieved with fractal algorithms, for which numerous whitepapers are available. That method is equally size-independent and, depending on implementation, often more memory efficient. However, for baking you would first have to produce a baked render the current way before processing, which would heavily complicate the image-based-modeling texture bakes mentioned above; on the other hand, the coding might be simpler and would lay the foundation for the procedural implementation later.
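The decimal-by-decimal search could look roughly like this; `score` stands in for whatever correlation measure the real implementation would use against the target image, so treat it as a sketch rather than a finished fitter:

```python
# Sketch of the "homemade square root" search: refine one parameter a
# decimal place at a time, keeping the best-correlating value found so far.
def fit_parameter(score, lo=0.0, hi=1.0, places=4):
    """score(value) -> similarity to the target, higher is better."""
    best = (lo + hi) / 2.0
    step = (hi - lo) / 10.0
    for _ in range(places):
        # Try candidates around the current best at this decimal place.
        candidates = [min(max(best + (i - 5) * step, lo), hi)
                      for i in range(11)]
        best = max(candidates, key=score)
        step /= 10.0
    return best

# Toy usage: recover the value that best matches a hidden target.
target = 0.37
found = fit_parameter(lambda v: -abs(v - target))  # converges to ~0.37

# Repeating this per parameter, per procedural plugin, and per mix method
# is exactly the brute-force loop described above: correct, but slow.
```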

If people are interested in this, I can do screenshots of what a prototype would look like, since one can do this by eye, but it is currently very time consuming. I would code this myself, but I am efficient, not proficient: I would first have to review a lot of code and make charts and maps (in case current functions do not work as desired) to understand what has already been done. The good news is that NeoTextureEdit is open source, so all that would be needed is a large Python script to translate the application as currently written (still easier said than done). Thankfully it, too, is built around OpenGL, and the coding situation is further helped by the application being written in a form of C to begin with.

The other good news: I know for a fact that if one implemented the fractal and procedural methods above, the new motion-tracking system could be used to identify all the different textures in a scene (based on contrast). The fractal method could then synthesize the missing detail (simpler than cutting out parts, skewing them square on a plane, and running a fractal detail generator), and the procedural method could convert those textures automatically into tileable images for use on any model, rendering the tedious ‘still camera on site’ techniques obsolete. These methods are currently spread across anywhere from three to eight separate packages, not all cross-platform and most closed source. This would make Blender the most advanced package for 3D materials currently available.

The procedural method can also be used independently of UVs, similar to Ptex but without the complexity. This would drastically reduce production time for Mango and future projects by making compositing easier, since materials would already conform to the style of the video shot (keep in mind that in a procedural transfer you can select the individual layers to remove the gradients that carry image lighting, if the code-efficient route does not make them tileable). It would go hand in hand with the recent motion-capture addition, could serve as a powerful new approximation method for Cycles, letting three-to-five-thousand-pass images render in seconds, and would make image-based texturing completely obsolete, bringing current material methods and workflow out of the nineties and into the future.

This is already being done in next-gen game engines, and much like in the past, that is a strong sign of where digital-animation workflow is headed. These steps can be taken now; if not, I can guarantee they will be introduced into Blender eventually, but it will be more challenging then, since every addition made past this point will make the transition more time consuming, possibly even bigger than the transition to Blender 2.5, costing a lot of unnecessary development time and effort.
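As one crude stand-in for the automatic tileable conversion (mirror tiling, which any of the smarter fractal-synthesis methods above would improve on), assuming NumPy and a greyscale crop:

```python
# Sketch: the simplest way to force a crop to tile -- mirror it into a 2x2
# block so every edge meets its own reflection. Seamless, but visibly
# symmetric; proper texture synthesis would hide the repetition instead.
import numpy as np

def make_tileable(img):
    """img: HxW array. Returns a 2Hx2W array that tiles with no seams."""
    top = np.concatenate([img, np.fliplr(img)], axis=1)
    return np.concatenate([top, np.flipud(top)], axis=0)

crop = np.random.rand(64, 64)   # stand-in for a tracked patch of footage
tile = make_tileable(crop)      # 128x128, repeats seamlessly
```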

The above methods can, most of the time (check the web pages of the packages mentioned above), mimic the current workflow perfectly. The transition at this point would go almost completely unnoticed, with the exception of five to ten new buttons; even that is an overstatement, since several of those would be the same button placed in different Blender windows (UV, Material, the Cycles render area, etc.). These features, though apparently simple, would truly revolutionize the 3D world forever. I would be proud, as a long-time Blender user, that when my employers say “wow, that’s magic” I can reply “no, it’s Blender 3D.”

Sorry about the grammar errors.

I also meant to state that fractal abstraction of images can automatically shape and align the styles of scenes in the compositor.

I don’t understand how AO and shadows can be baked “procedurally”.
AO and shadows are calculated: a bunch of rays are launched to test whether any geometry intersects them.
Could you explain better? Are you saying that you take a low-quality baked AO image and, using procedurals, obtain a high-quality baked AO image? Could you show some before-and-after images of that “proceduralization”?

“AO and shadows are calculated: a bunch of rays are launched to test whether any geometry intersects them.
Could you explain better? Are you saying that you take a low-quality baked AO image and, using procedurals, obtain a high-quality baked AO image? Could you show some before-and-after images of that “proceduralization”?”

From what I’ve seen, procedural gradients can be formed by assessing length and falloff based on lighting conditions, e.g. the distance of lamps or emitters of any kind. That level of integration could also provide a much simpler global illumination by taking into account objects being used as light sources. The image is then applied to the UVs and skewed accordingly. Almost everything I mentioned above is nothing more than pixel math; in those circumstances an AO “texture” is usually a culmination of many procedural gradients mixed in different ways.

Application to the UVs is really the most difficult part of all of this, since all these methods require it. The best alternative I can conceive without reviewing some papers is a Ptex-like solution: abstract out UV visibility and use the already-present texture-paint system to apply the different gradient maps. Alternatively, instead of mapping UVs, a list of face volumes (in pixels), or the dimensions therein, could be used to apply the texture on a per-face basis (similar to Ptex), applied much the same way as in texture paint, but that is more coding work.

Shadows could be done the exact same way, also taking into account the locations of objects (these too can be coloured or textured). This works almost the same way the old halos do for spotlights. Procedural seams are usually rendered moot, since you can control the level of apparent detail per face based on pixel volume per face, whether Ptex or any UV variant is used. The placement and falloff calculation does still require quasi-ray calculation, but only at each vertex, and it also takes into account the angle of every edge to its face (which largely removes the need to do any real ray tracing). The system can also work when there are no intersecting faces by using a threshold distance.

By and large, a lot of this can be tweaked after render anyway, which is the other reason it is handy; it’s like using the normals of a scene to change lighting in the compositor (very handy). The falloffs are very similar to the ones available for lamps. Keep in mind that a lot of these projects do geometric tracing to auto-create procedurals as well (spline curves with varying thickness, like metal rails, for example), which makes the system very powerful. And since this would be the only program with all of these in the same sandbox, you could always load the AO pass of another scene and, with rigorous contrast and falloff detection, force the system to match the length and falloff, edge by edge, of another AO render altogether, especially if you predefine points of interest.
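A minimal sketch of the ray-free darkening described here, assuming we only have vertex positions and a lamp-style distance falloff (the function names and the clamping are my own invention, not an existing implementation):

```python
# Sketch: per-vertex "occlusion" from proximity alone -- a lamp-style
# falloff with a distance threshold instead of launched rays.
import math

def falloff(d, radius, power=2.0):
    """Smooth 1 -> 0 falloff over 'radius', like a lamp distance falloff."""
    if d >= radius:
        return 0.0
    return (1.0 - d / radius) ** power

def vertex_occlusion(vertex, occluder_points, radius=1.0):
    """Accumulate proximity-based darkening, clamped to 0..1."""
    total = sum(falloff(math.dist(vertex, p), radius)
                for p in occluder_points)
    return min(total, 1.0)

# The per-vertex values would then seed the procedural gradients described
# above and be painted onto the UVs (or per face, Ptex-style). Edge-to-face
# angles would modulate the same loop; the point is that no ray is shot.
verts = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
occluders = [(0.2, 0.1, 0.0), (0.3, -0.1, 0.0)]
shade = [1.0 - vertex_occlusion(v, occluders) for v in verts]
```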

Congratulations atarivandios, those were the hardest-to-understand posts I have read on the BA forums. I would like to see example images of what you mean, because this wall of text is not friendly.

I have read that stuff several times now and eventually I gave up, because it just doesn’t make sense. I have no idea what your master plan is.
It seems like you leave out the important information that would link up your, well… written mind map.

Do you want to decompose an image texture into procedurals and store the procedurals, to later on generate the image from them again? That can’t be it; that doesn’t sound very practical.

“From what I’ve seen, procedural gradients can be formed by assessing length and falloff based on lighting conditions, e.g. the distance of lamps or emitters of any kind. That level of integration could also provide a much simpler global illumination by taking into account objects being used as light sources. The image is then applied to the UVs and skewed accordingly.”

What? That doesn’t make any sense to me…
The length and falloff of what? What procedural gradient? I can make a gradient out of any two random numbers. AO is not based on the distance to lights or emitters; it’s based on the amount of occluded space around a spatial point, like the name says.
And what integration, of what, into what?

Your sentences are all overcomplicated.
Either you’re smart beyond my comprehension and on to something, or just talking gibberish :smiley:

You might want to rephrase your idea in one or two sentences that are not ten lines long :slight_smile:
Or just start on the work: it seems you have the know-how and can code, so show us what it is about.

Please share some screenshots, code, and .blend files. It sounds like great stuff.

We need new features.

I love this kind of bold and innovative thinking, unconstrained by any material considerations or vested interests. But we all need to eat and pay the bills; it would be a pleasant surprise if even 10% of your ideas were realized in Blender. More realistically, I wish you luck monetizing your talent in the commercial world.

Here are a few screens I drafted in Blender and Gimp. Keep in mind that the final AO mock-up looks a lot like the approximate AO we have now, but you also have to take into account camera angle and the basic direction of light. I just painted these, but it is what you would get if you hit Esc early, lol. The post kept getting denied, so I will show them separately below.

This would work for shadows as well, with the option of different mix styles and textures, at an image size of potentially a few kilobytes, independent of resolution. A number of game engines already use something like this.

A regular AO pass at 4 samples.
http://www.pasteall.org/pic/show.php?id=22059

The Desired Result.
http://www.pasteall.org/pic/show.php?id=22060

The code doing angle to face comparisons.
http://www.pasteall.org/pic/show.php?id=22061

The result if stopped early with Esc, lol: many gradient procedurals, different rotations, and different mix masks.
http://www.pasteall.org/pic/show.php?id=22062

arexma, you are totally right. This is designed to give the appearance of occlusion at the highest quality possible (mimicking the real deal). I meant those terms in reference to images, since these are pixel manipulations.
http://www.pasteall.org/pic/show.php?id=22070

I can see this could be cool, but it needs to be presented better. I’m not brave enough to read those walls of text you posted.

Answer these three questions in one sentence each, please. :slight_smile:

  1. What is the current problem you are fixing?
  2. What does your project do/How does it work?
  3. How is it faster/better than the old system?

OK, time to maybe help some people understand, just a little bit.

His basic plan is to use procedural textures and mapping whose computed locations are derived from measurements of intersection points on meshes. A procedural gradient is then made from the last known point of shadow on the object (correct?) and “attached” to the UV map. These gradients could (theoretically) be resized in real time, so an AO baking session could come down to a few minutes of testing instead of hours of baking and rendering to see the results.

There was something else over there, but it was big, and stuff got mixed around a little bit…

I THINK that is the idea here, and it sounds pretty good. I am no coder, and my knowledge of vector math is limited, but it sounds kind of feasible…

  1. What is the current problem you are fixing?
  2. What does your project do/How does it work?
  3. How is it faster/better than the old system?

  1. The memory footprint of textures and images is usually horrendous; this can fix that.
  2. This project essentially turns any and all images into formulas, but it works with anything that is a string of numbers.
  3. Improved load and render times at virtually no cost.

The downside is coding a system that can detect a boatload of patterns quickly and efficiently, and recognize something by the types of patterns used. One way to solve that is a collection of smaller common images, or completely procedural math.

My opinion after reading your proposal is that this would be impractical. You want to:

  1. Analyse each image.
  2. Decompose it into a linear combination of procedural basis functions and store the weights instead.

The thing is that even if you could find appropriate basis functions to approximate an arbitrary image (which is a whole science on its own), I don’t think that analysis could be done in real time. Unless you know of universal functions that can act as a basis for any image (where you just take the “projection” of the image onto each function as its weight), but I think such universal functions are highly unlikely (intuitively speaking, though I could also argue it mathematically). Alternatively, you could store weights every few pixels, but that rather defeats the purpose of efficient storage.
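For what the decomposition would actually involve, here is a least-squares sketch with a tiny, arbitrary basis (a constant, two linear ramps, two sinusoids); the basis choice is mine, purely for illustration:

```python
# Sketch: approximate an image as a weighted sum of fixed "procedural"
# basis functions via least squares, then rebuild it from the weights.
import numpy as np

def basis_stack(h, w):
    y, x = np.mgrid[0:h, 0:w] / np.array([h - 1.0, w - 1.0])[:, None, None]
    return np.stack([
        np.ones((h, w)),        # constant level
        x, y,                   # linear gradients
        np.sin(2 * np.pi * x),  # one horizontal wave
        np.sin(2 * np.pi * y),  # one vertical wave
    ])

def decompose(img):
    """Weights that best reconstruct img from the basis (the 'formula')."""
    B = basis_stack(*img.shape)
    A = B.reshape(B.shape[0], -1).T              # pixels x functions
    weights, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    return weights

def reconstruct(weights, h, w):
    # Re-evaluating the basis at any h, w gives resolution independence.
    return np.tensordot(weights, basis_stack(h, w), axes=1)

img = np.random.rand(32, 32)
w = decompose(img)                 # five floats...
approx = reconstruct(w, 256, 256)  # ...but a very rough approximation
```

Which illustrates the objection: five fixed functions reconstruct almost nothing interesting, and finding a basis that does is the hard science the post refers to.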

And, contrary to your belief, procedurals aren’t cheap, which is the reason we don’t render them in the GLSL display when there is one in the material stack.

Ambient occlusion won’t be faster, because we have to find the occluders too, especially in animation, where object positions change and the occlusion of an object in one frame is not the same in the next. It’s not just a matter of adjacent face-to-face comparisons. Maybe what you are proposing is just a different approximation algorithm, but I don’t think you can make grand claims like “the method to end them all” without clearly written ideas and math, especially when there are hundreds of papers out there with proven results.

In any case I might be wrong so please give some serious references. And math. Math shows you know what you are talking about.

Actually, images are not a bunch of pixels but “formulas” if you use JPEG. What you see in a JPEG image is a lot of basic gradient shapes joined together, giving an approximation of the original. So you are basically saying you discovered JPEG Plus. I also remember a way to compress an image using fractals; I think the problem was that, at the time, generating the image took very long compared to JPEG. I don’t know what the result would be today on current hardware.
Any new compression method for images needs to give the same quality as JPEG at a smaller size, but that is for storage on disk. When the image is loaded to RAM, be it JPEG, PNG, TGA, BMP or whatever, it is decompressed and occupies a lot of RAM. I understand your method avoids this decompression, so that all access to the image happens in the compressed format. That is certainly interesting, and I just want to see the source code to check whether it really works that way or not.
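To illustrate the JPEG point, a small sketch of keeping only a few DCT coefficients of an 8x8 block and reconstructing from them (using SciPy’s DCT; the block here is random data, just to show the mechanics):

```python
# Sketch: JPEG-style storage of an 8x8 block as weights of fixed cosine
# "gradients" -- keep the strongest coefficients, rebuild from the formula.
import numpy as np
from scipy.fft import dctn, idctn

def jpeg_like(block, keep=10):
    """Zero all but the 'keep' largest-magnitude DCT coefficients."""
    coeffs = dctn(block, norm="ortho")
    cutoff = np.sort(np.abs(coeffs).ravel())[-keep]
    coeffs[np.abs(coeffs) < cutoff] = 0.0
    return idctn(coeffs, norm="ortho")

block = np.random.rand(8, 8)
approx = jpeg_like(block)   # ~10 numbers stand in for 64 pixels
```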