Atarivandios here. I’ve been texturing for a long time and have always had the feeling that there was an easier way. Procedural textures are amazing, but difficult to master, especially to use efficiently. I’ve been noticing the growing number of software packages built to deal with this particular problem.
Number one for me is Allegorithmic’s MapZone.
Number two would be NeoTextureEdit.
Number three is Perfect Resize 7.
Number four amazes me: the Genuine Fractals 6 plugin for Photoshop.
The features I am talking about are: loading procedurals as plugins; automatic conversion of an image into a combination of procedurals, either resolution-independent or at the best quality for a specified size (useful for game-engine loading times and memory efficiency, where the size-efficient option forces conversion back to pixel images); and storing the procedural result in a format usable anywhere in Blender that takes a material, including realtime display. The preferred way to save would be the same plugin format mentioned above, which would make future map production faster and reduce memory use further. This is essentially how procedurals are stored now, just split out into a separate module like a script; Python script-ability for this would be very welcome if it is not already present.
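To make the idea concrete, here is a minimal sketch of what such a plugin layer could look like in Python. Every name here (the classes, the registry, the recipe dictionary) is hypothetical and is not an existing Blender or NeoTextureEdit API; the point is only that a texture becomes a stored recipe that is rasterised to pixels when an engine actually needs them.

```python
# Hypothetical sketch: each procedural is a small module exposing evaluate(),
# so a texture is stored as a recipe (plugin name + parameters), not as pixels.

class ProceduralPlugin:
    """Base class for a resolution-independent procedural texture."""
    name = "base"

    def evaluate(self, u, v, params):
        raise NotImplementedError

class CheckerPlugin(ProceduralPlugin):
    name = "checker"

    def evaluate(self, u, v, params):
        scale = params.get("scale", 8.0)
        return float((int(u * scale) + int(v * scale)) % 2)

REGISTRY = {}

def register(plugin_cls):
    """Load a procedural as a plugin so recipes can refer to it by name."""
    REGISTRY[plugin_cls.name] = plugin_cls()

def bake(recipe, width, height):
    """Rasterise a stored recipe to pixels only when an engine needs them."""
    plugin = REGISTRY[recipe["plugin"]]
    return [[plugin.evaluate(x / width, y / height, recipe["params"])
             for x in range(width)] for y in range(height)]

register(CheckerPlugin)
pixels = bake({"plugin": "checker", "params": {"scale": 4.0}}, 64, 64)
```

Because the recipe is just data, the same texture could be rasterised at any resolution, or kept procedural for engines that can evaluate it directly.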
If the above were done, converting high-res models to low-res models would be much more efficient in both time and memory. For example, one could compute AO bakes procedurally instead of relying on samples and waiting for baked renders (another useful AO approximation method); shadows could work the same way; and if an image were detected not to be procedural to begin with, it could first be converted and then used to produce the texture bakes as desired. I’ve even heard of packages that do automatic material completion for image-based models by highlighting faces that should share the same material, so that material holes created by camera projection are filled in automatically. That problem could be solved by choosing to auto-complete when baking a texture whenever an image hole is detected by a simple test of the UVs (based on selecting UVs and assigning separate materials, as is already possible). A default key color (like the one used by green screens) could simplify the coding by allowing multiple passes over the UV map, if that turns out to be easier than the material-assignment method above.
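As a rough illustration of that “green screen” hole test, here is a hedged sketch in Python: texels still carrying a known default key color after projection baking are treated as holes and filled from their neighbours. The key color, the data layout, and the fill strategy are all assumptions made for the example, not existing Blender bake code.

```python
# Illustrative hole fill: pixels left at the key colour after camera projection
# are treated as holes and filled from non-key neighbours in repeated passes.
KEY = (0, 255, 0)  # assumed default key colour written into uncovered texels

def fill_holes(image):
    """image is a list of rows of RGB tuples; returns a filled copy."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    changed = True
    while changed:
        changed = False
        for y in range(h):
            for x in range(w):
                if out[y][x] != KEY:
                    continue
                neighbours = [out[ny][nx]
                              for ny in (y - 1, y, y + 1)
                              for nx in (x - 1, x, x + 1)
                              if 0 <= ny < h and 0 <= nx < w
                              and out[ny][nx] != KEY]
                if neighbours:
                    # average the surrounding non-key texels into the hole
                    out[y][x] = tuple(sum(c[i] for c in neighbours) // len(neighbours)
                                      for i in range(3))
                    changed = True
    return out
```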
I’ve programmed before, so I will address the coding issues here. Basic correlation graphing and approximation would handle this well enough, at the cost of time. For example: step through each setting in hundredths until the result correlates without going over (similar to a hand-rolled square-root function), then repeat with finer steps further past the decimal, do the same for each procedural plugin available, and then for all the available mix methods until the best result is found (math nodes with formulas and the individual RGB channels have to be considered as well); a sketch of that search loop follows below. If one doesn’t want to do any of this procedurally, the same can be achieved with fractal algorithms, for which numerous whitepapers are available; that method is equally size-independent and, depending on the implementation, often more memory-efficient. However, for baking you would first have to produce a baked render the current way before processing, which would heavily complicate the image-based-modelling texture bakes mentioned above; on the other hand, the coding might be simpler and would lay the foundation for the procedural implementation later.
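Here is a small sketch of that coarse-to-fine search, assuming the target is a flat list of pixel values and each candidate procedural is exposed as a function from one parameter to such a list. The `error` metric, the step sizes, and the parameter range are placeholders chosen for illustration, not a proposed final design.

```python
# Hedged sketch of the brute-force fit: for each candidate plugin, step one
# parameter in hundredths until the match score stops improving, then refine
# with finer steps, and keep the best plugin/parameter pair overall.

def error(candidate_pixels, target_pixels):
    """Sum of absolute per-pixel differences; lower means a better match."""
    return sum(abs(a - b) for a, b in zip(candidate_pixels, target_pixels))

def fit_parameter(render_fn, target, lo=0.0, hi=10.0):
    """Coarse-to-fine 1D search, analogous to a hand-rolled square root."""
    best_value, best_err = lo, error(render_fn(lo), target)
    for step in (0.01, 0.0001):                 # hundredths, then finer
        value = best_value
        while value + step <= hi:
            err = error(render_fn(value + step), target)
            if err > best_err:                  # stop before "going over"
                break
            best_value, best_err = value + step, err
            value += step
    return best_value, best_err

def fit_texture(target, plugins):
    """Try every candidate plugin; keep the one whose fitted value matches best."""
    results = [(plugin, *fit_parameter(plugin, target)) for plugin in plugins]
    return min(results, key=lambda r: r[2])
```

The same loop would have to be wrapped again around the mix methods and per-channel combinations, which is exactly where the time cost mentioned above comes from.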
If people are interested in this, I can do screenshots of what a prototype would look like, since this can already be done by eye today, just very slowly. I would code it myself, but I am efficient rather than proficient: I would first have to review a lot of code and make charts and maps (in case current functions do not work as desired) to understand what has already been done. The good news is that NeoTextureEdit is open source, so what would be needed is a large Python script to translate the application as it is currently written (still easier said than done). Thank God it too is built around OpenGL, and the coding situation is also helped by the fact that the application is written in a C-like language to begin with.
The other good news is that, if the fractal and procedural methods mentioned above were implemented, the new motion-tracking system could be used to identify the different textures in a scene (based on contrast); the fractal method could then synthesize the detail that is missing (simpler than cutting out parts, skewing them square on a plane, and running a fractal detail generator); and the procedural method could convert those textures automatically into tileable images for use on any model, making the tedious “still camera on site” technique obsolete. These methods are currently spread across anywhere from three to eight separate packages, not all of them cross-platform and most of them closed source. Bringing them together would make Blender the most advanced package for 3D materials currently available.

The procedural method can also be used independently of UVs, similar to PTex but without being as complex. That would drastically reduce production time for Mango and future projects by making compositing easier, since materials would already conform to the style of the video shot (keep in mind that in a procedural transfer you can select the individual layers and remove the gradients that carry the image lighting, if the size-efficient route does not make them tileable). It would go hand in hand with the recent motion-tracking addition, could serve as a powerful new approximation method for Cycles, potentially letting images that need three to five thousand passes render in seconds, and could make image-based texturing obsolete by bringing current material methods and workflow out of the nineties and into the future. Next-gen game engines are already doing this, and much as in the past, that is a strong sign of where digital-animation workflow is heading. These steps can be taken now; if not, I can guarantee they will be introduced to Blender in the future, but it will be more challenging, since any addition made past this point makes the transition more time-consuming, possibly even bigger than the move to Blender 2.5, potentially costing a lot of unnecessary development time and effort.
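For the “convert to a tileable image” step, one common approach (only one of several possible ones) is to wrap the image by half its size so the former borders meet in the middle, then cross-fade that seam against the original. The sketch below applies that idea to a nested list of grey values; it is illustrative, not a proposal for the actual implementation.

```python
# Illustrative tiling sketch: wrap by half so the seams move to the centre,
# then blend a band around the new seams with the unshifted original.

def make_tileable(image, blend=8):
    h, w = len(image), len(image[0])
    # Step 1: wrap the image by half, putting the former borders in the middle.
    shifted = [[image[(y + h // 2) % h][(x + w // 2) % w] for x in range(w)]
               for y in range(h)]
    # Step 2: cross-fade a band around the new centre seams with the original,
    # which hides the hard edge the wrap introduces; the outer borders stay
    # purely shifted, so the result tiles.
    out = [row[:] for row in shifted]
    for y in range(h):
        for x in range(w):
            d = min(abs(x - w // 2), abs(y - h // 2))
            if d < blend:
                t = d / blend                    # 0 at the seam, 1 outside the band
                out[y][x] = shifted[y][x] * t + image[y][x] * (1 - t)
    return out
```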
The above methods can, most of the time (see the web pages of the packages mentioned above), mimic the current workflow perfectly. The transition at this point would go almost completely unnoticed, with the exception of five to ten new buttons, and even that is an overstatement, since several of those would be the same button simply placed in a different Blender window (UV, Material, Cycles render area, etc.). These features, though apparently simple, would truly revolutionize the 3D world. As a long-time Blender user, I would be proud, when my employers say “wow, that’s magic,” to answer “no, it’s Blender 3D.”