Making high-quality image textures

I want to create my own image textures. Most information is aimed at beginners and doesn’t cover how to make professional-grade, high-quality textures. I’m an amateur photographer, so I have a good DSLR, some lenses, and various other equipment. However, I haven’t used Gimp/Photoshop very much.

Very wide-angle lenses (like fish-eyes) would obviously cause distortion. But what about very long lenses? I would guess they’re okay, but not ideal, as each shot would only cover a very small part of an area. Is there an optimal focal length?

Is it better to take multiple shots and stitch them together with panorama software or just take one shot? Recommendations for software? Should I take a dozen or more shots really close to get better detail?

Any tips to avoid shadows when shooting textures indoors? Use some sort of diffuser over the lights/flashes?

What file format is best? I guess shoot in RAW and export in some lossless format.

What’s a good way to make a texture seamless? Some software does this automatically. I also saw someone offset the image by 50% so all the edges are in the middle and then use a clone brush to copy bits around. Is one way better than the others?

As for manually making most of the other maps, I know I should convert the image to grayscale and adjust the levels in Gimp/Photoshop, but what am I aiming for? What should the levels be? I know it will vary with every texture, but are there any guidelines? When do I know I have set it right?

How would I make an albedo map?

Just want to check I understand all the maps:

  • Diffuse/Color: the color of the texture. Color
  • Albedo: same as diffuse, except shadows and highlights are removed. Color
  • Reflection/Specularity: where the reflections are. Grayscale, NPR only
  • Roughness/Glossiness: what’s rough and what’s glossy. Grayscale, PBR only
  • Metalness: what parts are metal vs dielectric (eg. rust). Grayscale, PBR only?
  • Ambient Occlusion: adds shadows or grunge/dirt in crevices. Grayscale?
  • Bump: what’s bumpy; only the height, not the direction. Grayscale
  • Normal: what’s bumpy but contains direction. Purple
  • Displacement: deforms the mesh. Grayscale
  • Emission: What parts are emitting light. Grayscale? When is this used? Glow-in-the-dark slime on cave walls?

Any recommended software to automate the process? Anything else I should know?

Sorry for the long post.

The only use I have ever found for the Offset filter in PS is doing this :slight_smile:

The industry-standard software for doing this seems to be Allegorithmic/Adobe’s Substance Designer and/or Substance B2M, plus the new Substance Alchemist. While Designer is most often thought of as a tool for designing procedural materials, it does have tools/nodes for creating photo-based materials. B2M is more specifically for image-to-material workflows but is far less powerful, and its tools are in Designer anyway. Alchemist is new and tbh I haven’t tried it yet - but it’s being installed as I type this.

I suppose a lot depends on what type of material you wish to create textures for. I have only dabbled in creating photo-based textures from scratch. I made a wee material “scanner” so that I could use my smartphone to capture a series of images lit sequentially from set angles, and used Substance Designer to process them into a material. I’ll need to try some more.


Material Capture, Material Scanning, and Appearance Capture are all good search keywords. Here are a couple of great guides from Allegorithmic:



I don’t shoot textures, at least not anymore. But the longer the lens, the less distortion - though sometimes the long end of a zoom lens softens the image or reduces the contrast. If you’re not worried about that, then generally the longer the better, but you’ll often find yourself going shorter for practical reasons: getting the camera out of glare, limited shooting space. Don’t shoot wide open or fully stopped down; most lenses have a sweet spot where they produce the sharpest images.

I don’t need these kinds of textures much for the few times I do rendering at work, but I would generally prefer to create them in software like that mentioned above, where you get all the maps (in good definition) pretty much for free, rather than shooting and manipulating photos forever in order to extract somewhat relevant maps.


Long lenses are typically better than very wide ones, but they can also introduce distortion. The best lenses for texture photography (generally speaking) are prime lenses, as they tend to be the sharpest (because they have fewer optical elements). To find the sweet spot of your lens, just test it with a detailed, frame-filling subject and go through the focal lengths and apertures to find the best settings. Otherwise you can search for websites that publish tests like that online.

Definitely take multiple shots. Unless you have a very expensive camera and/or a very expensive lens, you will never achieve the same quality from a single shot that you can get from a panorama.

Photoshop’s panorama stitching is very good with 4-8 images, but the more images you add, the more distortion it will introduce, in my experience. A good alternative for stitching is photogrammetry software, because it tends to be way more accurate and way better at compensating for distortion. The only thing you need to watch out for is that photogrammetry software needs more source images than Photoshop to reconstruct a panorama.

Should I take a dozen or more shots really close to get better detail?

Yes. Take as many shots as you can reasonably justify. I say that because there is technically no limit to how many images you could take, but more images means more required disk space, more required RAM, longer processing times, etc. But you can always scale down, so it’s better to have too many shots than too few.

Raw processing: Lightroom, DxO PhotoLab, Photoshop (ACR), RawTherapee, PhotoScape

Editing: Gimp, Krita, Photoshop, Affinity Photo

Stitching: Microsoft ICE, 3DF Zephyr, Meshroom, Agisoft, etc.

(And Blender of course, but I’ll get to that in a moment…)

Yes (Within reasonable parameters).

Indoors can be tricky depending on the surface. Softboxes are great, but on the cheaper side, reflectors tend to work very well too. Otherwise one or two good flashes would be nice (but kind of expensive). If you can spare some money, maybe look into cross-polarized lighting, but that’s not really a necessity 99% of the time. (It can help a lot with reflective surfaces.)

Always shoot raw whenever you can. Convert it to 16-bit TIFF (8-bit should work fine too if 16-bit is too much for your PC; the height map should be 16-bit though).

The way you describe works pretty well if you don’t have software that does it automatically. Krita’s wrap-around mode is also pretty useful. While making it seamless, check how it looks tiled every now and then to catch spots that stick out too much. I recommend you clone from different parts of the image rather than from large areas.
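
If you want to do the 50% offset outside an image editor, something like this rough Python sketch should work (Pillow and NumPy assumed; the file names are just placeholders):

    # Offset a texture by 50% on both axes so the original borders meet in the
    # middle, where they can be cloned/healed out.
    import numpy as np
    from PIL import Image

    img = np.asarray(Image.open("brick_color.png"))   # placeholder file name
    h, w = img.shape[:2]

    # np.roll wraps the pixels around, which is what the Offset filter does.
    offset = np.roll(img, shift=(h // 2, w // 2), axis=(0, 1))
    Image.fromarray(offset).save("brick_color_offset.png")

    # Paint out the seams (now running through the centre) with a clone brush,
    # then roll back by the same amount if you want the original framing.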

Specularity and roughness maps are best done with a realtime preview. If you don’t have dedicated software, you can use Blender. Just plug the color or height map (or a mix of the two, whichever works best) into a color ramp node and then into the respective inputs of the Principled shader and play with the values until it looks good. To render the finished maps, go to the compositor, open your base texture, copy the color ramp nodes over, and save the output. Just make sure you are using sRGB (default color management) at base contrast; Filmic or anything else will mess with the values and give you a wrong output.
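
In case it helps, here is a rough bpy sketch of that shader setup (node and socket names as in recent Blender releases; the image path is just a placeholder, and this is only a starting point):

    # Image -> Color Ramp -> Principled roughness, for tuning with a live preview.
    import bpy

    mat = bpy.data.materials.new("TexturePreview")
    mat.use_nodes = True
    nodes, links = mat.node_tree.nodes, mat.node_tree.links

    principled = nodes["Principled BSDF"]                   # created by use_nodes
    tex = nodes.new("ShaderNodeTexImage")
    tex.image = bpy.data.images.load("//brick_color.png")   # placeholder path

    ramp = nodes.new("ShaderNodeValToRGB")                  # the Color Ramp node
    links.new(tex.outputs["Color"], ramp.inputs["Fac"])
    links.new(ramp.outputs["Color"], principled.inputs["Roughness"])
    # Tweak ramp.color_ramp.elements interactively until the preview looks right.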


Camera “raw” encodes are camera referred, and typically delivered in a normalized linear range.

Aside from polarization issues, it is as simple as taking the linearized data and scaling it against the destination albedo.

It really is nearly trivial.

Thanks for your help!

Glad I wasn’t the only one who couldn’t find other uses :wink:

Is B2M just a subset of Designer’s functionality with a UI specifically built for making image textures?

I have been considering buying all/some of the Substance programs (if B2M’s features are all in Designer, I may not need B2M). I’m torn between buying a perpetual license ASAP and avoiding Adobe (they sound unfriendly). Honestly, I would rather donate the money to Blender, Gimp, Krita, or ArmorPaint. I can’t afford a subscription right now, and I don’t like the idea of renting software anyway (I don’t have Creative Cloud; Gimp and Krita have been enough so far). However, Substance seems really good.

Off-topic, but how is Alchemist? Is it as good as the marketing says? What does it do? I’ve never figured out its purpose.

Thank you for the info! Some good starting points for further research.


Thanks for the tips!

Yeah, I have to agree with you on that.

Thank you for the in-depth reply! There’s a lot of info to digest there. Thanks a bunch.

What are the reasons behind that? (Just curious)

I wouldn’t have thought of that, thanks.

I didn’t understand that. I’ve barely touched the compositor yet, so maybe I’m missing something.

Thank you. I think I understood that in theory, however, I have no idea how to put that into practice.

Substance Designer has a node called Bitmap to Material Light, which contains enough features that you can plug in a source image and output diffuse, normal, spec, gloss, roughness, metallic, AO, and height, with a host of sliders to tweak everything. B2M itself has more options in this regard, but Designer dwarfs it in other features. Designer is overkill, I think, if all you are going to make are photo-based textures/materials. If you plan to add procedural elements, then it might be worth a look. They have demos of all their stuff.

I got Painter, then Designer as perpetual licences before Adobe took over. I’m one of those people who don’t get too bothered about Adobe though. I have been using their software in day to day work for over 20 years and have a CC account.

I have installed the beta of Alchemist, but still haven’t got round to trying it :stuck_out_tongue:

I’m not certain where your knowledge of cameras and encodings begins and ends, so I’ll keep it as short as possible. Feel free to respond if something doesn’t make sense.

Albedo is a measurement of the reflectivity of light from a theoretically perfect Lambertian diffuser. That is, if we had something close to a Lambertian diffuser, we could find the encoded value and roughly scale the results. On planet Earth, here in the real world, the closest and least expensive material we have to that is PTFE, such as Teflon. Plumber’s tape is a relatively inexpensive, common item, for example.

Using a single, relatively diffuse light source, and a piece of PTFE, we could capture a camera raw encode, evaluate the code value our camera encodes as 100% reflectivity, and then scale our camera values accordingly for other materials.

dcraw -T -4 input.camera_raw_file, for example, would encode a sixteen-bit linear REC.709-based TIFF. We could open that up in Blender, make sure the transform on the buffer is set to Linear, indicating linear REC.709, and evaluate the photo in question. The values on the left of the information pop-up are the ones we want to pay attention to, not the right side, which shows the values after the output camera rendering transform.

If we imagine our sampled code value is 0.87, we would need to scale our data upward from there, to put the code value at 1.0, or a hundred percent. This would be a simple multiply across all three channels. In this case, 1/0.87 gives us an acceptable approximation of the scale value we need to multiply our values by.

Now we pop a photo taken under the same lighting we used to capture our PTFE into the same setup, and scale our RGB by that value. From here, we need a bit of colour science to calculate a single-channel representation, as most albedos are single-channel approximations. Without boring you, the colour science weights for Blender’s default configuration would be something like:

AlbedoScale * Camera100PercentReflectanceScale * WeightedRGB = TargetAlbedo

Which is:

AlbedoScale = TargetAlbedo / (Camera100PercentReflectanceScale * ((0.2126 * R) + (0.7152 * G) + (0.0722 * B)))

Phew…

After you have your AlbedoScale, you can multiply your RGB by that, clamp at 0.0 and 1.0 to prevent glare from creating physically implausible values, and you have a pretty quick diffuse albedo calculation for REC.709.

Note: This is a purposefully over-simplified explanation, and assumes the camera raw encoding is decoded using the camera white balance such that the PTFE results in R=G=B. There are other more complex things to consider, however by and large this is a pretty simple and decent enough method to scale RGB values from a photo without glare to a simplified singular destination target albedo.
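
To make the arithmetic concrete, here is a tiny Python sketch of that scaling, following the simplified description above; every number in it is made up purely for illustration:

    # Toy example of scaling camera RGB toward a destination albedo.
    # 1) PTFE reference patch sampled from the linear 16-bit TIFF:
    ptfe_sample = 0.87
    camera_100_percent_scale = 1.0 / ptfe_sample            # ~1.149

    # 2) A pixel from the material photo, shot under the same lighting:
    r, g, b = 0.32, 0.28, 0.21
    r, g, b = (c * camera_100_percent_scale for c in (r, g, b))

    # 3) Single channel, REC.709 weighted:
    weighted = 0.2126 * r + 0.7152 * g + 0.0722 * b

    # 4) Scale toward a known destination albedo (0.25 is an arbitrary target),
    #    clamping to avoid physically implausible values from glare:
    target_albedo = 0.25
    albedo_scale = target_albedo / weighted
    albedo_rgb = [min(max(c * albedo_scale, 0.0), 1.0) for c in (r, g, b)]
    print(albedo_rgb)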


The height map should have a 16-bit color depth in order to avoid banding, which can create a stepping effect in the displaced mesh and in the normal map. Sometimes (depending on the software and algorithm you use), banding is reduced by dithering the result. In that case, the height map might have visible noise, which is reduced in 16-bit maps.
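
If you want to see the effect in numbers, here is a tiny NumPy sketch that quantizes a shallow height gradient to 8 and 16 bits and counts how many distinct levels survive (purely illustrative):

    import numpy as np

    height = np.linspace(0.0, 0.02, 4096)           # a very shallow height ramp

    q8 = np.round(height * 255) / 255               # 8-bit quantization
    q16 = np.round(height * 65535) / 65535          # 16-bit quantization

    print(len(np.unique(q8)), "levels at 8 bit")    # only a handful -> banding
    print(len(np.unique(q16)), "levels at 16 bit")  # plenty -> smooth displacement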

Example (right: 16 bit; left: 8 bit):

higher subdivision:

Concerning the workflow with the compositor, what I do is I import the color map and put it through a color ramp and into the roughness and specularity inputs:

Then I copy the values of the color ramp nodes and reproduce the outputs in the compositor:

Then I save the images that the compositor outputs.
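
If you prefer to script that step, a rough bpy sketch of the compositor setup could look like this (node names as in recent Blender versions; paths are placeholders):

    import bpy

    scene = bpy.context.scene
    scene.use_nodes = True
    nodes, links = scene.node_tree.nodes, scene.node_tree.links
    nodes.clear()

    img = nodes.new("CompositorNodeImage")
    img.image = bpy.data.images.load("//brick_color.png")    # placeholder

    ramp = nodes.new("CompositorNodeValToRGB")                # compositor Color Ramp
    links.new(img.outputs["Image"], ramp.inputs["Fac"])
    # Copy the ramp element positions/colours from the shader setup here.

    out = nodes.new("CompositorNodeOutputFile")
    out.base_path = "//maps/"                                 # placeholder folder
    links.new(ramp.outputs["Image"], out.inputs["Image"])
    # Render once so the File Output node writes the map to disk.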


Thanks for the info.

I’ve never dealt with Adobe, so I’m going by other people’s opinions. And it’s said that people are more likely to complain than compliment, so it’s good to hear from someone who doesn’t think Adobe is evil personified. Although I would much rather support open-source software.

I think it’s the opposite.


I learned how to make my own textures from this book:

The edition I had was one of the old ones. Even though it was old, it gave me the knowledge I needed to take my own pictures. The book also covers painting textures. It contains lots of information that is not obvious without it.

I highly recommend the book to learn how to make your own textures.


Minimal/basic. I know more about using a camera than how it works or encodes the image.

I would like to take you up on that at some point. However, there are a few things I can look into on my own (Lambertian diffusers etc.). Thank you for helping me understand!

The best thing you can do is to properly demosaic and decode the camera encoding to a 16-bit TIFF. This gives the massaged-by-camera results, typically encoded into a linearized minimum-to-maximum ratio. DCRaw can be used for this in many cases, even though it is discontinued. The simple command is:

dcraw -T -4 camera_encoded_file.raw

The resulting TIFF should be sixteen bit integer, with a linearized ratio encoding using REC.709 primaries. This is the most direct and basic decoding of the camera raw file.

From there, load the image and make sure the colourspace on the image is set to “Linear.” Sample the values to get a feel for the ratios in the file. Again, pay attention to the RGB values on the left in the black information bar. If you shoot a white piece of paper, you can expect it to diffusely reflect anywhere from about 85% to 95% of the light. Sample the values and note what the file encoding reads. It’s also important to use the compositor, as it is the only method in Blender that automatically promotes the encoded image data to float and displays the properly transformed values.
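
If you would rather check the values outside Blender, a small Python sketch along these lines should work (imageio assumed installed; the file name, the sampled region, and the roughly 90% paper reflectance are placeholders/assumptions):

    # Read the 16-bit linear TIFF produced by dcraw -T -4 and sample a patch.
    import numpy as np
    import imageio.v3 as iio

    img = iio.imread("paper_reference.tiff").astype(np.float64) / 65535.0

    # Average a small region covering the white paper / PTFE reference:
    patch = img[1000:1100, 1500:1600, :3].mean(axis=(0, 1))
    print("mean linear RGB of reference patch:", patch)

    # If the paper diffusely reflects roughly 90%, a perfect 100% reflector
    # would encode at about patch / 0.9, so the scale to apply is:
    scale = 1.0 / (patch.mean() / 0.9)
    print("approximate 100% reflectance scale:", scale)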

Does anyone use OpenEXR for texturing?