Question about Baking Basics

Hi, (hope this is the right place to post this)

I’ve used Blender for a while, mostly to pose human figure models that I use as references for sculptures that I make. So I don’t do much with animation. I recently purchased a model from TurboSquid and discovered it was too high-poly for me to work with. I went back to TurboSquid and they provided me with a low-poly version of the model, explaining that I would have to “bake the normals” from the high-poly version if I wanted the detail. I still have to find some good “baking” tutorials, but first I wanted to try to get an answer to a pressing question: I have to first “pose” the model in order to use it as a reference. But if I change the pose of the low-poly model, is it going to then “bake” together properly with the unposed high-poly version? Sorry, clearly I’m in the dark as to what baking is all about. Thanks so much for the help!

No.

Baking normals means saving the surface difference to an image, which can then be used to fake that detail with the shading on the lower-polygon surface. You could think of it as: how the low-detail surface would have to change to look like the high-detail one. If your bake target is a flat surface and there is a slanted surface above it, the bake saves the difference in their directions. If the surfaces are parallel, there’s no difference to record. If the surface above is perpendicular to the target, the difference is at its maximum.
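
To make the “difference of directions” idea concrete, here’s a tiny sketch (plain Python, just for illustration, not part of any Blender workflow) of the usual tangent-space convention: a per-pixel normal vector, with components in the range -1 to 1, is packed into RGB as n * 0.5 + 0.5. A point where the surfaces are parallel encodes to the flat blue (0.5, 0.5, 1.0) you see across most normal maps.

    # Sketch of the usual tangent-space normal map encoding: RGB = n * 0.5 + 0.5
    def encode_normal(n):
        """Pack a unit normal (components in -1..1) into RGB values in 0..1."""
        return tuple(round(c * 0.5 + 0.5, 3) for c in n)

    print(encode_normal((0.0, 0.0, 1.0)))    # parallel surfaces, no difference -> (0.5, 0.5, 1.0), the typical blue
    print(encode_normal((0.5, 0.0, 0.866)))  # surface tilted about 30 degrees toward +X -> red channel rises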

If you bake the unposed model onto the posed surface, it’s not going to work. You could try baking before posing, or posing with the low-poly model but using the same armature for the high-poly one, if it’s properly rigged. If it’s not, you could try the Surface Deform modifier, but that can get too heavy.
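
If you do go the shared-armature route, this is roughly what it amounts to in Blender’s Python API (a minimal sketch; the object names are made up, and it assumes the high poly already has vertex groups matching the bones):

    import bpy

    # Placeholder names, for illustration only: one armature deforming both meshes
    arm = bpy.data.objects["Armature"]
    for name in ("LowPoly", "HighPoly"):
        obj = bpy.data.objects[name]
        mod = obj.modifiers.new(name="Armature", type='ARMATURE')
        mod.object = arm  # both meshes now follow the same pose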

The baking workflow differs between render engines. Cycles has a cage option; the cage is usually just a duplicate of the low poly with inflated surfaces (Alt+S for Shrink/Fatten in edit mode) so that it encloses both the low and high poly. It acts as a projection surface for baking and helps to control the distances in areas where pieces of geometry are close to each other, like fingers.
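
As a rough idea of what the cage setup looks like when driven from Python (a sketch only; the cage object name and distance are placeholders, and it assumes the high poly is selected with the low poly as the active object):

    import bpy

    bpy.context.scene.render.engine = 'CYCLES'

    # Selected-to-active normal bake using a cage object
    bpy.ops.object.bake(
        type='NORMAL',
        use_selected_to_active=True,
        use_cage=True,
        cage_object="LowPoly_cage",  # inflated duplicate of the low poly
        cage_extrusion=0.05,         # fallback distance, used when no cage object is given
    )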

JA12, thank you for your answer. I have more to research than I realized. I’ve found some tutorials showing how to create a bump map from a high-res model and use it to fake detail. But for that to work on a human figure would require some complex mapping. I’m not sure what I’m talking about, so I’d better get back to the tutorials.

The devil is in the details. It’s hard to get the context without seeing what is being worked with, or to suggest anything specific without knowing what you’re trying to do.

There are many types of texture maps that can affect the appearance of a surface. Baking them has to happen with the high-detail surface and the low-detail surface occupying the same space, so that there is a difference to capture. That’s why you can’t bake a posed model against an unposed one: the two surfaces are far apart from each other. Simplified example: https://blenderartists.org/forum/showthread.php?318502-Best-way-to-bake-a-grid-of-cubes-on-to-a-plane&p=2520445&viewfull=1#post2520445

The baking results go onto an image that you create, and for that the bake target model (the low poly) needs a UV map. UV islands can’t overlap, and there should be a sufficient margin between them to leave room for the texture bleed.
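
In script form, the unwrap-plus-image part might look something like this (a sketch; it assumes the low-poly object is active, and the resolution and margin values are arbitrary):

    import bpy

    low = bpy.context.object  # the bake target (low poly), assumed active

    # Quick automatic unwrap with spacing between the islands
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')
    bpy.ops.uv.smart_project(island_margin=0.02)
    bpy.ops.object.mode_set(mode='OBJECT')

    # Image that will receive the bake result
    img = bpy.data.images.new("baked_normal", width=2048, height=2048)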

That’s for static detail. If the surface deforms when you pose it and you want some wrinkles to disappear as the skin stretches, for example, that’s another story and would require a setup to control the amount of texture detail.

A bump map is a greyscale image where each pixel value represents height. A normal map is an image that describes the direction (a vector) of the surface variation, with the vector components saved in the color channels of the image. The two accomplish the same thing, but a normal map is more accurate.
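
The relationship between the two can be seen in how a normal map is typically derived from a height map: take the slope of the height in x and y, tilt the normal accordingly, and pack the vector into the color channels. A rough numpy sketch of that idea (just the math, not a Blender feature):

    import numpy as np

    def height_to_normal_map(height, strength=1.0):
        """Convert a 2D height (bump) array into an RGB normal map in 0..1."""
        # Finite differences approximate the slope of the height field
        dy, dx = np.gradient(height.astype(np.float32))
        nx = -dx * strength          # steeper slope -> normal tilts further from +Z
        ny = -dy * strength
        nz = np.ones_like(nx)
        n = np.stack([nx, ny, nz], axis=-1)
        n /= np.linalg.norm(n, axis=-1, keepdims=True)
        return n * 0.5 + 0.5         # pack -1..1 vectors into 0..1 colors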

A displacement map is a greyscale image meant for moving actual geometry, not just faking it like the other two. You could use one with a Subdivision Surface modifier, if the model structure is suitable for that, plus a Displace modifier to get high-poly results in the viewport and render. Another option is to use Cycles and its experimental adaptive subdivision and microdisplacement, which tessellates and displaces the surface at render time based on the settings, the texture detail, and the distance from the camera.
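
For the modifier route, the setup is roughly this (a sketch; the image path, subdivision levels, and strength are placeholders that need tuning per model):

    import bpy

    obj = bpy.context.object  # the low-poly model, assumed active

    # Displacement texture; the path is a placeholder
    tex = bpy.data.textures.new("disp_tex", type='IMAGE')
    tex.image = bpy.data.images.load("//displacement.png")

    # Subdivide first, then displace the subdivided geometry
    subsurf = obj.modifiers.new("Subdivision", type='SUBSURF')
    subsurf.levels = 2          # viewport level, kept low to stay responsive
    subsurf.render_levels = 4   # render level

    disp = obj.modifiers.new("Displace", type='DISPLACE')
    disp.texture = tex
    disp.texture_coords = 'UV'  # drive the displacement with the model's UV map
    disp.mid_level = 0.5        # grey = no displacement
    disp.strength = 0.05        # scale of the effect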

JA12, thanks again for the info. The tutorial is a bit more than I’m able to take on at the moment, especially since I rarely, if ever, need to bake anything. I did, however, find this simpler tutorial: https://www.youtube.com/watch?v=0r-cGjVKvGw&t=139s . I’m very excited that I was actually able to complete the tutorial, except that the final results are not good, as per the attached screenshot. Would someone with more experience than I have be able to look at the screenshot and determine the cause of the failure? The figure should be a very muscular-looking male; instead he looks like some kind of crocodile person. Thanks again.

Attachments: (screenshot of the bake result)

The normal map shouldn’t be green, and both objects might be on top of each other in the render. That’s all I can tell from the screenshot.


A. The target doesn’t have a proper UV layout; the faces overlap.
B. When you bake, don’t have the image texture node connected to anything, because that creates a dependency cycle where the source for the bake is the same as the target. I would also disconnect the normal map node, just in case.


It needs to be unwrapped. For baking, automatic unwrap methods should suffice, like Smart UV Project (U -> Smart UV Project).
Disconnect the nodes and have the image texture node selected as the active node for baking.
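
Put together, the checklist from the last two posts looks roughly like this in script form (a sketch against the 2.8+ Python API; the object and image names are placeholders, and it assumes both models occupy the same space, the low poly is already unwrapped, and it has a material):

    import bpy

    low = bpy.data.objects["LowPoly"]    # bake target, placeholder name
    high = bpy.data.objects["HighPoly"]  # detail source, placeholder name

    # Image the bake will be written into
    img = bpy.data.images.new("baked_normal", 2048, 2048)

    # Add an Image Texture node to the low poly's material, leave it UNCONNECTED,
    # and make it the active node; the bake writes into the active image node
    mat = low.active_material
    nodes = mat.node_tree.nodes
    img_node = nodes.new('ShaderNodeTexImage')
    img_node.image = img
    nodes.active = img_node

    # Selected-to-active: high poly selected, low poly selected and active
    bpy.ops.object.select_all(action='DESELECT')
    high.select_set(True)
    low.select_set(True)
    bpy.context.view_layer.objects.active = low

    bpy.context.scene.render.engine = 'CYCLES'
    bpy.ops.object.bake(type='NORMAL', use_selected_to_active=True, margin=16)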

SUCCESS! JA, can’t thank you enough. Baking was such a foreign concept to me, I wasn’t sure if I’d ever understand it. But going through this process really helped. So cool that you can create such detail without all the excess processing strain. Now I just have to get the rig working. Cheers

You’re welcome. I answered in the thread itself so that anyone else reading it who has problems with baking would know what to check on their own.

I understand that you didn’t want to share the asset and sent a link to it privately instead. But otherwise it’s not good to discuss and troubleshoot privately with just one person, because you’ll be waiting on that one person’s answers instead of putting the problem in front of many eyes that can offer suggestions and corrections.

An alternative to sharing the whole asset is to cut most of it away, just enough to give a sample of the problem, use the Compress option in the File -> Save As dialog, and upload that. That way you’re not sharing your asset, just a piece of it for troubleshooting. In this case, though, cutting the high poly with booleans or otherwise would take a lot of memory.

Either way, I don’t keep or use assets that come from troubleshooting someone else’s file, and I haven’t heard of anyone else doing that either. Preparing a file by removing unnecessary geometry and/or objects should be enough to make it shareable publicly. Doing so also reduces the file size, along with enabling Compress, of course.
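
For reference, the Compress option is also available when saving from a script, e.g. (the filename is a placeholder):

    import bpy

    # Save a compressed copy for sharing; "//" means next to the current .blend
    bpy.ops.wm.save_as_mainfile(filepath="//sample_for_troubleshooting.blend", compress=True)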

Understood and agreed. Yes, you’re right, I went private because I didn’t want to get in trouble for sharing a model. Plus, in this case, I wouldn’t know which parts to eliminate and which to keep. Thanks again for your help.

A new question came up. I saved my successful bake file, but when I opened it again later (after transferring to another machine), my normal image was messed up (sorry, not sure how to explain). My render was a mess, and when I tried to restore it, Blender kept doing weird things like swapping normal images after a bake (again, sorry, I tried so many things that I can’t remember all the details). My question: is it necessary to “pack” the image with the Blender file before moving it? Also, does the saved PNG file contain the same detail as the normal map? In other words, can I load the normal.png for the render? Because some of my experimental normal maps turned black when they had previously been okay. (I may have neglected to change the image texture node from color to non-color data, or vice versa.) Thank you

Mmhh… I don’t want to get too deep into this topic, but regarding the last issue:

My first question would be: does it work on the original machine? This is only to eliminate the possibility of issues/differences between the two systems.

No. Surprisingly, the original file on the original machine did not work either, so it wasn’t an issue of differences between the systems. I was also using the same version of Blender. I was shocked, because the file was working great before I closed it. But upon re-opening, the render was a mess.

New and edited images are not saved with the .blend file; they need to be saved to disk separately (UV/Image editor, Image -> Save As). If you baked and didn’t save the image, it won’t be there when you open the file next time.

One workflow for painting or baking multiple images is to save each image to disk as you create it; after that, they can all be re-saved in one go, while working and afterwards, with any of the following (there’s also a small Python sketch after the list):

  • UV/image editor, image -> save all images,
  • from the tool shelf in texture paint mode,
  • or by searching “save dirty” from the search menu.
    That will save all dirty and naughty images.
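
The same kind of thing can be done from Python, which also covers the earlier question about packing (a sketch; the image name and path are placeholders):

    import bpy

    img = bpy.data.images["baked_normal"]  # placeholder name

    # Option 1: save the image to disk next to the .blend file
    img.filepath_raw = "//baked_normal.png"
    img.file_format = 'PNG'
    img.save()

    # Option 2: pack the image inside the .blend so it travels with the file
    img.pack()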

Cool. Not sure what dirty or naughty images means, but I’ve learned something new. Thanks for the help!