Bake Wrangler - Node based baking tool set

The main function is really tiny:

for is_alpha in alpha_it:
    if not is_alpha:  # only fill pixels that have no baked data
        snap = px_offsets + alpha_it.multi_index  # snapshot indexes into the padded arrays
        hit_count = np.count_nonzero(exp_alp[snap[..., 0], snap[..., 1]])  # count non-alpha pixels in the snapshot
        if hit_count:
            pxd = exp_px[snap[..., 0], snap[..., 1]]  # pixels in the snapshot
            pxc = pxd[..., -1] > 0.9  # only take pixels with alpha above this value
            px_sum = np.sum(pxd[pxc], axis=0)  # renamed from 'sum' to avoid shadowing the built-in
            val = px_sum / hit_count
            pixel_out[alpha_it.multi_index] = val  # write averaged pixel

I will paste you the rest of the setup code in a message.
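
Roughly though, the setup amounts to something like this (a simplified sketch with placeholder values, not the exact code):

import numpy as np

margin = 16                                       # margin radius in pixels (placeholder)
img_px = np.zeros((2048, 2048, 4), np.float32)    # placeholder RGBA source image
alpha = img_px[..., -1] > 0.0                     # mask of pixels that have baked data

# pad the pixel data and alpha mask by the margin so offset indexing can't go out of bounds
exp_px = np.pad(img_px, ((margin, margin), (margin, margin), (0, 0)))
exp_alp = np.pad(alpha, margin)

# grid of index offsets covering the (2*margin+1)^2 snapshot around each pixel
oy, ox = np.mgrid[0:margin * 2 + 1, 0:margin * 2 + 1]
px_offsets = np.stack((oy, ox), axis=-1)

pixel_out = np.zeros_like(img_px)
alpha_it = np.nditer(alpha, flags=['multi_index'])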

I've messed around a fair bit with different methods to get to this point, but I could have easily gone completely off track along the way… I think the main slow points are currently the hit_count line and the sum…

I managed to get it down to 10 seconds on my system with a single thread. But that's with summing all the pixels in range, which doesn't really give accurate results. Summing only the x closest still adds a lot of extra time.
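
The "x closest" version basically comes down to sorting the offsets by distance once up front and stopping after the first few hits. A rough sketch of what I mean (sample_count and the flattened, pre-sorted offsets are placeholders, not my actual code):

import numpy as np

# flatten the offset grid and sort it by distance from the centre pixel,
# so the first hits found are also the closest ones
dists = np.linalg.norm(px_offsets.reshape(-1, 2) - margin, axis=-1)
flat_offsets = px_offsets.reshape(-1, 2)[np.argsort(dists)]

sample_count = 8  # hypothetical number of closest pixels to average

def nearest_average(index):
    snap = flat_offsets + index                      # absolute indexes into the padded arrays
    hits = exp_alp[snap[:, 0], snap[:, 1]]           # which samples land on baked pixels
    close = np.nonzero(hits)[0][:sample_count]       # first (closest) hits only
    if close.size == 0:
        return None
    return exp_px[snap[close, 0], snap[close, 1]].mean(axis=0)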

Need to think about how I can do it faster still…

(My benchmark is a 2k normal map with a 16px margin.)

Well, I've got it about as fast as I think I'm going to… which is plenty fast enough for smaller margin sizes, but if you start wanting really thick margins it's not great…

16px: 6s
32px: 17s
64px: 90s

It's quite fast at rejecting pixels that won't be inside the margin radius, but it is quite slow when it comes to calculating the colour of pixels that are very far from an edge. So the larger the margin size, the more pixels it has to calculate, and more of them are far away…

I can probably add a short circuit to the inner loop for pixels where the number of possible matches is close to the target sample size. But that will still only speed up the ones at the very edge of the margin.

A solution is of course to put the output back through the process again. That should mean that going in 16px steps would take 24s to do 64px. But then each iteration is using the previous margin's colours instead of the original image's… Is that actually a problem, though?
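
In code terms that would just mean looping a single-pass fill and feeding each result back in, something like this (fill_margin here is a stand-in for the whole routine above, not an actual function in the script):

def grow_margin(pixels, alpha, total_margin, step=16):
    # run the single-pass fill repeatedly, each pass extending the margin by
    # `step` pixels and using its own output as the source for the next pass
    done = 0
    while done < total_margin:
        pixels, alpha = fill_margin(pixels, alpha, step)  # hypothetical single-pass fill
        done += step
    return pixels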

Here is where I am at with creating margins:


This can be created quite fast, but is done in 3 passes. You can see where the first pass ends quite easily in this image. The border between the 1st and 2nd pass isn't so obvious, but the pixels have all become a bit blurry. I've also picked the most obvious place in the image to show; in other places there is no noticeable difference between a single pass and multiple iterations.

Using this method I can completely fill the gaps in a map pretty fast. On my system it takes about 20 seconds for your average map to be filled. If I fiddle around to find a sweet spot for how many pixels to add every iteration, this can probably get a bit faster.

I can also create margins of various sizes. The setup means there is a fixed base time cost: anything under 16px takes about the same amount of time (5 seconds).

So my question is: Are these margins acceptable? (I guess I kinda need to actually add them to a test version)

Umm. If I'm reading that right, then that margin is not very good. The pixels beyond the edge are just nowhere near the same color as the greenish-pinkish bake. Ideally you wouldn't be able to tell where the bake ends and the margin begins.

There are a couple of pixels of blue around the edge, so the first band is correct. This is the area before:

Here is a better image. The middle is my script, the end is Blender, the start is the source. As you can see the colours are similar, but somehow mine angle away from the source…

If the left image is the source, I can't understand where these pink and green pixels come from, because the border pixels on the left pic are blue.

I found a reason why UDIM does not bake in my case: I bake in 32-bit and save the image in 16-bit. Also, randomly, if I bake a lot of meshes, some meshes take the wrong UDIM tile and bake over a tile that isn't their own.

Why does going from 32-bit to 16-bit cause problems?

I would think that if a mesh ends up on the wrong tile, it's most likely that the wrong UV map is selected, or maybe there is something wrong with the UVs. For example, a mesh could have a single island that covers more than 1 tile, or it could have UVs that aren't in the correct range for UDIM, like negative UV values or ones outside the tile range…
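
For reference, the tile is determined purely by the integer part of the UVs (1001 is the 0-1 tile, +1 per tile in U across 10 columns, +10 per tile in V), so negative values or anything at U >= 10 simply has no valid tile. A quick way to check where a UV should land:

import math

def udim_tile(u, v):
    # UDIM numbering: 1001 is the 0-1 tile, +1 per tile in U, +10 per tile in V
    if u < 0 or v < 0 or u >= 10:
        return None  # outside the valid UDIM range
    return 1001 + math.floor(u) + 10 * math.floor(v)

print(udim_tile(0.5, 0.5))  # 1001
print(udim_tile(1.2, 0.3))  # 1002
print(udim_tile(0.5, 1.5))  # 1011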

@unv Those are caused by the pixel having no adjacent colour. When it looks further away it is able to include both the edge and at least one pixel behind the edge in the samples, which causes the colour bleed. That said, there is something not quite right with the way the colours move off diagonally. I think the problem is with diagonal pixels getting included where they shouldn't…

The thing is, I can't really sample between two pixels. So I have to approximate radial sampling by deciding which pixels are included at each level. I suppose the other option would be to give them a weight, but that's basically the same thing.

Looking over Blender's source, it uses Manhattan distance to weight pixels… which should also be wrong, as it would greatly lower the weight of diagonals… >_<

Yep, so if I use Manhattan distances as weights I get results very close to what Blender produces… I think I need to attempt to come up with a more accurate weighting. But right now I can produce margins very similar to Blender's for any bake, at the cost of around 5-20 seconds for a small margin to a complete infill.
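
To be clear about what the weighting means here: it's just a weighted average of the hits in the snapshot, where the weight falls off with distance from the pixel being filled. A sketch of the idea (the inverse-distance falloff is only an illustration, not Blender's exact formula or mine):

import numpy as np

def weighted_fill_value(snap, margin, metric='manhattan'):
    rel = snap - snap[margin, margin]             # offsets relative to the centre pixel
    if metric == 'manhattan':
        dist = np.abs(rel).sum(axis=-1)
    else:
        dist = np.sqrt((rel ** 2).sum(axis=-1))   # Euclidean
    hits = exp_alp[snap[..., 0], snap[..., 1]]    # which samples land on baked pixels
    w = np.where(hits, 1.0 / np.maximum(dist, 1), 0.0)
    if w.sum() == 0:
        return None
    pxd = exp_px[snap[..., 0], snap[..., 1]]
    return (pxd * w[..., None]).sum(axis=(0, 1)) / w.sum()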

I wrote this little thing to visualise some possible pixel weights: https://replit.com/@netherby/SamplePattern#main.py

Blender uses the 'Manhattan' set (which looks pretty wrong to me). I think the Euclidean rounded or ceiling weights look like the best fit…
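
If that link ever dies, the gist is just assigning every offset a 'level' under the different metrics and comparing the patterns, roughly along these lines:

import numpy as np

radius = 4
y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
euclid = np.sqrt(x ** 2 + y ** 2)

levels = {
    'manhattan': np.abs(x) + np.abs(y),            # diagonals pushed to much higher levels
    'euclid_round': np.round(euclid).astype(int),
    'euclid_ceil': np.ceil(euclid).astype(int),
}
for name, grid in levels.items():
    print(name)
    print(grid)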

Alright, well this is it… I've uploaded BakeWrangler_1_2_RC1.zip to all the places.

This version includes the new margin generator (see the icons next to the margin size setting). The time it takes to create the margin will vary depending on the thickness of your margin and how much empty space is in your texture. Well-packed textures are much faster to process than ones with tons of gaps. The resolution will of course also play a large part. That said, a decently packed UV layout shouldn't take much over 20 seconds to completely fill, but a large, mostly empty texture could take a couple of minutes if you want to completely fill it.

This is everything for v1.2 now, except for any bug fixes obviously!

The future now really depends on what's going to happen with Cycles X and the promised baking API. I do have a few more features I want to add at some point, but we will see what happens with CX first.

I need to spend some time fixing up the documentation and making some tutorials or something?

Let me know what you all think about the margin thing… I wish it was faster… But I've already spent so much time optimising it, I have to just go with what I've got at some point >_<

Wow! The list of improvements is impressive! It would be great to have at least brief documentation for the new features, with examples.

Mac OS 10.14.6 (Intel), Blender 2.93.4. Simply enabling the 1.2 RC1 add-on (from the zipped archive) gives me this error:

Bake Wrangler v1.1.1 works great, by the way. So this is something new.

There are a lot of changes to things. Try first disabling the add-on, then removing it, then restarting Blender before attempting to install the new version.

Wow, look at that! Disabled v1.1.1, quit Blender. Relaunch, Remove 1.1.1, quit Blender. Relaunch, install RC1 and it works! Thanks.

Cycles X was just merged into the main 3.0 branch. Was there an updated bake API, or is it the same one?

All I can find is that they added adaptive sampling and denoising as options.

Hi Netherby.

I found out that a few quick tricks can significantly reduce baking time in your addon:
just making everything single-user and merging all the high-poly and low-poly models. Maybe you could add a button to automate this process while baking.

Take a look at my Blender scene:

Hi @kamilozo, I know that merging objects improves render times, but it's not something that can be used in all cases. The main problems are that it will break the normals of objects that use different auto smooth settings and potentially mess up UV maps.

You also wouldn't normally want to merge the target objects.

That said, I could add an option to merge things somewhere…
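
If I do, it would probably boil down to something like this (a very rough sketch using the standard join operators, not what the add-on actually does):

import bpy

def join_copies(objects):
    # duplicate the chosen objects, make the copies single-user and join them
    # into one temporary mesh for baking (the normals/UV caveats above still apply)
    bpy.ops.object.select_all(action='DESELECT')
    for obj in objects:
        obj.select_set(True)
    bpy.context.view_layer.objects.active = objects[0]
    bpy.ops.object.duplicate()
    bpy.ops.object.make_single_user(object=True, obdata=True)
    bpy.ops.object.join()
    return bpy.context.view_layer.objects.active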

@Lamia the only changes to baking that I'm aware of are the support for denoising and adaptive sampling that you mentioned.

So then no bake API changes, just updated performance?

I'm curious now how much time can be saved with denoising and adaptive sampling plus Cycles X.