for is_alpha in alpha_it:
    if not is_alpha:
        snap = px_offsets + alpha_it.multi_index  # snapshot indices around this pixel
        # Count opaque pixels in the snapshot
        hit_count = np.count_nonzero(exp_alp[snap[..., 0], snap[..., 1]])
        if hit_count:
            pxd = exp_px[snap[..., 0], snap[..., 1]]  # pixels in snapshot
            pxc = pxd[..., -1] > 0.9  # only take pixels with alpha above threshold
            hits = np.count_nonzero(pxc)
            if hits:
                # Divide by the number of pixels actually summed (hit_count uses
                # a different alpha threshold, so the two counts can mismatch)
                pixel_out[alpha_it.multi_index] = np.sum(pxd[pxc], axis=0) / hits
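Since the setup code isn't shown here, a minimal sketch of what this loop plausibly assumes (the padding, thresholds, and offset construction below are my guesses, not the actual BakeWrangler code; only the variable names come from the snippet):

```python
import numpy as np

margin = 4  # margin radius in pixels (assumed)
img = np.random.rand(32, 32, 4).astype(np.float32)  # RGBA test image

# Pad the image so snapshot indices near the border stay in range
exp_px = np.pad(img, ((margin, margin), (margin, margin), (0, 0)))
exp_alp = exp_px[..., -1] > 0.5  # opaque mask of the padded image

# (dy, dx) offsets of the square snapshot window; since they run
# 0..2*margin, adding an unpadded pixel's multi_index to them indexes
# straight into the padded arrays
oy, ox = np.mgrid[0:2 * margin + 1, 0:2 * margin + 1]
px_offsets = np.stack([oy, ox], axis=-1)  # shape (2m+1, 2m+1, 2)

pixel_out = img.copy()  # output image, filled in place
is_opaque = img[..., -1] > 0.5  # True where the pixel already has colour

# nditer with multi_index yields each mask value plus its (row, col)
alpha_it = np.nditer(is_opaque, flags=['multi_index'])
```

The loop in the snippet above would then run over `alpha_it`, filling only the pixels where the mask is False.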
I will paste you the rest of the setup code in a message.
I've messed around a fair bit with different methods to get to this point. But I could have easily gone completely off track along the way… I think the main slow points are currently the hit_count function and the sum…
I managed to get it down to 10 seconds on my system with a single thread. But that's with summing all the pixels in range, which doesn't really give accurate results. Summing only the x closest is still adding a lot of extra time.
Need to think about how I can do it faster still…
Well I've got it about as fast as I think I'm going to… Which is plenty fast enough for smaller margin sizes, but if you start wanting really thick margins it's not great…
16px: 6s
32px: 17s
64px: 90s
It's quite fast at rejecting pixels that won't be inside the margin radius. But it is quite slow when it comes to calculating the colour of pixels that are very far from an edge. So the larger the margin size, the more pixels it has to calculate, and more of them are far away…
I can probably add a short circuit to the inner loop for pixels that only have a number of possible matches close to the target sample size. But that will still only speed up the ones at the very edge of the margin.
A solution is of course to put the output back through the process again. That should mean if you go in 16px steps it would take 24s to do 64px. But then each iteration is using the previous margin's colours instead of the original image… Is that actually a problem though?
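The iterative idea can be sketched like this. `add_margin` here is a stand-in for the real margin pass (it just dilates opaque pixels one step at a time, and `np.roll` wraps at the borders, which is fine for a sketch); the point is the outer loop, where each pass reads the previous pass's output:

```python
import numpy as np

def add_margin(rgba, step_px):
    """Stand-in for the real margin pass: grow opaque pixels outward
    by step_px pixels, copying neighbour colours into empty pixels."""
    out = rgba.copy()
    opaque = rgba[..., 3] > 0
    for _ in range(step_px):
        grown = opaque.copy()
        # Shift the opaque mask in 4 directions and copy colours inward
        for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
            src = np.roll(out, shift, axis=axis)
            src_mask = np.roll(opaque, shift, axis=axis)
            take = src_mask & ~grown  # empty pixels with an opaque neighbour
            out[take] = src[take]
            grown |= src_mask
        opaque = grown
    return out

def margin_in_steps(rgba, total_px, step_px=16):
    """Build a thick margin as several thinner passes. Each pass only
    ever reads the output of the previous pass, so the outer rings
    sample the synthesized margin rather than the original bake."""
    out = rgba
    done = 0
    while done < total_px:
        out = add_margin(out, min(step_px, total_px - done))
        done += step_px
    return out
```

Whether sampling the previous margin instead of the original image matters is exactly the open question above; the structure at least makes the cost linear in the number of steps.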
This can be created quite fast, but is done in 3 passes. You can see where the first pass ends quite easily in this image. The border between the 1st and 2nd pass isn't so obvious, but the pixels have all become a bit blurry. I've also picked the most obvious place in the image to show. In other places there is no noticeable difference between a single pass and multiple iterations.
Using this method I can completely fill the gaps in a map pretty fast. On my system it takes about 20 seconds for an average map to be filled. If I fiddle around to find a sweet spot for how many pixels to add every iteration, this can probably get a bit faster.
I can also create margins of various sizes. The setup means there is a fixed base time cost, where anything under 16px takes about the same amount of time (5 seconds).
So my question is: Are these margins acceptable? (I guess I kinda need to actually add them to a test version)
Umm. If I'm reading that right, then that margin is not very good. The pixels beyond the edge are just nowhere near the same colour as the greenish-pinkish bake. Ideally you wouldn't be able to tell where the bake ends and the margin begins.
Here is a better image. Middle is my script, end is Blender, start is source. As you can see the colours are similar, but somehow mine angle away from the source…
I found a reason why UDIM does not bake in my case: I bake in 32-bit and save the image in 16-bit. Also, randomly, if I bake a lot of meshes, some meshes take the wrong UDIM tile and overbake on top of a tile that isn't theirs.
I would think it's most likely that if the mesh ends up on the wrong tile, the wrong UV map is selected, or maybe there is something wrong with the UVs. For example, a mesh could have a single island that covers more than one tile, or it could have UVs that aren't in the correct range for UDIM, like having negative UV values or ones outside the tile range…
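A quick way to sanity-check UVs for that kind of problem is to map each UV coordinate to its UDIM tile number and flag negative coordinates. This is a plain-Python sketch (the `udim_tile`/`check_uvs` helpers are mine, assuming the standard 1001-based tile convention); in Blender you would feed it the coordinates from the mesh's active UV layer:

```python
import math

def udim_tile(u, v):
    """UDIM tile number for a UV coordinate, using the usual
    convention: tile = 1001 + floor(u) + 10 * floor(v)."""
    return 1001 + math.floor(u) + 10 * math.floor(v)

def check_uvs(uvs):
    """Classify a list of (u, v) pairs: negative coordinates are
    invalid for UDIM; everything else maps to a tile number."""
    bad = [(u, v) for u, v in uvs if u < 0 or v < 0]
    tiles = sorted({udim_tile(u, v) for u, v in uvs if u >= 0 and v >= 0})
    return bad, tiles
```

If the reported tiles don't match where you expect each mesh to land, the UVs are the likely culprit rather than the baker.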
@unv Those are caused by the pixel having no adjacent colour. When it looks further away it is able to include both the edge and at least one pixel behind the edge in the samples, which causes the colour bleed. That said, there is something not quite right with the way the colours move off diagonally. I think the problem is with diagonal pixels getting included where they shouldn't…
The thing is, I can't really sample between two pixels. So I have to approximate radial sampling by deciding which pixels are included at each level. I suppose the other option would be to give them a weight, but that's basically the same thing.
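One way to decide which pixels belong to each sampling level is to bucket integer offsets by rounded Euclidean distance, so each "ring" approximates a circle. This is my illustration of the idea, not necessarily how the script does it:

```python
import numpy as np

def offsets_by_level(max_radius):
    """Group integer (dy, dx) offsets into rings by rounded Euclidean
    distance: levels[r] holds the offsets sampled at distance r."""
    oy, ox = np.mgrid[-max_radius:max_radius + 1, -max_radius:max_radius + 1]
    dist = np.round(np.sqrt(oy ** 2 + ox ** 2)).astype(int)
    levels = {}
    for r in range(1, max_radius + 1):
        sel = dist == r
        levels[r] = np.stack([oy[sel], ox[sel]], axis=-1)
    return levels
```

Note that with this bucketing the diagonal (1, 1) lands in ring 1 (its distance 1.41 rounds down), which is exactly the kind of inclusion decision that can push the result off diagonally.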
Looking over Blender's source, it uses Manhattan distance to weight pixels… Which should also be wrong, as it would greatly lower the weight of diagonals… >_<
Yep, so if I use Manhattan distance as a weight I get results very close to what Blender produces… I think I need to attempt to come up with a more accurate weighting. But right now I can produce margins very similar to Blender's for any bake at the cost of around 5-20 seconds for a small margin to a complete infill.
Alright, well this is it… I've uploaded BakeWrangler_1_2_RC1.zip to all the places.
This version includes the new margin generator (see the icons next to the margin size setting). The time it takes to create the margin will vary depending on the thickness of your margin and how much empty space is in your texture. Well-packed textures are much faster to process than ones with tons of gaps. The resolution will of course also play a large part. That said, a decently packed UV layout shouldn't take much over 20 seconds to completely fill, but a large, mostly empty texture could take a couple of minutes if you want to completely fill it.
This is everything for v1.2 now, except for any bug fixes obviously!
The future now really depends on what's going to happen with CyclesX and the promised baking API. I do have a few more features I want to add at some point, but we will see what happens with CX first.
I need to spend some time fixing up the documentation and making some tutorials or something?
Let me know what you all think about the margin thing… I wish it was faster… But I've already spent so much time optimising it, I have to just go with what I've got at some point >_<
There are a lot of changes to things. Try to first disable the add-on, then remove it, then restart blender before attempting to install the new version.
I found out that a few quick tricks can significantly reduce baking time in your addon. Just by making everything single-user and merging all the high-poly and low-poly models. Maybe you can add some button to automate this process while baking.
Hi @kamilozo, I know that merging objects improves render times, but it's not something that can be used in all cases. The main problems are that it will break the normals of objects that use different auto-smoothing settings and potentially mess up UV maps.
You also normally wouldnāt want to merge the target objects in most cases.
That said, I could add an option to merge things somewhere…
@Lamia the only changes to bake that I'm aware of are the support for denoising and adaptive sampling that you mentioned.