I’m curious about it, because it seemed like it was bothering me more than it should. I’ve been getting floating-point errors in the Blender 2.8 beta a lot. Sometimes it happens when scaling an object (even when I enter precise values with no decimals), sometimes when snapping to grid. It’s getting a little ridiculous. I’m not sure if Blender 2.79 was this way, but it does seem to be like this in 2.8 at the moment. The error sometimes shows up in the thousandths place (.001), although that’s more rare; sometimes it’s further out, such as .000001, and it happens even for perfectly round values, which makes it rather ridiculous.
Sometimes it’s off by one in the last digit, such as .9999999, or even something like 20.159999. I’ve pretty much accepted it at this point, because I’m not going back to Maya, but I was wondering if it bothered anyone else or if they even noticed.
Still on this huh?
No, doesn’t bother me at all. Blender is a polygonal modeler, and polygonal modeling is an approximation of curves and surfaces. A circle with N vertices has the diameter you set. If you get a floating-point error on that dimension, it is many orders of magnitude smaller than the diameter difference between measuring edge-to-edge vs. vertex-to-vertex.
Or, the tolerance when selling models:
From turbosquid
2.3 Real-World Scale
2.3.1 Real-world scale within 1-3% – Model can use any units to achieve real-world scale. If the model does not have an exact real-world counterpart (such as a human character or an unbranded car), the model must use the size/scale of comparable objects in real life.
Absolute precision doesn’t have value in most polygonal modeling. You’re supposed to build an intelligent approximation, and relative precision is enough because it’s for visual purposes. That doesn’t mean one can be sloppy; it means knowing to be accurate where it counts.
3D printing, which works with tolerances of 0.01-0.02 mm, is one of the few cases where absolute precision counts. Too-accurately defined curves and surfaces just increase the file size, up to the point where it’s unusable, depending on the model and the dimensions.
see this video
there are still some problems with the numbers shown in the N panel
for loc, scale, and dim
there may be more digits shown than Blender's single precision can actually hold
depending on the scale
happy bl
It bothers me too.
And apparently in the eyes of the general Blender community I got issues for letting it bother me.
Of course being bothered doesn’t stop me from working with Blender. I just use the money I save (after subtracting my monthly Silver Level Development Fund donation) as an open source software user to pay for my therapist:)
Seriously though, as a visual designer who loves mathematical precision, it does make my skin crawl. But whatcha gonna do? I’m not proficient in code wrangling, and the developers have bigger “fish” to “fry” right now, so I make the best of a quirky application and keep my fingers crossed that this particular quirk is on their sooner-rather-than-later to-do list.
Solidarity my peer in the love of numeric precision.
Maybe everyone reading this thread knows this, but in case not, here are some of the reasons that these “errors” occur:

The main reason is that all arithmetic in computers these days is done using binary (base 2) numbers. For fractions, there is often no way to represent decimal (base 10) fractions in binary. E.g., 0.1 cannot be exactly represented. So if you enter 0.1 through some UI element, it will be converted to the closest binary fraction, and converting it back out again to decimal may find that, say, 0.09999999 is closest (to that many digits) to the internal number.
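A quick Python sketch (nothing Blender-specific) makes this visible; `Decimal` shows the exact value the binary double actually stores:

```python
from decimal import Decimal

# 0.1 has no exact binary representation; the stored double is the
# nearest binary fraction, which is slightly more than 0.1.
print(Decimal(0.1))        # 0.1000000000000000055511151231257827021181583404541015625

# The tiny error surfaces as soon as you do arithmetic:
print(0.1 + 0.2 == 0.3)    # False
print(0.1 + 0.2)           # 0.30000000000000004
```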

There are lots of hidden transformations going on inside Blender. E.g., transforming from where something is in “local” space to where it is in “world” space. These transformations are done by multiplying by matrices (again, all binary numbers), and again can cause differences you might not expect if all arithmetic were done exactly.
In fact there are ways to do arithmetic exactly, representing all numbers as exact fractions with (potentially very big) integers as numerators and denominators. These would be so slow for most things that you would not want to use Blender if it did all arithmetic that way, I am pretty sure.
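For the curious, Python’s standard library actually ships such an exact-fraction type, which makes the trade-off easy to see (a quick sketch, not something Blender uses internally):

```python
from fractions import Fraction

tenth = Fraction(1, 10)          # exactly 1/10, no rounding ever
print(sum([tenth] * 10) == 1)    # True  -- exact rational arithmetic never drifts
print(sum([0.1] * 10) == 1.0)    # False -- binary floats drift immediately
```

The catch is speed: the numerators and denominators grow without bound as operations chain, which is exactly why an interactive modeler can’t afford this representation.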
The other thing that could be done is to try harder to round output fields to “nice” values in decimal. Maybe other apps do more of that. This would have no effect on what is going on under the covers, but may make the users feel better.
floating-point errors are understandable and normal on any computer!
one thing which is confusing is what is shown in the N panel
sometimes it shows a lot more digits than what is inside Blender
like showing 6 digits for the fractional part of a dimension
which is way more than the internal value, limited to only 3 digits
with a scale of 1
is there any way to correct this? it is very confusing for a lot of people
thanks
happy bl
In base 10, “1/3” cannot be exactly represented: it is 0.33333… with an infinite(!) number of “3’s.” As noted, in base 2, “1/10” is the same. And yet exactly this calculation must be repeatedly performed in order to transform the internal value to a human-readable base-10 representation. In one sense, the digit sequence that you see is itself an approximation of the base-2 value.
"Floating-point numbers are like little piles of sand on the ground. Every time you pick one up and move it around, you lose a little sand (precision), and pick up a little dirt (error)." But these errors don’t tend to accumulate.
One of the things they taught you in engineering classes, especially in the slide-rule and pocket-calculator days, was just how many digits of that number on your calculator were “significant.” A calculator might display 8 digits but some of them are meaningless – “insignificant.” A slide rule had very low (mechanically-based …) precision but was, and still is, often good enough.
Where do you get this idea from? Internal values all have about 7 decimal digits of significance (that doesn’t mean after the decimal point: it means total after the leading digit, regardless of where the decimal point is). There is no limitation imposed by the scale or units in the UI, though some UI output elements may round according to those values.
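That 7-digit figure is easy to demonstrate from plain Python by round-tripping a value through 32-bit storage (a sketch; `to_f32` is just an illustrative helper, not a Blender function):

```python
import struct

def to_f32(x):
    """Illustrative helper: store x as a 32-bit float and read it back."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Roughly 7 significant digits survive, wherever the decimal point sits:
print(to_f32(1234.5678))      # 1234.5677490234375 (good to ~7 digits)
print(to_f32(0.0012345678))   # also good to ~7 significant digits
```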
I mean if you have a scale of 1
you get like 9999.XYZ; internally Blender has 7 digits!
so you cannot get finer than 0.001
but in the N panel sometimes you can see more than these digits
which is not showing the real value, as far as I know
this has been said many times before on the forum and it is confusing
thanks
happy bl
Bothers me too, and has for a long time. Since it quickly starts to cascade, you can get quite noticeable differences in certain areas: modular pieces, shading, and baked normal maps. You can work around it, but sometimes it gives me a headache when it becomes visible (e.g. when you set your smooth shading to 30.0 deg and some edge becomes 28.52917).
I find this to be a relevant topic.
I don’t have anything of real value to add, but wanted to mention the error bothers me.
Visually there is no real difference, I guess, but there were times I needed the measurements to just stay at the values I told them to be, sometimes because I had to scale that measurement a few thousand times.
EDIT for clarification: by “Scaling” I meant that I, for example, array 2000 pieces in a line with relative measurement. Or something else where the error manifests on a larger scale, I don’t remember when exactly I was seeing those annoying measurement changes.
But yes, even scaling in the literal tool sense would cause a 2.002 cm long box to become 2002 cm long, which can be relevant.
if you multiply by a few thousand times
it also magnifies the error by a few thousand times
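That magnification is easy to reproduce (a plain-Python sketch, with a deliberately injected error standing in for a stored rounding error):

```python
# Suppose a piece is meant to be exactly 2.0 units long, but the stored
# value carries a tiny rounding error of 1e-7 (invisible at 4 decimals):
piece = 2.0000001

# Array 2000 copies end to end: the absolute error scales with the count.
row = 2000 * piece
print(row)   # roughly 4000.0002 -- the invisible 1e-7 is now a visible 2e-4
```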
happy bl
This is not how floating point works. Yes, if you had a value of 9999 then you can only have 3 digits after the decimal point and anything beyond that is garbage. But if you have a value of 2.3446, as in your screenshot, all of those digits are significant because there is only one number to the left of the decimal point there: so up to 6 digits afterwards would still be meaningful.
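Round-tripping both magnitudes through 32-bit storage shows the difference (the same illustrative `struct` trick, not Blender’s own code):

```python
import struct

def to_f32(x):
    # Illustrative helper: squeeze a value into 32 bits and read it back.
    return struct.unpack('f', struct.pack('f', x))[0]

small = to_f32(2.3446)       # error is around 1e-7: all shown digits hold
big   = to_f32(9999.1234)    # error can reach ~5e-4: the last digit is noise
print(abs(small - 2.3446), abs(big - 9999.1234))
```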
right
a float has no fixed limit on digits after the dot; it carries up to 7 significant digits total!
ok, did a test with a script to add a cube
with some float numbers
I get the following values
the N panel seems to be limited to 5 digits
tried with different values and still only 5 digits
Ex: x = 222.312431
any reason why we don’t see 7 digits?
thanks for feedback
happy bl
How was the Blender binary compiled? Is it using -ffast-math? There are various options to set how the compiler handles floating point. Some can drastically erode accuracy and compatibility. https://gcc.gnu.org/wiki/FloatingPointMath
It’s 2021 now and I’m looking up this issue. I have the scene scale set to 0.001 so I can model in mm and export to STL without weird sizing. I have a 26 mm long pin I specified to be 4 mm in diameter, and it turns into 3.97 x 4.04 x 26.
It’s also making Cad Transforms addon bug out quite a bit.
This is the nature of floating-point arithmetic. Blender has floating-point error, and the program you import into has floating-point error too. The result you see is the combination of both.
If you don’t want this, scale your model up by x100 or x1000 before export, and scale it back down by x100 or x1000 after import.
Gotta wonder why STL etc. have such large scene sizes compared to Blender and others in the first place, not to mention the slicer software that imports them.
ZBrush is a pain to use if you don’t have GoB addon for the same reasons. GoB is