Float64 support in Blender

Hello all,

We are able to import GIS (Geographic Information System) and CAD data into Blender. Blender supports importing DXF files via a bundled add-on and can open GIS files with the BlenderGIS project => https://github.com/domlysz/BlenderGIS which uses GDAL, a common open-source GIS library.
What is interesting and efficient is to communicate in both directions: import DXF/GIS data => work in Blender => export back to DXF/GIS software. But there is a problem: we usually work with projected metric coordinates, which are large numbers, and exporting produces errors because float32 cannot represent large coordinates accurately.
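As a concrete illustration of that precision limit (the coordinate below is an invented UTM-style easting, not real data), a quick Python check shows how float32 rounds away the sub-metre detail of a projected coordinate:

```python
import struct

# An invented UTM-style easting in metres, with millimetre detail.
x = 4_523_687.413

# Round-trip through 32-bit float, as happens when the value is stored
# in Blender's float-based mesh coordinates.
x32 = struct.unpack("f", struct.pack("f", x))[0]

print(x)             # original double-precision value
print(x32)           # value after float32 rounding
print(abs(x32 - x))  # error on the order of a tenth of a metre
```

At this magnitude adjacent float32 values are 0.5 m apart, so anything finer than that is simply lost.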
In BlenderGIS, the author found a workaround: temporarily translate the coordinates to the origin ( bpy.ops.transform.translate ) and store the offset. When the model is exported, the coordinates are transformed back, but outside Blender (with the GDAL Python bindings). If this accurate transformation could be done inside Blender, it would be better, and it would become possible to export to other formats.
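A minimal sketch of that shift-to-origin workaround in plain Python (the offset and coordinates here are illustrative values; BlenderGIS itself stores the offset with the scene and uses GDAL for reprojection):

```python
# Keep the real-world offset in double precision (Python floats are 64-bit)
# and edit geometry near the origin, where float32 has plenty of precision.
crs_offset = (4_523_600.0, 5_211_000.0)  # illustrative projected offset

def to_local(easting, northing):
    """Shift a projected coordinate near the origin for editing in Blender."""
    return easting - crs_offset[0], northing - crs_offset[1]

def to_world(x, y):
    """Restore the original projected coordinate on export."""
    return x + crs_offset[0], y + crs_offset[1]

local = to_local(4_523_687.413, 5_211_004.927)
print(local)             # small numbers, safe to store as float32
print(to_world(*local))  # round-trips exactly in double precision
```

Because the subtraction and addition happen in 64-bit precision, the round trip is exact; only the small local coordinates ever need to fit into float32.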

So, I would like to know whether float64 coordinate support is planned in Blender. Otherwise, is it possible to modify the source code to implement float64 support?

Sincerely,

Sylvain


I heard there is a way, but
it would make bl a little more sluggish and take more memory!

I would like to see a maintained build using the full ~15 digits of precision.
If someone has the time to make a build using double-precision floats,
I think many people would use it!

Note: Hope Ideasman can answer that one !

happy bl

Thanks for your reply.
If it takes more memory, it doesn’t matter nowadays: most computers have enough…


From a CPU point of view, maybe,
but not on the GPU yet!
And with Cycles, big scenes can become huge memory consumers!

Just making it more sluggish is not really the best thing,
but I agree: if full 64 bits is required for location, size and scale,
then there should be an option for it in bl, if that is possible at all.

happy bl

Is there only one place in the code to change to make it work?

I don’t know how to build the C code!

So I hope someone in the know, or Ideasman42, can clarify this.
It would be nice to test an SVN build on Windows with 16 digits of precision and see how much slower it is!

happy bl

The word “perhaps” should always be taken very seriously when programmers use it. I don’t know exactly how Blender passes position data to the GPU. It is not possible to blindly pass 64-bit data, have it converted to 32 bits, and expect that it will work. It requires quite some work to get this done. I wouldn’t want to do it.

What’s not mentioned so far is that this would make Blender slower in general.

We don’t really need 16 digits to render things on the GPU or in the viewport;
we just need script access to high-precision 16-digit coordinates for calculations, import, and export.

happy bl

Yes, only to get accurate scientific coordinates, as Ricky says.

You want high precision, but it doesn’t matter that it will be shown wrong?

For me it doesn’t matter, because it is not the same scale of usage!

Ehm… I hope this won’t become the default vertex format.
If I remember correctly, even on a normal CPU this is not such a great idea.
If I look at current CPUs, the only Intel chips that currently work fast with this data type are Xeons.
All other CPUs, e.g. i5 and i7, can’t deal that well with 64-bit numeric math; it isn’t as fast as the more common 32-bit math.
On an i5 or i7 it requires the use of soft doubles (in compiled C++, i.e. assembler code), and simply put, that slows things down quite a lot.

A chip like the Xeon Phi can work with them directly… though I assume one would then also need to compile the code with extra Intel libraries to make use of it, or an updated compiler… or low-level inline assembly code.

For Blender, most of us would rather have cheap, fast multicore systems; the cheaper, the faster, the more cores, the better…
So Xeons are not preferred; you see those chips more often in servers than in client desktops/laptops.
Maybe some of us use Xeons, and boards with multiple Xeon CPUs (a trick an i7 can’t do), but they would be a small group, I think.
Most heavy Blender users rather use GPUs (which also don’t support float64 well). It could perhaps be done with soft floats, but that would only make the graphics kernel require more memory (GPU memory that can’t be used for drawing).

I think solving the problem in the importer/exporter might be a better way… (and simpler).
And maybe you could even write some simple code to pre-process your data outside of Blender.

You are talking about one use case. Blender has quite a few users, and the developers need to consider a few more use cases and weigh the pros and cons.

About execution of double-precision floating-point variables on CUDA:

Using double-precision floating-point variables and mathematical functions (e.g., sin(), cos(), atan2(), log(), exp(), sqrt()) is slower than working with their single-precision counterparts. One area of computing where this is a particular issue is parallel code running on GPUs. For example, when using NVIDIA’s CUDA platform, on gaming cards, calculations with double precision take 3 to 24 times longer to complete than calculations using single precision.[4]
source: Wikipedia

(I don’t remember the source where I got the CPU info from; that came from memory rather than a hyperlink)

Moved from “General Forums > Blender and CG Discussions” to “Coding > Beginning Blender Code and Development”

The link to Campbell’s response covers most things. The short answer is that it’s probably possible to support double-precision floats for limited uses. You should be able to make a few changes (and disable a lot of features) to get Blender to compile with enough double-precision support for your specific need. This, of course, is not a generalized solution, and it’s not likely that Blender will ship with double precision any time in the near future. But part of the beauty of open source is that you can make modifications to meet your specific needs.

Thanks for your answers

I would like someone who knows how to build from SVN to make a new version to test this, if possible;
then we can see how well it works.

thanks
happy bl

If there is only one reference to change, I could do this, but I’m in doubt… I need to investigate.

OK, so next week I’ll try this modification.

I’m aware that I’m necroing this thread, but I’d like to give an update on the world of computing in 2019. The biggest issue with using double-precision 64-bit floats is indeed that in some places it doesn’t perform well, but that’s becoming less and less of an issue. Let’s take a look at what’s limiting us now:

Memory usage: As with several years ago, this isn’t really an issue today. A model with 100,000 verts would go from 100,000*3*4 bytes to 100,000*3*8 bytes in size, which only increases memory usage by a few megabytes.
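The arithmetic behind that estimate, for anyone who wants to check it:

```python
# Memory for per-vertex 3D coordinates at 32-bit vs 64-bit precision.
verts = 100_000
size_f32 = verts * 3 * 4  # bytes: 3 floats of 4 bytes each
size_f64 = verts * 3 * 8  # bytes: 3 doubles of 8 bytes each

print(size_f32, "->", size_f64, "bytes")            # 1200000 -> 2400000 bytes
print("extra:", (size_f64 - size_f32) / 1e6, "MB")  # extra: 1.2 MB
```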

CPU standalone calculations: On modern 64-bit CPUs, scalar double-precision arithmetic runs at essentially the same speed as single precision (the legacy x87 FPU even computes internally in 80-bit extended precision, while SSE2 handles doubles natively). So we don’t have to worry about standalone calculations.

CPU vector calculations: Most modern CPUs contain SIMD instruction sets for performing one operation on multiple pieces of data at once. We’d typically want to use 3D vectors, which are composed of 3 numbers. CPUs with AVX can perform SIMD operations on four doubles or eight singles at once, so with these CPUs we’re already set for working with one double-precision 3D vector at a time. AVX-512 extends this to eight doubles: https://en.wikipedia.org/wiki/Advanced_Vector_Extensions

GPU calculations: Here’s where the problems start. Nvidia still considers double precision an enterprise feature, so it performs extremely poorly (up to about 100x slower) on consumer GeForce cards. Sending doubles to the GPU would only be a viable option for Quadro users or AMD users. There are ways to work around this: Blender could use doubles internally for calculations and simply truncate them to 32-bit during rendering, or Blender could perform camera-relative rendering, though this might be slow too.
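A rough sketch of the camera-relative idea, assuming scene data lives in float64 and only GPU-bound buffers are truncated (the camera and vertex coordinates below are invented values):

```python
import struct

def to_f32(v):
    """Round a Python float (64-bit) to the nearest representable float32."""
    return struct.unpack("f", struct.pack("f", v))[0]

camera = (4_523_600.0, 5_211_000.0, 100.0)       # double precision
vertex = (4_523_687.413, 5_211_004.927, 120.55)  # double precision

# Naive path: truncate world coordinates directly -> sub-metre error.
naive = tuple(to_f32(v) for v in vertex)

# Camera-relative path: subtract in double precision first, then truncate.
# The offsets are small, so float32 keeps their sub-millimetre detail.
relative = tuple(to_f32(v - c) for v, c in zip(vertex, camera))

print(naive)
print(relative)
```

Only the small camera-relative offsets ever reach the GPU, which is why this trick is popular in large-world game engines as well.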

API support: Most new APIs support double precision but, as you may have guessed, new API versions are slow to be adopted. For OpenGL, IIRC you need version 4.0 for double precision, yet Blender still targets OpenGL 3.3. Ideally, to support double precision Blender would first implement the Vulkan API, which has good double-precision support.

So, in a nutshell: Doubles aren’t an issue on the CPU, there are issues on the GPU which can be worked around, and there are some API problems that will probably simply be solved with time as Blender requires newer and newer standards.