Point Cloud Visualizer

Ok, thanks for the reply. Around ~650M points for a 24 GB GPU — will it all need to fit in VRAM, or only the visible points in VRAM with everything in standard RAM? (When you say visible, is there an optimization based on what is in the viewport camera frustum?)

Is octree optimisation, dynamic frustum culling, or pure-CPU OpenCL draw calls (as the main bottleneck for large dataset rendering currently is VRAM) on the roadmap for the future of this addon?

I have workstations with 128 GB or more of RAM, and only 24 GB of VRAM, so it may be easier to do CPU draw calls (or a mix of both if possible).

Rendering non-meshed but colored large point cloud datasets is really important for me, and if solved, I could buy 2 or 3 licenses.

i was told that 650M points took 18.7 out of 24 GB of GPU RAM.

i mean loaded points that are stored in system ram and then sent to gpu ram (you can change the percentage of points that are used for display; internally it slices the array of points and sends only a portion). the gpu then decides what is visible and what is not and draws what is needed
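a minimal numpy sketch of that slicing idea (variable names are mine, not PCV's actual code):

```python
import numpy as np

# stand-in for points loaded into system ram: (n, 3) float32 positions
points = np.random.rand(1_000_000, 3).astype(np.float32)

display_percent = 25.0  # hypothetical name for the user-set display percentage
count = int(len(points) * (display_percent / 100.0))

# slice only a portion of the array; just this subset is sent to gpu memory
subset = points[:count]
print(subset.shape)  # (250000, 3)
```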

well, i would like to, but i need to either come up with an octree implemented not in python or use some library for it that is made to work with numpy arrays. and the initial sorting will take time as well. so yes, i’d like that in the future, if possible.

pcv uses the blender gpu module, which is the replacement for bgl. pcv render uses the same, only drawn into a buffer instead of on screen. so you need system ram to load points and gpu ram to draw anything.

i can put together some minimal testing code for you to run to determine how many points you can fit. send me a PM if you are interested

for example, this is 300m points loaded, only positions and colors, no normals, which would take a lot more space

I just wanted to reply publicly with the numbers for anyone who is interested.
Running 1 billion points almost fills up 24 GB of VRAM. It runs at 2 fps on a 3090 with an i7-7820X.


1.999.5

  • convert to blender native point cloud object (blender 3.1) - can be rendered in cycles as spheres: https://developer.blender.org/D9887
  • buttons to uninstall optional libraries

native point cloud object = procedural spheres in cycles hoooooray!

aaand some notes…

the correct upgrade procedure is: start Blender, go to preferences, disable the old PCV, remove the old PCV, save preferences, quit Blender, start Blender, go to preferences, install the new PCV, enable the new PCV, save preferences. otherwise there will be some errors upon activation and they will not go away until blender is restarted

Installation notes for features depending on external libraries: operators in “Extras” panel, importing LAS and E57 files

Windows

  • use the “Portable” Blender distribution; before installing external libraries from preferences, run Blender as administrator
  • DO NOT install Open3D and PyMeshLab together, blender will crash. this seems to be a PyMeshLab issue that apparently will not be fixed: https://github.com/cnr-isti-vclab/PyMeshLab/issues/132 so i am tempted to remove PyMeshLab completely. however, this is not an issue on linux and mac, so if E57 import is important for you, please let me know, or i could make an addon for the addon :upside_down_face: to keep E57 import for anybody who needs it, but keep standard PCV pymeshlab-free.

Linux

  • use Blender zip distribution (not store version)

Mac

  • Open3D and PyMeshLab are available for macOS 10.15+

1.999.6

  • optional installs (open3d, laspy, etc) are now installed into the user python site-packages on all platforms; windows and linux users no longer need to use the blender portable/zip distribution
  • added sequence batch filter, convert and export to ply. the sequence has to be preloaded (not loaded on-the-fly). batch filter and convert use the settings as set in the filter/convert panels, batch export uses its own settings in the batch export panel. so, for example, if you want to apply Remove Duplicates to a sequence, go to Filter > Remove Duplicates, set the desired distance, go to Sequence > Batch Filter, choose Remove Duplicates from the list and click Batch Filter. if you start blender from the command line, each sequence frame is printed so you can observe some progress


edit: filter properties can be animated, but the animation has to start from frame 1, because files are processed only from frame 1 to the last frame of the sequence files; cycling has no effect on it, that is just for viewport display. also batch convert creates only as many new objects as the sequence has frames. i really need to start writing documentation…


for example, sequence of ply files, batch converted to native point clouds (blender 3.1) and rendered in cycles



Hello!

I really love the new feature to display scalars instead of the RGB channels and mix them together!

I don’t know if this is the right place for a feature request, but I’d like to be able to multiply the RGB with the scalar instead of mixing them. This would help me add an ambient occlusion effect, for instance:


I can kind of do it in CloudCompare now, but this would be an awesome new feature to have directly in Blender.

Cheers!


this is the best place for requests :wink:

this should be easy to add, adding it to the todo


1.999.7

  • Blending modes for scalars (Normal, Multiply, Screen, Overlay, Soft Light)
  • Color adjustment (Exposure, Gamma, Brightness, Contrast) per channel in shader and filter
  • Discard filters (Discard Normals/Colors/Scalars)
  • Generate from native pointcloud object (Blender 3.1 Alpha or Blender 3.2 Alpha, i.e. a build with the pointcloud object enabled)
  • Viewport postprocessing
  • LAZ import: https://github.com/tmontaigu/laszip-python
  • Alternative E57 reader: https://github.com/davidcaron/pye57
  • Reading all points from E57 multi scan file when using PyMeshLab for import
  • Better UI for package installation in preferences
  • Removed Extras panel and moved contents to filters
  • A plugin icon in the UI marks functionality depending on a 3rd party package
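the scalar blending modes listed above follow the standard blend-mode formulas; a quick numpy illustration (my sketch, the actual GLSL shader in PCV may differ):

```python
import numpy as np

# standard blend-mode formulas for values in [0, 1]
def multiply(a, b):
    return a * b

def screen(a, b):
    return 1.0 - (1.0 - a) * (1.0 - b)

def overlay(a, b):
    # multiply where the base is dark, screen where it is light
    return np.where(a <= 0.5, 2.0 * a * b, 1.0 - 2.0 * (1.0 - a) * (1.0 - b))

base = np.array([0.25, 0.5, 0.75])   # base color channel
blend = np.array([0.5, 0.5, 0.5])    # scalar channel
print(multiply(base, blend))  # [0.125 0.25  0.375]
print(screen(base, blend))    # [0.625 0.75  0.875]
```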

Some demos:

Viewport postprocessing - points have no colors and no normals, only point coordinates are loaded

edl

Blending modes for scalars

scalars

Some more notes:

  • The Blender native Pointcloud object seems to be included only in Alpha builds; the last 3.1 Beta does not have it, so you need an older, still-Alpha 3.1 build or 3.2 Alpha, meaning conversion from/to the native pointcloud object is not always available.
  • LAZ file import via laszip depends on laspy
  • The pye57 macos wheel is not available, it has to be built from source (will include instructions later); windows and linux install normally from preferences
  • Viewport postprocessing works only with what PCV draws on screen. It is drawn on top of everything else in the viewport and it is applied to all PCV instances. Basically all PCV instances are drawn into an offscreen buffer and then drawn as an image processed with another shader on top of the viewport. Currently there is only one postprocess option, Quasi EDL. While it is available for rendering as well, it is view and resolution dependent and the render result will look different, so render time settings have to be adjusted to get a look similar to the viewport.
  • Please read the 3rd party package installation instructions (there is a button in preferences for that). The most important points are: On windows do NOT install Open3D and PyMeshLab together or Blender might instantly crash when one or the other is used. On Apple Silicon machines, if you need E57 import, you need to use the Blender Intel build because PyMeshLab is not available for the arm architecture. Best of all, don’t install a package unless you really need the functionality depending on it.
  • If you run into any problem with E57 files (and it seems to happen much more frequently than with other formats, based on support messages), the best workaround is: use CloudCompare to import the E57, merge all scans if it is a multi scan file, export to PLY and use that in PCV. The CloudCompare E57 reader seems to handle everything. So a next possible step is using CloudComPy, but its installation is not that easy…

Woot !! :slight_smile: Thank you so much, I can’t wait to test this out.

Hi guys, I only have very little knowledge about Blender. I happened to use the Point Cloud Visualizer add-on in 2019, but haven’t had a chance to revisit Blender since. My question is not really about Point Cloud Visualizer, but a more generic question about the ‘material’ setting.

Below is an image I rendered back then; I forgot what settings I had in the ‘material’ for the house so that the surface reflects a little bit of the surrounding colour.

I tried playing around with every Surface type, including ‘Glossy BSDF’, and setting the Roughness to 0 for a reflective effect, but it still won’t look the same as the image I got before.

Below is a picture of my attempt to reproduce it, but obviously the surface of the house is not reflecting the purple colour from the ‘World’.

Can anyone offer any suggestions that I can try out?
I would appreciate any prompt response.

Thanks so much everyone.

hi, i don’t think it is pcv related in any way. pcv is not using world settings for anything. i suggest you ask in the lighting and rendering section: https://blenderartists.org/c/support/lighting-and-rendering/
and my guess to get a similar result: i would just compose some colors over the rendered image using channel(s) from the normal render pass… but i am only guessing

Hi, does the error:

Unable to allocate 19.8 GiB for an array with shape (1326035645, 4) and data type float32

occur because of a lack of RAM? Or is it a bug with the add-on? (I had around 15% of RAM left)

not enough ram for array

>>> np.zeros((1326035645, 4), dtype=np.float32).nbytes / (1024*1024*1024)
19.759470894932747

so ~19.76 GiB is needed for the array (colors, i guess; only colors have shape (n, 4)) and even if it could be loaded, it won’t fit into gpu memory i suppose… together with point positions:

>>> np.zeros((1326035645, 3), dtype=np.float32).nbytes / (1024*1024*1024)
14.81960317119956

that’s ~34.6 GiB just for the point data needed for the default shader with colors only…
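the same numbers can be estimated without actually allocating the arrays, straight from shape and dtype (a small sketch of mine, not PCV code):

```python
import numpy as np

def array_gib(shape, dtype=np.float32):
    # estimate array size in GiB without allocating it
    itemsize = np.dtype(dtype).itemsize
    return int(np.prod(shape, dtype=np.int64)) * itemsize / (1024 ** 3)

n = 1326035645
colors_gib = array_gib((n, 4))     # rgba colors, float32
positions_gib = array_gib((n, 3))  # xyz positions, float32
print(round(colors_gib, 2))        # 19.76
print(round(positions_gib, 2))     # 14.82
print(round(colors_gib + positions_gib, 2))  # 34.58
```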

edit: go to PCV > Load and set it to load every second point with Method = Every Nth and N = 2, if you can work with fewer points

I have tried setting load every nth to 2, but it isn’t random:

Also, other than matcap shaders, would it be possible to add a real nodetree shader? (or any x-ray shader)
Like this, for example:

it is every nth point; it is not supposed to be random. your points must be ordered in some strange way. while loading, points are not shuffled, shuffling would require loading them all to ram first. this mode is meant for when there is not enough memory. imagine it is like instructing numpy to read every other line. points are shuffled afterwards so the display percentage works better.

see this: from 10 points total, with N=2 every even index is loaded, i.e. 0, 2, 4, 6, 8
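in numpy terms, that behavior looks roughly like this (a sketch, not the actual loader code):

```python
import numpy as np

# keep every Nth point by index, then shuffle the kept points so a
# display-percentage slice is spatially uniform
points = np.arange(10)   # stand-in for 10 loaded points
n = 2
kept = points[::n]       # indices 0, 2, 4, 6, 8 - deterministic, not random

rng = np.random.default_rng()
shuffled = kept.copy()
rng.shuffle(shuffled)    # shuffled afterwards, once already in ram

print(kept.tolist())     # [0, 2, 4, 6, 8]
```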

maybe i could add another option that first loads all points, then shuffles and then slices them to some count, then passes that for display, but that will not save ram while loading. can you paste the full error here? so i can identify when it runs out of memory; it could be while reading the file, converting to the internal data object, or filling the gpu with data

eh, no, sorry, i am not recreating eevee… i could implement some similar shader. what is it, depth based colorization? looks like it. you can already do this with the existing depth shader, kinda… only it is not transparent (dunno if your example is)

Yes, my point cloud comes from a Reality Capture export; it’s downsampled from a part of a city of around 10B points.
It loaded successfully into PCV, but it’s when I tried to “draw” that the error occurred.

About the shader, I’m not asking you to recreate Eevee ^^ why would you create something that already exists! But why do you always implement your own shaders/matcaps, and why won’t you let people use a BSDF or custom matcaps?

My use case isn’t just to create a blue/purple gradient from a point, but to make the points around 0.75 transparent and 0.25 emissive, so that where many points are in front of each other they make the pixel brighter than areas where there are fewer points in front of each other

i know that 1B can be loaded on an rtx3090 (24gb memory), but only synthetic points (location and color only, random values, straight from numpy) using a crash test i wrote and one kind customer who let it run on his hardware; real data was more than 600-700M, i don’t remember now. when you click draw, at that moment the loaded data is filled into gpu buffers, so at one moment you need basically twice the memory. in short, use fewer points so they can fit in both memories

you can have a custom matcap. the matcap shader takes matcaps that are installed in blender; install your own (blender > preferences > lights > matcaps) and then you can choose it in the pcv list (i remember testing it a long time ago, so it should still work). but matcap needs correct normals on points, or it cannot work as intended. and having normals means 3x float32 of extra data to load on the gpu.

hm, so something like in PS, drawing slightly transparent points on top of each other in screen blending mode. i will think about how to make it in an opengl shader, but it might be available only for rendering because transparency has to be depth sorted before drawing

what about this: no transparency, it has to be rendered over some background, like some viewport color or black (used when other shaders are on transparent), but it can be composed over something later if needed; with additive blending it looks almost the same as in the viewport
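a small numpy illustration of why additive blending over black looks close to screen blending for dim points (example values are mine):

```python
import numpy as np

# five overlapping dim points per pixel, each contributing 10% brightness
contrib = np.full(5, 0.1)

# additive blending: sum the contributions (and clip to 1.0)
additive = float(np.clip(contrib.sum(), 0.0, 1.0))

# screen blending: accumulate as 1 - (1-acc)(1-c), saturates more gently
screen = 0.0
for c in contrib:
    screen = 1.0 - (1.0 - screen) * (1.0 - c)

print(round(additive, 3))  # 0.5
print(round(screen, 3))    # 0.41
```

for small contributions the two stay close, which is why the additive composite looks almost the same as the screen-blended viewport; they diverge as pixels approach full brightness.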
