I made an addon that can capture and process data for Google Seurat, which is a great tool to optimize very heavy scenes with realistic materials for mobile VR. It generates a 3D model which looks just like the real scene, as long as you stay within the predefined ‘box’. You could compare the process to photogrammetry with known camera positions and depth values.
I’m still relatively new to coding and creating addons, so any kind of feedback is heavily appreciated. The source code is available here:
That should work just fine, there’s actually a Seurat plugin for Godot too, which helped me a lot to create this one for Blender. Here’s some info about importing the mesh into Godot. Let me know if you run into any issues.
Strange as it might seem, I am exploring this technology from the perspective of asset theft prevention.
As a 3D modeller who models from scratch and is planning to put out VR content in a limited form, I have witnessed how 3D models are shamelessly ripped in broad daylight on various social VR platforms.
With this technology, the “original asset”, its topology and basically the whole structural integrity of the model, is ruined, but the visuals stay intact!
I’ve tried this on multiple scenes and it doesn’t work on any of them. Every single scene shows “ERROR: Point is outside of the frustum used for binning. Possible causes for this problem: geometry inside the headbox, incorrect matrices, incorrect depth values, other errors in the generation process. (Showing only the first error)” when trying to process the data, so perhaps this plugin is non-functional.
It looks like the capturing box contains objects here; you need to make sure there are no meshes inside of the box. The area inside the box is not actually the part that the addon tries to optimize, it defines the viewpoints used for the cameras. Basically, the box is the area you want to view your scene from. Place the capturing box between Suzanne and the cube, if that’s where you want someone to stand.
For some extra context, the addon works by placing several 360° cameras inside the capturing box. If one of those cameras renders from inside a mesh, you’d get projection and culling issues. I could remove the check for intersections, but it wouldn’t work regardless.
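To make the “no meshes inside the headbox” rule concrete, here is a minimal sketch of that kind of validation check: an axis-aligned bounding-box containment test that rejects a capture if any mesh vertex falls inside the box. The function names and structure are illustrative, not the addon’s actual code.

```python
def aabb_contains(box_min, box_max, point):
    """True if a point lies inside the axis-aligned box."""
    return all(lo <= p <= hi for lo, p, hi in zip(box_min, point, box_max))

def mesh_inside_headbox(box_min, box_max, vertices):
    """True if any vertex of a mesh sits inside the capture box."""
    return any(aabb_contains(box_min, box_max, v) for v in vertices)

# Example: one vertex inside a 2x2x2 headbox trips the check.
headbox_min, headbox_max = (-1.0, -1.0, -1.0), (1.0, 1.0, 1.0)
cube_vertices = [(0.5, 0.0, 0.2), (3.0, 3.0, 3.0)]
print(mesh_inside_headbox(headbox_min, headbox_max, cube_vertices))  # True
```

In a real Blender addon this would run over evaluated mesh data via `bpy`, but the geometric idea is just this containment test.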
It helps a lot
Thank you for taking the time to explain in such detail, much appreciated.
It is fascinating that the box controls how far the user can travel within it; now I can use this as a guide in Godot to restrict movement (fade to black).
I can optimize it even further by keeping the box at eye level, since people don’t bend down or take a knee while looking around. The box could be made narrow in height.
Question: How do you set how many 360 cameras are created within this box?
Or does it magically “know” how much is enough for smooth “magical” interpolation when the viewer moves within the box?
The number of views doesn’t depend on the box size; if you set it to 16, you’ll always get 16 360° views. It’s true that this won’t be enough in some cases though: if a scene has a lot of parallax/depth, you’ll need to increase it.
Scaling the capturing box works fine, the addon accounts for it, only rotation isn’t supported.
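To illustrate the point that the view count is fixed while the box scale only stretches where the samples land, here is a toy sketch of placing a fixed number of viewpoints inside an axis-aligned (non-rotated) capture box. Seurat’s own pipeline chooses its sample positions differently; this is just a demonstration of the behaviour, with hypothetical names.

```python
import random

def camera_positions(box_center, box_scale, num_views, seed=0):
    """Place a fixed number of viewpoints uniformly inside an
    axis-aligned capture box. The count never depends on box size;
    scale only stretches the sample positions (rotation unsupported)."""
    rng = random.Random(seed)
    positions = []
    for _ in range(num_views):
        offset = tuple(rng.uniform(-0.5, 0.5) * s for s in box_scale)
        positions.append(tuple(c + o for c, o in zip(box_center, offset)))
    return positions

# A wide, flat, eye-level box still yields exactly 16 views.
views = camera_positions((0.0, 0.0, 1.6), (2.0, 2.0, 0.5), 16)
print(len(views))  # 16
```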
It works !
Thank you thank you thank you !!!
It transformed a 15,000-polygon scene into a 100,000-polygon scene, but that’s ok, because it achieved exactly the purpose I wanted it for.
All geometry has been obfuscated into view-facing planes; the geometric integrity of the model has been completely ruined. That’s GREAT!!!
I can use this technology in conjunction with the normal method in a VR game: the architecture could be normal models, since interior modelling is easy and effortless to create. It is character modelling that is GOLD and that I do want to protect; for character displays and statues I could use this technology in VR.
Just one question: I have some vertex stretching pointing towards the center of the box, like star streaks, and settling for alpha clip in Eevee still leaves z-depth issues. It doesn’t matter at all, it doesn’t hinder the visuals, I’m just wondering why. Again, this addon works perfectly, I am so happy, thank you!!!
The cool thing about Seurat is that even if the polycount isn’t much better, you’ll be reducing overdraw (rendering the same pixel twice or more) a lot. That’s what mobile VR hardware struggles with most.
There are some special shaders specifically for rendering Seurat meshes, those should get rid of those streaks. There isn’t one for Blender though, only Godot, Unreal, and Unity. I’ve tried replicating them in Blender but I didn’t have much luck with that. You might be able to get a working setup though.
Oh please share the setup for Godot since that’s what I use ;-p
I have lost very good shaders (water shaders that looked amazing) that no longer work as Godot upgrades.
I have no idea about the overdraw thingy, but I read on Reddit a while ago that the Oculus can only run about 100,000 polys at 60 fps, so that’s just about right.
It’s kinda weird that importing the obj from the exported mesh results in a model that has to be turned 90 degrees, but that’s ok… But Sir… I wish it exported an empty cube so I’d know where the boundary I placed was. After the export/import process, with the model requiring rotation to be “corrected”, I think even the position was offset, so one has to “guess” where the boundary/capture box was placed. I wish a hint “box” were placed to indicate it ;-p But this request is more like a “please, with a cherry on top”, since the miracle has already been achieved ;-p
It depends I suppose, you can have scenes with very little polygons and complex shaders that perform badly. Aside from that, the amount of objects/materials in your scene can also increase ‘draw calls’ which slows things down. Godot’s documentation has some pretty good info about optimization if you want to read up on that
I think the 90-degree rotation is done by Seurat itself, so I don’t think I can fix that part. I’ll look into the offsetting though; I think there’s an option to tweak it inside of the json file. If you want to, you can add it manually for the time being by adding this line in there:
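(The exact json line isn’t reproduced here.) Separately, if the 90-degree tilt comes from an up-axis mismatch, which is a common cause when moving OBJ files between tools (some use Y-up, Blender uses Z-up), a manual vertex fix could look like this sketch. This is an assumption about the cause, not a confirmed detail of Seurat’s output.

```python
def yup_to_zup(vertex):
    """Rotate a point +90 degrees about the X axis, converting a
    Y-up coordinate to a Z-up one. Assumes the observed tilt is an
    up-axis convention mismatch (hypothetical for this mesh)."""
    x, y, z = vertex
    return (x, -z, y)

print(yup_to_zup((1.0, 2.0, 3.0)))  # (1.0, -3.0, 2.0)
```

Most OBJ importers also expose an up-axis option at import time, which avoids touching the data at all.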
he he he
Notice the boundary indicator is gone in the imported scene. This is totally expected since it is just an obj file… but… please Sir… can I have… some more? ;p
(A cube that matches the boundary box, so that when I bring into Godot, I can set the AABB boundary via my own code.)
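In the meantime, the boundary could be preserved by hand: since the capture box is a plain axis-aligned box, its eight corners can be written out as a tiny OBJ object and imported alongside the mesh. A minimal sketch (illustrative only; the addon does not currently emit this):

```python
from itertools import product

def headbox_to_obj(box_min, box_max):
    """Build OBJ text containing the 8 corner vertices of the capture
    box, so the boundary survives the export/import round trip and
    can be read back in Godot to set up an AABB."""
    lines = ["o headbox"]
    for corner in product(*zip(box_min, box_max)):
        lines.append("v {} {} {}".format(*corner))
    return "\n".join(lines) + "\n"

obj_text = headbox_to_obj((-1, 0, -1), (1, 2, 1))
print(obj_text.count("\nv "))  # 8 corner vertices
```

In Godot, reading those corners back gives the min/max needed to construct the movement boundary in code.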
Wow, Godot has been provided with a “seurat_blend.shader”, NICE!
In case you are wondering “why not just do it in Godot”: simply put, Godot’s baking/lightmap/SSGI/whatever is NO MATCH for Cycles’ realism. Since the data retrieval is going to take a long time in both Godot and Blender anyway, why wait for crap when I can wait for Cycles’ AMAZING rendering results?
It’s silly to do it in a game engine; any game engine’s sub-par rendering is nothing compared to a production renderer like Cycles.
I am so glad you have made a Blender port of this technology, SO GLAD.
Can we “isolate” certain objects to do this with?
Like for large architecture interiors, it is feasible to keep the large flat ceiling and floor planes and only apply this “magic” to high-poly objects or selected characters you want to protect from being ripped.