New Technologies (AI)

I would like this topic to remain for new technologies that appear related to 3D.

Photos + Neural Magic = 3D render

We present a novel point-based, differentiable neural rendering pipeline for scene refinement and novel view synthesis. The inputs are an initial estimate of the point cloud and the camera parameters. The outputs are synthesized images from arbitrary camera poses. The point cloud rendering is performed by a differentiable renderer using multi-resolution one-pixel point rasterization. Spatial gradients of the discrete rasterization are approximated by the novel concept of ghost geometry. After rendering, the neural image pyramid is passed through a deep neural network for shading calculations and hole-filling. A differentiable, physically-based tonemapper then converts the intermediate output to the target image. Since all stages of the pipeline are differentiable, we optimize all of the scene's parameters, i.e. camera model, camera pose, point position, point color, environment map, rendering network weights, vignetting, camera response function, per-image exposure, and per-image white balance. We show that our system is able to synthesize sharper and more consistent novel views than existing approaches because the initial reconstruction is refined during training. The efficient one-pixel point rasterization allows us to use arbitrary camera models and display scenes with well over 100M points in real time.

Code is available; maybe we can get 100M points in Blender?
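
Since every stage is differentiable, the whole pipeline boils down to gradient descent on the scene parameters against the input photographs. Below is a toy PyTorch sketch of that idea only; it uses Gaussian splats and a fixed orthographic "camera" instead of the paper's one-pixel rasterization with ghost geometry, and none of it is the actual ADOP code.

```python
import torch

# Toy differentiable point renderer: each point is splatted as a small Gaussian
# (the paper instead uses one-pixel rasterization plus "ghost geometry" for the
# spatial gradients; Gaussians just keep this sketch short and runnable).
H = W = 64
N = 500

def render(points_xy, colors, sigma=1.0):
    """Splat N points (coordinates in [0,1]^2) with RGB colors into an H x W image."""
    ys = torch.arange(H, dtype=torch.float32).view(H, 1, 1)
    xs = torch.arange(W, dtype=torch.float32).view(1, W, 1)
    px = points_xy[:, 0] * (W - 1)
    py = points_xy[:, 1] * (H - 1)
    d2 = (xs - px) ** 2 + (ys - py) ** 2          # (H, W, N) squared pixel distances
    w = torch.exp(-d2 / (2 * sigma ** 2))         # Gaussian splat weights
    img = (w.unsqueeze(-1) * colors).sum(dim=2) / (w.sum(dim=2, keepdim=True) + 1e-6)
    return img                                    # (H, W, 3)

# Target "photo" rendered from ground-truth points; the optimization then refines
# a noisy initial point cloud, colors, and a per-image exposure, in the spirit of
# "everything is optimized because every stage is differentiable".
torch.manual_seed(0)
gt_points, gt_colors = torch.rand(N, 2), torch.rand(N, 3)
target = render(gt_points, gt_colors).detach()

points = torch.nn.Parameter(gt_points + 0.05 * torch.randn(N, 2))  # noisy initial estimate
colors = torch.nn.Parameter(torch.full((N, 3), 0.5))
log_exposure = torch.nn.Parameter(torch.zeros(1))                   # stand-in for the tonemapper

opt = torch.optim.Adam([points, colors, log_exposure], lr=1e-2)
for step in range(200):
    img = render(points, colors) * torch.exp(log_exposure)
    loss = (img - target).abs().mean()            # photometric loss against the input photo
    opt.zero_grad()
    loss.backward()                               # gradients flow to every scene parameter
    opt.step()
```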

:point_right: https://rgl.epfl.ch/publications/Nicolet2021Large

Thanks xan2622, excellent.

Not new, but quite impressive too: EmberGen and its real-time fire/smoke simulation engine:

:point_right: https://jangafx.com/software/embergen/

A preview of the papers for the upcoming SIGGRAPH Asia, December 14-17.

@Bullit: This seems like a more mature version of neural rendering approaches like NeRF and ADOP.

Plenoxels: Radiance Fields without Neural Networks:

https://alexyu.net/plenoxels/

PlenOctrees for Real-time Rendering of Neural Radiance Fields:

https://alexyu.net/plenoctrees/

First test

NVIDIA has evolved GauGAN: it now takes text, segmentation, and sketch inputs to build an image.

https://twitter.com/boubek/status/1471629732202663943?s=20

NVIDIA published a blog post about Spatiotemporal Blue Noise (STBN) textures, which become better than a regular blue noise texture when accumulated over time.

While they show results that are as good as or better than a regular blue noise texture, it doesn't seem like they overcome the limitation of only being useful for low-sample-count or low-dimensional algorithms (whatever that means).
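
For context on how these textures get used: an STBN texture is typically a small 3D stack of 2D slices, blue noise in space within each slice and well distributed over time at each pixel, and you simply index it by frame. A minimal sketch of that usage pattern (white noise stands in for a real STBN texture here, so only the indexing and the temporal accumulation are the point):

```python
import numpy as np

T, H, W = 64, 128, 128
stbn = np.random.rand(T, H, W)      # placeholder; a real STBN texture would be loaded here

def estimate(noise):
    """Toy 1-sample-per-pixel estimator: stochastically threshold a 50% grey value."""
    return (noise < 0.5).astype(np.float32)

num_frames = 256
accum = np.zeros((H, W), dtype=np.float32)
for frame in range(num_frames):
    noise = stbn[frame % T]         # pick this frame's 2D slice of the 3D noise texture
    accum += estimate(noise)        # cheap low-sample-count estimate for this frame
result = accum / num_frames         # temporal accumulation (what TAA effectively does)

# With real STBN, each individual frame's error is blue-noise distributed in space
# (visually pleasant), and the per-pixel error also averages out quickly across frames.
```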

Here is Meta AI's new neural network that reconstructs animatable 3D models from videos:

https://banmo-www.github.io/

Displacement mapping is a powerful mechanism for adding fine to medium geometric details over a 3D surface using a 2D map encoding them. While GPU rasterization supports it through the hardware tessellation unit, ray tracing surface meshes textured with high-quality displacement requires a significant amount of memory. More precisely, the input surface needs to be pre-tessellated at the displacement map resolution before being enriched with its mandatory acceleration data structure. Consequently, designing displacement maps interactively while enjoying full physically-based rendering is often impossible, as simply tiling the map multiple times quickly saturates the graphics memory. In this work we introduce a new tessellation-free displacement mapping approach for ray tracing. Our key insight is to decouple the displacement from its base domain by mapping a displacement-specific acceleration structure directly onto the mesh. As a result, our method has a low memory footprint and fast, high-resolution displacement rendering, making interactive displacement editing possible.
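
To make the memory argument concrete, here is a rough back-of-the-envelope comparison; all numbers are illustrative assumptions of mine, not figures from the paper.

```python
# Classic ray-traced displacement: pre-tessellate so that roughly every texel of
# the displacement map becomes a pair of micro-triangles, then build a BVH on top.
texels = 4096 * 4096            # one 4K displacement map
tris_per_texel = 2              # one quad per texel, split into two triangles
bytes_per_tri = 3 * 3 * 4       # 3 vertices x 3 floats x 4 bytes (no vertex sharing)
bvh_overhead = 1.5              # crude multiplier for the acceleration structure

pretess_bytes = texels * tris_per_texel * bytes_per_tri * bvh_overhead
map_bytes = texels * 4          # the displacement map itself, one float per texel

print(f"pre-tessellated mesh + BVH: {pretess_bytes / 2**30:.1f} GiB")  # ~1.7 GiB
print(f"displacement map alone:     {map_bytes / 2**20:.0f} MiB")      # 64 MiB

# Tile that map a handful of times across a scene and the pre-tessellated version
# saturates GPU memory, while the map itself stays tiny; hence the appeal of ray
# tracing the displacement directly, without tessellating.
```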

OMG! This is insane.

Maybe this tech could be used to make high-detail sculpts quickly and easily, without the need for ZBrush?

Insane reduction in both time and RAM usage. It would be great if any of the Blender devs could comment on the possibility of implementing it.

There have been a few comments by Brecht here: https://devtalk.blender.org/t/2-5d-displacement/21234

First posted by Metin Seven in the Blender sculpt topic; reposted here for consolidation.

https://twitter.com/Yokohara_h/status/1497170735797735426
