New Technologies

I would like this topic to remain a place for new technologies related to 3D as they appear.

Photos + Neural Magic = 3D render

We present a novel point-based, differentiable neural rendering pipeline for scene refinement and novel view synthesis. The inputs are an initial estimate of the point cloud and the camera parameters. The outputs are synthesized images from arbitrary camera poses. The point cloud rendering is performed by a differentiable renderer using multi-resolution one-pixel point rasterization. Spatial gradients of the discrete rasterization are approximated by the novel concept of ghost geometry. After rendering, the neural image pyramid is passed through a deep neural network for shading calculations and hole-filling. A differentiable, physically-based tonemapper then converts the intermediate output to the target image. Since all stages of the pipeline are differentiable, we optimize all of the scene’s parameters, i.e. camera model, camera pose, point position, point color, environment map, rendering network weights, vignetting, camera response function, per-image exposure, and per-image white balance. We show that our system is able to synthesize sharper and more consistent novel views than existing approaches because the initial reconstruction is refined during training. The efficient one-pixel point rasterization allows us to use arbitrary camera models and display scenes with well over 100M points in real time.

The code is available; maybe we can get 100M points in Blender?
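For intuition, the differentiable tonemapping stage at the end of such a pipeline can be sketched roughly like this. A minimal NumPy sketch: the function name, its parameterisation, and the gamma curve standing in for the learned camera response are all my assumptions, not the paper's code, and a real implementation would live in an autodiff framework so these parameters can be optimised alongside the scene.

```python
import numpy as np

def tonemap(hdr, exposure_ev=0.0, white_balance=(1.0, 1.0, 1.0),
            vignette_strength=0.0, gamma=2.2):
    """Map a linear HDR image (H, W, 3) to display-ready LDR values in [0, 1].

    Hypothetical parameterisation: per-image exposure in EV stops,
    per-channel white balance gains, a radial vignetting falloff, and a
    simple gamma curve standing in for a learned camera response.
    """
    h, w, _ = hdr.shape
    # Radial vignetting: darken pixels by their normalised distance from centre.
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r2 = ((ys - cy) / h) ** 2 + ((xs - cx) / w) ** 2
    vignette = 1.0 - vignette_strength * r2
    img = hdr * vignette[..., None]
    img = img * (2.0 ** exposure_ev)        # per-image exposure in stops
    img = img * np.asarray(white_balance)   # per-channel white balance gains
    img = np.clip(img, 0.0, 1.0)
    return img ** (1.0 / gamma)             # camera response stand-in

# Example: a flat grey image, pushed up by one stop of exposure.
ldr = tonemap(np.full((4, 4, 3), 0.25), exposure_ev=1.0)
```

In the actual method each of these terms is a trainable parameter, so gradients from the image loss flow back through this stage into the point cloud and camera parameters.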


:point_right: https://rgl.epfl.ch/publications/Nicolet2021Large


Thanks xan2622, excellent.


Nvidia GauGAN 2:

:point_right: http://gaugan.org/gaugan2/


Not new but quite impressive too: Embergen and its real time Fire/Smoke simulation engine:

:point_right: https://jangafx.com/software/embergen/


Some previews of papers for the upcoming SIGGRAPH Asia, 14-17 December.


@Bullit: This seems like a more mature version of neural rendering approaches like NeRF and ADOP.

Radiance Fields without Neural Networks:

https://alexyu.net/plenoxels/

PlenOctrees for Real-time Rendering of Neural Radiance Fields:

https://alexyu.net/plenoctrees/


Nvidia GauGAN technology is now available!! (edit: it seems it was available before, so not a new thing; the beta dates from June)

Nvidia Canvas Beta

Download - exports PSD for Photoshop

SYSTEM REQUIREMENTS
GPU GeForce RTX, NVIDIA RTX, Quadro RTX, TITAN RTX
OS Windows 10
Driver 471.11 or later

Studio drivers are advisable instead of Game Ready drivers.

Most recent: 472.84 from 2021-12-13
https://www.nvidia.com/Download/driverResults.aspx/184241/en-us


First test


Nvidia APEX cloth simulation in real time with Unreal 4.27; there is some mesh-to-mesh interference, but it is still very nice.

Testing a hat I made with A LOT of cloth physics hanging from it. Cloth simulation is stock APEX in Unreal Engine 4.27.

Also if you think this is Raiden from Mortal Kombat please watch Naruto.

Body Motion Capture System:
Vicon - 10x Vero Cameras

Workstation 1 (Vicon Shogun and Unreal Engine)
HP Z8 - Dual Intel Xeon Gold - Nvidia A6000

Recording Workstation (OBS)
Custom PC - AMD 1950x - Nvidia 2080ti
Elgato 4K60 Pro


Nvidia has evolved GauGAN: it now takes text, segmentation, and sketch inputs to build an image.


Nvidia published a post on their blog about Spatiotemporal Blue Noise (STBN) textures, which perform better than regular blue noise textures over time.

While they show results that are as good as or better than regular blue noise textures, they don't seem to overcome the limitation of being useful only for low-sample-count or low-dimensional algorithms (whatever that means).
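For context, such noise textures are typically used to threshold (dither) a value per pixel, and the temporal dimension of the stack is the part STBN optimises. A toy NumPy sketch; the function and its nearest-neighbour tiling are my own illustration, and the noise here can be any texture, not true blue noise:

```python
import numpy as np

def dither(values, noise_stack, frame):
    """Binarise a greyscale image by thresholding against a tiled noise slice.

    values: (H, W) floats in [0, 1]; noise_stack: (T, h, w) noise textures,
    where T is the temporal dimension (one slice per frame); frame selects
    which slice to tile over the image.
    """
    t, nh, nw = noise_stack.shape
    H, W = values.shape
    # Tile the selected noise slice across the full image, then crop.
    tiled = np.tile(noise_stack[frame % t],
                    (H // nh + 1, W // nw + 1))[:H, :W]
    # A pixel survives when its value exceeds the local noise threshold.
    return (values > tiled).astype(np.float32)
```

Averaged over many frames, the binary results converge to the input grey level; the claimed advantage of STBN is that the error of that temporal average decays faster than with an independent 2D blue noise texture per frame.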


Here is Meta AI’s new neural network that allows reconstructing animatable 3D models from videos:

https://banmo-www.github.io/



Displacement mapping is a powerful mechanism for adding fine to medium geometric details over a 3D surface using a 2D map encoding them. While GPU rasterization supports it through the hardware tessellation unit, ray tracing surface meshes textured with high-quality displacement requires a significant amount of memory. More precisely, the input surface needs to be pre-tessellated at the displacement map resolution before being enriched with its mandatory acceleration data structure. Consequently, designing displacement maps interactively while enjoying full physically-based rendering is often impossible, as simply tiling the map multiple times quickly saturates the graphics memory. In this work we introduce a new tessellation-free displacement mapping approach for ray tracing. Our key insight is to decouple the displacement from its base domain by mapping a displacement-specific acceleration structure directly on the mesh. As a result, our method shows a low memory footprint and fast high-resolution displacement rendering, making interactive displacement editing possible.
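The core idea of displacement mapping itself is simple: offset each surface point along its normal by a height read from the 2D map. A minimal per-vertex sketch in NumPy; the function and its nearest-neighbour sampling are my own illustration of the classic pre-tessellated approach the paper avoids, not the paper's per-ray, tessellation-free method:

```python
import numpy as np

def displace(vertices, normals, uvs, height_map, scale=1.0):
    """Offset each vertex along its normal by a height sampled from a 2D map.

    vertices, normals: (N, 3) arrays; uvs: (N, 2) coordinates in [0, 1];
    height_map: (H, W) array of scalar displacement values.
    Nearest-neighbour sampling keeps the sketch short; a real renderer
    would filter the map, and the paper evaluates displacement per ray
    instead of baking it into tessellated geometry like this.
    """
    h, w = height_map.shape
    # Map UVs to integer texel coordinates (nearest neighbour).
    us = np.clip((uvs[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
    vs = np.clip((uvs[:, 1] * (h - 1)).round().astype(int), 0, h - 1)
    heights = height_map[vs, us]
    return vertices + normals * (scale * heights)[:, None]
```

The memory problem the paper addresses follows directly from this: to capture every texel of the map, the mesh must be tessellated to roughly one vertex per texel before the ray tracer can build its acceleration structure over it.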


OMG! This is insane.

Maybe this tech could be used to make high-detail sculpts quickly and easily, without the need for ZBrush?