I would like this topic to remain a place for new technologies related to 3D as they appear.
Photos + Neural Magic = 3D render
We present a novel point-based, differentiable neural rendering pipeline for scene refinement and novel view synthesis. The inputs are an initial estimate of the point cloud and the camera parameters. The outputs are synthesized images from arbitrary camera poses. The point cloud rendering is performed by a differentiable renderer using multi-resolution one-pixel point rasterization. Spatial gradients of the discrete rasterization are approximated by the novel concept of ghost geometry. After rendering, the neural image pyramid is passed through a deep neural network for shading calculations and hole-filling. A differentiable, physically-based tonemapper then converts the intermediate output to the target image. Since all stages of the pipeline are differentiable, we optimize all of the scene’s parameters, i.e., camera model, camera pose, point position, point color, environment map, rendering network weights, vignetting, camera response function, per-image exposure, and per-image white balance. We show that our system is able to synthesize sharper and more consistent novel views than existing approaches because the initial reconstruction is refined during training. The efficient one-pixel point rasterization allows us to use arbitrary camera models and display scenes with well over 100M points in real time.
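To make the pipeline structure concrete, here is a minimal PyTorch sketch of the stages the abstract lists (one-pixel point rasterization → shading/hole-filling network → differentiable tonemapper). This is my own simplified illustration under assumed names and shapes, not the ADOP code; in particular the ghost-geometry gradient approximation and the multi-resolution pyramid are left out.

```python
# Minimal sketch of a point-based neural rendering pipeline (not the ADOP code).
import torch
import torch.nn as nn

class OnePixelRasterizer(nn.Module):
    """Projects a point cloud and splats each point into its nearest pixel.
    NOTE: this naive splat gives no gradients w.r.t. point positions; ADOP
    approximates those with 'ghost geometry', which is omitted here."""
    def __init__(self, height, width):
        super().__init__()
        self.h, self.w = height, width

    def forward(self, points, colors, K, world_to_cam):
        # points: (N,3), colors: (N,C), K: (3,3) intrinsics, world_to_cam: (4,4)
        N = points.shape[0]
        homo = torch.cat([points, torch.ones(N, 1)], dim=1)   # homogeneous coords
        cam = (world_to_cam @ homo.T).T[:, :3]                 # camera space
        pix = (K @ cam.T).T                                    # projection
        uv = pix[:, :2] / pix[:, 2:3]
        u = uv[:, 0].round().long().clamp(0, self.w - 1)
        v = uv[:, 1].round().long().clamp(0, self.h - 1)
        image = torch.zeros(colors.shape[1], self.h, self.w)
        # naive splat: later points overwrite earlier ones; ADOP instead does
        # depth-tested rasterization into a multi-resolution image pyramid
        image[:, v, u] = colors.T
        return image

class ShadingNet(nn.Module):
    """Tiny stand-in for the deep shading / hole-filling network."""
    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1))
    def forward(self, x):
        return self.net(x.unsqueeze(0)).squeeze(0)

def tonemap(hdr, exposure):
    """Simple differentiable exposure + gamma curve, a placeholder for the
    physically-based tonemapper in the paper."""
    return torch.clamp(hdr * exposure, 0, 1) ** (1.0 / 2.2)
```

Because the splat writes to discrete pixel indices, gradients flow to point colors, the network weights, and the tonemapper, but not to point positions; that gap is exactly what the paper's ghost geometry is meant to fill.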
Code is available; maybe we can get 100M points into Blender?
NVIDIA published a post on their blog about Spatiotemporal Blue Noise (STBN) textures, which become better than a regular blue-noise texture when accumulated over time.
While they show results that are as good as or better than a regular blue-noise texture, it doesn’t seem like they overcome the limitation of only being useful for low-sample-count or low-dimensional algorithms (whatever that means).
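For anyone wondering how such a texture is used, here is a toy NumPy sketch of the typical consumption pattern (one 2D slice per frame for stochastic dithering). The volume below is plain white noise standing in for a real STBN texture; generating the actual STBN volume is the hard part NVIDIA's post is about.

```python
# Conceptual sketch of consuming a spatiotemporal noise volume:
# one 2D slice per frame, indexed by (pixel % tile, frame % depth).
import numpy as np

rng = np.random.default_rng(0)
stbn = rng.random((64, 128, 128))   # (frames, height, width) placeholder, NOT real STBN

def dither_threshold(value_image, frame):
    """1-sample-per-pixel stochastic binarization using this frame's noise slice."""
    h, w = value_image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    noise = stbn[frame % stbn.shape[0], ys % 128, xs % 128]
    return (value_image > noise).astype(np.float32)

# Averaging many frames recovers the grey level; with a true STBN volume the
# remaining per-pixel error also stays blue-noise distributed in space.
image = np.full((256, 256), 0.3)
accum = np.mean([dither_threshold(image, f) for f in range(64)], axis=0)
```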
Displacement mapping is a powerful mechanism for adding fine to medium geometric details over a 3D surface using a 2D map encoding them. While GPU rasterization supports it through the hardware tessellation unit, ray tracing surface meshes textured with high-quality displacement requires a significant amount of memory. More precisely, the input surface needs to be pre-tessellated at the displacement map resolution before being enriched with its mandatory acceleration data structure. Consequently, designing displacement maps interactively while enjoying full physically-based rendering is often impossible, as simply tiling the map multiple times quickly saturates graphics memory. In this work we introduce a new tessellation-free displacement mapping approach for ray tracing. Our key insight is to decouple the displacement from its base domain by mapping a displacement-specific acceleration structure directly onto the mesh. As a result, our method shows a low memory footprint and fast high-resolution displacement rendering, making interactive displacement editing possible.
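As a rough illustration of what a "displacement-specific acceleration structure" can look like (my own sketch of the general idea, not the paper's exact data structure), a min/max mipmap over the displacement map gives a ray tracer conservative height bounds per tile of a base triangle, so nothing ever has to be pre-tessellated:

```python
import numpy as np

def build_minmax_pyramid(disp):
    """Min/max mipmap over a 2D displacement map (side must be a power of two).
    Each level stores conservative height bounds per tile, so traversal can
    reject a tile whose slab [base + min*scale, base + max*scale] the ray misses."""
    lo, hi = disp, disp
    levels = [(lo, hi)]
    while lo.shape[0] > 1:
        lo = np.minimum.reduce([lo[0::2, 0::2], lo[1::2, 0::2],
                                lo[0::2, 1::2], lo[1::2, 1::2]])
        hi = np.maximum.reduce([hi[0::2, 0::2], hi[1::2, 0::2],
                                hi[0::2, 1::2], hi[1::2, 1::2]])
        levels.append((lo, hi))
    return levels

# The min and max pyramids together cost roughly two-thirds of the map's own
# footprint, versus storing a fully tessellated, displaced mesh plus its BVH.
pyramid = build_minmax_pyramid(np.random.default_rng(1).random((1024, 1024)))
```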
This is giving me the creeps. Makes me think of these worm parasites that grow inside of insects and get out when they’re fully developed. Nightmare fuel
A breakthrough in GPU performance technology from NVIDIA, IBM, and university researchers that increases performance by connecting GPUs directly to SSDs instead of relying on the CPU.
Big Accelerator Memory, or BaM, is an intriguing effort to reduce the dependence of NVIDIA GPUs and comparable hardware accelerators on a standard CPU for tasks such as accessing storage, which should improve performance and capacity.
Microsoft’s DirectStorage application programming interface (API) promises to improve the efficiency of GPU-to-SSD data transfers for games in a Windows environment, but Nvidia and its partners have found a way to make GPUs seamlessly work with SSDs without a proprietary API. The method, called Big Accelerator Memory (BaM), promises to be useful for various compute tasks, but it will be particularly useful for emerging workloads that use large datasets. Essentially, as GPUs get closer to CPUs in terms of programmability, they also need direct access to large storage devices.
Modern graphics processing units aren’t just for graphics; they’re also used for various heavy-duty workloads like analytics, artificial intelligence, machine learning, and high-performance computing (HPC). To process large datasets efficiently, GPUs either need vast amounts of expensive special-purpose memory (e.g., HBM2, GDDR6, etc.) locally, or efficient access to solid-state storage. Modern compute GPUs already carry 80GB–128GB of HBM2E memory, and next-generation compute GPUs will expand local memory capacity. But dataset sizes are also increasing rapidly, so optimizing interoperability between GPUs and storage is important.
There are several key reasons why interoperability between GPUs and SSDs has to be improved. First, NVMe calls and data transfers put a lot of load on the CPU, which is inefficient from an overall performance and efficiency point of view. Second, CPU-GPU synchronization overhead and/or I/O traffic amplification significantly limits the effective storage bandwidth required by applications with huge datasets.
BaM essentially enables Nvidia GPUs to fetch data directly from system memory and storage without using the CPU, which makes GPUs more self-sufficient than they are today. Compute GPUs continue to use local memory as a software-managed cache, but will move data using a PCIe interface, RDMA, and a custom Linux kernel driver that enables SSDs to read and write GPU memory directly when needed. Commands for the SSDs are queued up by the GPU threads if the required data is not available locally. Meanwhile, BaM does not use virtual memory address translation and therefore does not experience serialization events like TLB misses. (…)
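Very roughly, the control flow described above (GPU memory as a software-managed cache, with threads queuing NVMe commands on a miss) looks something like this conceptual Python sketch. Every name here is hypothetical; the real BaM stack is a CUDA library plus a custom Linux kernel driver, not Python.

```python
# Purely conceptual sketch of the cache-miss path described in the article:
# GPU memory acts as a software-managed cache, and a thread that misses
# enqueues an NVMe request itself instead of asking the CPU to do it.
from collections import OrderedDict, deque

PAGE = 4096

class BamLikeCache:
    def __init__(self, capacity_pages):
        self.cache = OrderedDict()     # page_id -> bytes, kept in LRU order
        self.capacity = capacity_pages
        self.ssd_queue = deque()       # stand-in for an NVMe submission queue

    def read(self, offset, backing_read):
        page_id = offset // PAGE
        if page_id in self.cache:      # hit: served from "GPU memory"
            self.cache.move_to_end(page_id)
            return self.cache[page_id]
        # miss: the requesting "thread" queues a command for the SSD itself
        self.ssd_queue.append(("READ", page_id))
        data = backing_read(page_id * PAGE, PAGE)   # DMA'd straight into GPU memory
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)          # evict least-recently-used page
        self.cache[page_id] = data
        return data

# Example use with a dummy backing store:
cache = BamLikeCache(capacity_pages=1024)
cache.read(8192, lambda off, n: b"\x00" * n)
```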
Hah, at bloody last! It was bound to happen, and it will completely change the game.
Nvidia and its partners plan to open-source the driver to allow others to use their BaM concept.
Wait… it’s only a driver?? I thought it was some kind of hardware interface…! And it’s going to be open-sourced?? That’s mad. Does that mean it’ll possibly run on current-generation hardware?
BaM instead provides software and a hardware architecture that allow Nvidia GPUs to fetch data directly from memory and storage and process it without needing a CPU core to orchestrate it.