Nvidia fast object photogrammetry
What was that video even about? As if photogrammetry hadn’t existed before, without machine learning?
So it’s supposedly faster, or what? Even if it is, I could probably have just thrown more computational resources at the problem to make it faster as well. But that would have cost a hell of a lot of money.
Fortunately, Nvidia’s graphics cards don’t cost a hell of a lot of money, right?
Wait a moment…
That is not wise.
They don’t cost that much. You can buy a laptop with a 3060 for about 1,000 euros, and it has a lot of other uses for us in 3D, for example Cycles, or 2D video editing, encoding and FX in several video editors.
The more jobs a GPU can do, the cheaper it effectively is. So we should welcome every new job a GPU can take on, be it from Nvidia, ATI, Intel or …
This video (part #1 of an ongoing tutorial series) introduces Mitsuba 3, a new differentiable renderer developed by the Realistic Graphics Lab (https://rgl.epfl.ch) at EPFL in Switzerland. The tutorial series provides a gentle introduction and showcases the work of different team members.
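For anyone wondering what “differentiable” buys you here: the renderer exposes gradients of the rendered image with respect to scene parameters, so you can optimize a scene until its render matches a target photo (inverse rendering). A toy stdlib-only sketch of that optimization loop, assuming a made-up one-pixel “renderer” with a single albedo parameter (Mitsuba 3 itself works on full scenes and computes gradients automatically):

```python
# Toy illustration of inverse rendering via gradient descent.
# "render" maps one scene parameter (a surface albedo) to one pixel value;
# a differentiable renderer like Mitsuba 3 does this for entire scenes
# and supplies the gradients via automatic differentiation.

def render(albedo, light=2.0):
    # Hypothetical one-pixel renderer: pixel = albedo * light.
    return albedo * light

def loss(albedo, target):
    return (render(albedo) - target) ** 2

def grad(albedo, target, eps=1e-6):
    # Finite-difference gradient stands in for real autodiff here.
    return (loss(albedo + eps, target) - loss(albedo - eps, target)) / (2 * eps)

albedo = 0.1    # initial guess
target = 1.0    # observed pixel value; the true albedo is 0.5
for _ in range(200):
    albedo -= 0.05 * grad(albedo, target)

print(round(albedo, 3))  # converges to ~0.5
```

The point of the real thing is exactly this loop, just with millions of parameters (textures, geometry, volumes) optimized at once.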
I think this is probably one of the coolest things I’ve ever seen.
I don’t think it’s that grim. These tools will never replace humans for creativity, because it’s impossible for a machine to be ‘creative’ - unless, of course, it goes full ‘Ghost in the Shell’.
People still use 2,500 year old sculpting and painting techniques, even though they could use computers.
They will be used for the more technical/laborious tasks, but not the creative ones. Take mocap, for example. It’s been around for years, and even though it captures 100% realistic movement, it looks terrible next to a skilled human animator’s work. Ironically, it’s lifeless.
This is a very impressive new volume standard. The reduction in simulation data size is huge compared to OpenVDB.
I saw this yesterday. It is going to be interesting to see whether they can make this suitable for production. As far as I understand, they have to train a neural network for each frame. Is this streamlined enough that it always just works (in the paper, they used different settings)? Installing a deep learning framework is still not trivial (especially for end users); can this be simplified for this kind of application?
Once the neural network has been trained, the decoding/decompression seems to be quite slow too. I wonder whether this is a problem in practice, or whether artists who know they have to work with a scene the next day could let the whole thing decompress overnight and then work with it. (In that case the compression would mostly be useful for archiving.)
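For a rough sense of where the savings come from, and why decoding is slow: a dense grid stores one value per voxel, while a neural representation stores only network weights and must re-evaluate the network per voxel on decode. A back-of-the-envelope comparison (all numbers are made up for illustration, not taken from the NeuralVDB paper):

```python
# Illustrative size comparison: dense float grid vs. neural weights.
voxels = 512 ** 3                 # hypothetical dense grid resolution
dense_bytes = voxels * 4          # one float32 per voxel

params = 500_000                  # hypothetical network size
neural_bytes = params * 4         # float32 weights only

print(f"dense:  {dense_bytes / 2**20:.0f} MiB")   # 512 MiB
print(f"neural: {neural_bytes / 2**20:.1f} MiB")  # ~1.9 MiB
print(f"ratio:  {dense_bytes / neural_bytes:.0f}x smaller")

# Decode cost: reconstruction needs one network evaluation per voxel,
# i.e. on the order of voxels * params multiply-adds in the naive case,
# which is why decompression can be far slower than reading a dense grid.
```

So the trade is classic: orders of magnitude less storage, paid for with compute at read time.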
Great questions. I took the video at face value, going by what was announced. I had the feeling there was more to it but didn’t dig deeper. My hope is that it’s easy to implement and can be adopted industry-wide without caveats. Of course, with NVIDIA there’s often a catch. I wonder whether the new NeuralVDB is centered around their GPUs, or whether a specific architecture is a must to benefit from this format.
USD to be the file system across industries, including architecture, mechanical CAD, industrial simulation, etc.
Effort to Further USD as Foundation of Open Metaverse and 3D Internet Led by Pixar, Adobe, Autodesk, Siemens, Plus Innovators in Media, Gaming, Robotics, Industrial Automation and Retail; NVIDIA Announces Open-Source USD Resources and Test Suite
At its SIGGRAPH special address, the company shared forthcoming updates to evolve USD. These include international character support, which will allow users from all countries and languages to participate in USD. Support for geospatial coordinates will enable city-scale and planetary-scale digital twins. And real-time streaming of IoT data will enable the development of digital twins that are synchronized to the physical world.
To accelerate USD development and adoption, the company also announced development of an open USD Compatibility Testing and Certification Suite that developers can freely use to test their USD builds and certify that they produce an expected result.
“Beyond media and entertainment, USD will give 3D artists, designers, developers and others the ability to work collaboratively across diverse workflows and applications as they build virtual worlds,”
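As a concrete reminder of what USD actually is under all the metaverse talk, here is a minimal hand-written `.usda` scene description (the prim names are made up; the syntax follows the OpenUSD text file format):

```usda
#usda 1.0
(
    defaultPrim = "World"
    metersPerUnit = 0.01
    upAxis = "Y"
)

def Xform "World"
{
    def Sphere "Ball"
    {
        double radius = 2.0
        double3 xformOp:translate = (0, 2, 0)
        uniform token[] xformOpOrder = ["xformOp:translate"]
    }
}
```

The layering/composition system on top of files like this is what the collaborative-workflow claims in the announcement rest on.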