Google is developing an image database where images are classified by A.I.
This video (part #1 of an ongoing tutorial series) introduces Mitsuba 3, a new differentiable renderer developed by the Realistic Graphics Lab (https://rgl.epfl.ch) at EPFL in Switzerland. The tutorial series provides a gentle introduction and showcases the work of different team members.
I think this is probably one of the coolest things I've ever seen.
I don't think it's that grim. These tools will never replace humans for creativity, because it's impossible for a machine to be "creative" - unless, of course, it goes full "Ghost in the Shell".
People still use 2,500-year-old sculpting and painting techniques, even though they could use computers.
They will be used for more technical/laborious tasks, but not creative ones. Take mocap, for example. It's been around for years, but even though it captures 100% realistic movement, it looks terrible against a skilled human animator's work. Ironically, it's lifeless.
This is a very impressive new volumes standard. The reduction in simulation data size is huge compared to OpenVDB.
I saw this yesterday. It is going to be interesting to see whether they can make this suitable for production. As far as I understand, they have to train a neural network for each frame. Is this streamlined enough that it always just works (in the paper, they used different settings)? Installing a deep learning framework is still not trivial (especially for end users); can this be simplified for this kind of application?
Once the neural network has been trained, the decoding/decompression seems to be quite slow too. I wonder whether this is a problem in practice, or whether artists who know they have to work with a scene the next day would let the whole thing decompress overnight and then work with it. (So the compression would mostly be used for archiving?)
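To make the encode/decode cost concrete: the rough idea behind this family of techniques is that the network weights are the compressed frame, training is the encode step, and re-evaluating the network at every voxel is the decode step. Below is a minimal sketch of that idea in PyTorch, with made-up grid sizes and network dimensions; NeuralVDB's actual architecture is more elaborate than this, so treat it purely as an illustration of where the costs come from.

```python
# Conceptual sketch only: fit a tiny coordinate MLP to one frame of volume
# data. The grid, network sizes and step count are made up for illustration;
# NeuralVDB's real design is more sophisticated than this.
import torch
import torch.nn as nn

res = 32
grid = torch.rand(res, res, res)  # stand-in for one frame of density values

# Normalized (x, y, z) coordinates for every voxel.
axes = [torch.linspace(0.0, 1.0, res)] * 3
coords = torch.stack(torch.meshgrid(*axes, indexing="ij"), dim=-1).reshape(-1, 3)
targets = grid.reshape(-1, 1)

# The "compressed" representation is just the weights of this small MLP.
mlp = nn.Sequential(
    nn.Linear(3, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

# Encode = per-frame training. This is the cost you would pay on every frame.
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(mlp(coords), targets)
    loss.backward()
    opt.step()

# Decode = re-evaluating the network at every voxel, which is why decode
# speed is a fair concern for large production volumes.
with torch.no_grad():
    reconstructed = mlp(coords).reshape(res, res, res)
print(float(nn.functional.mse_loss(reconstructed, grid)))
```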
Great questions. I took the video at face value, with what was announced. I had the feeling there was more to it but didn't dig deep. My hope is that it's easy to implement and can be adopted industry-wide without caveats. Of course, with NVIDIA there's often a catch. I wonder if the new NeuralVDB is centered around their GPUs, or requires a specific architecture to benefit from this format.
USD to be the file system across industries including architecture, mechanical CAD, industrial simulation, etc.
Effort to Further USD as Foundation of Open Metaverse and 3D Internet Led by Pixar, Adobe, Autodesk, Siemens, Plus Innovators in Media, Gaming, Robotics, Industrial Automation and Retail; NVIDIA Announces Open-Source USD Resources and Test Suite
At its SIGGRAPH special address, the company shared forthcoming updates to evolve USD. These include international character support, which will allow users from all countries and languages to participate in USD. Support for geospatial coordinates will enable city-scale and planetary-scale digital twins. And real-time streaming of IoT data will enable the development of digital twins that are synchronized to the physical world.
To accelerate USD development and adoption, the company also announced development of an open USD Compatibility Testing and Certification Suite that developers can freely use to test their USD builds and certify that they produce an expected result.
"Beyond media and entertainment, USD will give 3D artists, designers, developers and others the ability to work collaboratively across diverse workflows and applications as they build virtual worlds."
This is a pretty big effort to establish USD as a default 3D data exchange format. I'm pretty happy about it. I only hope Blender won't fall behind in its support of USD. Very excited about USD and MaterialX adoption, and other open standards.
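For anyone who hasn't touched USD yet, here is roughly what authoring a scene looks like with OpenUSD's Python bindings (the `pxr` module); the file name and prim paths are just examples. The point of USD as an exchange layer is that the resulting .usda file opens in any USD-aware application.

```python
# Minimal USD authoring example using OpenUSD's Python bindings. The same
# hello_world.usda file can then be opened in any USD-aware DCC, which is
# the interoperability point being made above.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("hello_world.usda")
UsdGeom.SetStageUpAxis(stage, UsdGeom.Tokens.y)

world = UsdGeom.Xform.Define(stage, "/World")
sphere = UsdGeom.Sphere.Define(stage, "/World/Sphere")
sphere.GetRadiusAttr().Set(2.0)

stage.SetDefaultPrim(world.GetPrim())
stage.GetRootLayer().Save()
```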
I don't understand what's going on. First DreamWorks releases MoonRay as open source. Now Autodesk is developing, and going to release, Aurora, a real-time path-tracing render engine that supports Hydra.
This just seems too good to be true. I'd love to know the motivation for it. Two things come to mind as to why we see so much tech going open source:
- Blender's influence. Blender really captured a huge mind share of the 3D community and now has amazing momentum. Companies are trying to win back that mind share by producing some open-source tools to keep/attract users in their bigger ecosystem of both paid and free tools.
- Fragmentation of tools and standards is very bad for the 3D industry as a whole. Greater interoperability reduces friction in the pipeline, lowering the costs of asset production and artist training. Having fewer, industry-adopted formats will improve progress and ease the art creation process in the long run.
I wonder if it's any of these reasons, or whether there are completely different motives behind this flood of open-source tech/formats in recent years.
Anyway, whatever it is, I'm really happy with what is happening here.
If a company has parts of their code open source, engineers may already be familiar with some of the code, helping them to adapt faster. Or the company may find engineers within the group of people who are contributing code.
Those are two arguments I hear on a regular basis.
Keep in mind that Autodesk does not seem to have any intention to open source the big one as far as render engines go (Arnold). I would actually not expect them to open source anything that might be seen as a solid alternative to their own big products like Max and Maya (at least if you want to do more than smaller scenes).
I'm pretty cynical (I like to call it "realistic") about big tech companies like Autodesk; I think they consider open-sourcing software for these reasons:
- To push their own tech as a standard.
- Fear that they're not nimble enough as a company to keep up with the speed of tech on all fronts: open source lets them harness the volunteer efforts of enthusiasts, and broadens their access to a pool of people familiar with at least one of their products so they can suck them into their greater ecosystem.
- End-of-life extension, so they no longer have to maintain software that might still have thousands of loyal customers who might not be easily convinced to shift to another product of the same company at that point.
They're pretty much copping to #2 in the announcement (well, the first part of that).
They do so relatively unwillingly, because they hate to give up control and have no true fondness for open source. So me, I prefer to contribute to independent open-source projects, not to those commercially motivated honeypots.
Yes, that makes sense for sure. Good point.
I don't know. At a glance, Aurora seems like a good alternative to Eevee, in the sense that Eevee was really successful at offering instant feedback during look development. The speed of iteration and instant feedback is a big deal. Hence we see many production renderers making sure they have an integrated solution offering what Eevee offers: engines like Storm from Pixar, potentially Brigade from Octane, Redshift's RT mode, and now Aurora from Autodesk.
The one important part of these renderers is that they are based on open standards like USD and MaterialX, and support Hydra. Meaning you can use any DCC that supports Hydra to run these engines. Both Arnold and the new real-time Aurora support Hydra, MaterialX and USD. So it should be a no-brainer to develop your scene in Aurora and render production-quality final shots in Arnold. Your experience should potentially be what the Eevee/Cycles combo offers now. You'd be able to seamlessly switch between Arnold and Aurora, but it'll be a better experience to develop your scene in Aurora 99% of the time. At least that's how I see this unfolding.
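As a rough sketch of what "any DCC that supports Hydra can run these engines" means in practice, here is how render delegates can be listed and swapped through OpenUSD's UsdImagingGL bindings. Which plugins show up (Storm, Arnold, Aurora, ...) depends entirely on what's installed in your build, and actually drawing a frame needs a valid OpenGL context, so take this as an illustration rather than a recipe.

```python
# Sketch: enumerate and switch Hydra render delegates via UsdImagingGL.
# The scene description stays the same; only the renderer changes.
from pxr import Usd, UsdImagingGL

stage = Usd.Stage.Open("hello_world.usda")  # hypothetical scene from above
engine = UsdImagingGL.Engine()

# List whatever Hydra render delegates this build can see, e.g.
# HdStormRendererPlugin for Pixar's Storm rasterizer.
for plugin_id in engine.GetRendererPlugins():
    print(plugin_id, engine.GetRendererDisplayName(plugin_id))

# Switching delegates is a single call.
engine.SetRendererPlugin(engine.GetRendererPlugins()[0])
```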
I guess Eevee really showed the benefit of real-time interaction for look dev in the DCC, even if you lose some quality. But with the latest advances in light transport algorithms, the difference could become negligible, if there is any at all, in a year or two.
I get it. I also don't believe that corporations have our best interests in mind. Still, these moves look like they'll benefit everyone.
Of course, corporations are looking after their own interests, but in this case it doesn't look like we're getting the short end of the stick.
Autodesk has wanted a raytraced viewport for about two years. I think ever since Eevee dropped, they have been showing interest in a higher-quality viewport. At first, raytracing was all the hype, so they worked on a GPU version of Arnold (which was acquired after the whole MentalRay discontinuation). They wanted it to compete with V-Ray by giving quicker feedback right from the viewport, but as you may know, it still takes a while to get a raytraced image depending on the scene and GPU, so the 3ds Max team decided to go for partially raytraced, partially rasterised rendering of the viewport. I am under the impression that this is the result of that effort.
However, I'm not sure whether this can be shipped with Blender, due to GPL v3 vs Apache 2.0 licensing. See: https://www.apache.org/licenses/GPL-compatibility.html
If the license blocks shipping the software with Blender, then Autodesk doesnât aid the (current) competition.
I would fully agree with you if we hadn't seen 3ds Max implement USD, OpenColorIO, MaterialX, etc. The Max team is really pushing these standards from what I can tell, and is thus actively contributing (especially to MaterialX, which they have basically adopted to the point where the Autodesk Standard Surface mtl = MaterialX).
This won't be an issue as soon as Blender supports Hydra. I believe Brian from AMD is working on making it happen. I think it's a matter of time.
Also, I failed to mention: Fusion 360, Inventor, Revit and AutoCAD, to name a few, don't use Arnold. They either use Autodesk's proprietary cloud rendering or the ART renderer (Autodesk Ray Tracer). So in that regard, we'll have to see whether Aurora lands in anything but 3ds Max and Maya.
So far, I haven't seen the Revit team indicate that they're implementing Aurora… Instead they produce their own "Consistent colours with textures" mode.