Redshift & Arnold conference @ GTC

Hello, here are the links to the Redshift & Arnold teams’ presentations at GTC:

Redshift: http://on-demand.gputechconf.com/gtc/2018/video/S8404/
Arnold: http://on-demand.gputechconf.com/gtc/2018/video/S8841/

I haven’t watched the Arnold one yet, but the Redshift one is pretty exciting, knowing that it will come to Blender one day :smiley:

How do you know that?

It’s on the Redshift forum, and I posted it in the Redshift thread on BA.

If you trust what the Blender One guys are trying to do, then Arnold will probably come to Blender too.
I, for one, welcome the day when Blender has access to all the important industry renderers.

I am very curious about that. I have never seen a properly integrated commercial renderer in Blender, due to the GPL limitation. Every single integration so far has been just an exporter that severely limited workflow efficiency compared to the proper integrations in other packages.

There are also some very hacky workaround integrations (Thea, VRay, Octane) using invisible windows over Blender’s viewport, custom builds, and sockets, but yeah, deep commercial renderer integration within Blender is one of my biggest pain points with the GPL license.
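
For anyone wondering what the “just an exporter” style of integration actually involves: it usually means registering a custom RenderEngine through Blender’s Python API, exporting the scene on every render call, and writing the pixels back once the external process is done. Here is a bare-bones, untested sketch of that pattern, roughly following the 2.79-era RenderEngine template from the API docs; the engine name is made up and the “render” just fills the frame with flat grey:

```python
import bpy

class HypotheticalExporterEngine(bpy.types.RenderEngine):
    # Identifier and label are placeholders, not a real add-on.
    bl_idname = "HYPOTHETICAL_EXPORTER"
    bl_label = "Hypothetical Exporter"
    bl_use_preview = False

    def render(self, scene):
        scale = scene.render.resolution_percentage / 100.0
        size_x = int(scene.render.resolution_x * scale)
        size_y = int(scene.render.resolution_y * scale)

        # A real exporter would serialize the scene here, hand it to the
        # external renderer (file, pipe, or socket), and wait for pixels.
        # This placeholder just produces a flat grey image.
        rect = [[0.2, 0.2, 0.2, 1.0]] * (size_x * size_y)

        result = self.begin_result(0, 0, size_x, size_y)
        result.layers[0].passes["Combined"].rect = rect
        self.end_result(result)

def register():
    bpy.utils.register_class(HypotheticalExporterEngine)

def unregister():
    bpy.utils.unregister_class(HypotheticalExporterEngine)
```

The painful part is everything that sketch skips: re-exporting on every change, keeping materials in sync, and losing Blender’s own scene management along the way.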

Not the GPL debacle again… Simply get another DCC tool to run alongside. Houdini is more than affordable. :rolleyes:
Yup, I’ve seen both, and they’re conference presentations like any other: “The frog screams sloooowly. Nothing new on the Western Front.”

Well, besides the announcement on the RS forum, he said it in the video.

I am sorry, but that is such an ignorant thing to say, for many reasons.

1. This is a Blender forum; obviously we want to render in Blender, not in some external tool. If I point out that hacky exporters hurt workflow efficiency, how is exporting the entire project to another DCC and setting up rendering there any easier? That is an even worse workflow than a limited exporter straight into a renderer.

2. Houdini is a good tool for advanced effects, but its UI/UX is a nightmare for simple everyday tasks like modeling and scene assembly. It is not a Blender replacement, and it is not a very good choice for assembling and rendering scenes. Just look at how most people use Houdini: they take the sims and assets created in it and usually assemble and render their scenes elsewhere, for good reasons.

3. Houdini Indie is more than affordable, but once you cross the 100k income limit (before tax, actually), Houdini becomes the most expensive DCC software on the market. I am not complaining about its price, because it has the power to back it up; I am just pointing out that Houdini is not that cheap for everyone.

4. Let’s say someone is just a Blender user. You suggest they buy a whole other DCC, learn it until they are as proficient with it as with Blender (usually a task of at least three months, and a year on average), buy a renderer license for it rather than for Blender, and then suffer a lengthy workflow of destructively exporting their scenes into it and rendering them there, while giving up great scene-management tools like render layers and the upcoming collections. Then, whenever a change comes, they go back to Blender, make the change there, destructively export again, and usually re-apply all the tweaks to the newly imported assets by hand. You really think that is an appropriate solution to a problem like this?

I myself use Blender only for modeling and render in Max or Unreal Engine, but I would never be silly enough to suggest that as a feasible replacement for a renderer properly integrated in Blender.

Ignorant?
Ignorant are those who force others to see through their eyes.

YES, exactly. Show respect to authors and cope with reality. But here, most of all, stay on topic!!!

Okay, that’s fine and all, but what is the point of talking about this? There is literally no way to fix the GPL issue. They would have to go through all the code and get written consent from any and all contributors in order to change the license. I bet there are hundreds of lines that no one even knows who wrote. Or I guess they could rewrite Blender from the ground up with a more permissive license? That would work. :wink:

Yes, and I was asking not because I wanted to turn it into another GPL thread. I was asking to find out where I can read more about it, so I can take a look at how they tackled the integration and whether they came up with some better way. Then burnin jumped in and started to spew nonsense about using another package just to render. It was just too much BS not to react to.

I really just wanted to know how the RS developers tackled the exporter, if they are doing it officially, because before I even posted, I went to the Redshift forum and searched, but I could not find any official acknowledgement that the exporter is in the works.

Yeah, to be honest, I’m not convinced that a sockets approach wouldn’t work well enough. A few years ago a guy made a sockets connection between Maya and the Modo standalone renderer. It was amazing: super fast and totally interactive. So I’m not entirely convinced by people who claim it’s not possible, or that it’s inherently clunky. And even if it is a little clunky, as long as you support all the features, it’s better than nothing.
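
To make the idea a bit more concrete, here’s a very rough sketch of what the Blender-side half of such a socket bridge could look like. Everything in it is hypothetical (the port, the JSON message format, the render daemon it talks to); it only shows the basic idea of streaming scene updates to a standalone render process instead of writing files to disk:

```python
import json
import socket
import struct

HOST, PORT = "127.0.0.1", 5005  # hypothetical local render daemon


def send_message(sock, payload):
    """Length-prefixed JSON frame, so the renderer knows where each message ends."""
    data = json.dumps(payload).encode("utf-8")
    sock.sendall(struct.pack("!I", len(data)) + data)


def export_scene_over_socket(objects):
    """Push a (very simplified) scene description to the render daemon."""
    with socket.create_connection((HOST, PORT)) as sock:
        send_message(sock, {"type": "clear_scene"})
        for obj in objects:
            send_message(sock, {
                "type": "mesh",
                "name": obj["name"],
                "vertices": obj["vertices"],    # flat list of floats
                "triangles": obj["triangles"],  # flat list of indices
            })
        send_message(sock, {"type": "render", "width": 1920, "height": 1080})


if __name__ == "__main__":
    # Tiny hard-coded triangle instead of real Blender data.
    export_scene_over_socket([{
        "name": "tri",
        "vertices": [0, 0, 0, 1, 0, 0, 0, 1, 0],
        "triangles": [0, 1, 2],
    }])
```

The length prefix is just one simple way to frame messages; an interactive setup would also have to listen for pixel tiles coming back over the same connection and push incremental updates instead of re-sending the whole scene.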

Yeah, I just found some videos. Here’s a comparison of Modo’s standalone renderer running in Maya through sockets versus native VrayRT in Maya. Mind-blowing:

EDIT: And just like that, the links are dead! Let the conspiracy theories begin! :wink:

Yes, that’s exactly the reason I asked. I was wondering if the RS guys came up with something like an interactive socket exporter. My thought was that if they’re doing the exporter officially, it may end up quite capable.

Sockets are clunky, hard to read/write/test/design, easy to break, and still require a decent render API for good integration.

It’s sad that those talks don’t go much into implementation details. For a developer conference with a significant registration fee, those presentations mostly look like product advertisements aimed at prospective customers, not like anything that would educate developers.

Too bad, since both engines claim to have good out-of-core texturing on the GPU, and I’d love to see how theirs works compared to the rather basic mapped memory allocations that Cycles uses to get around its VRAM limitations.
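
For what it’s worth, the general idea behind out-of-core texturing is just demand paging: keep a fixed budget of texture tiles resident on the GPU and swap tiles in and out as shading requests them. Here’s a toy CPU-side sketch of that caching idea (nothing to do with either engine’s actual implementation; all names and numbers are made up):

```python
from collections import OrderedDict

TILE_BUDGET = 4  # pretend only 4 tiles fit "on the GPU" at once


class ToyTileCache:
    """Toy LRU cache standing in for a GPU-resident texture tile pool."""

    def __init__(self, budget=TILE_BUDGET):
        self.budget = budget
        self.resident = OrderedDict()  # tile id -> tile data

    def fetch(self, tile_id):
        if tile_id in self.resident:
            self.resident.move_to_end(tile_id)  # mark as recently used
            return self.resident[tile_id]
        if len(self.resident) >= self.budget:
            evicted, _ = self.resident.popitem(last=False)  # evict least recently used
            print(f"evict tile {evicted}")
        data = self.load_tile(tile_id)
        self.resident[tile_id] = data
        return data

    def load_tile(self, tile_id):
        # A real renderer would read from disk / host RAM and upload to VRAM here.
        print(f"upload tile {tile_id}")
        return f"pixels-of-{tile_id}"


cache = ToyTileCache()
for t in [0, 1, 2, 3, 0, 4, 5, 1]:  # simulated tile requests during shading
    cache.fetch(t)
```

The interesting part is everything the toy leaves out: how cheaply the misses can be serviced mid-render, and how the engines hide that latency.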

https://www.solidangle.com/news/arnold-5-1/
Release Notes: https://support.solidangle.com/display/A5ARP/5.1.0.0

True, but at least Redshift had a VERY interesting presentation a couple of years ago. Let me see if I can find it… yes, this one: http://www.luxrender.net/forum/viewtopic.php?f=34&t=12524#p118646

That is sort of how I was hoping the newer presentations would turn out too. I’m debating whether it would be worth attending GTC again in person (I went a couple of years ago). Now that ray tracing seems to be getting more emphasis again (GTC Europe looked almost like a pure AI conference), it could be worth it again for a computer graphics guy.

I am not very impressed. I hoped for a much more advanced adaptive sampling feature; the standard one everyone is using has a lot of disadvantages, and the areas for improvement are very obvious. Denoisers on the image film are also a bit of a disappointment, because that’s something everyone else is doing, and we see the collateral damage every day in the forums, like with the caustics issue. I also see that they are taking on a lot of opportunity costs with non-core features and GPU ray tracing. Seems like the Autodesk culture is finally taking over.