Future CGI workflows: Ones resembling 2D painting and photography?

Let me discuss both points here.


3D work being done in a way that resembles 2D painting, but with the advantages of 3D

We are already seeing the possibilities of this with technologies like Ptex, which lets us texture meshes, both simple and complex, without UV mapping. How could it enable a workflow similar to 2D painting from start to finish?
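To make the no-UV-mapping point concrete, here is a toy Python sketch of the core idea (not the real Ptex API; every name here is hypothetical): each face of the mesh owns its own small texel grid, keyed by face index, so a brush stroke only needs to know which face it hit and where on that face, and no mesh-wide UV layout ever has to be unwrapped.

```python
# Toy illustration of the Ptex idea: one texel grid per face, keyed by
# face index, so no global UV unwrap is needed. Not the real Ptex API.

class PerFaceTexture:
    def __init__(self, num_faces, res=8):
        # One res x res RGB grid per face, initialised to mid-grey.
        self.res = res
        self.faces = [
            [[(0.5, 0.5, 0.5)] * res for _ in range(res)]
            for _ in range(num_faces)
        ]

    def paint(self, face_id, u, v, color):
        # (u, v) are local 0..1 coordinates on that one face, as reported
        # by the brush raycast -- no mesh-wide UV layout is involved.
        x = min(int(u * self.res), self.res - 1)
        y = min(int(v * self.res), self.res - 1)
        self.faces[face_id][y][x] = color

# A six-face cube can be painted immediately, with no unwrap step:
tex = PerFaceTexture(num_faces=6)
tex.paint(face_id=2, u=0.25, v=0.75, color=(1.0, 0.0, 0.0))
```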

Simple: with a capable enough painting system, you could remove the lights entirely (or keep only a very basic setup) and use Ptex painting to create fake shading, complete with painted highlights and bumps as well as light and shadow.

Rendering would be very quick, as there would be no complex shading, and multiple angles could be rendered without starting again from scratch as you would in a 2D program. The downside is that reflection and refraction effects would have to be painted in the camera view for each angle, but this workflow may be possible in Blender in the near future without the need for an external renderer.
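For anyone who wants to experiment with the "no lights, painted shading" half of this today, below is a minimal sketch using Blender 2.5's Python API, assuming Blender Internal and a pre-painted image at a made-up path: a shadeless material ignores all lamps, so the painted texture alone provides the shading.

```python
import bpy

# Minimal sketch (Blender 2.5.x API, Blender Internal): a shadeless
# material ignores all lamps, so whatever light and shadow you painted
# into the texture is exactly what renders.

mat = bpy.data.materials.new(name="PaintedShading")
mat.use_shadeless = True  # no lamp influence; painted shading only

# Hypothetical pre-painted image; in a Ptex workflow the per-face
# texture would take the place of this UV-mapped image.
img = bpy.data.images.load("/tmp/painted_shading.png")
tex = bpy.data.textures.new(name="PaintedTex", type='IMAGE')
tex.image = img

slot = mat.texture_slots.add()
slot.texture = tex

# Assign the material to the active object (the mesh you painted on).
bpy.context.active_object.data.materials.append(mat)
```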


Setting up a scene and lights like a 3D variant of photography

This possible future workflow is being driven by the race to create professional-grade unbiased renderers such as Lux, Indigo, Octane, and in some respects the new Thea renderer.

The promise of unbiased rendering is this: set up the structure of your scene (perhaps even using scripts to ease the creation of various items; see the sketch below), give the objects materials and textures, place lamps and emitting meshes where light sources would be, render for as many hours as needed, and you have an image that looks as if you built the scene in real life and took a picture.
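As a small taste of the scripted-setup side, here is a hedged Blender 2.5 Python sketch that drops an emitting plane into the scene where a light panel would hang. The emit value is Blender Internal's; an exporter such as LuxBlend25 would translate an emitting mesh like this into the renderer's own physical area light.

```python
import bpy

# Sketch (Blender 2.5.x API): place a mesh where a light source would
# be and give it an emissive material. Exporters for unbiased renderers
# typically turn an emitting mesh like this into a physical area light.

bpy.ops.mesh.primitive_plane_add(location=(0.0, 0.0, 4.0))
panel = bpy.context.active_object
panel.name = "CeilingLightPanel"

mat = bpy.data.materials.new(name="Emitter")
mat.emit = 2.0  # Blender Internal emission strength
panel.data.materials.append(mat)
```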

This could mean that instead of working like a technical department hunting for the perfect places to put bounce lights and caustic lights, the artist works more like a photographer, without needing any in-depth knowledge of how to build a light rig that mimics actual light physics. Caustics and indirect lighting come by default, and there's no hassle over whether that bounce light over there is too far to the right, needs more power, needs another lamp nearby, or needs a color or falloff tweak.

The best news for us Blender users, however, is that the 2.5 rewrite makes this option easy to get at: exporters like LuxBlend25 can now replace the UI panels entirely instead of being largely restricted to a special scripts window.
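To illustrate what "replacing the UI panels" looks like in practice, here is a stripped-down sketch of the 2.5 addon mechanism involved (the engine name, labels, and the empty render body are all placeholders, and details may differ between 2.5.x releases): a script can register its own RenderEngine and draw its own properties panels for it.

```python
import bpy

# Stripped-down sketch of the Blender 2.5 addon mechanism that lets an
# exporter own the render UI: register a RenderEngine plus a Panel that
# only shows up when that engine is selected. All names are placeholders.

class ToyRenderEngine(bpy.types.RenderEngine):
    bl_idname = 'TOY_RENDER'
    bl_label = 'Toy External Renderer'

    def render(self, scene):
        # A real exporter would write the scene out and launch the
        # external renderer here.
        pass

class RENDER_PT_toy_settings(bpy.types.Panel):
    bl_space_type = 'PROPERTIES'
    bl_region_type = 'WINDOW'
    bl_context = 'render'
    bl_label = 'Toy Renderer Settings'

    @classmethod
    def poll(cls, context):
        return context.scene.render.engine == 'TOY_RENDER'

    def draw(self, context):
        self.layout.label(text="Panel owned by the exporter script")

bpy.utils.register_class(ToyRenderEngine)
bpy.utils.register_class(RENDER_PT_toy_settings)
```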

Right now the main disadvantage is that you sometimes have to tweak the GI settings quite a bit to get good, noise-free lighting, but this could be resolved with better defaults out of the box.


Your thoughts?

My main weapon of choice in my day job is Adobe Creative Suite, arguably the best 2D suite on the market, plus a slew of others, and I have been using similar software since Ventura 1.0 on a 286 IBM XT with 128 kilobytes of RAM, an amber monochrome CRT, and no mouse or GUI of any kind. Recently the main thrust of Adobe's refactoring has been tighter integration of the workflow across all the applications in the suite via a unified application framework, which is still not complete, so the UI is not yet uniform across all the apps. (The Mac people still prefer their multi-viewport setups to the app framework, which was intended to emulate the tiling, non-overlapping UI of the video apps.) The other goals: GPU acceleration for as many features as possible, to improve UI responsiveness and make previously pre-rendered/pre-visualised workflows real-time (After Effects, Photoshop, and InDesign have access to the GPU, but the rest have yet to receive this kind of love, and many features are targeted but not yet implemented); decluttering the UI as the number of features has climbed; automation; and ease of conversion from one format to another, e.g. offset PDFs for print, interactive PDFs for the web, Flash, and HTML, without having to redo or change anything, i.e. design once for multiple targets.

Any of this sound familiar? Also, if Blender's workflow were anything like my 2D workflow, I would simply save the money for Maya and use that instead, despite my moral grounds for promoting open source. The singularity is going to get you, you proprietary capitalist bastards…

Well, the main point I was trying to make with the first part was that this could be added to Blender Internal without having to wait for the shading refactor/rewrite, whose start date is uncertain right now.

This could give BI a better reason to exist in its current state as an artistic, non-physically-based renderer, since we may not see LuxRender-style physically-based GI and materials in the near future unless someone decides that reinventing the wheel with a BI rewrite is preferable to coding and adding new features in Lux or Yafaray.