The human gap in 3D artwork

From my observation, 3D CGI, by and large, as a form of visual art, has a clear tendency towards a certain sterility and lifelessness.

Of course it has such a tendency. Computer-generated images are produced by complex but very structured programs, which are themselves built out of pure mathematical/cybernetic rules and laws.
Even our random number generators produce surprisingly uniform values, which seems contrary to the intuitive understanding of noise and variation; but you will be (rightfully) corrected that this is in fact the expected and correct behavior of such programs, since they are in effect made to implement certain mathematical equations. It is hard (but possible) to overcome this inherent nature of CGI. (Which, in truth, is not even desirable: some of the best examples of 3D art I've seen instead embrace the purity, sterility and precision of computers and end up being far more expressive.)
Specifically, regarding your comparison images: the renders are made with precise physically based renderers that are biased towards "photo-realism", while the paintings do not care about realism at all and focus on expressiveness.

Is it a shortage of available (premade) character assets?

Yes, it's both a lack of assets and a lack of tooling.
Specifically, in the case of your pictures, we lack suitable universal (as in cross-platform and non-proprietary) environment/plant/prop generation software, good human/animal generation software, and suitable NPR software. All of these problems are thankfully being tackled by our engineers. To ask an artist to create a scene of such complexity from scratch in 3D is ridiculous. (Even your revered da Vincis used various tricks to get their paintings done faster.)
For example:
Why are there so many more (good) TF2 Gmod animations than anything else? Why would ANYONE in their right mind ever decide to use an FPS game with absolutely no 3D animation tooling as a 3D animation platform, rather than something like, say, Maya? But once you realize that Gmod costs 10 bucks, that its interface is far simpler and more intuitive, and that it comes pre-loaded with a wide variety of ready-made, configurable assets of (relative) quality, the question is turned on its head: why would ANYONE ever pay thousands for software with primitive, inelegant tooling, where you have to manually poly-model, rig, texture and shade a character and then animate it through a clunky, massively over-engineered interface with a default grayscale viewport, compared to Gmod's real-time WYSIWYG "viewport"?

This massive overhead of so-called "professional" and "real" 3D software is not a feature, it's a bug, caused by the youth of the field and its accompanying lack of established good abstractions for manipulating and visualizing 3D space. (This causes over-specialization and departmentalization, which in turn encourages software producers to keep producing their clunky, primitive software. The final result of this perpetual feedback loop is the stunted artistic development of 3D/CGI as a field.)

This is from the preface of the book “Fundamentals of Computer Graphics”

The cover image is from Tiger in the Water by J. W. Baker (brushed and air-brushed acrylic on canvas, 16” by 20”, www.jwbart.com).
The subject of a tiger is a reference to a wonderful talk given by Alain Fournier (1943–2000) at a workshop at Cornell University in 1998. His talk was an evocative verbal description of the movements of a tiger. He summarized his point:

Even though modelling and rendering in computer graphics have been improved tremendously in the past 35 years, we are still not at the point where we can model automatically a tiger swimming in the river in all its glorious details. By automatically I mean in a way that does not need careful manual tweaking by an artist/expert.
The bad news is that we have still a long way to go.
The good news is that we have still a long way to go.

With things like PBR, texture painting, sculpting, non-technical CAD-like non-destructive modelling, real-time rendered viewports, node-based systems, open-source tooling, open standards, unification of APIs and so on, we are slowly getting there.

The goal is to make it so that CGI artists can create as easily and as expressively as Picasso could paint! Only then can we truly realize the full, unexplored potential of CGI!

The creation and development of open-source and free software is paramount for this goal. If we hadn't voted for Bush we would have been on Mars already, and if Houdini weren't proprietary we would have been a galaxy-faring civilization.
Proprietary software hinders the development of standards, prolongs standards wars, fractures the CGI community, and obviously slows future CGI development by intentionally obscuring certain novel concepts, forcing the industry to reinvent the wheel 15 times in a row.
And of course, the cost of gouache paint and brushes is an inconvenience. The cost of a CUDA computer and a Modo license is your living.

Consider this: probably 50% of all 3D CGI tutors in Gnomon's catalog started out in CG with pirated software; the vast majority (if not all) of the internet's artistic output came from pirated software; and certain entire countries' entertainment industries and cultural landscapes are entirely dependent on pirated software. (There is no one in, let's say, Poland who can afford a 3ds Max license, and therefore no way for the country to naturally produce artists of the needed volume and quality without pirated software.)

What I mean is that there is no storytelling in digital art, if we talk still images, especially if we talk 3D stills.

Nonsense. I believe there are tons of our own Michelangelos that we simply don't appreciate now, just like how no one appreciated those "Frutiger Aero" types in the past, and I believe that 3D and CGI in general far surpass anything traditional. We simply haven't seen anything yet.

Are artists just so overexposed to postmodernist pop-cultural clichés that it wouldn't occur to them to make an artwork about, say, the assassination of Julius Caesar, unless they'd seen that in a Netflix series the week before?

A painting about the assassination of Julius Caesar is precisely the type of postmodernist pop-cultural cliché gruel that modern-day artists would make for Instagram/ArtStation likes. We are in the 21st century, not the Renaissance; Roman and Greek dramas/myths do not have the same cultural power as they did back then. (Do not be under the impression that we no longer communicate the underlying morals and myths outlined in Julius Caesar's assassination, though. We do. But under a different guise, more fitting to our modern social/political media landscape.)


Also, this tendency of artists not to make complex composite scenes that convey some kind of "story", in other words artists choosing to be more specialized, is not unique to CGI; this "specialization" effect is in fact seen everywhere. In the sciences, past scientists were interested and proficient in many fields: Newton was an occultist, a physicist and a mathematician; now each of those fields has its own sub-fields within which you can specialize. When it comes to art nowadays, still images, 3D models and so on are not ends in themselves but mostly parts of a larger pipeline producing a greater work, like a movie or a game. But since such things, for now, cannot be made by individuals, only by team effort, specialization becomes mandatory; otherwise the producers/directors will not have the necessary level of control over their work. This gave birth to concept artists, mechanical designers, character designers and so on. CGI was only coincidentally born into this mess (or maybe it was destined to be born into this mess specifically, through the development of more powerful abstractions, to put an end to over-specialization!)


In my case, I consider myself more of an illustrator than an artist. My interest in CGI comes from the desire to make accurate and compelling visualizations of products and places. And that generally is the focus of commercial art. In fine art, however, story matters. And as you observed, it is a lot harder to make all the individual, personalized elements and compose a scene in 3D compared to traditional media. In many ways, it would be easier to compose and make a traditional-media fine art piece and then emulate it in 3D. But at that point, why bother? A second point would be: given CGI's dominance of commercial art and illustration, a fine art piece composed in 3D has the stigma of being perceived as "cheap", even if a lot more effort went into its creation.


Yes, absolutely.

@Net_Pedestrian: What’s with the blur spoilers? Was that by accident?

Maybe a misunderstanding, but NPR isn't as such a factor in anything I talked about. I was referring to content, to what is being depicted, not to how it is being depicted.

But why? And if so, how is asking them to do it brushstroke by brushstroke, and in the absence of undo, not even more ridiculous?

Yet they don’t.

Sure. On the other hand, Julius Caesar was a historical figure, a real person, and his assassination a real historical event, quite unlike, say, Spider-Man.

And it was really just a random example; I have the impression folks are a little too inclined to stick to that precise example.
But an example of history art could as well be e.g. Nelson Mandela in a cell on Robben Island.
An example such as this would even be very little extra effort on top of the kind of character studies/portraits of which you find plenty, be it on zbrushcentral or artstation (if you look in an appropriate category, like ‘anatomy’ or ‘character modeling’).

Yes, that's probably also why hardly anyone bothers to tell a, for lack of a better word, serious story in animation (by which I mean something along the lines of 'Waltz with Bashir' or 'The Wind Rises'). It's probably a lot cheaper to just use live action.

On the other hand, you’d think some genres would actually benefit from CGI, specifically surrealism.
And in fact it does, insofar as at least in movies, if something profoundly surreal happens, it’s most likely CG (say the swarm of rats in ‘Shutter Island’, the bending city in ‘Inception’, what have you).

On a side note, I occasionally dream of having a raytracer which allows you to input an arbitrary 3D vector field to 'bend' the rays as they traverse the scene. That would be so cool.
And probably break all sorts of assumptions made in raytracing code, I guess^^.

greetings, Kologe

What is the difference between bending the rays and bending the space?


I am pretty positive there is none. That said, I've never quite heard of a renderer which would allow such a thing (and for regular use cases, why would it?).

greetings, Kologe


Are there any use cases besides rendering gravitational lensing (which can be done more easily)?
I am just curious why you would want such a specific niche thing…


None I’d know of, not really.
I guess technically, you could render the most physically accurate heat distortion ever, if you had such a feature. That’s about the most practical thing I can think of, right now.

But practical usefulness aside, as already implied, the interest lies in the surrealism, first and foremost.
Btw. I'm not even convinced gravitational lensing can actually be done more easily, at least not in its full generality (I suspect).
If we are talking about the gravitational lensing of a point-shaped, zero-dimensional singularity (like a black hole), yes; but how about, e.g., a one-dimensional, line-shaped singularity, around which spacetime bends similarly to the way water bends while flowing over a barrage (if the regular direction of transmission were defined as horizontal flow)?

For illustration purposes:

Thinking further, what would it look like if such a line-shaped singularity formed a closed loop to become a ring-shaped singularity, if you will? Now make that a torus-knot-shaped singularity and it would probably result in some mad optical distortions.

Though it’s possible it’d be more dull than I imagine. :smile:

greetings, Kologe


That’s kinda neat, but also the most niche thing I can think of.

Ok, got it, but I doubt this is the right approach. I have been down that road of trying to bend light (via refractive materials) to create interesting surrealistic and abstract shapes and visuals, but it led nowhere. It can be a cool effect, but it's not really a tool in itself beyond that. I also think it's a colossal waste of resources doing it this way.
Rather than doing this in the renderer, the right approach is probably doing it in compositing, with strong and focused art direction on the meshes and materials themselves.


While I see what you mean in terms of such a thing being of hardly any practical use, and a waste of resources if by that we mean the development effort needed to make it work divided by the number of use cases, I think it's quite the opposite.

Computational effort (renderer-work) is cheap, art direction effort (artist work) is expensive.
I always hear everyone talk a lot about the ability to iterate quickly and stuff.
Now the thing is, if I come back to the lemma that bending the rays and bending the space are interchangeable, I have to admit I have a lot more confidence in my ability to visualize in my head the former (rays bending) than the latter (space bending).

Hence I claim that leaving the bending (be it of rays or space) to the computer/renderer would be exactly the more sensible approach, compared to trying to bend your imagination, picture the end result in your head, and then work towards that in a top-down approach.

Take this example:


Hand with Reflecting Sphere - M.C. Escher || Hand with Reflecting Sphere, after Escher

I don't mean to be condescending towards the guy who made the Blender replica, but it goes without saying which one is more easily done: the raytraced reflection or the drawn one.
And this is a case where Escher himself probably didn’t draw (no pun intended) from imagination alone, but rather used an actual mirror ball and his actual hand and actual room for reference.

Also note this example from Wikipedia:

Wormholes are traversable connections between two universes or between two distant regions of the same universe.
The wormhole shown here connects the place in front of the physical institutes of Tübingen university with the sand dunes near Boulogne sur Mer in the north of France.
The image is calculated with 4D raytracing in the Morris-Thorne wormhole metric $ds^{2} = -dt^{2} + dl^{2} + (b_{0}^{2} + l^{2})(d\theta^{2} + \sin^{2}\theta \, d\phi^{2})$, where $b_{0}$ is the throat radius.



Though I currently have no actual intention to explore this in practice. Maybe if I should come across a renderer accessible to me which supports both OSL's trace() function in full, importantly including the "shade" parameter (Cycles doesn't), as well as volume shaders (Appleseed doesn't).

I suspect, given such complete OSL support, one might be able to hack something together to the effect of a volume shader which alters the ray direction at every bounce (in other words, the discretized, renderer-friendly approximation of a bent ray). Well, maybe. Very maybe.
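
To make that discretized approximation concrete, here is a minimal, renderer-agnostic sketch in plain Python. Nothing in it is OSL or any particular renderer's API; `deflection_field` is a hypothetical stand-in for the user-supplied 3D vector field, and the "ray" is simply marched in small steps, with its direction nudged by the field before each advance.

```python
import numpy as np

def deflection_field(p):
    # Hypothetical stand-in for the user-supplied 3D vector field:
    # here, a gentle swirl around the z-axis.
    x, y, _ = p
    return 0.1 * np.array([-y, x, 0.0])

def march_bent_ray(origin, direction, step=0.05, n_steps=400):
    """Discretized 'bent ray': nudge the direction by the field at every step,
    renormalize, and advance; the result is a polyline approximating the curve."""
    p = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    points = [p.copy()]
    for _ in range(n_steps):
        d += deflection_field(p) * step   # bend the direction a little
        d /= np.linalg.norm(d)            # keep it a unit vector
        p = p + d * step                  # advance along the new direction
        points.append(p.copy())
    return np.array(points)

# A ray shot along +x from the origin ends up visibly curved by the swirl.
print(march_bent_ray([0.0, 0.0, 0.0], [1.0, 0.0, 0.0])[-1])
```

In an actual renderer, each small step would be where intersection tests (or volume sampling) happen, which is why a volume shader that alters ray direction per bounce is the natural place to hack this in.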

greetings, Kologe


look up DNGR
awesome stuff

I kinda-sorta think that we are maybe being a little bit unfair here. "Artists are always very constrained by the medium that they necessarily must use." Be it paint, photographic film, video, or … marble. They learn how to work within the "limitations," and sometimes manage to bend those "limitations" to striking artistic effect.

Instead of spending too much time thinking about how to “re-create” the effects that our predecessors were forced(!) to use, I think that we should simply push our new, essentially-unlimited, “canvas” forward.


You should look around a bit more than wherever you're looking… these are certainly telling a story. Though the casual viewer may not know what the specific story is for each image, it's clear that there is one.

Spoiler tags added for those who have not seen Arcane, and still intend to.
