Asking Square-Enix about their 3D stuff

I just asked on their live support (don’t know if it will even work) whether they would show how their 3D artists make their stuff, like the textures they use, the modeling techniques, how they animate, what renderer they use, software, etc.

I was wondering if you think they might actually reply. Also, are there any other people I could contact, like the 3D artists themselves, or any of their sites?

What’s Square Enix??

The makers of the Final Fantasy games and movies.

You’d be surprised. I managed through some coincidence to contact a company who did some work on Lord of the Rings and the Day After Tomorrow and they gave me some valuable insight into their workflow. Basically, they do modelling and animation in Maya and save their work in a proprietary format.

When they need to render, they have some software export to Renderman format, write their shaders manually, and link the geometry RIB to the shaders using Python scripts. Then they render using Entropy.
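To make the “link the geometry RIB to the shaders with Python” step concrete, here’s a toy sketch of what such a glue script might look like. All file names, the shader name, and its parameters are made up for illustration; this is not the studio’s actual pipeline, just the general shape of emitting a frame RIB that binds a hand-written shader to a pre-exported geometry archive:

```python
# Hypothetical RIB "glue" script: write a frame RIB that attaches a
# hand-written surface shader to an exported geometry archive.
# Every name below (files, shader, parameters) is invented for the sketch.

def write_frame_rib(out_path, geometry_rib, shader_name, shader_params):
    """Write a minimal RIB file binding shader_name to geometry_rib."""
    # RIB encodes parameters as "name" [value] pairs
    params = " ".join(
        f'"{key}" [{value}]' for key, value in shader_params.items()
    )
    rib = (
        'Display "frame.tiff" "file" "rgba"\n'
        "WorldBegin\n"
        f'  Surface "{shader_name}" {params}\n'
        f'  ReadArchive "{geometry_rib}"\n'
        "WorldEnd\n"
    )
    with open(out_path, "w") as f:
        f.write(rib)
    return rib

rib_text = write_frame_rib(
    "shot010_frame.rib",        # hypothetical output file
    "shot010_geometry.rib",     # hypothetical exported geometry archive
    "myPlastic",                # hypothetical hand-written shader
    {"Ks": 0.6, "roughness": 0.05},
)
print(rib_text)
```

In a real pipeline a script like this would loop over shots and objects, pulling shader assignments from a database or scene description rather than hard-coding them.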

Finally, they hand the shots over to the compositors, who I think used Combustion or Shake. They said they relied most on the compositing to get it looking realistic, but also on the shaders, which were written by the technical director.

This is why I and many others think that having good integrated Renderman support built into Blender would really make a big difference to how it’s perceived in the professional community. It’s the only missing link in the above workflow concerning Blender.

There was some interesting info on “The Spirits Within” here: http://www.renderman.org/RMR/Publications/sig01.course48.pdf.gz (5.6 MB).

I heard Square Enix (not Square USA) uses Houdini to make movies for their games.

Some time ago I came across an interesting document about render workflow that may interest you:
http://www.renderman.org/RMR/Books/sig02.course16.pdf.gz (save as)

That has some interesting notes on anti-aliasing:

It seems like the best solution that you’d want would be getting a sort of uniform grey that
represented the average color of the black and white lines in the right proportions. That just cannot
happen with normal uniform sampling and standard reconstruction. There is no way to force the
samples to hit the lines and hit the background with exactly the right percentages everywhere. They
land where they land, and you’re stuck with it.

One approach would be to look at small regions of the floor, and analyze how much black
and how much white are in that region, and return the appropriate grey. This technique is called
area sampling. It is very appropriate if you can choose the size of the regions wisely (such as,
the amount of the floor that all is visible in the same pixel), and if your model of the lines lends
itself to calculating the areas of the intersections of the pattern pieces with the region.

For mathematical procedural textures:

Another popular technique is known as prefiltering. In this technique, whatever function is
generating the pattern is simply not allowed to create a pattern with frequencies that are too high.
For example, the black and white stripes might not be allowed to be thinner than a certain width,
and as they go back in distance they are forced to become thicker and thicker to compensate for the
shrinking in perspective.

In general:

The final popular technique is to remove the restriction that the sampling happens at a predictable, regular interval. Instead, take the same number of samples, but spray them all over the place at random and see what they hit. This technique is called stochastic sampling, and was developed by Pixar in the mid 1980s. The idea is that if there is no way to force uniform samples to hit the black and the white with the right percentages everywhere, just fire at random and you’ll get something close to the right answer. You’ll never get exactly the right answer, but since there is no pattern to your sampling, there will be no pattern to your errors either. You won’t get a Moiré pattern, but rather you’ll get a jumble of noise.
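A small demonstration of that effect (my own sketch): regular samples across fine stripes can all land at the same phase and give a wildly wrong answer, while jittered (stochastic) samples turn that structured error into noise around the true average. The stripe period is a power of two so the float arithmetic below is exact:

```python
import random

PERIOD = 0.125  # stripe period; a power of two keeps float math exact

def stripe(x):
    """Black/white stripes: 1 (white) on the first half of each period."""
    return 1.0 if (x % PERIOD) < PERIOD / 2 else 0.0

def uniform_estimate(n):
    """Point-sample the stripes on a regular grid across [0, 1)."""
    return sum(stripe((i + 0.5) / n) for i in range(n)) / n

def jittered_estimate(n, seed=1):
    """Stochastic sampling: one random sample inside each of n strata."""
    rng = random.Random(seed)
    return sum(stripe((i + rng.random()) / n) for i in range(n)) / n

# 8 regular samples all land at the same phase and miss every white stripe:
print(uniform_estimate(8))    # -> 0.0, wildly wrong (true average is 0.5)
# Jittered samples give something close to 0.5, with noise instead of bias:
print(jittered_estimate(8))
```

The uniform grid isn’t just inaccurate; it is *consistently* wrong in a patterned way, which is what shows up on screen as Moiré. The jittered estimate is wrong too, but its errors are uncorrelated from pixel to pixel, so they read as faint noise.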

Lol:

Doing antialiasing well is what separates the
men from the boys.

I think the Renderman documents are extremely informative for all areas of computer graphics in general. They really help you understand the technical aspects behind 3D software.

Isn’t that patented by Pixar? (But if it’s a mid-’80s patent, it should expire soon…)

[quote=“Trident”]

Isn’t that patented by Pixar? (But if it’s a mid-’80s patent, it should expire soon…)[/quote]

Stochastic basically just means a random sampling process. Pixar’s patent relates to one specific method of doing it using a patented form of Monte Carlo sampling. They thought BMRT was using it but it was actually using an analytical method. Pixar’s way is just really fast and gets good results but there are other ways, although most likely slower.

I read that Pixar’s earliest patents won’t run out until 2007, so we’ll just have to use the slower methods. I don’t know, though; 3Delight has amazing AA and it is really fast. I don’t know what method they use, but I don’t think it’s the Pixar thing.

In effect, PRMan behaves much like the receptors in our eyes, using a Poisson-distributed pattern of point samples to produce noise instead of aliasing whenever the Nyquist limit (half your sampling frequency) is exceeded:

Jump to about halfway down the page:

http://www.zaon.com/company/articles/3d_rendering.php
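The “receptors in our eyes” comparison refers to Poisson-disk-like distributions: samples are random, but no two are closer than a minimum distance. Here’s a naive dart-throwing sketch of my own to illustrate the idea; it is not PRMan’s actual (patented, far more efficient) method:

```python
import random

def poisson_disk_darts(n_target, min_dist, max_tries=10000, seed=1):
    """Naive dart throwing in the unit square: accept a random point
    only if it is at least min_dist from every point accepted so far.
    O(n^2) and slow, but fine as an illustration."""
    rng = random.Random(seed)
    points = []
    tries = 0
    while len(points) < n_target and tries < max_tries:
        tries += 1
        p = (rng.random(), rng.random())
        # squared-distance test against all accepted points
        if all((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 >= min_dist ** 2
               for q in points):
            points.append(p)
    return points

pts = poisson_disk_darts(50, min_dist=0.08)
print(len(pts))  # up to 50 well-separated sample points
```

The minimum-distance constraint is what kills low-frequency clumping: the samples stay random (no Moiré) yet evenly spread (less noise than purely uniform random points).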

http://freepatentsonline.com/4835712.html I’m not exactly apt at reading patentese :slight_smile: but it looks like that would actually be late 2006 (20 years from the date of application). I’m also not sure if it is that patent; I only skimmed.

[quote=“Trident”]

http://freepatentsonline.com/4835712.html I’m not exactly apt at reading patentese :slight_smile: but it looks like that would actually be late 2006 (20 years from the date of application). I’m also not sure if it is that patent; I only skimmed.[/quote]

Unfortunately, I don’t think it is that one. That patent is for:

The ability to produce accurate, volumetric, anatomical models from computerized tomographic (CT) scans

I actually did something like that at university to turn CT data into a 3D model, but I wasn’t allowed to use an algorithm like that because of either that patent or a similar one. I think it was actually the marching cubes algorithm I couldn’t use; that one looks even more advanced. Bah, I hate patents.

The Pixar patent for jittered sampling as applied to anti-aliasing, focal blur, motion blur, etc. was filed three times.
The first one was filed in 1985.

The second two were updates/extensions of the first. The second one was filed in 1989, and the third one was filed in 1991.

I’m no lawyer, but I think the patent will expire based on the 1985 filing despite the subsequent updates/extensions. That means it should be coming up pretty soon.

And in case anyone is wondering, deep shadow maps made it through the patent office. Grrr.

Question… what is stopping anyone from implementing that kind of sampling in their own app, NOT FOR SALE, just for their own use?

That would explain a few things about why some artists can bang out super clean renders so fast.

American law gives a patent’s assignee the right to exclude any use, even personal, without a license. European laws are more lenient (the ones I’ve heard of, and IANAL), but they would still limit uses of patented tech too severely for that to be worth considering.

Moral: this is illegal, and if you EVER got caught, the consequences would be unpleasant.

[quote]
American law gives a patent’s assignee the right to exclude any use, even personal, without a license. European laws are more lenient (the ones I’ve heard of, and IANAL), but they would still limit uses of patented tech too severely for that to be worth considering.

Moral: this is illegal, and if you EVER got caught, the consequences would be unpleasant.[/quote]

But for something like a personal home-grown render program, how could it ever be proven?? It’s not like “oh, that looks like Pixar’s render engine look”. I wouldn’t doubt that it exists somewhere… Hell, there are some 3D modeling programs that are given out only to the most elite 3D modelers around, not for sale, not for download, and not open source…

PRMan costs 3000/CPU, IIRC.

I’m too lazy to calculate the costs of 3D app development, and the risk that a disgruntled employee would tell about the trick, but I’m almost sure that it’s cheaper and easier to either buy PRMan or license the patented tech directly.

Modding an open-source renderer to use patented tech is easier, but I still doubt it’s worth it.

YEAHHHH how bout them square enix…? %|

I’ve heard anecdotes that animation companies probably do knowingly infringe patents in internal code.

Regarding Europe: software patents are currently invalid there, BUT if Blender included them, end users in the US could be sued for infringement.

So it would likely be safe for a European company to use them and integrate them into Blender for their own use, but not to distribute them.

LetterRip

osx-rules: yep, the marching cubes patent ran out a bit less than a year ago, so prior to that, usage would have been an infringement.

LetterRip