Can Blender 2.79 achieve visuals and animation like kingsglaive final fantasy xv?

Hi there
I’m just watching Kingsglaive: Final Fantasy XV and my jaw dropped. I mean, look at the visuals of this movie; it’s the best I have ever seen in any animated film.
So is it possible to achieve visuals and animation like this in the latest Blender? Can Blender make animated movies like this?

Yes, it’s a matter of money and artists. To reach that kind of quality you need A LOT of money, and a production like that usually uses more than one app; Houdini, for example, could be used for FX…

But I don’t see any reason why Blender could not achieve that with the proper production money.


It is a 3D animated movie, and Blender is a tool for creating 3D animated movies.
But 3D software does not make movies by itself.
You need humans at every step of the process.

And in this case, the quality depends on the humans and on the resources the producers gave them.
For a fair answer, you should ask the question of people who both made this movie and know Blender.

What is certain is that Blender has been used to make films, such as the open movies:

If Blender were perfect, it would have no reason to evolve to 2.8.
Is it possible to render the same kind of images on your computer with Blender?
I don’t know your resources, but probably not. Images like these require a huge amount of data and a computer able to handle it.
And before you can know how many resources you need, you have to become an artist able to do that work.

So, the first limits to consider are you and your resources, before the software you use.

In theory, Blender has all the features needed to render the trees, clouds, and cloth shown in the video.
But to judge whether it does them well or not, you first have to try making something like that yourself.

I think referencing/linking for scenes like this will be very important. Does anyone know if they will enhance referencing in 2.8?

Once you modify the layer system and introduce object collections, you have to adapt linking/referencing as well, at the very least.
An id-override-static branch was created three days ago. Collections will allow many things to be overridden.
If that is applied to links, linking assets will probably become a lot easier.

Mont29 improved ID linking and ID remapping in the 2.7x series.
And he continues to keep the asset-engine branch up to date.
So referencing/linking will not be the same in 2.8. And generally, in Blender development, when something is intentionally changed, there are more improvements than regressions.

When will these changes come, and how big will the progress be? Nobody can answer these questions now.

In this article, we can read that the asset browser will have several asset engines, just as we have several render engines (Clay, EEVEE, Cycles).
So abilities will differ according to the chosen asset engine (each one specific to the data it manipulates).
To sum up:
_Code to identify data is ready.
_Code to handle engines is ready.
The engines themselves still have to be created, and the specific UI of each engine has to be created too.

I seriously doubt it.

Square Enix is known for pushing the envelope when it comes to CGI.
This movie is the end result of every piece of bleeding-edge technology they could push, combined with some of the best talent in the Japanese industry (not to mention their pockets run deep enough to buy either tech or artists).

Not even Pixar or Disney can outperform them in this semi-realistic style. No one else tries, at least not on this level.

Nobody can make a movie like this with one piece of software, Blender or otherwise. You need a lot of people around the world working on it. Could Blender make similar images? Maybe, but not in motion. Things like hair are currently less powerful than in other software; that is one example of the problems.

But in productions like this you need software with a strong pipeline and management tools, and Blender doesn’t have that at the moment.

To sum up… is it possible to make it in Blender? Yes.
Cheaper and faster than with proprietary software? I doubt it.

Maybe, but the stories of their games were always crap.
E.g. in this trailer I saw a crippled guy sitting in a wheelchair and I was like: WTF!
The machinery in the game is beyond our time, and the guy is still sitting in a wheelchair?
How about an exoskeleton (which is already used in some of our industries), or even healing his disability?
Especially in a game where characters jump at 50+ km/h, what is the wheelchair for?

Thanks. Do you think it is safe to jump to 2.8? I have a semi-big project coming up and I would like a robust system to handle linking/referencing.

The uncanny valley still runs strong in this one…

Is it just me, or did the simple 16-bit pixel art Final Fantasy have ten times more character than the art on display in this trailer? Watching it left me feeling absolutely cold. All that money poured into all that CGI, and for what? Lifeless and unimaginative characters.

Perhaps I am being harsh. The technical achievement is impressive. But I really felt this is style over substance.

…And those eyes. …Those dead eyes. Creepy.

Final Fantasy is the epitome of style over substance

Also, it’s not up to the software to match and surpass that quality, it’s up to you.

Hahahaha!!! That’s hilarious! I didn’t even think about that.

I think Yes, although:

  • Lack of OpenVDB means any sophisticated volumetrics need to be rendered in another application (e.g. Houdini). That applies to pretty much any explosions and destruction, as they all require volumes. Some of it can be done in Blender, but compared to the directability of Houdini, it’s no contest. Of course you couldn’t do half of these in Maya either unless you are sadistic, but Maya/Max/C4D and everyone else have OpenVDB. You could perhaps assemble all the assets in Clarisse, solving this issue.

  • Another issue is the amount of detail in the scene. My fear, and my experience, is that Cycles will run out of VRAM unless out-of-core rendering has been implemented. CPU is slow, but OK.

  • I can also think of BVH build times. In the last cinematic I rendered with big scenes, the BVH took 4 minutes to build (using one core?) plus 5 minutes on the actual frame (on GPU). This here is orders of magnitude more complex.

  • I would also note that AOVs/render elements play a key role, and Blender is not as flexible in that department as other engines. For starters, Cryptomatte would be welcome, so at least some color correction can be done in post without re-rendering the scene with flat colors.

Personally I think Redshift (biased) or even Octane would handle such scenes better and faster, and offer more flexibility. On the CPU side, RenderMan and Arnold are groomed for such challenges. Of course, Cycles has time and time again proven doubters wrong, and I would not be surprised if that is the case here as well.

In short, I think it’s doable for the most part with some help from Houdini (just as was no doubt used in this video). The question mark is where and how to render it. Is Cycles flexible enough? Can it take big scenes (memory, BVH)? Is it fast enough (biased vs. unbiased)? Can FX be rendered in Blender (lacking OpenVDB), or should that be forgotten and just rendered externally (syncing scenes, cameras)? No doubt all of this is overshadowed by the question of the actual RESOURCES needed to embark on something like that!
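For anyone unfamiliar with the term: a BVH (bounding volume hierarchy) is a tree of nested bounding boxes built over the scene’s primitives, which is why build time grows with scene complexity. As a rough single-threaded illustration only (this is not Cycles’ actual implementation, and all names here are mine), a top-down median-split build looks something like this:

```python
# Illustrative BVH sketch: split primitives at the median of the widest
# axis, recursing until only a couple of boxes remain per leaf.
# Boxes are (min_xyz, max_xyz) tuples.

def bbox_union(a, b):
    """Merge two axis-aligned bounding boxes."""
    return (tuple(min(a[0][i], b[0][i]) for i in range(3)),
            tuple(max(a[1][i], b[1][i]) for i in range(3)))

def centroid(box):
    return tuple((box[0][i] + box[1][i]) / 2.0 for i in range(3))

def build_bvh(boxes):
    """Return a nested (bbox, left_or_leaf, right_or_None) tree."""
    if len(boxes) <= 2:
        bb = boxes[0]
        for b in boxes[1:]:
            bb = bbox_union(bb, b)
        return (bb, boxes, None)  # leaf holds the boxes directly
    # Split along the axis where the centroids are spread the widest.
    cents = [centroid(b) for b in boxes]
    spans = [max(c[i] for c in cents) - min(c[i] for c in cents)
             for i in range(3)]
    axis = spans.index(max(spans))
    order = sorted(range(len(boxes)), key=lambda i: cents[i][axis])
    mid = len(boxes) // 2
    left = build_bvh([boxes[i] for i in order[:mid]])
    right = build_bvh([boxes[i] for i in order[mid:]])
    return (bbox_union(left[0], right[0]), left, right)
```

Every level sorts and splits the primitives, so builds get expensive fast as triangle counts climb into the millions, and doing it on one core, as described above, makes it worse.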

I have to agree here. I think the visuals are mostly impressive because of the art style and design behind it, not so much the tech involved in making it. I didn’t really notice anything striking that made my jaw drop. All I have to go on is that trailer, though, so maybe I’ll have to watch it and see if I feel different afterwards. I will say this: they have a very good motion capture system. :wink:

Romanji seems to be saying that this is some kind of pinnacle of CGI technology and that no one else even tries because it’s so hard…

I honestly don’t think that’s the reason at all. First off, the VFX industry has a long history of being slammed for trying to do all-CGI movies with realistic humans. Just think of Robert Zemeckis and movies like The Polar Express, Beowulf, and Mars Needs Moms. All of the Final Fantasy movies have made very little money in the US, and that doesn’t really inspire Hollywood to back projects from others. In the end, the question everyone asks is always: why?

I’m not exactly sure how Square Enix gets away with it, but I’m sure they probably fund a lot of it themselves.

Also, the notion of this being a pinnacle of technology seems odd to me when I can think of many examples of rendering, performance capture, deformation, simulation, etc. that are far beyond this. Take the new Planet of the Apes movie. Man, there’s some CGI that makes me totally forget that it’s CG and not real apes. Not to mention all the numerous digital stunt doubles, recreated dead actors, and full-CG characters (HULK SMASH!) that Hollywood cranks out every year. In my opinion, really good CGI doesn’t call itself out. If you’re watching a movie and thinking, “Wow! How did they make that CGI look so real?!” then that’s probably a sign that it actually doesn’t look real at all.

This also brings to mind this video:

As you can see, this is real-time. It’s done by using the movement of controllers to drive the fading in and out of multiple shape keys and normal maps. This is not at all impossible to do in Blender; it’s just that no one has done it yet, mostly because it would take a lot of time and money to scan the human actor, separate all the different expressions into muscle groups, and rig it all up. That’s a lot of work, and most likely it was originally done for a game where there was a lot of money to get it done right.

So, I guess my point is that yes, while it’s probably possible to do this in Blender, you would need a lot of talent, time and money.

As for the eyes looking dead, for me it’s not so much that as how everyone in the movie is so perfect and pretty looking, like they are trying to push some digital-human ideal on us. I find it a strange Japanese ideal of “perfect beauty.” I used to watch a lot of anime and read a lot of manga, and it has always rubbed me a little the wrong way.

It can, but is it feasible? You need a team and a budget as big as that studio’s. No one is going to risk it just to save 300,000 dollars on software when the budget is several million.

These are very good points. I would like to address one thing though: if Cycles runs out of VRAM then, undoubtedly, so will Redshift and Octane. There is no magic “use less memory” feature in either of those renderers that allows them to render more elements than Cycles. At least, not that I’m aware of. Maybe they are a little more optimized, but after testing out Redshift for a while, I’m not convinced.

There are strategies to get around some of these memory issues, though. One is to break up your scene into lots of self-contained passes, keeping in mind how much memory is used by each. Textures will always be an issue, though, so using compressed formats is essential (although, now that I think about it, are they stored uncompressed in VRAM?). Using instances/dupli-objects everywhere you can will help immensely too.
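On the aside about compressed formats: as far as I know, image files are decompressed on load, so a PNG’s file size says little about its in-memory footprint (whether the data then sits compressed in VRAM is exactly the open question above). A back-of-the-envelope estimate for the uncompressed size is simply width × height × channels × bytes per channel:

```python
# Rough uncompressed texture footprint; illustrative helper, not a
# Blender API. The half-float / full-float byte sizes are standard,
# but how any given renderer actually stores textures is an assumption.

def texture_memory_bytes(width, height, channels=4, bytes_per_channel=1):
    """Uncompressed in-memory size of one texture.
    bytes_per_channel: 1 = 8-bit, 2 = half float, 4 = full float."""
    return width * height * channels * bytes_per_channel

# One 8-bit RGBA 4K texture already costs 64 MB uncompressed:
print(texture_memory_bytes(4096, 4096) / (1024 * 1024))  # 64.0
```

At that rate a few dozen 4K maps eat a consumer GPU’s VRAM on their own, which is why the pass-splitting and instancing strategies above matter so much.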

One thing I discovered is that using “Adaptive” with the Subdivision Surface modifier can actually save memory in many cases. Just as it subdivides your mesh more the closer it gets to the camera, it also does the opposite. If you have a scene with dozens of objects that use SDS, they will normally all be rendered at the highest subdivision level you set for the render, but with “Adaptive” the objects farther from the camera use less, so you can save some memory there too.
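The behaviour described above can be sketched with a toy formula: drop one subdivision level each time the object’s distance from the camera doubles. This is purely illustrative; the function name and formula are mine, not Blender’s actual screen-space dicing math:

```python
import math

def adaptive_level(max_level, distance, reference_distance=1.0):
    """Illustrative only: lose one subdivision level per doubling of
    camera distance beyond the reference distance."""
    if distance <= reference_distance:
        return max_level
    drop = int(math.log2(distance / reference_distance))
    return max(0, max_level - drop)

# An object at 8x the reference distance keeps 3 of its 6 levels:
print(adaptive_level(6, 8.0))  # 3
```

Since each subdivision level roughly quadruples the face count, even one or two dropped levels on distant objects is a large memory win.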

Lastly, Octane and Redshift are GPU-only renderers. That means that if you do run out of VRAM, there is simply no way to render the scene. The fact that Cycles can use either the GPU or the CPU is actually a huge advantage, because you are never totally screwed by being GPU-only.


Yes, pretty much. I had this problem with FF13 as well: not just the prettiness, but in general the body language of the characters was so extremely Japanese that it was very difficult not to notice. I imagine if you watched all the important films in Japanese cinematic history before watching this one, you’d probably notice fewer issues, because you’d be used to the distinctly Japanese body language the characters have.

I dunno, I guess that everyone looking like anime characters might also work in its advantage, because it forces you to consider it as a stylised thing.

You can also see some distinctly Japanese things in, indeed, the wheelchair, as well as in how some of the shots are framed. (The guy coming in, smiling: it’s not that there’s no such thing in Western film, it just looks totally different.)

That said, the lighting on the skin is pretty darn good, if I am comparing it to Rogue One, for example :slight_smile:

You can find some information about how Redshift handles memory here:

Really, this is a movie? I thought I was looking at a game cutscene… It looked about the same as most Assassin’s Creed game trailers.

Thank you for your informative posts Indy_logic & Wazou.

Regarding this:

There is a patch in the works to use system memory; as I understand it, this applies to textures only:

I also vaguely recall reading that Nvidia should implicitly allow the use of “unified memory” (CUDA 6?).

I’m not really in the loop on any of it. It would be nice if someone could explain how mature these are, and which limits are solved and which remain.