Physics-driven animation

Has anyone tried mixing animations and physics in Blender?
Cascadeur includes deep learning for auto-posing and should bring new features.

This looks like an amazing tool with huge potential! It appears to be free for those willing to beta test. I’d be afraid of investing the time to test it out only for it to end up costing a fortune to buy. It would be nice if we had an idea of the pricing. The manual does mention dealing with Blender, so Blender users take note (it imports/exports FBX / Collada files). I’m not much of an animator but would love to get more into it, especially since these kinds of fun tools could help with that.

The open beta-test (OBT) of Cascadeur is available now. With the OBT a broad user base is able to test and evaluate the next major version of our physics-based character animation software. Any animation created with Cascadeur’s new OBT version can be freely used in games and movies without our permission.

You’re not restricted in your use of the open beta, so you can do whatever you want with your animations.

Perhaps the price will be affordable for indies.

BTW, I didn’t find an equivalent, equally easy way to make physics animations in Blender; it can be done, but with a lot more effort. Also, Blender does not have deep-learning auto-posing.

Let’s go… Animation 2020! =D

On a more serious note, this kind of thing needs to one day become a standard, using AI-driven animation. Imagine as an artist just “directing” the software by creating desired keyposes (which themselves might already be roughed out via AI by typing out “character walks excitedly along the road from the trashcan to the door” or something), and the AI creates the interpolations based on different factors, such as the weight, personality, and emotion of the character.

We’ve already started down the path of using AI assistance in rendering via denoising, but hopefully things like animation, sculpting, shading, etc. can one day soon be all assisted by AI.

That’s not how it’s gonna work out, ever, in any software. Not within the lifetime of any of us.

Don’t get fooled by the term AI. That’s just marketing nonsense. What we’re dealing with here is not A.I. as in that old Spielberg movie of the same name. It’s in no way connected to actual intelligence at all. Better think of it as ‘computer intuition’, if you will.

The point here is, the computer doesn’t have a clue what it’s doing at all. In a sense it’s ‘learning’ to perform a certain task via ‘training’, true, but that’s comparable to how you learned to ride a bike or to swim as a kid.
Ever since then, you’ve just been able to ‘do it’, without having to relearn it each time and without having to intellectually contemplate ‘how do I ride a bike without falling over?’ each time.

But telling someone “character walks excitedly along the road from the trashcan to the door” is sth. altogether different. If you tried telling that to a computer, you might as well tell it to a one-year-old child or to a dog.
Neither will ever remotely understand any of the numerous high-level concepts involved here, such as ‘excitedly’, ‘along’, ‘road’, ‘walks’ or ‘trashcan’, let alone their interplay.

greetings, Kologe

Not ever, or not within our lifetime? They’re two different assumptions. If it’s the latter, your argument is rather moot because you’re already recognizing that it is possible, just not in our lifetime.

Also, make sure you’re not assuming I’m talking about some perfect software, as my thought was about a directable (i.e. editable, guidable) workflow.

I don’t think anybody’s wondering if algorithms have human-like intelligence when talking about implementation of machine learning results. I’m not sure why you made that jump.

We’re already using the fruits of machine learning everywhere, and what I’m suggesting isn’t far-fetched at all; it just needs the correct stepping stones.

Storyboarder is already a crude example of what I’m thinking of, taking a prompt to generate a basic scene. AI Dungeon is another example of a user’s prompt allowing the AI to smartly generate content.

I’m envisioning something in the future that can potentially serve as the starting point (“roughed out” being the keywords here) for the 3D pipeline, such as animation, sculpting, material creation, etc.

Imagine, for example, if the artist could take keyframes that were somehow generated (via a typed prompt, manually defined parameters, etc.) and all he/she had to do was alter the poses/timing/motion paths, dynamically shaping the result towards the final animation.
Initially the results might be best suited for stuff like animatics, but with serious effort, I could see something that becomes a significant time saver for final animations as well.

Not at all like telling it to a child or a dog. You’re making a lot of assumptions in your last sentence, which run counter to research advancements already being made. We have AI capable of recognizing objects and living things, basic human emotions, acceptably blending animation libraries, adapting body mechanics to different parameters, style transference, etc.

I’d have to be unrealistically pessimistic to assume these individual efforts are not going to get better in the future. Everything tells me that if a serious attempt is made at some point in the future, even a kind of “type and go” generator for generally acceptable to good starting points of various steps in the 3D pipeline is definitely realistic, maybe even possible within the next decade or two.

Well, in fact the lifetime sentence was a clarification I added at the end, after having written the rest of my post. So yes, you’re right, those are two contradictory statements.
On the other hand, how is my argument moot if I assume it’s not possible within our lifetime? Would that not at least mean none of us will live to experience such software (which, by consequence, would make speculating about it rather pointless, for us, I guess)?

Excuse my jump here. I just tend to be a bit allergic to that kind of marketing blabber.
Like when they offer a UHD screen, but they label it 4K. Or they talk about machine learning, but they label it AI.
That is very much done with the clear intention of making it sound a bit magical, and also to push it in the direction of SF movies like ‘A.I.’, ‘2001: A Space Odyssey’, ‘Ex Machina’ or ‘I, Robot’ (add an arbitrarily long list of other examples here) inside people’s heads.

While I don’t quite see Storyboarder’s connection to this (I haven’t tried it though, so I might be missing sth.), I quickly checked out AI Dungeon.
I’m not impressed. How is this different from picking a random generic text piece after any user input?
Here’s what it gave me (while I admittedly tried to make fun of it):

You smoke some crack.
Your mouth waters as you smell the wonderful scent of roasting meat. You have a feeling that you haven’t smelled food in a long time. You quickly look around you and notice that the other survivors have become more wary.

You smoke some more crack.
The other survivors don’t seem like themselves. Even the deranged men that normally hang around with the others are behaving themselves. You have a bad feeling about all this, but you’re in no position to run.

You run out of crack.
You quickly think of something to help get you out of this situation.
You’re a clever man, you come up with…
The spark!

You mix heroin with cocaine.
The mixture is done and you feel a rush of energy. It’s like a drug, it has an almost manic effect on you. You grin to yourself as the others are taken by surprise by your outburst.

Especially the last paragraph made me laugh. I tell it about mixing two of the most widely known drugs and it still doesn’t get it. Instead, it tells me it was ‘like a drug’.
What am I supposed to say? Dear AI Dungeon, for your information: heroin and cocaine are not like drugs, they are drugs.

Very much like telling it to a child or a dog. :P Think about the very examples you provided there:

  • recognizing objects and living things - just about exactly what a toddler or a dog can easily do as well
  • recognizing basic human emotions - just about exactly what a toddler or a dog can easily do as well

Your other examples are too closely bound to technical CG-related concepts, obviously, to say the same about them, but maybe you see what I mean.

In any case, call me pessimistic (though, for more political reasons, I don’t believe that’s actually a bad thing), but I do think most of this talk about machine learning is just hype.
Yes, there’s a lot of research going on in this field currently, I know, and I’ve seen statistics about how the number of research publications on the matter has basically skyrocketed in recent years.

On the other hand, to my knowledge, there haven’t been any groundbreaking innovations in the way machine learning itself is done. Afaik it’s still the very same concepts that were already being experimented with 40 years ago.

So why all the hype and all the increased interest in the last decade? I don’t think it’s a matter of machine learning having suddenly become a better way of solving computer-science problems.
Nor is it a matter of advancements in processing power making it a viable option now, rather than 40 years ago.
I think it’s mainly this: all sorts of Silicon Valley (or elsewhere) data-pimp corporations (Amazon, Google, Facebook, Alibaba, Netflix, PayPal, Twitter, Epic, you name it) have already established

  1. Huge databases generated from user interaction or otherwise
  2. Huge server-farms with tremendous amounts of processing-power and storage capabilities (think Amazon AWS etc.)

So pushing machine learning from small-scale academic research into mainstream use cases (even if you have to forcefully shoehorn it there) is a convenient way for those corporations to suddenly make it look like an incredibly useful new technology popped out of nowhere and should be built upon.
And as machine learning (for the initial training phase at least) needs huge amounts of

  1. input datasets
  2. processing power
  3. storage (the huge input datasets themselves require huge amounts of it)

pushing the computer-science research community towards machine learning is a nice way for those data-pimp corporations to make themselves indispensable for the advancement of sth. which is presented as a trend occurring within computer science itself.

Of course no one is actively forced to do machine learning research, but if there’s money in it, and those data-pimp corporations are willing to sponsor researchers, and there are prizes to be won (e.g. Kaggle competitions, the Netflix Prize, the DARPA Urban Challenge), that’s gonna have an effect on what kind of research gets done.

In other words: all this talk about machine learning is artificial hype, carefully engineered by the people at Google, Facebook, Epic, Alibaba etc. around a technology from the ’80s (if that’s even early enough).

greetings, Kologe

Denying that machine learning can produce useful results is an exaggeration. Sure, it’s hit or miss, but it can give quick results. They aren’t finished or polished products by any means, but they have the potential to help the creative process. I’ve used https://artbreeder.com/ for silhouette work while doing thumbnails for quick concepts (although its base dataset is pretty limited). It’s already kinda viable to use machine-learning assists. Sentence autocomplete gets it right most of the time, for example. If it doesn’t produce a usable result, let it run again.

Regarding AI Dungeon, this is what I got out of it on the first try:
You smoke crack.
You smoke crack and feel a rush of euphoria. It is the best feeling you’ve ever felt.

You smoke more crack.
You are surrounded by crackling flames. You feel like you can fly! The flames have transformed you into a creature of fire!

You smoke even more crack.
The world is transforming around you. Buildings are collapsing, fires are raging and demons are screaming. You must act fast to prevent your own destruction!
You smoke even more crack and transform into a creature of fire.

You sober up.
You sober up and feel like your old miserable self again. But something has changed…
You look down at your hands and they have changed. They are red, hairy and five fingered.

You mix heroin with cocaine.
You mix the two drugs and feel a wave of blissful euphoria wash over you. It feels like a hit of crystal meth.
You run outside to share your new discovery with your friends.

It’s arguable what’s what in that, but it could pass as a valid result.

Throw together a huge animation dataset with tags and an additional framework, and it’s not completely unreasonable to expect some sort of useful result in our lifetimes.

Holy smokes, man, I didn’t expect a wall of text for response!

I reread that post several times, and it still sounds like you’re saying that what I’m saying is impossible. If that’s not what you meant, then I guess your point isn’t moot.

Though, in that case, I don’t really know what the intended point was.

Haha, yes, that’s an entertaining aspect of AI Dungeon: it can frequently produce nonsensical results story-wise. But a few things:

  1. The engine you were using is their “free” version. They apparently have a far smarter AI called “Dragon”: [More info]
  2. I think you weren’t using the “custom” game mode, but rather a preset one, which makes it try to work whatever you write into the context of the given story. In your case, it seems you chose one about a character trying to survive something.
  3. Even the free version gives you the ability to edit the story, provide points to remember, etc., to help guide the AI. In the same way, my writeup was about a directable program, because I can’t imagine a perfect program, nor a perfect user.
  4. Like @0451 shared, the results can also be very impressive at times, even with the free AI.

I can’t say I see what you mean. Also, even for the “simple” task of object recognition, we already have some AIs that are purported to surpass humans in their tests.

Regardless, I responded to that point originally because it sounded like you were saying what I was saying isn’t possible. But if that’s not what you were saying, I guess there’s no reason to drag it out.

If we have learned anything from the AI-driven denoising in Cycles, it is that machine-learning-based algorithms (as of now) are fairly useless on their own, but really shine when guided by other forms of data provided by the application or the user (i.e. color and normal information).
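As a rough picture of what “guided” means here, below is a toy sketch: not Cycles’ actual denoiser (which is a trained neural network), but a classic cross-bilateral filter where the albedo and normal passes decide which neighboring pixels may blur together. All names and parameters are made up for illustration.

```python
import numpy as np

def cross_bilateral_denoise(noisy, albedo, normal, radius=3,
                            sigma_s=2.0, sigma_a=0.1, sigma_n=0.2):
    """Smooth `noisy` (H, W, 3) while preserving edges found in the
    albedo and normal guide buffers (each also (H, W, 3))."""
    h, w, _ = noisy.shape
    out = np.zeros_like(noisy)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            # Neighbors whose guide features differ from the center
            # pixel are down-weighted, so edges in the guides survive
            # while noise in flat regions gets averaged out.
            da = albedo[y0:y1, x0:x1] - albedo[y, x]
            dn = normal[y0:y1, x0:x1] - normal[y, x]
            wgt = (spatial[y0 - y + radius:y1 - y + radius,
                           x0 - x + radius:x1 - x + radius]
                   * np.exp(-(da**2).sum(-1) / (2 * sigma_a**2))
                   * np.exp(-(dn**2).sum(-1) / (2 * sigma_n**2)))
            out[y, x] = ((noisy[y0:y1, x0:x1] * wgt[..., None]).sum((0, 1))
                         / wgt.sum())
    return out
```

Without the two guide terms this collapses to a plain blur; with them, the auxiliary data does most of the edge-detection work, which is the point being made above.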

The user will still have to do some manual animation work to serve as a template for the AI to work with, so an instant push-button animated sequence (with any mesh) is still a long way off. What could be done right now with physics-based animation is to extend Blender’s physics and constraints code to correct or deny user-defined movement to account for collisions (i.e. the user could not move a hand through a wall, no matter how hard they try).
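As a toy illustration of that kind of constraint, here is a minimal sketch. Everything in it is hypothetical (a hard-coded wall plane instead of real collision shapes, and made-up helper names); a real implementation would query the rig and the scene geometry through Blender’s own physics code.

```python
import numpy as np

# Hypothetical helper, not Blender's actual API: clamp a proposed
# joint position so it cannot cross a wall, modeled as the plane
# n.x = d with unit outward normal n.
def clamp_to_wall(position, normal, d, margin=0.01):
    position = np.asarray(position, dtype=float)
    normal = np.asarray(normal, dtype=float)
    depth = position @ normal - d      # signed distance to the wall
    if depth < margin:                 # the joint would penetrate
        position = position + (margin - depth) * normal
    return position

# The user drags the hand to x = -0.2, behind a wall at x = 0:
hand = clamp_to_wall([-0.2, 1.0, 1.5], normal=[1, 0, 0], d=0.0)
print(hand)  # [0.01 1.   1.5 ] -- pushed back to the wall surface
```

A real version would have to run a check like this for every affected joint on every transform update, which is exactly where the performance concern raised below comes from.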

I’m not so sure about that one, either. This too involves a lot of abstract high-level concepts:
If the user can’t push the hand through the wall, can he push a single finger through?
Maybe part of a finger?
What about the wristwatch the character wears, will that go through the wall?
If not, where does it end? Would the proposed system have to check for collisions on the single-vertex level, or rather: how would it not have to do just that?

Now talk about ruining performance with all those collision checks. And I’m sure you know very well how bad Blender’s animation playback performance already is, anyway.

What I believe is that there might be a place for machine learning constrained to inner pose-space (dealing with the relative placement of limbs, not accounting for the surrounding environment). So basically what Cascadeur seems to do.
Maybe also for inbetweening, or even sth. like making a looping walk cycle from a few keyposes; a naive baseline is sketched below.
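For the inbetweening idea, the naive baseline looks something like this: plain quaternion slerp between two keyposes, with made-up bone names. A learned model would effectively replace slerp() with something motion-aware (easing, overlap, weight shifts), but the input/output shape would be similar.

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    dot = np.dot(q0, q1)
    if dot < 0.0:            # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:         # nearly parallel: lerp and renormalize
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0
            + np.sin(t * theta) * q1) / np.sin(theta)

# Two hand-made keyposes as per-bone rotations (w, x, y, z); the
# inbetween frames are generated instead of hand-keyed.
pose_a = {"upper_arm": [1.0, 0.0, 0.0, 0.0]}        # rest pose
pose_b = {"upper_arm": [0.7071, 0.7071, 0.0, 0.0]}  # ~90 deg about X
for t in (0.25, 0.5, 0.75):
    frame = {bone: slerp(pose_a[bone], pose_b[bone], t)
             for bone in pose_a}
    print(t, frame["upper_arm"])
```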

greetings, Kologe

I just wanted to add a little to this thread, because I think it’s an interesting little piece of evidence of how we thought about this technology about 2.5 years ago…

The advancement in the image-generation space with DALL-E 2 and Stable Diffusion, and in text generation with GPT-3 and ChatGPT, is truly remarkable. We now have access to technology that can generate images and text that are almost indistinguishable from those created by humans.

It’s interesting to note that advancements like these tend to follow an S-curve, and at this point it’s hard to predict where exactly we are on that curve.
As we continue to progress along it, we can expect to see even more remarkable tools, capable of generating even more sophisticated and nuanced content.

However, it’s important to remember that this technology is not without its limitations. While we have made great strides in creating tools that can generate text and images that are nearly indistinguishable from those created by humans, there are still some limitations that must be addressed.
For example, there is still a long way to go in terms of creating tools that are truly creative and innovative. While these tools are capable of generating content that is similar to what humans can create, they are still limited by their programming and lack the capacity to generate truly unique and innovative content on their own.

Nonetheless, the advancements we have made in the past 2.5 years are truly remarkable, and it’s exciting to think about what the future holds for this technology.

– laterrr

For example, Substance Designer powered by AI:
Instead of creating hundreds of spaghetti nodes, the user would only set up the main material steps as groups.
Given some images of the desired result, the AI would automatically generate the material nodes, and the user would only have to tweak settings.
