Future of motion graphics in Blender 2.8x

It may not seem intuitive at first, but once you know the basics it gets much easier, because it is consistent all around. Tools and features are well connected to each other, so as a user you can actually explore and come up with your own ways of doing things. That’s where the power and “intuitiveness” come from. Actually, I don’t think any program is really intuitive enough to start doing things without any help the first time… MS Paint, maybe :stuck_out_tongue:

In Houdini, once you learn how to model a simple box with nodes, you can use the same paradigm to start exploring on your own and discover all sorts of different things. It is very, very well thought out. Even C4D, which I find quite messy, is consistent in the way you make things, so people just need a good introduction to the software and then they can carry on by themselves.

Blender is incredibly powerful, but it is not intuitive at all, and the tools are not well connected to each other… things are all over the place and sometimes don’t work as expected. For example, as of today we still don’t have a way to use vertex groups directly in Cycles node trees… Because of that, the results of the Dynamic Paint modifier can’t be used directly by Cycles, and we need weird workarounds or external add-ons (one common workaround is sketched after the list below). Isn’t that absurd?

  • There’s no easy way to break a wall and have its shards emit particles that are also rigid bodies… Again, not without weird workarounds, because particle systems, rigid bodies and soft bodies don’t work well together.
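As a minimal sketch of the vertex group workaround mentioned above (assuming a 2.7x-era bpy API; the object and group names are hypothetical), you can bake the vertex group weights into a vertex color layer, which Cycles can then read through an Attribute node:

```python
import bpy

# Hypothetical names: replace "Wall" and "PaintWeights" with your own object / vertex group.
obj = bpy.data.objects["Wall"]
mesh = obj.data
vgroup = obj.vertex_groups["PaintWeights"]

# Create (or reuse) a vertex color layer; Cycles can read it with an Attribute node.
vcol = mesh.vertex_colors.get("PaintWeightsBaked") or mesh.vertex_colors.new(name="PaintWeightsBaked")

for loop in mesh.loops:
    try:
        w = vgroup.weight(loop.vertex_index)   # weight of this vertex in the group
    except RuntimeError:                       # vertex not assigned to the group
        w = 0.0
    # Vertex colors are stored per face corner (loop); encode the weight as grayscale.
    color = vcol.data[loop.index].color
    color[0] = color[1] = color[2] = w
```

In the material, an Attribute node pointing at “PaintWeightsBaked” then exposes the weight as a factor. If Dynamic Paint writes its result into a vertex group, re-running a bake like this after the simulation gets it into Cycles.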

Again, as the title of the thread suggests, for Blender to be a good option for motion graphics, what needs to be done is a proper connection between features and tools. It’s not a matter of adding new and shiny features; it’s a matter of improving what we already have.

I think a paradigm shift is coming with the new dependency graph, the nodal approach, the “nodification” of everything. It could be huge, and motion graphics in Blender could be a lot more fun to do :slight_smile:

Tbh, I currently prefer doing simpler stuff in After Effects rather than in Blender.

But my hope is that the new dependency graph will replace the old one, outperform it, and open new doors.

I heard about the brilliant idea of separating hair/strands from particles, which would be great. Particles, in my opinion, should be rewritten from scratch.

Particles and hair could also borrow a paradigm from Cycles, where the node tree is represented in the property panels as a stack. That way, not everything is shown from the start for you to tinker with here and there; instead, the system is built up from the ground out of the parts you need for the task at hand.

That would be great, and fun to work with, I think.

A new particle system should also make more use of the Bullet physics integration; I want to see particles as rigid bodies, even becoming soft bodies. And on that note, I don’t know what soft body solver is used today, but it’s slow as hell.

At the very least, give us a setting to use the Bullet soft body solver. I’ve seen it working in real time.

The current one suffers as much as the current particle system: it’s stuck under an old paradigm.

Anyway, I heard Ton say in a video that sometime after the new dependency graph and the nodal approach, there should be time to look over physics and maybe unify all the simulations under one roof.

Even if it’s impossible to unify all the physics sims, there’s a lot to look into.

In my experience, some of the most fun motion graphics leans heavily on physics simulations; even simple stuff like rigid body collisions can produce nice motion graphics. It just adds that extra level of detail that would take forever to do by hand.

Sorry guys, maybe I’m missing the point, but why on earth can’t the Blender Foundation use the source code from Fluid Designer? There, everything is in place: materials, objects, world and so on (a preset system).

At least the PBR node will be in place for 2.8.

It would be nice to have a simple walk/run follow-path setup for the default human rigs, so that people could easily be added as props,
and so that animators could focus on other things than the daunting task of making someone walk in an animation where walking isn’t even the main topic.

How about a more sensible take on NPR in Blender? Freestyle currently feels a bit tacked on and can’t be used elsewhere in the pipeline.

+1
There are plans to improve Freestyle for Blender 2.8, but I haven’t read any real proposals as of yet.
I’m hoping they choose to go with GLSL shaders; both Freestyle and GLSL outline shaders are post effects, and a GLSL version would be a real-time effect. I don’t like Freestyle very much because it’s so slow.

See here for some examples:

And there’s a whole chapter on it in GPU Gems 2:
http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter15.html
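This isn’t a real-time GLSL shader, but here is a rough sketch of the same “outlines as a post effect” idea using Blender’s compositor from Python (2.7x-era API assumed): run an edge filter over the normal pass and send the result to the composite output.

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

# Make sure the normal pass is available on the first render layer (2.7x API assumed).
scene.render.layers[0].use_pass_normal = True

rlayers = tree.nodes.new("CompositorNodeRLayers")

edge = tree.nodes.new("CompositorNodeFilter")
edge.filter_type = 'SOBEL'          # detect edges where surface normals change abruptly

invert = tree.nodes.new("CompositorNodeInvert")   # dark lines on a light background
composite = tree.nodes.new("CompositorNodeComposite")

tree.links.new(rlayers.outputs["Normal"], edge.inputs["Image"])
tree.links.new(edge.outputs["Image"], invert.inputs["Color"])
tree.links.new(invert.outputs["Color"], composite.inputs["Image"])
```

A real-time GLSL version would do essentially the same edge detection per frame in the viewport; this is just the offline analogue.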

We’ll probably end up with both: real-time NPR shaders (I’ve only seen simple strokes, nothing like Freestyle’s diverse set of outline strokes), and then, on top of the OpenGL viewport render, Freestyle could still render as a post-process, as it does now.

Sure, but it would be nice to have Freestyle available in the compositor as layers.

True. The best we can do right now to isolate Freestyle strokes is to put them on another render layer. It works, but it’s not optimal.
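For reference, a minimal Python sketch of that render-layer trick (2.7x-era bpy API assumed, the layer names are hypothetical): one layer for the beauty pass, and one layer that renders only the Freestyle strokes so the compositor can stack them.

```python
import bpy

scene = bpy.context.scene
layers = scene.render.layers

# Hypothetical layer names; adjust to your scene.
beauty = layers.get("Beauty") or layers.new("Beauty")
lines = layers.get("Lines") or layers.new("Lines")

beauty.use_freestyle = False     # geometry only, no strokes

lines.use_freestyle = True       # strokes only:
lines.use_solid = False          # don't render solid geometry on this layer
lines.use_sky = False            # transparent background so strokes composite cleanly
```

In the compositor you then Alpha Over the “Lines” render layer on top of “Beauty”.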

The big problem with Animation Nodes is that it’s like a separate program to learn, and a very difficult one; it’s much simpler to learn AE from the beginning… It also needs some kind of UI preset system, otherwise it can’t be counted as a feature for a professional pipeline. I remember losing half a day on a job just trying to achieve some simple effect. It’s a powerful tool, but it’s pointless if you can’t use it in your work. Another big issue I ran into is that you can’t copy/paste or even append Animation Nodes setups to another file, so every effect you make has to be reproduced from scratch in the next project… It’s a real pity.

IMHO, my really big wish is to see Blender become a motion graphics tool as well (for now I’m doing all my MG in Blender, but sometimes I feel like I’m hammering a nail with a microscope)… Fortunately there are some add-ons I couldn’t do motion graphics without, like “Commotion”, and another great add-on I found just today (and it’s three years old!), “randomize”. But you can’t rely on Animation Nodes in a workflow.


Much of the randomize functionality can be recreated with AN very quickly, though. It’s really just 4 or 5 nodes, IIRC.
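The AN node setup itself is out of scope here, but as an illustration of what that randomize-style offset boils down to, here is a plain-Python sketch (not AN) that shifts each selected object’s keyframes by a random number of frames; `max_offset` is a made-up parameter.

```python
import bpy
import random

max_offset = 20.0  # hypothetical maximum offset, in frames

for obj in bpy.context.selected_objects:
    if obj.animation_data and obj.animation_data.action:
        offset = random.uniform(0.0, max_offset)
        for fcurve in obj.animation_data.action.fcurves:
            for key in fcurve.keyframe_points:
                # Shift the keyframe and its handles so the curve shape is preserved.
                key.co.x += offset
                key.handle_left.x += offset
                key.handle_right.x += offset
```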

I’m not saying it can’t be done; I’m saying it’s very far from being intuitive, and learning it is a bit like learning a programming language. You’re not going to start doing that in the middle of a job with a deadline just to accomplish some simple effects…

Yes, you should know your tools before you apply them professionally.

I understand if you don’t want to invest the time in a complex tool to achieve a “simple” task, or if you want “presets” for everything. I don’t understand how you could claim that AN can’t be “relied on” or that it can’t be counted as part of a “professional pipeline” based on that attitude. It’s you who isn’t up to the task, not the tool.

If you’re more comfortable with Adobe programs, then use those. Adobe designs simpler applications exactly because many designers are not technical.

Blender isn’t an Adobe program, though. The design of AN is closer to applications like XSI or Houdini. I can guarantee you that you wouldn’t be willing to handle those applications either, but those applications are the benchmark, not After Effects.

I use Animation Nodes professionally on a daily basis, and it covers a wide range of needs. Even though I have to rely on baking things due to server limitations, the range of possibilities is pretty big.

To be honest, I found AN easier to get to grips with than Houdini’s SOP nodes, which are somewhat like black boxes. With a very basic understanding of vectors, matrices and how Blender handles mesh data (vertex, edge and polygon indices), you can achieve great results that would take you a lot of time in Houdini.
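To give an idea of the “very basic understanding” being talked about, here is a small bpy sketch (assuming an active mesh object and a 2.7x-era API) of the vertex/edge/polygon data and the matrix math involved.

```python
import bpy

obj = bpy.context.active_object
mesh = obj.data

verts = [v.co.copy() for v in mesh.vertices]        # local-space vertex coordinates (Vectors)
edges = [tuple(e.vertices) for e in mesh.edges]      # pairs of vertex indices
polys = [tuple(p.vertices) for p in mesh.polygons]   # per-face lists of vertex indices

# Object transforms are 4x4 matrices; world-space position of the first vertex
# (in Blender 2.7x the operator is '*', in 2.8+ it becomes '@'):
world_v0 = obj.matrix_world * verts[0]
```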

Maybe the transition from C4D to Houdini is easier. On the other hand, for some low-level operations in Houdini you have to use VEX (scripting), something that can be avoided in AN.

As for AE, it’s a pain in the ass when things get really complicated. You can spend your whole life opening and closing properties.

Just reverse engineer Houdini (without unintuitive artifacts like the “stamp” function): everything nodes. You can’t go wrong with that :wink: