Let's give Blender another chance

Hi folks!

A few years ago I thought I could give Blender a shot. I’ve been a Maya user since, well, beta 2, in ’96! So the switch was too hard. I hated the interface. I thought Blender used the most complicated ways to do the simplest things. And what was that RMB-select thing anyway?

I’m now developing a kids’ show and I wanted to render it in Unreal so I wouldn’t need a render farm. I have to use Alembic to bake the animation. Unreal does support Alembic files, but it loads everything into memory, and it’s not easy to manage since it’s a game engine and there are no timelines and such.

I’d heard about EEVEE and 2.8, so I thought, let’s see what it can do. I have to admit, it’s mind-blowing! Blender fully supports Alembic files, including cameras. Going from Substance to Blender could not be easier. And EEVEE is wonderful!

Here’s a test I did this weekend. Please understand that I’ve been using Blender for only a week so I still have a lot of testing to do to get my renders perfect.

I did have some issues that I can only imagine are related to using a beta version, like being unable to animate focal length, focus distance and aperture. I can see the animation curves, but nothing changes at render time.

So yeah, I’m starting to really love Blender. The interface makes much more sense now. I do have other issues, but before I start complaining I need to do some research to see whether the problem is actually on my side.

Just wanted to share the joy. :slight_smile:


With EEVEE, the rendered result is supposed to match what you see in the viewport in Camera view.
There should be no problem with focal length.
Depth of Field is an effect that has to be enabled in the Render tab.

Clément’s latest bug fix and optimization to DOF landed last Friday.
You should give it a try.

I have, for quite a long time, not been fond of Blender, but kept it around because I have friends who have been learning to use it, and due to my experience with other 3D software, they ask me questions which inevitably result in me needing to open Blender to look at a file, etc.
2.8 has changed everything for me. I am enjoying working with it, and I am creating models, complete with UV maps etc. which I am able to use. True, they aren’t super complex, but I’m getting a lot further and with a lot less frustration than I’ve had in the past, and it is a good feeling.
Glad I’m not alone in this. Sharing the joy is a good thing. :slight_smile:


The DOF animation is now fixed. I found the problem with animating the other camera parameters. I can change the focal length, focus distance or f-stop in the interface and add keyframes. I can see the keyframes in the timeline, but the values don’t change: as soon as I change frames, they revert to what they were before. Now, if I go into the Graph Editor and manually move the value on the curve itself, it works as expected, so at least I have a workaround. I don’t know if this is because I’m using an Alembic file for the camera. I would need to run more tests.

Can we scale only horizontally or vertically in the Graph Editor?

Change the value to the desired value, then right-click the field and select Insert Keyframe, or just hover over the field and press I. Alternatively, you can hit the little record button above the timeline to automatically key any changes you make.
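For what it’s worth, the same thing can be scripted. A minimal sketch using Blender’s Python API (2.8x property names; it only runs inside Blender, and the object name “Camera” is just Blender’s default, so adjust to your scene):

```python
import bpy  # only available inside Blender's bundled Python

# Assumes the scene contains a camera object named "Camera".
cam = bpy.data.objects["Camera"].data

# Change the value, then confirm it by inserting a keyframe --
# the scripted equivalent of right-click > Insert Keyframe.
cam.lens = 35.0
cam.keyframe_insert(data_path="lens", frame=1)

cam.lens = 50.0
cam.keyframe_insert(data_path="lens", frame=50)
```

Without the `keyframe_insert` call, the value change is discarded when you change frames, which is exactly the “revert” behavior described earlier in the thread.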

That is the default behavior. As Nosslak said, Blender will not consider a value change intentional until it is confirmed by re-inserting the keyframe.
When you add a keyframe, the value field is yellow. When the value is changed, it turns orange, meaning you still have to confirm the new value. It should be yellow again before you change frames.

To avoid that, you have to use auto-keyframing.

The Graph Editor uses the same principles as the 3D View.
Press S to scale, then X or Y to constrain to an axis.
Transformation is relative to a pivot (Bounding Box of the selection, 2D Cursor, or Individual Centers to transform only the handles).


As I mentioned in my previous reply, I do change the values and add keyframes. I can see them in the timeline. But even if I change the value from, let’s say, 35mm to 50mm, it will still keyframe at 35mm. I have to change it manually in the Graph Editor.

I cannot reproduce your camera keyframing issue here on the 2.8 beta.

Note that, if you turn on Auto-Keyframing in the Timeline (record icon), keyframes will be added automatically when you change values on animated items.

If you have a specific file where this happens, you are welcome to strip away everything except the camera, and submit the file in a bug report on developer.blender.org.

And this is EEVEE only? I think there’s a lot of post-processing there. By the looks of it you rendered it out in full 4K. :slight_smile: Regardless, it’s good to see someone with long experience in 3D migrate that knowledge to a new package in just a few weeks. Congratulations.
You mentioned facial mocap? Was it really? How was it done?
Cheers.

man, I hope soon someone can help you move your back :joy:

  • anyway, Welcome to the Monkey’s island.

Alright… so I tried to reproduce the problem and now it all works. All of it. I guess it was me all along. @DavidRivera, for the facial mocap, I used Face Cap on the iPhone X. It saves an FBX file that contains a basic face with blendshapes. In Maya, I wrote a script that creates a locator (like an Empty with Plain Axes in Blender). It adds to that locator extra attributes for the 52 blendshapes and copies the animation from the animated face to the locator. So the locator carries all the animation, including rotations for the eyes and neck. It then saves that locator as an FBX file.
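A simplified sketch of the idea in Maya Python (placeholder node names, a truncated blendshape list, and not the production script):

```python
import maya.cmds as cmds

# The 52 ARKit blendshape names exported by Face Cap (truncated here).
BLENDSHAPES = ["browInnerUp", "jawOpen", "eyeBlinkLeft"]  # ... and so on

# Create the carrier locator (roughly the equivalent of an Empty in Blender).
loc = cmds.spaceLocator(name="faceAnim_LOC")[0]

for bs in BLENDSHAPES:
    # One extra keyable attribute per blendshape on the locator.
    cmds.addAttr(loc, longName=bs, keyable=True)
    # Copy the weight animation from the Face Cap blendShape node
    # ("faceCap_BS" is a placeholder name) onto the locator.
    if cmds.copyKey("faceCap_BS", attribute=bs):
        cmds.pasteKey(loc, attribute=bs)

# Export just the locator, with its animation, as FBX
# (requires the fbxmaya plugin to be loaded).
cmds.select(loc)
cmds.file("faceAnim.fbx", force=True, exportSelected=True, type="FBX export")
```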

After that, I load my rig, which also has a locator with the same name as the one I exported. That locator has driven keys (I don’t know the equivalent in Blender) that drive the face controllers. When I import the locator into the scene, since there’s already a locator with the same name, Maya just updates the one in my scene instead of adding another one. So it imports all the animation automatically. From there I export an Alembic file and open it in Blender.

It would be awesome to skip the Maya steps, but there are three main issues. First, I’m not a rigger. My character was rigged by a genius I used to work with. The bones can be switched off and the arms can be distorted as needed, like waves if he gets electrical shocks. Redoing that rig in Blender would force me to hire someone to rebuild it, and I’d rather spend my money on body mocap for now. The second issue is that Blender doesn’t support ASCII FBX, which forces me to convert the files first. Third, I would need to script a way to import the animation, but I have zero programming knowledge in Blender. At this point, I’m just trying to get a really cool demo. When I get financing for the series, I’ll be able to hire people to help me. I wouldn’t say I’ve migrated to Blender yet; I’m still exploring. I’m a senior hard-surface modeler and I haven’t even tried any modeling tools yet. But I’ve seen some really cool tutorials. I’m getting there!


complete with UV maps etc.

UV mapping in Blender is the one thing that still drives me back to Maya. There’s no manipulator, and everything seems overcomplicated. By default I only see the UVs of faces I have selected in Edit Mode, until I go to the display menu and check a box. But then I’m still only able to interact with the UVs of components I have selected on the mesh. In Object Mode I don’t see UVs at all. There’s this selection synchronization that lets me see and interact with everything, but then I can’t select UVs, only faces/edges/vertices.
Intuitively, I want to select some UVs and unfold/unwrap them, but in Blender nothing happens. If I have selection sync active, the faces I want to unwrap get separated from their adjacent island and span the 0-1 space. I first have to pin the other UVs to make it work the way I want. Unselected UVs should be pinned by default when I work on a selection.

Compared to Maya’s UV Tools, which were integrated from a plugin, Blender’s “workflow” really slows me down.

Another no-go area in Blender for me is the data exchange of animation rigs. If you get a rig as FBX from some other package and you want to open it, change some animation, and export it back out without any other changes: not possible in Blender. There’s this new armature root that is added as a null, and that breaks interoperability with other software because the hierarchy has changed. And the reaction from a Blender developer was basically: why would you use other software? Why don’t you do everything the Blender way? As if there were no tools other than Blender.
If I remember correctly, I couldn’t even open the FBX exported from Blender in MotionBuilder.

I’m still very impressed with how far an open-source project has come. And I’m keenly waiting for the industry-standard keymap, as I suspect it will make my adoption of Blender much more comfortable. But to make it really usable I’ll probably have to wait a few more years, or get into the API myself.

Sorry for interrupting the spreading of joy. ^^

Synchronized selection means that selecting one vertex in the 3D View selects all of its UV representations in the UV Editor. Blender’s synchronization works both ways at the same time:
what is selected in the UV Editor gets selected in the 3D View, but also what is selected in the 3D View gets selected in the UV Editor.
You are basically asking for a decoupling of that, with a new workflow:
one option to select 3D View vertices from a selection in the UV Editor, and another one to select UV vertices from a selection in the 3D View.
Or maybe doing that through operators instead of a mode.
I agree that the auto-hiding of unselected faces is also a little excessive.

The room for improvement is recognized; several tasks related to the UV Editor are still to-dos.
UDIM should arrive in 2.81 or 2.82, so that probably won’t be the only new feature for the UV Editor.
At least one of your requests has been integrated: in 2.8 there is a manipulator when the Transform tool is active.

That does not really reflect their mentality nowadays. I don’t think that kind of reply was meant to be taken seriously. Blender devs have often said that they ran into difficulties with FBX because Blender’s animation system works very differently from Autodesk software.
Then Alembic popped up as a new “must-have” file format. And now there are also efforts going into glTF.

So you are probably right: you will have to wait a few more years. But it does not look like many.

I have to admit, the more I work with Blender, the more it feels like a festival of making things complicated for no apparent reason. Just moving an object’s pivot is a complicated process: you have to go into Edit Mode, set the 3D cursor thing where you want it, get out of Edit Mode, then set the pivot to the 3D cursor. I didn’t try doing this on multiple objects; I’m not even sure it’s possible. Snapping and constraints are also way too complicated, and these are things I do a hundred times a day. Sometimes you have to use triple-key hotkeys. That’s a no-no. When you create a plane, a cube or whatever, you can’t set the number of subdivisions as you create it. Same thing for extrusions. As for the UV Editor, I have to admit it is light-years behind Maya’s. It’s quite complex, with the linking things and all. Why is it so complicated? Why not just double-click an edge to select a loop? There are some really cool features, though, that I wish I had in Maya. I wish I could do a live demo with the developers, sharing my screen to show them what I mean. Then again, I’m probably not the first Maya user to complain like this.


These complications are mostly you trying to force an outside workflow onto a new program. It is a very natural thing to do. You need to work at learning the Blender way, and you will find the workflow to be very good. Naturally, 2.8 is still very raw and will need a lot of polishing over the next few months.

I have no idea why you think moving the pivot is so complicated. Have you learned the hotkeys for it? On top of that, you clearly don’t need to get out of Edit Mode to change it! To move the 3D cursor you just Shift+right-click.

Here are some nice hotkey sheets. https://cgboost.com/resources/

Douglas E Knapp

Because you are used, in other programs, to grabbing the pivot point and moving it by hand.
In Blender you do the opposite: you mark the point where you want the pivot, then “send” the pivot to that position with a command.

Obviously this single process seems complicated, but over time you will discover how many advantages and how much speed you gain by using the cursor for many operations (rotating or moving vertices and more). Blender has its own harmony, and you only see the real advantages after digging a little deeper into its philosophy of use, especially the speed of getting results as quickly as possible (and no, I am not referring simply to moving a pivot point).


Select one or more objects, hold down Shift and drag or click the right mouse button to position/move the 3D cursor, then Object > Set Origin > Origin to 3D Cursor.

Done. Preferably use the shortcut key for the last step. Turn on snapping to snap the 3D cursor to vertices, edges, etc.

This works for multiple selected objects as well.
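If you do this often, it can also be automated from Blender’s Python console. A small sketch (2.80 API property names; in some beta builds the cursor property was named slightly differently, and it only runs inside Blender):

```python
import bpy  # only available inside Blender's bundled Python

# Put the 3D cursor where the new origin should be
# (the scripted form of Shift+right-click in the viewport).
bpy.context.scene.cursor.location = (1.0, 0.0, 0.0)

# Move the origin of every selected object to the 3D cursor --
# the scripted form of Object > Set Origin > Origin to 3D Cursor.
bpy.ops.object.origin_set(type='ORIGIN_CURSOR')
```

Because `origin_set` operates on the whole selection, this handles multiple objects in one go.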

In Blender it works a little differently: extrude, then CTRL-R to subdivide the extrusion. Use the mouse wheel to add subdivisions, or use the parameters box in the bottom-left corner to control the number of cuts.

Same with a box or plane: create the box, enter Edit Mode, right-click, and subdivide. Or use CTRL-R again. The other primitives do have subdivision controls built in.

I have worked with most 3d apps, and I find that this workflow is generally faster than having such controls built in. It is more flexible in any case. Although I do miss non-destructive parametric objects sometimes.

Each application has its own quirks. Maya and I never could get along. I use(d) Max, C4D, Houdini, Lightwave, and many other 3d apps without too many issues once I adapted to their respective workflows, and for some reason I never took to Maya at all. Same with zBrush. Maya and zBrush just don’t vibe with me.

But that’s okay. I would have to agree with @magick.crow, though: avoid applying too much of Maya’s workflow into Blender 2.8. And 2.8 is still quite rough around the edges - after all, it’s still in beta, and it will probably take a year or so to fine tune it.

And as always, be sure to check out plugins to help with some of these things. As with Maya, Blender is infinitely configurable with scripting.


There is a reason: the history of Blender.
At its debut, one goal was to fit Blender on no more than two floppy disks.
So tools were rare, and each did multiple things.

That is the point of the 3D Cursor doing many things.

Nowadays, Blender is getting bigger and bigger. Being small is no longer a goal.
I disagree that the pivot should be selectable and movable like an object, to be placed wherever you want.
But I agree it could be simpler in many cases, without always requiring the 3D Cursor.
We should be able to simply make a selection of vertices, edges or faces in Edit Mode and, via the Set Origin menu, place the origin directly at the center of that selection.

Snapping the origin via the 3D Cursor is bad.
3D Cursor snapping is helpful for snapping objects, or a selection in Edit Mode. But having to snap the cursor to the selection, then switch modes, then snap the origin to the cursor: that is awful.

The situation is a little better in 2.8 for some cases, because we can snap the cursor directly to geometry in Object Mode instead of entering Edit Mode and making a selection. But the workflow for most origin placement is still slow.

Those hotkeys are customizable. You can change them to what you want.

Use a grid instead of a plane.
For a cube, you can enable the Extra Objects add-on and then add a Round Cube.
Or you can change the user preferences to enter Edit Mode directly when adding an object. That way it can be faster to use a loop cut.

CTRL-R for subdividing works for me. It’s still one extra step I have to do compared to Maya, but it’s not a major issue.

As far as the pivot goes, it’s still too complex. It’s again a multi-step process. Unless I’m getting this wrong, this is how I understand it: let’s say I want to snap the pivot point to a vertex. I first need to select which snap mode I want (because once you’re in Shift-MMB, the snap options disappear). Then you move the cursor where you want it. Then you go Object > Set Origin > Origin to 3D Cursor. That’s a lot of steps just to move a pivot.

Here’s the Maya way: Press and hold D. Move pivot. Release D.

Want to snap at the same time? Also press X for grid, C for curve, or V for vertices. If you use Shift, you can also reorient the pivot along an edge or a face. You can rotate the pivot too, to change the axis of your object. It’s hard to be more efficient than that.

Now, you have to understand that I’m dealing with over 20 years of muscle memory, and adapting to Blender is not an easy task. I’m not trying to bitch about Blender; I’m trying to understand it. Just look at my next post about parenting in the Outliner. 2.79 was just too much of a mess for me to work with; things were all over the place. I really like the interface in 2.80.