Blender Has So Much Going for It; Let's Make It Even Better

As a longtime Maya user, my perspective may be a bit different than most here. I’ve gotten used to some features there and making the transition to Blender has me searching for better ways to do certain things.

One of the top things on my list would be the text tool. In Maya, there was a significant upgrade in 2018. It is possible to do many types of bevels and the UI is easy and quick with a font preview. I would like to see a Text tool in Blender like this. Right now, the bevel options are extremely limited; there is no obvious way to change the bevel from concave to convex, etc. or to draw a profile curve for the bevel, the way the Bevel Modifier works. And applying the Bevel Modifier to text produces nasty results. Some development work there would help those of us who do a lot of animated logo work.

The camera. I still can’t tell which way it’s pointing. It doesn’t look like a camera at all, and that gets confusing in some scenes. Could we get a more ‘camera’-looking camera icon?

And along that vein of thought, the process of making a camera look at and follow an object is not intuitive and I find that after watching a tutorial on how to do this complicated process, I’ve forgotten it a day later. In Maya, simply choosing an option and dragging out a pointer from the camera icon does the trick. Adding a similar feature in Blender would be fabulous.

I know I’m going to find more items to suggest as I get into using the program. I’ve been using it for about two weeks now and can do some very basic stuff. I’ve loaded a lot of test scenes I downloaded and am blown away by the realism it can produce. For some reason, my Maya renders don’t look this real.

Eevee’s speed is stunning, and its level of fidelity to a full offline render is amazing. I used to think Maya’s Viewport 2.0 was impressive, but Eevee can do refraction and reflections in realtime.

The fact that Cycles can render on the GPU is another huge advantage over Maya, whose renders still run on the CPU. I used to suggest over on the Autodesk forums that they use the GPU to take advantage of those 3000+ cores, but they would tell me that you can’t do that kind of calculation on GPUs. Fast-forward to Blender, and we are rendering on the GPU at breakneck speeds.

Oh, one last thing I remembered… accuracy of raytracing. For two decades, I’ve been testing various 3D packages’ ability to correctly render a reflected beam of light. So far, only Mental Ray could pass the “flashlight test”. I’d place a parabolic object with a mirror-like shader and put an omni light at its focus. Many renderers, dating back to Caligari trueSpace 2.x, failed to produce a beam onto a matte surface in front of the reflector.
With the introduction of the Arnold renderer in Maya, this ability was lost. I also found that Cycles does not render a beam. It’s possible that I’m not setting up the demo correctly; perhaps others could chime in with ideas.
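For what it’s worth, the geometry behind the flashlight test is easy to check outside any renderer: every ray emitted from the focus of a parabolic mirror reflects into a ray parallel to the axis, which is what forms the beam. Here’s a small standalone sketch (plain Python, no renderer involved; the parabola `y = x²/4f` and focal length are just illustrative numbers) applying the law of reflection:

```python
import math

def reflect_from_parabola(x, f=1.0):
    """Reflect a ray emitted from the focus (0, f) of the parabola
    y = x**2 / (4*f) at the surface point (x, y), using the law of
    reflection r = d - 2*(d.n)*n."""
    y = x * x / (4.0 * f)
    # incident direction: from the focus to the surface point
    dx, dy = x, y - f
    norm = math.hypot(dx, dy)
    dx, dy = dx / norm, dy / norm
    # unit surface normal of F(x, y) = y - x**2/(4f) is along (-x/(2f), 1)
    nx, ny = -x / (2.0 * f), 1.0
    nn = math.hypot(nx, ny)
    nx, ny = nx / nn, ny / nn
    dot = dx * nx + dy * ny
    return dx - 2.0 * dot * nx, dy - 2.0 * dot * ny

# every reflected ray comes out parallel to the axis, i.e. direction (0, 1)
for x in (0.5, 1.0, 2.0, 3.0):
    rx, ry = reflect_from_parabola(x)
    print(f"x = {x}: reflected direction = ({rx:.6f}, {ry:.6f})")
```

A renderer that only traces rays backwards from the camera almost never finds the light source through that mirror by chance, which is a plausible reason the beam goes missing in some engines.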

In conclusion, despite a few lacking areas, Blender is almost ‘too good to be true’, with its realistic realtime rendering, fantastic modeling tools, and efficient coding. I can see it overtaking other commercial 3D programs in a few years. We’ve come so far since 1987, with Wavefront Technologies’ Preview software running on SGI Iris and Sun SPARCstations. Then in the early 90s I played with 3D Studio and Pixar Typestry, followed by 1995’s introduction of Caligari trueSpace. By the early 2000s, I standardized on Maya, and there I’ve been until now.


Most maybe, but not all. Blender is a very diverse community.

This might be something you find interesting.

Welcome to Blender and good luck learning. I hope you have fun.

Seriously I am going to duck out now before the flaming arrows launch.

Keep good spirits and understand most of us mean well.



Thanks for that hint. I really wish the help page would identify what the parts of the rig are and how to use them. I can’t seem to move it around or find this “bone” that is supposed to make the camera point at something.

I guess I can’t compare two weeks with Blender to the comfort level I have from two decades with Maya. :)


Ctrl-tab for pose mode.

You can create and save a scene (with all the camera mods you want) as the default scene.

You can also use the actual camera view to control the camera.

I know what you mean. I find a lot of things in Blender aren’t intuitive, and not just processes. Z-up is another. I’ve been fighting a mental block on this since I started with Blender. As to processes, I’ve had to take notes and practice things over and over so I can remember how they’re done, going back over a number of days to get it to sink in.

But, whatever gets you there, right?

Cycles is a path tracer… it can’t do that. You need a ray tracer for that, and there is one that does both at the same time with Blender:


You have been able to draw a bevel profile curve for the Text tool for decades in Blender.
You have to use a Curve object as the Bevel Object inside the Bevel subpanel of the Geometry panel, in the Text properties.
That is the same panel as for curves, and it works the same way as for curves.

But for sure, that is not as discoverable as the Bevel Profile of the Bevel modifier, and it is not as versatile.
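If you prefer to set this up from Blender’s Python console rather than hunt through the panels, a minimal sketch looks like the following (object names and dimensions are just placeholders; this assumes a default 2.8x scene):

```python
import bpy  # only available inside Blender

# a small Bezier circle to act as the bevel profile
bpy.ops.curve.primitive_bezier_circle_add(radius=0.02)
profile = bpy.context.object
profile.name = "BevelProfile"

# the text object to bevel
bpy.ops.object.text_add()
text = bpy.context.object
text.data.body = "Logo"
text.data.extrude = 0.1           # depth of the extrusion
text.data.bevel_object = profile  # same setting as Geometry > Bevel > Bevel Object
```

Editing the profile curve afterwards updates the text’s bevel live, which is handy for logo work.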

An extruded text is made of three separate parts (back, front, sides).
So, if you try to modify it with modifiers, the result may show seams between those parts.
You may need to convert the Text object into a Mesh to really do what you want.
That can be done per character of a font: you can use the old Font Family feature to replace an editable text object with those character meshes.

The fancy tricks of Text objects are a relatively old part of Blender that has not been revisited in a long time (since before the 2.3x series).
In comparison, the Bevel Profile of the Bevel modifier was added in 2.82, the most recent release this year.
That does not mean that developers did not improve Text objects in the 2.8x series.
But that was more about the basics.

Well, the camera is basically a pyramid with a quadrangular base.
At the peak of the pyramid is the camera’s origin, which marks its location and its pivot of rotation.
On the opposite side, the quadrangular base represents the screen or sensor ratio.
And there is a triangle to indicate the top of the screen.
The direction is supposed to be obvious: from the point to the quad.

In older releases of Blender, most gizmo tasks were handled with tricks involving empties.
There was a shortcut, Ctrl T, to add a Track To constraint.
So it was not complicated: you just had to add an empty, select both objects, and use the shortcut to create a Track To constraint.
But in 2.8 the shortcut was removed to leave more room for keymap customization.
2.8 is supposed to have gizmos to help the user.
I don’t get why they made one to point lamps (Sun, Spot, or Area) at a scene location, but did not do the same for cameras.
So now you have to go through menus to add the constraint (Object menu > Track To submenu).
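The same setup can also be scripted from Blender’s Python console, which replicates the old Ctrl T behaviour in a few lines (the empty’s name and location here are just assumptions; this expects the scene to have an active camera):

```python
import bpy  # only available inside Blender

# an empty to act as the look-at target
bpy.ops.object.empty_add(location=(0.0, 0.0, 1.0))
target = bpy.context.object
target.name = "CamTarget"

# add a Track To constraint on the scene's active camera
cam = bpy.context.scene.camera
con = cam.constraints.new(type='TRACK_TO')
con.target = target
con.track_axis = 'TRACK_NEGATIVE_Z'  # cameras look down their local -Z axis
con.up_axis = 'UP_Y'
```

Moving `CamTarget` around now aims the camera; animate or parent the empty and the camera follows.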

In Arnold or Cycles, there is volumetric shading. So you can fake it by modelling the beam as a closed mesh and giving it an emissive volumetric shader.
As @skuax pointed out, LuxCore has a laser option for the Area lamp to make a laser beam.
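For the non-fake route in Cycles, the usual trick is to surround the scene with a thin scattering volume so the light rays become visible. A minimal sketch for Blender’s Python console (the density value and object names are guesses to tweak, not recommended settings):

```python
import bpy  # only available inside Blender

# a large cube filled with a thin scattering medium
bpy.ops.mesh.primitive_cube_add(size=10.0)
cube = bpy.context.object
cube.name = "ThinFogVolume"

mat = bpy.data.materials.new("ThinFog")
mat.use_nodes = True
nodes = mat.node_tree.nodes
nodes.clear()

# Volume Scatter plugged into the material's Volume output
scatter = nodes.new("ShaderNodeVolumeScatter")
scatter.inputs["Density"].default_value = 0.05
out = nodes.new("ShaderNodeOutputMaterial")
mat.node_tree.links.new(scatter.outputs["Volume"], out.inputs["Volume"])

cube.data.materials.append(mat)
```

Any light whose rays cross the cube should then show a visible cone or beam in a Cycles render.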

So I guess Arnold is also a path tracer, because it can’t trace the rays of light to the reflector to form a beam.
Which raises the question: how does an RTX GPU help with path tracing if it’s a raytracing GPU?

I clicked on the rig/widget for the camera and pressed Ctrl-Tab, but cannot see any change. This isn’t making any sense. Using the constraints, as complicated as that was, at least worked.

Basically you need to be in pose mode

Path tracers calculate light closer to reality than ray tracers and other types of renderers. In real life you need some kind of environmental scattering to see lasers, god rays, etc. The same is true with path tracers. One way to get these effects is to add a volumetric object to your scene that will scatter the light beams and make them visible. Both Cycles and Arnold support this as far as I know. :)


Raytracing and path tracing are almost the same thing; path tracing goes one step further and adds indirect lighting and Monte Carlo sampling iterations (usually).

RTX speeds up path tracing because path tracing is in fact raytracing.
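Those “Monte Carlo sampling iterations” are the reason path-traced renders start noisy and clean up as samples accumulate. A one-dimensional toy version of the same estimator (the integrand here is arbitrary; a path tracer does this for the rendering equation at every pixel):

```python
import random

def mc_estimate(n, seed=0):
    """Monte Carlo estimate of the integral of f(x) = x**2 over [0, 1]
    (exact value 1/3): average n random samples of f."""
    rng = random.Random(seed)
    return sum(rng.random() ** 2 for _ in range(n)) / n

# the estimate converges toward 1/3 as the sample count grows,
# just as render noise shrinks with more samples per pixel
for n in (10, 1_000, 100_000):
    print(f"{n:>7} samples -> {mc_estimate(n):.5f}")
```

The error shrinks roughly as 1/sqrt(n), which is why doubling the sample count only reduces render noise by about 30%.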


Both Arnold and Cycles can do it too, but you need to set up a volume to scatter the rays to see it. Beams can physically only be visible when there’s something to reflect them, such as dust or smoke in the air. Makes sense?


Thanks for quoting my tuts. :) At least I know that someone can use them.

A few things:

  • LuxCore is a bidirectional path tracer, and most modern “good” rendering engines are the same. That means it is still a path tracer, just not a uniform one.

  • In LuxCore’s case, the uniform path tracer is not less accurate than the bidirectional one; it just cannot handle complex scenes (caustics). For this reason we have tools that allow using bidir and path at the same time.

  • RTX has nothing to do with how rays are calculated as far as the rendered result goes. The effect will be the same on OpenCL, CUDA, OptiX, and CPU (given the same settings); OptiX is just fast with Cycles.

  • A laser is just a small, bright lamp with sharp edges. To “see” the beam you need a volume in your scene. LuxCore, Cycles, and Eevee all support this feature. Attention: Principled Volume absorption in Cycles does not work the same way as plain Volume Absorption! I haven’t tried to create a laser in Cycles, but “god rays” in a volume are quite popular, so I believe a laser beam can be created too. In LuxCore it is very simple; I have made a tutorial using the “laser” type of light.

  • LuxCore uses the Metropolis sampler when rendering with BiDir. You will use Sobol anyway. :)

  • A simple reflection of the beam is doable in both engines.

Hi, it does; RT cores are designed exactly with ray/path tracing in mind.
You get a 2-4 times faster render with RT enabled.
In Octane you can enable/disable RT cores on the same card to compare render performance.

Cheers, mib

This is what I wrote: it affects the speed of the rendering, not the result. You won’t get anything “better”; you will just get it faster.


In my tests, I place a matte white plane on which to see the beam form. So far, only Mental Ray produces a beam on the plane; the Arnold and Cycles renders do not.