Cross3d: Cross DCC scripting

Cross3D is an open-source implementation of commands, in Python, that work in multiple programs (Studiomax, Softimage). It uses an abstract definition of commands and a software-specific implementation of those commands.
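To illustrate the pattern (a rough sketch only, not Cross3D's actual API - the class and function names below are made up), it boils down to one abstract interface plus a thin adapter per application:

```python
# Hypothetical sketch of the cross-DCC pattern: an abstract command
# definition plus per-application implementations. Names are illustrative,
# not Cross3D's real API.

class AbstractScene(object):
    """Software-neutral definition of the commands a pipeline needs."""

    def object_names(self):
        raise NotImplementedError

    def export_selected(self, path):
        raise NotImplementedError


class MaxScene(AbstractScene):
    """3ds Max implementation, e.g. via pymxs."""

    def object_names(self):
        import pymxs
        return [obj.name for obj in pymxs.runtime.objects]


class BlenderScene(AbstractScene):
    """Blender implementation, via bpy."""

    def object_names(self):
        import bpy
        return [obj.name for obj in bpy.data.objects]


def current_scene():
    """Return the implementation matching the DCC we are running inside."""
    try:
        import bpy  # only importable inside Blender
        return BlenderScene()
    except ImportError:
        return MaxScene()
```

Pipeline code then only ever talks to the abstract interface, so the same export script can run unchanged in whichever application is hosting it.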

It doesn't have bindings for Blender yet, and I can't find anyone besides Blur using it currently, but they mention asset import/export as examples of use in their YouTube video.

Maybe this will get popular with exporters so they won't be locked to one application.

Well, Blender = Python. I don't think the other apps have Python integrated, so it would be a small step for them to include Blender.
In a sense, the other products now become more Blender-like with this, I think; it might perhaps be a reason for a buyout by Max.

Actually, pretty much every major application has Python scripting these days. Unlike with Blender, these are real scripting APIs, not a jumble of autogenerated operator bindings that are terrible to use for actual scripting (with some exceptions).

Also, unlike almost everybody else in CG, Blender uses Python 3 (instead of Python 2), but that shouldn’t be a big problem for an abstraction layer.
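A minimal sketch of the kind of Python 2/3 shim such an abstraction layer could carry internally (assuming it ships its own compatibility helpers rather than depending on an external package):

```python
# Minimal Python 2/3 compatibility shim, the sort of thing an abstraction
# layer can keep in one module. Illustrative only.
from __future__ import print_function

import sys

PY3 = sys.version_info[0] >= 3

if PY3:
    string_types = (str,)          # Blender's Python 3
else:
    string_types = (basestring,)   # the Python 2.7 used by most other DCCs


def is_node_name(value):
    """Return True if 'value' can be used as a node/object name."""
    return isinstance(value, string_types)
```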

OK, it's been a long while since I scripted in AutoCAD, back when I was a student, maybe 20 years ago or so…

As for programming it: an API is an API, no matter the language; they are often wrappers around function/method calls from other languages.
Documentation of such things is often the weakest part, but now that Blender is getting a complete rewrite, the Python integration will likely improve along with it. Replacing the old program, which had some spaghetti-code regions, with a whole new engine will improve anything that is based on it.

What are you talking about? Max cannot buy Blender because it is open source.

I recall during the 2.5 project that the developers tried to create an auto-generated API, but it got out of control when the number of functions climbed to 1000 and counting (so they went back and did the new API manually).

Though Campbell said before that he would actually agree with those who say the current API is crap (and he states the reason for that is because of an almost total lack of interest in improving it). With him being mostly gone from development, no one has stepped forward in earnest to really do just that (even though there may have been the occasional patch).

I don’t know the exact history. Either way, the largest part of the current “API” is auto-generated and has well over 1000 “functions”. It’s a necessary piece that binds the UI (scripted in Python) to the program. It can be used for scripting, but it hasn’t been designed to do so. This isn’t made clear from the start, so developers will start using it, likely get burned, then walk away.

There are two major parts to this: “bpy.ops” and “bpy.data”.

“bpy.data” (the better part) lets you access, modify and create (most of) the data Blender uses. Sometimes, you can do so efficiently, sometimes not.
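For illustration, a minimal sketch of the bpy.data side (written against the 2.7x API of the time; 2.8+ changed scene linking to collections):

```python
import bpy

# Read and tweak existing data directly, no operators involved.
for obj in bpy.data.objects:
    if obj.type == 'MESH':
        obj.location.z += 1.0

# Create a new mesh datablock and object from raw Python data.
mesh = bpy.data.meshes.new("ExampleMesh")
mesh.from_pydata(
    [(0, 0, 0), (1, 0, 0), (1, 1, 0)],  # vertices
    [],                                  # edges (derived from the face)
    [(0, 1, 2)],                         # faces
)
mesh.update()

obj = bpy.data.objects.new("ExampleObject", mesh)
bpy.context.scene.objects.link(obj)  # 2.7x; 2.8+ uses collection.objects.link()
```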

Almost all the functionality that you actually want (i.e. the real functions used to modify data) is not exposed directly. You can access some of it indirectly through the generally UI-centric operators in “bpy.ops”, but that’s a pain in the ass. Your other option is to implement the functionality yourself and write to “bpy.data” directly. Both “solutions” are bad.
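To make the trade-off concrete, here is the same trivial task done both ways (2.7x API; the object name is just a placeholder):

```python
import bpy

obj = bpy.data.objects.get("Cube")  # placeholder object name

# Option 1: drive the UI-centric operator. It only works if the context
# (selection, active object, mode) is set up the way the UI would have it.
if obj is not None:
    bpy.ops.object.select_all(action='DESELECT')
    obj.select = True                        # 2.7x; 2.8+ uses obj.select_set(True)
    bpy.context.scene.objects.active = obj   # 2.7x; 2.8+ uses view_layer.objects.active
    bpy.ops.object.shade_smooth()

# Option 2: skip the operator and write to bpy.data yourself. Trivial here,
# but for anything non-trivial you end up reimplementing what the C code
# behind the operator already does.
if obj is not None and obj.type == 'MESH':
    for poly in obj.data.polygons:
        poly.use_smooth = True
```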

I remember a discussion on the mailing lists where somebody proposed to simply export the C functions in the Blender binary, so that they could be called through a C FFI (like ctypes in Python). This wouldn’t have made for a stable API (developers change these functions occasionally), but it would’ve made things possible, at least. Either way, Ton was against it.

As it is, I can only warn people against using the scripting API, unless they have tons of time and nerves to burn. I'm not blaming anyone for doing a bad job, it is what it is. It's a “free” program. Time isn't free though, so if you really have the skills to make something happen with bpy, you can probably do much better somewhere else.

There are exceptions to this: The standalone modules like BMesh are usable, as far as I can tell. They’re designed to be used by programmers.
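For example, a minimal BMesh sketch: build geometry in memory with a programmer-oriented API, then write it into a mesh datablock.

```python
import bpy
import bmesh

# Build a triangle in a standalone BMesh, independent of any operator.
bm = bmesh.new()
v1 = bm.verts.new((0.0, 0.0, 0.0))
v2 = bm.verts.new((1.0, 0.0, 0.0))
v3 = bm.verts.new((1.0, 1.0, 0.0))
bm.faces.new((v1, v2, v3))

# Write the result into a regular mesh datablock and free the BMesh.
mesh = bpy.data.meshes.new("BMeshExample")
bm.to_mesh(mesh)
bm.free()
```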

Though Campbell said before that he would actually agree with those who say the current API is crap (and he states the reason for that is because of an almost total lack of interest in improving it).

I completely understand this, it just doesn’t change anything. The best way to make developers interested in improving an API is to force them to actually use it.

Is this why I can't figure out wtf is going on with a Blender script? I feel like I can kinda muck through others, but Blender Python I can't really make heads or tails of. I'm sure I could if I were a programmer, but as a layman, Blender scripts seem more complicated.

Is Ton's name pronounced like Tom, only with an N ending sound, or is it like the Ton in, say, An-ton-y, or is it like the English ton (ton = 2000 lbs)?

BeerBaron shares some great insight as always.

  1. First of all, let's start with STANDARDS: http://www.vfxplatform.com/
    We use Python 3.5 vs. the 2.7 that is used nearly everywhere else. It makes certain tasks difficult. For example, you can essentially run Houdini as a Python module - a 3D app within another - which is exceptionally powerful. I am not sure if there's a way to make that coexist with Blender, certainly not a straightforward one (a sketch of the hou-module pattern follows this list).

  2. To be fair, scripting in Blender is very thorough. In C4D I remember having to press a button and read the recorded macro, then call it from Python (e.g. CallCommand(123123) for extrude). You often see the same with MAXScript (3ds Max) as well. The Python integration of 3ds Max could charitably be called a wrapper whose sole intent is to expose additional Python functionality and libraries (e.g. Qt). Same with Python in Maya: it's often complained that it is not natural/Pythonic and that you're essentially just writing MEL with Python syntax. I'm exaggerating a bit here, and there are options, but compared to writing Python in Blender it's very different.

  3. What is great about Maya, Nuke and Houdini is that you can create a Python NODE - in the context of Blender, a modifier. We have no such luxury of scripted modifiers. At first sight it might not make sense and might even appear to be a security risk, but the possibilities are truly endless! Why couldn't I read an .obj sequence with a modifier, or create my own reference system linking to external data? Why can't I have primitives, as in 3ds Max, that have their history (topological parameters) exposed in the modifier tab? Perhaps I would want to write my own cloning or array systems (e.g. radial array). The very same GSoC weighted-normals modifier could have been done with such a scripted modifier (a Python-node sketch follows this list).
    Just look at some of the examples of MaxCreationGraph - it is essentially a nodal way of writing MAXScript. However, the problem with ANY nodal system vs. script/code is that it takes YEARS to populate it with enough nodes to offer the functionality already available from script. When people say “everything nodal”, I feel they are aiming too high; a scripted option should be offered first.
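To show what a “Python node” means in practice, here is roughly the code that lives inside a Houdini Python SOP (a rough sketch; the sine displacement is just an arbitrary example of procedural geometry editing):

```python
# Code as it would appear inside a Houdini Python SOP: the node hands you
# its editable geometry and re-runs this snippet whenever the inputs change.
import math

import hou  # available automatically inside Houdini

node = hou.pwd()        # the Python SOP itself
geo = node.geometry()   # editable copy of the input geometry

# Arbitrary example: displace every point along Y by a sine of its X position.
for point in geo.points():
    pos = point.position()
    pos[1] += 0.1 * math.sin(pos[0])
    point.setPosition(pos)
```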
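And regarding point 1, a hedged sketch of “Houdini as a Python module”: the install path and version below are assumptions for one particular setup, and the interpreter has to match Houdini's own Python (2.7 at the time), which is exactly where Blender's Python 3 gets in the way.

```python
# Run Houdini headless inside a plain Python interpreter by importing 'hou'.
# The path and Python version are installation-specific assumptions; Houdini's
# shared libraries must also be on the loader path (e.g. LD_LIBRARY_PATH).
import sys

sys.path.append("/opt/hfs16.5/houdini/python2.7libs")

import hou

hou.hipFile.load("/path/to/scene.hip")  # placeholder scene file
for child in hou.node("/obj").children():
    print(child.path())
```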

What is interesting about Houdini is that instead of old-school nodal noodling, the latest trends suggest an ever-heavier reliance on scripted nodes such as VEX (wrangle nodes). It is exceptionally fast, very easy to use, and even applicable to writing simulation tools (e.g. FEM). From the point of view of an open-source community, such a foundation would make sense and could unleash a lot of new tools.