Alembic in Blender

I’m trying to export a scene to LightWave, and someone suggested Alembic, which LightWave supports. I see some vague mentions of Blender supporting it for export, but I can’t find it in the program. Does anyone know anything about this?

Alembic is still not supported in Blender.
Hope this will change sooner or later…

Ah, that’s sad news. Thanks for sharing.

If you want to follow or support the Alembic library development that Lukas is working on:

Whether it’s Alembic or something else, I just need to get the camera into LightWave.

Try FBX, it exports the camera; at least there’s a checkbox for including the camera when exporting.

If it were point cloud/mesh export, animated OBJ files (one file per frame) would be an option. But OBJ doesn’t export cameras, and you said that was the important thing for you, so try FBX.

I tried FBX, and for some reason the rotation comes through but not the XYZ movement. I tried Collada as well, but it seems to be broken; I couldn’t load the file at all in LightWave or Max.

If you only need to export the camera, try the .chan exporter in Blender (enable it in the add-ons).

Unfortunately there are still some problems with camera import/export in FBX, but I can understand how difficult it is to support everything in a moving, closed format.

I will look into that, but I’ve never heard of loading a .chan file into LightWave.

I, too, would like to know of a solid way to get camera data out of Blender. Hopefully someone has some success stories to share here.

I’ve resorted to Python and an in-house xml format. It would be really nice to get Alembic.

What ended up working for me was the .chan format. For LightWave there’s a script from Mental Fish to import/export .chan. My only problem is that it’s not quite perfect. My camera tracking looks good in Blender, but in LightWave it’s just a little bit floaty. I’m not sure if something other than the camera motion is off, or what. I did make sure the focal length is the same in both apps, so maybe it’s something else.

I used to export Blender camera tracking to 3ds Max with FBX and it worked perfectly. I import into Max with the FBX 2012 settings. The last time I did this was in Blender 2.68, I think, so maybe it’s broken now; trying an older Blender might help. And don’t forget to set the FPS in the render panel.

The .chan format is a simple location/rotation/(scale) plus vertical FOV per frame, but you should make sure that in LW the camera sensor size (or backplane, or whatever it’s called in LW) is the same as the camera in Blender before applying the .chan. That is what is needed in Nuke, at least, to get a 1:1 camera animation and FOV from Blender via .chan files. I’m not sure what you mean by “floaty”?
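To make the format concrete, here’s a minimal sketch of writing a Nuke-style .chan file from Python. The column layout I’m assuming here is `frame tx ty tz rx ry rz vfov`, tab-separated, one line per frame; check your importer’s docs, since some variants add scale columns. The function and file names are my own.

```python
# Minimal sketch of a Nuke-style .chan writer.
# Assumed column layout: frame tx ty tz rx ry rz vfov (tab-separated).
def chan_line(frame, loc, rot_deg, vfov_deg):
    tx, ty, tz = loc
    rx, ry, rz = rot_deg
    return "\t".join(str(v) for v in (frame, tx, ty, tz, rx, ry, rz, vfov_deg))

# Three frames of a camera dollying along -Z with a fixed 39.6° vertical FOV:
lines = [chan_line(f, (0.0, 0.0, 10.0 - f * 0.1), (90.0, 0.0, 0.0), 39.6)
         for f in range(1, 4)]
with open("camera.chan", "w") as fp:
    fp.write("\n".join(lines) + "\n")
```

In a real exporter you would pull the per-frame values from the camera object instead of hard-coding them, but the file itself really is just this: plain text, one frame per line.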

FBX can include more data, such as the sensor size, aperture size, etc., whereas .chan is ultra simple (but effective).

I’d love to use FBX, but it doesn’t work. It’s only loading the camera rotation, no XYZ movement. I don’t know of anything that would be a sensor size in LW, but I’ll play with it. By floaty I mean it looks like the objects are slightly floating around instead of sitting still on the ground, while it looks fine in Blender. Here’s a video:

Yeah, definitely looks like a wrong FOV (more zoomed in than it should be?). Note that knowing the focal length is not enough. You can have the same focal length in two apps/cameras, but since the field of view also depends on the sensor size, the same focal length can result in two different FOVs. For the longest time Blender had a fixed sensor size (32mm x 18mm), so you had to do the focal/sensor/FOV math externally, but thanks to the developer Matt Ebb this hasn’t been an issue for some years now. Here you can see the vertical FOV of the default camera:

This is the value used in the .chan file format. As before, depending on the sensor size this value can correspond to different focal lengths. That’s why we have to set up the cameras in the different apps first before using a .chan file, so the correct values can be calculated. Here are the camera properties in Nuke, Maya and Houdini respectively:

Both Nuke and Maya call the sensor size “aperture” (which generally means “opening”), but in Houdini aperture stands for the horizontal FOV (sigh), and it has no apparent way to set the sensor size. It is still possible to use a camera .chan file correctly there, in the usual cryptic Houdini way… I can’t imagine LW doesn’t have a way to do it too.
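The focal/sensor/FOV math discussed above is just basic trigonometry; here’s a sketch (helper names are my own). The key point is that vertical FOV depends on both the focal length and the sensor height, which is why matching focal length alone isn’t enough.

```python
import math

# vfov = 2 * atan(sensor_height / (2 * focal_length))
def vertical_fov_deg(focal_mm, sensor_h_mm):
    return math.degrees(2.0 * math.atan(sensor_h_mm / (2.0 * focal_mm)))

# Inverse: the focal length that gives a wanted vertical FOV on a given sensor.
def focal_from_vfov(vfov_deg, sensor_h_mm):
    return sensor_h_mm / (2.0 * math.tan(math.radians(vfov_deg) / 2.0))

# Blender's old fixed sensor (32x18 mm) with a 35 mm lens:
print(round(vertical_fov_deg(35.0, 18.0), 1))  # → 28.8
# The same 35 mm lens on a taller sensor gives a wider vertical FOV:
print(round(vertical_fov_deg(35.0, 24.0), 1))
```

So if LightWave’s “Frame” setting really is the sensor/backplane size, plugging in Blender’s sensor dimensions there should make the same focal length produce the same FOV in both apps.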

Besides all this sensor/FOV stuff, also be aware of the rotation order during export/import.
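To illustrate why rotation order matters, here’s a toy pure-Python check (my own demonstration, not any app’s code): composing the same two Euler rotations in a different order gives a different result, which is exactly what garbles an imported camera move when the exporter and importer disagree on the order.

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

a = math.pi / 2
xz = matmul(rot_x(a), rot_z(a))  # Z applied first, then X
zx = matmul(rot_z(a), rot_x(a))  # X applied first, then Z
same = all(abs(xz[i][j] - zx[i][j]) < 1e-9 for i in range(3) for j in range(3))
print(same)  # → False
```

So when a .chan or FBX camera comes in twisted rather than offset, the rotation order between the two apps is the first thing worth checking.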

I know all of the above doesn’t help you at all, sorry, but it’s just to say that while the .chan format is old-school and requires manual intervention, it does work across different apps and I use it quite a lot (I also had a part in the addon, but I’m not credited). Using FBX would be more “plug’n’play”, if it worked. I did get support for sensor size into the Blender FBX exporter a while back, but I haven’t looked at the new binary FBX work other than to note that rotation is broken (for me) and there’s no animation support. Maybe one day, if I feel brave, I’ll take a deeper look to see if I can help, but in the meantime .chan is my go-to camera interoperability format.

Edit: Alembic would be great to have :slight_smile:

I did find what appears to be the sensor size in LightWave; it’s listed as “Frame”, with a box to type in a value or a set of presets. Not much changed when I put the number in, though, so I still have to play with it.

You can actually export Alembic files from Blender with the Octane Render for Blender plugin. I’m not sure if you can import. Maybe someone could have a look at this exporter and build something to import/export without Octane.

You said the focal length was the same in both apps; can you at least check whether the horizontal/vertical FOV differs? If it doesn’t, the problem may be somewhere else.

LightWave has horizontal and vertical, but I only see one number for FOV in Blender.

As far as the Octane plugin goes, the whole reason for using Blender for this is that I don’t have to pay for anything. If I have to pay, I might as well use PF Track or something.