Blender Camera Animation to Enscape Video Path Python Script

I wanted to transfer camera path information between Blender and Enscape. Googling this led me to Dion Moult’s brilliant blog post regarding this very issue (source: https://thinkmoult.com/how-to-composite-enscape-animations-with-blender.html). I tried using their script and it sorta worked except for the easing. None of Blender’s easing methods match the elusive easing used by Enscape.

Further Googling led me to another script, created for 3ds Max by a user named anam-ate on the Enscape forums (source: https://forum.enscape3d.com/index.php?thread/5937-save-camera-path-as-fbx/). anam-ate had the idea to animate the camera inside 3ds Max first, where you have the most control, then transfer to Enscape. This can be done by transferring the location and look vector of the camera at every single frame. At 30 frames per second, you don’t need to worry about easing because the frames run together too quickly to notice.

I cannot code, so I used ChatGPT to help me write a script that converts Blender camera data into Enscape’s .xml video path format. It took some back and forth to get ChatGPT to do what I wanted. The final result is not perfect, and I’ve noticed some jittering in Enscape, but I believe it is an improvement on Dion’s approach.

To sync Enscape with Blender camera paths, you must be working between Blender and a program Enscape plugs into, such as SketchUp or Rhino. It doesn’t matter which program you’re designing in, but you’ll need to match your geometry in world space between the two programs so that you can properly plan and animate a camera path.

Workflow:

  • In Blender, create a camera object (named “Camera”) with the FOV set to 90. Also create an empty (named “Target”). A scripted version of this setup is sketched after this list.
  • Create a path and set the camera to follow that path (a Follow Path constraint) while tracking to the target (a Track To constraint); you can animate the target or not.
  • Set frame rate to 30 fps.
  • Bake the animation of the camera (say yes to “Visual Keying” and “Clear Constraints”). Note: this is a limitation of the script; I couldn’t figure out how to run the bake from the script itself, so you have to do it manually. A hedged attempt at scripting it is also sketched after this list.
  • Save the .blend file (the script needs a saved file so it knows where to write its output), then run the script.
  • Open Enscape and load the video path file, called “output.xml”, which will be in the same folder as your .blend file.
  • Now the paths are synced! It’s also easy enough to redo later if the design changes. Back in Blender, you’ll want to apply a Holdout shader to the imported geometry that will be rendered in Enscape so it essentially becomes an alpha mask. Now you can animate whatever you want, render, and composite in your favorite compositor (such as Blender).
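
For anyone who wants to script the setup steps too, here is a minimal sketch, assuming a camera named “Camera”, an empty named “Target”, and a curve named “CameraPath” (the curve name is my own placeholder; use whatever your path object is called):

import bpy
import math

scene = bpy.context.scene
scene.render.fps = 30  # match the 30 fps the exporter assumes

# Camera named "Camera" with a 90-degree field of view
cam_data = bpy.data.cameras.new("Camera")
cam_data.lens_unit = 'FOV'
cam_data.angle = math.radians(90.0)
camera = bpy.data.objects.new("Camera", cam_data)
scene.collection.objects.link(camera)

# Empty named "Target" for the camera to track
target = bpy.data.objects.new("Target", None)
scene.collection.objects.link(target)

# Follow the curve (assumed to be named "CameraPath") while tracking the target
follow = camera.constraints.new('FOLLOW_PATH')
follow.target = bpy.data.objects["CameraPath"]

track = camera.constraints.new('TRACK_TO')
track.target = target
track.track_axis = 'TRACK_NEGATIVE_Z'
track.up_axis = 'UP_Y'

# The motion itself still has to be animated, e.g. by keyframing the
# curve's Evaluation Time or the Follow Path constraint's offset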
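
The bake step is the one I still do by hand, but if you want to experiment, the operator behind Object > Animation > Bake Action is bpy.ops.nla.bake. This is only a sketch of what I tried; it needs the camera selected and the right context, which is exactly where I got stuck:

import bpy

scene = bpy.context.scene
camera = bpy.data.objects["Camera"]

# Bake Action operates on the active, selected object
bpy.ops.object.select_all(action='DESELECT')
camera.select_set(True)
bpy.context.view_layer.objects.active = camera

# Equivalent of Bake Action with Visual Keying and Clear Constraints checked
bpy.ops.nla.bake(frame_start=scene.frame_start, frame_end=scene.frame_end, visual_keying=True, clear_constraints=True, bake_types={'OBJECT'})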

I would not have been able to do this without Dion Moult’s helpful description of how the .xml file is organized, anam-ate’s insight to animate the path in Blender for the most control, and of course ChatGPT, which enabled me, a non-coder, to write code. Anyone who can code will probably look at this script and immediately see ways to make it better, which I encourage. Let me know how you improve it!

import bpy
import xml.etree.ElementTree as ET
import xml.dom.minidom

# Read the frame range from the scene
start_frame = bpy.context.scene.frame_start
end_frame = bpy.context.scene.frame_end

# The .blend must be saved so the script knows where to write output.xml
if not bpy.data.filepath:
    raise RuntimeError("Save the .blend file before running this script")

# Frames per second in Blender
frames_per_second = bpy.context.scene.render.fps / bpy.context.scene.render.fps_base

# Create the root element of the XML document as "Keyframes"
root_keyframes = ET.Element("Keyframes", count=str(end_frame - start_frame + 1))

# Look up the Camera and Target objects once, outside the loop
camera = bpy.data.objects.get("Camera")
target = bpy.data.objects.get("Target")
if camera is None or target is None:
    raise RuntimeError('The scene needs objects named "Camera" and "Target"')

# Loop through each frame
for frame in range(start_frame, end_frame + 1):
    # Set the current frame so the baked transforms update
    bpy.context.scene.frame_set(frame)

    # Calculate the timestamp in seconds, starting from keyframe 0
    timestamp_seconds = (frame - start_frame) / frames_per_second

    # Create a Keyframe element with its order and timestamp
    keyframe = ET.SubElement(root_keyframes, "Keyframe", order=str(frame - start_frame), timestampSeconds="{:.4f}".format(timestamp_seconds))

    # Camera position, converted from Blender's Z-up axes to Enscape's
    # Y-up axes: (x, y, z) -> (x, z, -y)
    position_camera = ET.SubElement(keyframe, "Position", x=str(camera.location.x), y=str(camera.location.z), z=str(camera.location.y * -1))

    # Vector from the camera to the target, with the same axis conversion
    lookat_vector = target.location - camera.location
    lookat_target = ET.SubElement(keyframe, "LookAt", x=str(lookat_vector.x), y=str(lookat_vector.z), z=str(lookat_vector.y * -1))

# Create the root element of the XML document as "VideoPath"
root_video_path = ET.Element("VideoPath", version="1", easingInOut="1", shakyCam="0")

# Append the Keyframes element to VideoPath
root_video_path.append(root_keyframes)

# Pretty-print the XML with minidom
xml_string = xml.dom.minidom.parseString(ET.tostring(root_video_path)).toprettyxml(indent="    ")

# Save the XML next to the blend file
output_path = bpy.path.abspath("//output.xml")
with open(output_path, "w", encoding="utf-8") as file:
    file.write(xml_string)
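
For reference, the script produces a file shaped like this (the numbers are illustrative):

<?xml version="1.0" ?>
<VideoPath version="1" easingInOut="1" shakyCam="0">
    <Keyframes count="91">
        <Keyframe order="0" timestampSeconds="0.0000">
            <Position x="1.0" y="1.6" z="-2.0"/>
            <LookAt x="0.5" y="0.0" z="0.7"/>
        </Keyframe>
        ...one Keyframe per frame...
    </Keyframes>
</VideoPath>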