Textures in the viewport vs. the rendered scene?

I’m writing an addon to make 2D puppet creation for Unity easier, and I can’t figure out the difference between textures in the viewport and in the rendered view. When I use my script (Source: https://github.com/jceipek/Blender-Unity-Addons/blob/master/Addons/merge_to_unity_puppet.py; Usage: https://github.com/jceipek/Blender-Unity-Addons/wiki/Merge-to-Unity-Puppet) to combine image planes generated by the ‘Import Images as Planes’ addon, they look white in the viewport and are textured with broken UV coordinates when rendered.

What’s the difference between textures in the viewport and the render view, and how does mapping work in Python? I’ve looked through the ‘Import Images as Planes’ script, and I can’t figure out why the textures it generates are mapped properly and show up in the viewport even though MaterialTextureSlot.uv_layer is empty.

I also don’t understand why moving images in the UV editor changes the way those images are displayed in the viewport but not necessarily the render view.

Is there a high level overview of how all of this works? I’ve been finding the API really hard to use because it seems to assume prior knowledge of how things fit together.

I don’t know exactly how ‘Import Images as Planes’ does it, but there are basically two ways to get images mapped onto objects (see the Python sketch after this list):

  1. Assign the face(s) an image through the UV editor. Each face can have its own image mapped to it. In the viewport, this will show up if:
  • You have “Texture” Viewport shading active and your “Display->Shading” option is set to either Multitexture or Singletexture.
  • You have “Solid” Viewport shading active and your “Display->Shading->Textured Solid” checkbox is ticked.
    This will not show up in the render by default; however, you can have the material use that face-assigned image in place of its base color with the “Options->Face Textures” material option.
  2. Create a “Texture” of type “Image” and map it into a material channel using either UVs or one of the generated coordinates. In the viewport, this will show up if:
  • You have “Texture” Viewport shading active and your “Display->Shading” option is set to GLSL.
    GLSL mode will more or less emulate the material that you use for rendering, but not all options are supported (raytraced effects, procedural textures, most of the generated coordinates, etc.).
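
In Python terms, the two approaches look roughly like this. This is only a sketch against the 2.6x Blender Internal API; the image path, the texture/material names, and the “UVMap” layer name are placeholders I made up, not anything the image-planes addon actually uses:

```python
import bpy

obj = bpy.context.active_object
mesh = obj.data
img = bpy.data.images.load("//my_image.png")  # placeholder path

# --- Way 1: per-face image assignment (what the UV/Image editor does) ---
if not mesh.uv_textures:
    mesh.uv_textures.new(name="UVMap")
for tface in mesh.uv_textures.active.data:
    tface.image = img  # shows in Multitexture/Singletexture and Textured Solid

# --- Way 2: an Image texture mapped into a material channel via UVs ---
tex = bpy.data.textures.new("PlaneTex", type='IMAGE')
tex.image = img

mat = bpy.data.materials.new("PlaneMat")
slot = mat.texture_slots.add()
slot.texture = tex
slot.texture_coords = 'UV'   # map using the mesh's UV coordinates
slot.uv_layer = "UVMap"      # an empty string falls back to the active UV layer
# mat.use_face_texture = True  # optional: the "Face Textures" option from Way 1

if mesh.materials:
    mesh.materials[0] = mat
else:
    mesh.materials.append(mat)
```

That last point about uv_layer is also why materials can still render correctly when that field is left blank: with no layer name given, Blender simply uses the mesh’s active UV layer.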

I know it’s really confusing. Rite of passage I guess…

Thanks, Zalamander. That cleared a few things up for me, but I think my question was overly broad and somewhat confusingly worded; let me try again:

1. Is there an overview of how the Python API can be used to assign textures to objects and meshes?
Texturing an object from within the interface is something I have done many times, but Python accesses it slightly differently. The API documentation is very limited; for example, it doesn’t explain how the properties in http://www.blender.org/documentation/blender_python_api_2_62_release/bpy.types.MeshTextureFace.html relate to one another.

2. How does UV mapping work?
When the ‘Import Images as Planes’ addon is used, Properties->Textures->Mapping->Map is blank, but its dropdown menu contains one entry, “UVMap”. What is the significance of this?
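
For reference, I’m assuming the Map dropdown corresponds to the texture slot’s uv_layer property in Python; this is just a guess on my part, printed from one of the addon-generated planes:

```python
import bpy

# With one of the addon-generated planes selected:
mat = bpy.context.active_object.active_material
slot = mat.texture_slots[0]

print(slot.texture_coords)  # 'UV'
print(repr(slot.uv_layer))  # '' -- blank, even though the dropdown lists "UVMap"
```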

Thank you very much, Zalamander. That was really helpful, and my script (https://github.com/jceipek/Blender-Unity-Addons/blob/master/Addons/merge_to_unity_puppet.py) now preserves basic texture information.

The only thing I still need to figure out is how to preserve the UV coordinates of the merged objects in my script. Do I have to use separate UV layers to accomplish this, or can I somehow combine them into one layer?

I’m not sure exactly what you’re after. I can recommend setting the Outliner to “Datablocks” mode and exploring the Blender data structures that way. As for joining meshes: as far as I know, the TextureFace structs in the UV layers match the faces of the mesh by index, so when you join meshes you need to maintain that correspondence (see the sketch below).
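
A rough sketch of what I mean, against the 2.62-era (pre-BMesh) API. It’s untested, and the assumption that the joined mesh’s faces come out as the active object’s faces followed by the other objects’ faces in this iteration order is mine; verify that for your scene:

```python
import bpy

objects = [ob for ob in bpy.context.selected_objects if ob.type == 'MESH']
active = bpy.context.active_object
ordered = [active] + [ob for ob in objects if ob != active]

# Record (uv1..uv4, image) for every face of every mesh, in the assumed join order.
saved = []
for ob in ordered:
    layer = ob.data.uv_textures.active or ob.data.uv_textures.new(name="UVMap")
    for tf in layer.data:
        saved.append((tf.uv1[:], tf.uv2[:], tf.uv3[:], tf.uv4[:], tf.image))

bpy.ops.object.join()

# Face i of the joined mesh should correspond to entry i of 'saved'.
joined = bpy.context.active_object.data
layer = joined.uv_textures.active or joined.uv_textures.new(name="UVMap")
for i, (uv1, uv2, uv3, uv4, image) in enumerate(saved):
    tf = layer.data[i]
    tf.uv1, tf.uv2, tf.uv3, tf.uv4 = uv1, uv2, uv3, uv4
    tf.image = image
```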