ChatGPT AI Failed to Generate a Supposedly Simple Script

I tried to use ChatGPT AI to generate a script that will create bones for hair cards. My command was:

Please write a Blender script that will create a bone in the middle of each face of a plane.

Unfortunately, ChatGPT AI keeps giving me scripts with errors. I thought it was supposed to be a simple script to create. Anybody have better luck with ChatGPT AI?

It’s pulling from all the existing, human-written, open-source Blender Python code on GitHub, 80% of which is riddled with errors, outdated, or half complete. Since this is just pattern-regurgitation software: garbage in, garbage out. It doesn’t do well with complex scripts, but it seems like it can manage a simple panel or a “hello world” printer.

4 Likes

Being a programmer, and assuming you gave me that description, I would be confused at first. Is the plane already in the scene, or does it need to be created? A plane in Blender only has one face, so why do you mention that “each face” should have a bone created? Isn’t there just one face?

I have seen that others have been feeding the error messages back to ChatGPT and were able to refine the result in this way.

Yes, I thought it was smarter than that.

It’s not, yet anyway. It’s so inaccurate that it’s been banned entirely on Stack Overflow.

Based on ChatGPT’s explanation, it seemed to have understood that there should’ve been a selected plane called “Plane” with multiple faces in it.

I mentioned the error messages to it and it tried to fix the problems several times, before it eventually showed a “network error” :grin:

1 Like

Honestly, it does a lot better than I’d expect. I’ve now had it write three functional scripts: one for random location, one for random color, and one for the script discussed in this thread.

I gave it the input of “Write a python script for Blender that adds bones in the middle of each face on a plane.”
The first script it gave me didn’t work.

import bpy

# get the active object
obj = bpy.context.view_layer.objects.active

# make sure it is a mesh
if obj.type != 'MESH':
    print("Error: active object is not a mesh.")
    exit()

# get the mesh data
mesh = obj.data

# add bones in the middle of each face
for face in mesh.polygons:
    # get the face's center
    center = face.center
    # create a bone at the face's center
    bone = obj.pose.bones.new(name=f"Bone {face.index}")
    bone.head = center
    bone.tail = center + face.normal
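
As a side note on what face.center gives you in these scripts: for a mesh polygon it is, as far as I know, just the average (median point) of the face’s vertex coordinates. A minimal plain-Python sketch of that idea, with a made-up unit quad and no bpy required:

```python
def face_center(verts):
    """Average the vertex coordinates of a face, like MeshPolygon.center."""
    n = len(verts)
    return tuple(sum(v[i] for v in verts) / n for i in range(3))

# A made-up unit quad in the XY plane.
quad = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
print(face_center(quad))  # -> (0.5, 0.5, 0.0)
```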

Then, after just 3 revisions, it gave me a nearly fully functional script. (The first attempt fails because pose.bones is a read-only collection with no new() method; bones can only be created through armature.edit_bones while an armature is in Edit Mode, which is exactly what the revision does.)

import bpy

# get the active object
obj = bpy.context.view_layer.objects.active

# make sure it is a mesh
if obj.type != 'MESH':
    print("Error: active object is not a mesh.")
    exit()

# get the mesh data
mesh = obj.data

# create an armature object
armature = bpy.data.armatures.new(name="Armature")
armature_obj = bpy.data.objects.new(name="Armature", object_data=armature)

# link the armature object to the scene
bpy.context.collection.objects.link(armature_obj)

# set the armature object as the active object
bpy.context.view_layer.objects.active = armature_obj

# enter edit mode
bpy.ops.object.mode_set(mode='EDIT')

# add bones in the middle of each face
for face in mesh.polygons:
    # get the face's center
    center = face.center
    # create a bone at the face's center
    bone = armature.edit_bones.new(name=f"Bone {face.index}")
    bone.head = center
    bone.tail = center + face.normal

# exit edit mode
bpy.ops.object.mode_set(mode='OBJECT')

# parent the mesh to the armature
obj.parent = armature_obj
obj.parent_type = 'BONE'
obj.parent_bone = "Bone 0"

# apply automatic weights to the mesh
bpy.ops.object.parent_set(type='ARMATURE_AUTO')

This script of course works for 3D objects as well.

Screenshot from 2023-01-05 21-16-49

The only issue was that it didn’t account for the transforms on the object itself (rotation, location, and scale). But after just two more revisions it fixed the code.

Screenshot from 2023-01-05 21-22-17

import bpy

# get the active object
obj = bpy.context.view_layer.objects.active

# make sure it is a mesh
if obj.type != 'MESH':
    print("Error: active object is not a mesh.")
    exit()

# get the mesh data
mesh = obj.data

# create an armature object
armature = bpy.data.armatures.new(name="Armature")
armature_obj = bpy.data.objects.new(name="Armature", object_data=armature)

# link the armature object to the scene
bpy.context.collection.objects.link(armature_obj)

# set the armature object as the active object
bpy.context.view_layer.objects.active = armature_obj

# enter edit mode
bpy.ops.object.mode_set(mode='EDIT')

# add bones in the middle of each face
for face in mesh.polygons:
    # get the face's center in world space
    center = obj.matrix_world @ face.center
    # create a bone at the face's center
    bone = armature.edit_bones.new(name=f"Bone {face.index}")
    bone.head = center
    bone.tail = center + obj.matrix_world.to_3x3() @ face.normal

# exit edit mode
bpy.ops.object.mode_set(mode='OBJECT')

# parent the mesh to the armature
obj.parent = armature_obj
obj.parent_type = 'BONE'
obj.parent_bone = "Bone 0"

# apply automatic weights to the mesh
bpy.ops.object.parent_set(type='ARMATURE_AUTO')
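
The transform fix above boils down to one idea: multiply local coordinates by the object’s 4x4 world matrix before using them. Here is a plain-Python sketch of what matrix_world @ face.center is doing (the matrix and point are made up for the example; real code would use mathutils):

```python
def mat4_apply(m, p):
    """Apply a 4x4 row-major transform to a 3D point (implicit w = 1)."""
    x, y, z = p
    return tuple(m[r][0] * x + m[r][1] * y + m[r][2] * z + m[r][3]
                 for r in range(3))

# A world matrix that just translates by (2, 0, 0).
world = [
    [1, 0, 0, 2],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
]

local_center = (0.5, 0.5, 0.0)
print(mat4_apply(world, local_center))  # -> (2.5, 0.5, 0.0)
```

This is also why the script uses matrix_world.to_3x3() for the normal: a direction should get the rotation and scale, but not the translation column.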

Not gonna lie… it’s already outsmarted me… :rofl:

Also, for anyone curious, here are the other two scripts I mentioned.

Random Location
Prompt: “Write a python script for Blender that places all selected objects in a random location.”
Revisions: 0 (worked on the first attempt)

import bpy
import random

# Set the range for the random locations (x, y, z)
min_range = -10
max_range = 10

# Get the list of selected objects
selected_objects = bpy.context.selected_objects

# Iterate over the list of selected objects
for obj in selected_objects:
  # Set the object's location to a random value within the specified range
  obj.location = (random.uniform(min_range, max_range), random.uniform(min_range, max_range), random.uniform(min_range, max_range))

# Update the scene
bpy.context.view_layer.update()
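
One small thing worth knowing about this script (my addition, not part of the generated code): Python’s random module can be seeded, so the same “random” layout can be reproduced between runs. A plain-Python illustration, no bpy needed:

```python
import random

random.seed(42)                 # fix the seed
a = random.uniform(-10, 10)

random.seed(42)                 # same seed -> same draw
b = random.uniform(-10, 10)

print(a == b)  # -> True
```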

Random Color
Original Prompt: “Write a python script for Blender that gives all selected objects a material with a random color.”
Revisions: 4

import bpy
import random

# Get the list of selected objects
selected_objects = bpy.context.selected_objects

# Iterate over the list of selected objects
for obj in selected_objects:
  # Remove any existing materials from the object
  obj.data.materials.clear()
  
  # Create a new material for the object
  mat = bpy.data.materials.new(name="RandomColorMaterial")
  
  # Set the material's color to a random value with an alpha value of 1
  mat.diffuse_color = (random.uniform(0, 1), random.uniform(0, 1), random.uniform(0, 1), 1)
  
  # Assign the material to the object
  obj.data.materials.append(mat)

# Update the scene
bpy.context.view_layer.update()
3 Likes

Thanks for the info. I think my command wasn’t clear enough.

Actually, I wanted ChatGPT to write a script that generates a rig for hair cards, but it didn’t produce the correct one when I used that command. I believe it couldn’t find the answer for such a problem on the net.

Hi, thanks for posting the script. Actually, I wanted to add bones only to the selected faces; if there are no selected faces, then to all faces. I tried it with ChatGPT, and after a couple of versions it gave me this script.

Well, it adds bones to all faces even if I have selected only a few faces. But in some cases it works if I delete the armature and run the code again. I am unable to figure it out, and I am a novice when it comes to coding.

import bpy

# get the active object
obj = bpy.context.view_layer.objects.active

# make sure it is a mesh
if obj.type != 'MESH':
    print("Error: active object is not a mesh.")
    exit()

# get the mesh data
mesh = obj.data

# create an armature object
armature = bpy.data.armatures.new(name="Armature")
armature_obj = bpy.data.objects.new(name="Armature", object_data=armature)

# link the armature object to the scene
bpy.context.collection.objects.link(armature_obj)

# set the armature object as the active object
bpy.context.view_layer.objects.active = armature_obj

# enter edit mode
bpy.ops.object.mode_set(mode='EDIT')

# add bones in the middle of each selected face
if mesh.total_face_sel > 0:
    for face in mesh.polygons:
        if face.select:
            # get the face's center in world space
            center = obj.matrix_world @ face.center
            # create a bone at the face's center
            bone = armature.edit_bones.new(name=f"Bone {face.index}")
            bone.head = center
            bone.tail = center + obj.matrix_world.to_3x3() @ face.normal
else:
    # add bones to all faces
    for face in mesh.polygons:
        # get the face's center in world space
        center = obj.matrix_world @ face.center
        # create a bone at the face's center
        bone = armature.edit_bones.new(name=f"Bone {face.index}")
        bone.head = center
        bone.tail = center + obj.matrix_world.to_3x3() @ face.normal

# exit edit mode
bpy.ops.object.mode_set(mode='OBJECT')

# parent the mesh to the armature
obj.parent = armature_obj
obj.parent_type = 'BONE'
obj.parent_bone = "Bone 0"

# apply automatic weights to the mesh
bpy.ops.object.parent_set(type='ARMATURE_AUTO')

This is as far as I could get.

import bpy

# get the active object
obj = bpy.context.view_layer.objects.active

# make sure it is a mesh
if obj.type != 'MESH':
    print("Error: active object is not a mesh.")
    exit()

# get the mesh data
mesh = obj.data

# create an armature object
armature = bpy.data.armatures.new(name="Armature")
armature_obj = bpy.data.objects.new(name="Armature", object_data=armature)

# link the armature object to the scene
bpy.context.collection.objects.link(armature_obj)

# set the armature object as the active object
bpy.context.view_layer.objects.active = armature_obj

# enter edit mode
bpy.ops.object.mode_set(mode='EDIT')

# add bones in the middle of each selected face
for face in mesh.polygons:
    if face.select:
        # get the face's center in world space
        center = obj.matrix_world @ face.center
        # create a bone at the face's center
        bone = armature.edit_bones.new(name=f"Bone {face.index}")
        bone.head = center
        bone.tail = center + obj.matrix_world.to_3x3() @ face.normal

# exit edit mode
bpy.ops.object.mode_set(mode='OBJECT')

# parent the mesh to the armature
obj.parent = armature_obj
obj.parent_type = 'BONE'
obj.parent_bone = "Bone 0"

# apply automatic weights to the mesh
bpy.ops.object.parent_set(type='ARMATURE_AUTO')

I asked it to add bones to all faces if there were no selected faces, but it cut off partway through the script. I think this was simply because there’s a limit to how long a ChatGPT response can be.
I’m quite confident that it could do the whole script if it weren’t limited length-wise, though.

Yes, ChatGPT does stop in the middle sometimes, but most of the time it works. Regarding the code though, I think it is something in the code that is causing the problem. Anyway, thanks for your reply.

So I told it to just give me the portion of the code it left out of the previous answer. Then I just merged the two scripts together. And, of course, it works, although it looks like the terminal is showing a few errors.

Prompt: “just give me the portion of the script were you left off.”
Both scripts combined:

import bpy

# get the active object
obj = bpy.context.view_layer.objects.active

# make sure it is a mesh
if obj.type != 'MESH':
    print("Error: active object is not a mesh.")
    exit()

# get the mesh data
mesh = obj.data

# create an armature object
armature = bpy.data.armatures.new(name="Armature")
armature_obj = bpy.data.objects.new(name="Armature", object_data=armature)

# link the armature object to the scene
bpy.context.collection.objects.link(armature_obj)

# set the armature object as the active object
bpy.context.view_layer.objects.active = armature_obj

# enter edit mode
bpy.ops.object.mode_set(mode='EDIT')

# add bones in the middle of each selected face
selected = False
for face in mesh.polygons:
    if face.select:
        selected = True
        # get the face's center in world space
        center = obj.matrix_world @ face.center
        # create a bone at the face's center
        bone = armature.edit_bones.new(name=f"Bone {face.index}")
        bone.head = center
        bone.tail = center + obj.matrix_world.to_3x3() @ face.normal

# if no faces were selected, add bones to all faces
if not selected:
    for face in mesh.polygons:
        # get the face's center in world space
        center = obj.matrix_world @ face.center
        # create a bone at the face's center
        bone = armature.edit_bones.new(name=f"Bone {face.index}")
        bone.head = center
        bone.tail = center + obj.matrix_world.to_3x3() @ face.normal

# exit edit mode
bpy.ops.object.mode_set(mode='OBJECT')

# parent the mesh to the armature
obj.parent = armature_obj
obj.parent_type = 'BONE'
obj.parent_bone = "Bone 0"

# apply automatic weights to the mesh
bpy.ops.object.parent_set(type='ARMATURE_AUTO')
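
Since the two loops in the combined script are identical apart from which faces they visit, the select-or-all logic could be factored out. A sketch of the pattern in plain Python (the dicts below are stand-ins for mesh.polygons; in the real script you would test face.select instead):

```python
def faces_to_process(faces):
    """Return the selected faces, or all faces if none are selected."""
    selected = [f for f in faces if f["select"]]
    return selected if selected else faces

faces = [
    {"index": 0, "select": False},
    {"index": 1, "select": True},
]
print([f["index"] for f in faces_to_process(faces)])  # -> [1]
```

The bone-creation body then only has to be written once: for face in faces_to_process(mesh.polygons): …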

I do think, though, that regardless of how good the scripts are, ChatGPT could be used as a base to build on. That would be far easier and less time-consuming than writing the entire script.

2 Likes

Thank you very much. Script is working nicely.

Also, I do agree with your last point about using ChatGPT as a base and building on top of it. However, it is also important to have a basic understanding of Python. If someone is already familiar with Python, then they can create a complete add-on with the help of ChatGPT, I believe.

1 Like

I know programming, but exploring the API and testing the waters to figure out where to go takes a good amount of time.

From experience, certain things are easy if they have been used enough times, which makes them really easy to answer for people who ask. But this depends on the area of interest and the question of the user. For example, I have 0% clue about sculpt mode and brushes, but I have looked into edit-mode vertices a lot of times. If I were to answer a sculpting-related question, I would spend many minutes trying to figure out how to access the API.

Usually, even if someone is experienced or great at programming, there is always a cost to pay for making prototypes. For example, having an initial plan and three different ideas on how to attack it forces you to go through a prototype-scrap-prototype-scrap phase until you are sure you went with the correct decision.

So in many cases, GPT can show the ropes to those who don’t know about programming. But it can also help those who do know to “visualize” solutions at a faster pace.
(However, I won’t touch the subject of mastery, which is another topic: the more you use GPT, the more you postpone the 10,000 hours you put into programming.)

1 Like

I have seen that many experienced machine-learning developers have started to integrate GitHub Copilot into how they write code. From what I have read, some just write a comment, let it generate the code, and fix whatever they see is wrong to quickly guide it.

More or less this is where we are going; soon it will be something like the Notepad-vs-Sublime thing. :nerd_face:

And Sublime is the AI, and Notepad is the autocomplete…

You mean Sublime is the AI because it is not open source?

Ha! Good point. :wink:

This is both amazing and unnerving…