Here are the results of the tests I did:
Official Blender 2.34 - Normal Settings - 5 min 6 sec
Official Blender 2.34 - Parts (8X8) - 6 min 33 sec
Official Blender 2.34 - Command Line - 5 min 13 sec
Blender Optimized - Normal Settings - 1 min 19 sec
Blender Optimized - Parts (8X8) - 2 min 43 sec
Blender Optimized - Command Line - 1 min 19 sec
So I came to the conclusion that the best option for my computer and the file I was using was the optimized Blender build at normal settings. Of course, results will vary from computer to computer and from file to file.
The file I used consisted of hundreds of meshes at subsurf levels 2 and 3, totaling 1.3 million vertices. No materials, no textures, AO level 4, 3 lamps, OSA 16. Everything was rendered on my Athlon XP Mobile 2500 with 512 MB of DDR400 RAM. Splitting the render into many parts slowed the process down quite a bit. The numbers above are for 64 parts (8X8), but I also tried fewer parts (2X2 and 4X4) and it was still slower than rendering in one piece.
But what interests me is that maybe this technology could be used to render a single image over a network. Just like we currently have network renderers that render an animation one frame at a time on different computers, we could have a network renderer that renders parts of one image across the network. You could split the image into a grid labeled with letters along one side and numbers along the perpendicular side, then tell one computer to render A1, another to render A2, another to render A3, and so on. When a computer finishes its part, it gets handed another one. I think this could work. Are there any coders listening? I'm going to go to blender.org to try and get some feedback.
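To make the idea concrete, here is a toy sketch of the dispatch side in Python. It only shows the bookkeeping (labeling the grid and handing tiles to machines); the actual rendering and network transport are left out, and the function names and worker names ("pc1", "pc2") are made up for illustration.

```python
from collections import deque

def tile_labels(rows, cols):
    """Label a rows x cols render grid like the post describes:
    letters down one side, numbers along the other (A1, A2, ... B1, ...)."""
    return [f"{chr(ord('A') + r)}{c + 1}"
            for r in range(rows) for c in range(cols)]

def dispatch(tiles, workers):
    """Hand out tiles from a shared queue, one per worker per round,
    so each machine gets a new part as soon as the round comes back to it."""
    queue = deque(tiles)
    assignments = {w: [] for w in workers}
    while queue:
        for w in workers:
            if not queue:
                break
            assignments[w].append(queue.popleft())
    return assignments

# Example: a 2x2 grid split across two hypothetical machines.
tiles = tile_labels(2, 2)          # ['A1', 'A2', 'B1', 'B2']
plan = dispatch(tiles, ["pc1", "pc2"])
# pc1 renders A1 then B1; pc2 renders A2 then B2.
```

A real version would pull from the queue as each machine reports completion rather than planning everything up front, so a slow machine doesn't hold up the rest.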