So here’s the story:
Just for fun, I created a single bullet shell, had a particle system use that model, then used ‘Make Duplicates Real’ to convert the generated particles into individual meshes. I then applied physics to them all and dropped them into a pile. So far so good. Finally, I baked the simulation to keyframes so I could render motion blur. All easy enough.
Well, at that point, when I rendered a test frame, the GPU render was incredibly slow, even in the viewport (I have twin GeForce 660 cards, not the best, but usually adequate). So, thinking fast, I selected all the simulated shells and linked their object data to the original shell model. After that, the GPU viewport render was blazingly fast as usual, so I thought, ‘Great, I fixed the problem.’
But no. It’s true, the viewport render is perfect now, but an F12 render takes a long time to prep, then renders just 2 tiles simultaneously, excruciatingly slowly. After a while I get sick of waiting and cancel the render; it has never gotten past those first 2 tiles. I’m thinking maybe linking the object data isn’t the same as having an Alt+D instance of the original object, but I’m not sure whether I can convert the duplicates into instances after the fact. Can I?
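For reference, here’s roughly what the data-linking step amounts to in script form. This is only a sketch of the Ctrl+L > Object Data operation via Blender’s `bpy` API (it has to run inside Blender, and the object name "Shell" is a placeholder for whatever your master shell is actually called):

```python
# Rough sketch, Blender only: link the mesh data of every selected shell
# to one master mesh, the scripted equivalent of selecting the shells
# and using Ctrl+L > Object Data. "Shell" is an assumed object name.
import bpy

original = bpy.data.objects["Shell"]  # the master shell on its own layer

for obj in bpy.context.selected_objects:
    if obj is not original and obj.type == 'MESH':
        # After this, all copies share a single mesh datablock; the old
        # per-copy meshes become orphaned and get purged on save/reload.
        obj.data = original.data
```

Note that this only shares the mesh datablock; each shell is still a separate object, which is also what Alt+D duplicates are, so as far as I understand it the two should be equivalent once the data is linked.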
Other information: there are 100 shells in the scene, all with object data linked to the original shell, which sits on a different layer. They’re falling onto a simple extruded plane. The scene is lit with an HDRI and a sun lamp (which I didn’t bother to delete); both are set to Multiple Importance Sampling.
There are about 136,000 verts in the scene. Performance everywhere except the F12 render is flawless.
I can’t figure out why it’s behaving this way, and my Google searches haven’t turned up anything applicable to my specific problem. Any ideas?