Well, the point is that this is not a bug, just a weakness of Blender's octree method.
I'll try to explain it as simply as possible. Raytracing means that Blender has to do lots and lots of calculations, most importantly, intersection tests of rays against polygons.
It is possible to do this by brute force, simply testing every polygon to see if it is hit by a ray, but this gets so slow as the polygon count grows that you might need weeks, or even months, to render more complex scenes. To optimize this, methods exist to subdivide the scene into parts that can be quickly tested for intersection, so that not every polygon has to be tested; one of these is the 'octree' method that Blender uses.
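Just to make the brute-force cost concrete, here's a small sketch in Python (my own illustration, not Blender's actual code): every ray gets tested against every single triangle, so the work grows as rays times polygons.

```python
# Brute-force raytracing sketch (illustration only, not Blender's code).
# Every ray is tested against every triangle: cost = rays x triangles.

def ray_triangle_hit(orig, direc, tri, eps=1e-9):
    """Moeller-Trumbore ray/triangle test; returns hit distance or None."""
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = tri
    e1 = (bx - ax, by - ay, bz - az)          # triangle edge vectors
    e2 = (cx - ax, cy - ay, cz - az)
    p = (direc[1] * e2[2] - direc[2] * e2[1],  # p = direc cross e2
         direc[2] * e2[0] - direc[0] * e2[2],
         direc[0] * e2[1] - direc[1] * e2[0])
    det = e1[0] * p[0] + e1[1] * p[1] + e1[2] * p[2]
    if abs(det) < eps:
        return None                            # ray parallel to triangle
    inv = 1.0 / det
    s = (orig[0] - ax, orig[1] - ay, orig[2] - az)
    u = (s[0] * p[0] + s[1] * p[1] + s[2] * p[2]) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = (s[1] * e1[2] - s[2] * e1[1],          # q = s cross e1
         s[2] * e1[0] - s[0] * e1[2],
         s[0] * e1[1] - s[1] * e1[0])
    v = (direc[0] * q[0] + direc[1] * q[1] + direc[2] * q[2]) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = (e2[0] * q[0] + e2[1] * q[1] + e2[2] * q[2]) * inv
    return t if t > eps else None

def trace_brute_force(orig, direc, triangles):
    """Test the ray against *every* triangle; return the nearest hit distance."""
    best = None
    for tri in triangles:
        t = ray_triangle_hit(orig, direc, tri)
        if t is not None and (best is None or t < best):
            best = t
    return best
```

With a million polygons and a million pixels (plus shadow and reflection rays), you can see why this loop alone would take forever without some spatial structure to skip most of the tests.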
A basic octree is created by starting with a cube around the scene and subdividing that cube into eight smaller cubes (cut exactly through the midpoint). This subdivision is repeated until the cubes contain no more than a certain number of polygons, or until the subdivision limit is reached.
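That build step looks roughly like this in Python (my own sketch, not Blender's code; the `max_polys` and `max_depth` limits are made-up values). Note the two stopping conditions, because the depth limit is exactly where the weakness comes from:

```python
# Octree build sketch (illustration; max_polys / max_depth are made-up limits).

def overlaps(poly, lo, hi):
    """Conservative test: does the polygon's bounding box touch the cube?"""
    for i in range(3):
        if max(v[i] for v in poly) < lo[i] or min(v[i] for v in poly) > hi[i]:
            return False
    return True

class OctreeNode:
    def __init__(self, lo, hi, polys, depth, max_polys=8, max_depth=6):
        self.lo, self.hi = lo, hi            # cube bounds (min / max corners)
        self.children = []
        # Stop when the cube holds few enough polygons, OR when the depth
        # limit is reached -- the second case is the problematic one.
        if len(polys) <= max_polys or depth >= max_depth:
            self.polys = polys               # leaf: these get tested one by one
            return
        self.polys = []
        mid = tuple((lo[i] + hi[i]) / 2.0 for i in range(3))  # midpoint cut
        for octant in range(8):              # eight child cubes
            clo = tuple(lo[i] if not (octant >> i) & 1 else mid[i] for i in range(3))
            chi = tuple(mid[i] if not (octant >> i) & 1 else hi[i] for i in range(3))
            inside = [p for p in polys if overlaps(p, clo, chi)]
            if inside:
                self.children.append(
                    OctreeNode(clo, chi, inside, depth + 1, max_polys, max_depth))
```

If you build this over a huge root cube with all the detail clustered in one tiny spot, the recursion hits `max_depth` while every polygon is still sitting in the same cube, which is exactly the failure case described below.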
This can work well, but it has some disadvantages.
The problem is that when the first cube around the scene is very large, the subdivision level might reach its limit before it gets down to the scale of a detailed object the scene contains, so that the object ends up entirely inside a single cube. Which means that all of the polygons in that cube have to be tested for intersection for every pixel…
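To put some numbers on it (the depth limit of 6, i.e. 64 cells per axis, and the scene sizes are made-up values, just for illustration):

```python
# With a fixed subdivision depth, the smallest cell scales with the scene:
# cell = scene_size / 2**max_depth per axis. max_depth=6 is a made-up limit.

def smallest_cell(scene_size, max_depth=6):
    return scene_size / 2 ** max_depth

print(smallest_cell(4.0))    # small scene: 0.0625 units, nicely fine-grained
print(smallest_cell(400.0))  # huge scene: 6.25 units; a whole detailed
                             # object can fit inside one cell
```

So the exact same statue that gets split across hundreds of cells in a small scene sits in a single cell when the surrounding scene is a hundred times bigger.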
And that is the cause of the slowdown you describe above.
So, as an example, say your scene is something like a house with some rooms, and one of the rooms contains a very detailed object like a statue. If you render that house from the outside, render times might be OK, but when you try to render from the inside, like a closeup of the statue in the room, render times might suddenly increase quite a bit because of this problem.
Better methods exist though. Yafray, for instance, creates a separate tree for every object, so it doesn't have this problem.
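The per-object idea can be sketched like this (my own illustration, not Yafray's actual code, and the geometry is made up): each object's tree spans only that object's own bounding box, so its cells stay small no matter how large the whole scene is.

```python
# Per-object trees (illustration, not Yafray's code): each object's tree
# covers only that object's bounding box, so a small detailed object still
# gets finely subdivided even inside a huge scene.

def bounds(polys):
    """Tight axis-aligned bounding box around one object's polygons."""
    lo = tuple(min(v[i] for p in polys for v in p) for i in range(3))
    hi = tuple(max(v[i] for p in polys for v in p) for i in range(3))
    return lo, hi

def cell_size(lo, hi, max_depth=6):
    """Smallest cell along the largest axis after max_depth subdivisions."""
    return max(hi[i] - lo[i] for i in range(3)) / 2 ** max_depth

# Made-up geometry: a 1-unit statue somewhere inside a 400-unit scene.
statue = [((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))]

print(cell_size((0.0, 0.0, 0.0), (400.0, 400.0, 400.0)))  # scene-wide tree: 6.25
print(cell_size(*bounds(statue)))                          # statue's own tree: 0.015625
```

Same depth limit in both cases, but the statue's own tree resolves detail hundreds of times finer than one tree stretched over the whole scene.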
Well, I hope that makes it a bit more clear…