Here is the first test of rendering (link to the higher quality version, since posting at default quality makes it look horrible). I used child particles with a medium number of particles. The first part is with ReelSmart Motion Blur (from After Effects), which gets kind of crazy with particles. The second is without, though I don't know if the difference is visible through the compression.
Notice that there was a problem with the mesh of the wave: it did not recognize the displacement texture. Sometimes Blender does that.
I am pretty happy with the result despite the mesh problems and the lack of particle-age transparency (it crashes Cycles). If I can get about 10x more particles in, I guess I can use Cycles itself!
They are still cubes. It does make a big difference.
Now I want to test the volume shader from Blender Internal.
Amazing work… but if you watch reference videos you will see that there are some splashes in the middle of the wave too… so you should add some particles there as well, just a few.
Hey, it's coming along nicely guismo, and that shader is amazing.
Earlier I was able to cache 100 million particles and reload them successfully… however, I had varying degrees of success with external caches. Sometimes it worked, other times it didn't… it didn't seem related to the quantity of particles, but I have no idea what was causing it.
As for the animation, to me the odd part is how the particles seem to spawn in front of the wave. Perhaps if they spawned from inside the wave it wouldn't be so apparent. Just a thought.
Yeah, you have those problems too then, huh?
What I noticed, and truly believe after many tests, is that Blender can't really handle more than 10 million particles in total, and/or there is a certain threshold of particles per frame (i.e. cache file size per frame).
It can handle it if you have enough memory, but not through the file cache. I mean, you can't set the cache to "External". It acts all strange: it says it found x particles (though, if I am right, it never reports finding more than 10 million, even if you baked with more), but when you scroll the timeline, they disappear.
So, if it is happening to you too, it must be a limitation.
As for the problem you mention, that is because the particle count is too low. Right now, this is the best solution I can think of. Emitting from inside the wave may create problems with the collider, but if I can't find some way to increase the particle count, it will have to do. A lot of the other problems also need more particles to be solved.
And… there is the memory problem. Goddammit, 12 gigs is too little.
How difficult would it be to split the particle systems up? If you did that, you could fine-tune the timing, use more particles, and presumably reduce render times. Another possibility would be to simply duplicate the particle system with a different seed value and drop the quantity of particles per system so each is small enough for the disk cache. Also, did you ever increase your virtual memory size? I have 16 gigs of RAM and my virtual memory set to a minimum of 25 gigs on my fastest SSD. It might be worth a try… I can't wait till I get my new mobo with 32 gigs of RAM.
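The seed-duplication idea can be sketched outside Blender as plain arithmetic: divide a target particle total into several duplicate systems, each with its own seed and each small enough for the disk cache. The 10-million per-cache limit here is just the threshold guismo reported, and the helper name and dict layout are my own invention, not anything from Blender's API:

```python
# Hypothetical planning helper: split a big particle total into
# several duplicate systems, each under an assumed per-cache limit
# (10 million, per the limit reported in this thread) and each with
# a different seed so the duplicates don't emit identical particles.
def split_particle_systems(total_count, cache_limit=10_000_000, base_seed=1):
    systems = []
    remaining = total_count
    seed = base_seed
    while remaining > 0:
        count = min(remaining, cache_limit)
        systems.append({"count": count, "seed": seed})
        remaining -= count
        seed += 1  # unique seed per duplicate system
    return systems

# e.g. 25 million particles -> three systems: 10M, 10M, 5M
plan = split_particle_systems(25_000_000)
```

In Blender you would then apply each entry's `count` and `seed` to one duplicated particle system by hand (or via the Python API) before baking.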
In my tests, the number of particles barely changes render time, both in Cycles and with the volume shader (though the latter is heavily influenced by the step size). It either renders or it crashes. So the only benefit I see is exactly that: minimizing crashes.
I was thinking of splitting the emitters in time. Like, rendering and baking only frames 0-100 instead of 0-200. But your solution is good too. I will analyze and test which is better for me.
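The time-splitting idea amounts to carving the full bake into consecutive sub-ranges and baking each emitter over its own chunk. A minimal sketch (the 0-200 range and 100-frame chunk are just the example from the post; the function is mine, not part of Blender):

```python
def split_frame_range(start, end, chunk):
    """Break [start, end] into consecutive (sub_start, sub_end) bake ranges."""
    ranges = []
    s = start
    while s < end:
        e = min(s + chunk, end)
        ranges.append((s, e))
        s = e
    return ranges

# The 0-200 example split into two bakes: [(0, 100), (100, 200)]
print(split_frame_range(0, 200, 100))
```

Each sub-range would then get its own emitter with matching start/end frames, so no single cache has to hold the whole simulation.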
As for virtual memory, it does not matter: Blender crashes when it exceeds the real memory. I set a lot of virtual memory on the RAID disk (for performance), but if it crosses the 12 gigs… it crashes.
I need more memory or workarounds. Simple as that.
Damned Dell and its ECC memory. That machine cost a fortune but they did not even bother to put more than 12 gigs in it.