- Yes.
- No. ECC stands for Error-Correcting Code. Those are memory modules mostly for servers, or for systems that need high reliability. It's a redundant parity-bit error correction method: the module is more or less aware when an error occurred while storing information and knows what the correct information is.
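If you want to see the principle, here's a toy Hamming(7,4) sketch in Python: 4 data bits get 3 parity bits, and the parity pattern pinpoints a single flipped bit. (Real ECC DIMMs protect 64-bit words with 8 check bits and can additionally detect double-bit errors, but the idea is the same.)

```python
# Toy Hamming(7,4): encode 4 data bits with 3 parity bits, flip one bit,
# then locate and correct it from the parity checks alone.

def encode(d):                      # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]         # covers codeword positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]         # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]         # covers positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def correct(c):                     # c = 7-bit codeword with at most one flipped bit
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3      # syndrome = 1-based position of the bad bit, 0 = all good
    if pos:
        c[pos - 1] ^= 1             # flip it back
    return c

word = encode([1, 0, 1, 1])
word[4] ^= 1                        # simulate a single-bit memory error
assert correct(word) == encode([1, 0, 1, 1])
print("single-bit error corrected")
```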
Usually Socket 1156 boards (for Core i5/i7) default to 1066 MHz memory. AFAIK this comes from the base clock defaulting to 133 MHz and the memory running at a multiplier of 8 (133 × 8 ≈ 1066). I would have to check on the Ci5 system but I am too lazy atm
So you can buy any memory from 1066 upwards. The usual buy is 1333 MHz memory; personally I'd go for 1800 or 2000. If you get the memory to run at that speed it will benefit you for memory operations, though not linearly; the benefit should be somewhere between 15 and 30%.
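For a rough feel of what the clock buys you: the theoretical peak bandwidth is simply the transfer rate times 8 bytes per 64-bit channel (back-of-the-envelope numbers, ignoring timings and real-world efficiency):

```python
# Theoretical peak DDR3 bandwidth: transfer rate (MT/s) * 8 bytes per 64-bit channel.
for rate in (1066, 1333, 1800, 2000):
    per_channel = rate * 8 / 1000      # GB/s per channel
    print(f"DDR3-{rate}: {per_channel:.1f} GB/s per channel, "
          f"{per_channel * 2:.1f} GB/s dual-channel")
```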
Costwise there isn't much of a difference, except oddly for the 1333. A random pick of the cheapest 24GB kits listed on geizhals.at:
AData 1333, 24: 101 Euro !?!?
AData 1800, 24: 65 Euro
AData 2000, 24: 70 Euro
Better timings get expensive though, especially T1 memory and fast (high clock) CL5-DDR3 modules.
T1 means that the memory can accept a command on every clock cycle of the memory bus. Standard is T2, which means every second cycle. I am not sure if there are any (or many) T1 DDR3 modules.
And CL is the CAS latency. It's the time, in clock cycles, it takes for the memory to respond; more precisely, the time from sending a column address to the memory until it returns the corresponding data.
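As a quick way to compare modules you can convert CL into absolute time: the memory clock is half the DDR transfer rate, so the latency in nanoseconds works out to CL × 2000 / rate (rough numbers, ignoring the other timings):

```python
# Absolute CAS latency in ns: CL cycles * clock period (memory clock = DDR rate / 2).
def cas_ns(cl, ddr_rate):
    return cl * 2000.0 / ddr_rate

print(cas_ns(7, 1066))   # DDR3-1066 CL7 -> ~13.1 ns
print(cas_ns(9, 1333))   # DDR3-1333 CL9 -> ~13.5 ns
print(cas_ns(9, 2000))   # DDR3-2000 CL9 ->  9.0 ns
```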
- An SSD is more or less a luxury. Having the OS on the SSD is not a huge boost: your system will boot faster, but an OS like Windows barely needs the HDD after booting, since most of it runs as services loaded into memory at boot time. What does benefit you is having your pagefile/swap partition on the SSD, which boosts performance greatly, and having your daily tools on it. Many tools take ages to load (e.g. 3dsmax) or often need to load stuff from disk. That's where you benefit.
RAID is no magic, it's straightforward. I can walk through it for Linux software RAID; you can't use that for Windows though, there you'd need a RAID controller.
You boot your Linux installer and partition your disks. You can typically only boot from a software RAID1 (each mirror looks like a plain partition to the bootloader), but you want RAID5 for the data, so /boot gets its own small RAID1.
You create on your three disks:
1 partition on each disk with 256 MB, choose "use as RAID", initialize RAID1 (mirror with 2 partitions) with one spare partition
1 partition on each disk with 2048 MB, choose "use as RAID", initialize RAID5 (striping and parity) with 3 partitions
1 partition on each disk with the rest of the disk, choose "use as RAID", initialize RAID5 (striping and parity) with 3 partitions
Apply settings.
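If you want to sanity-check the sizes you should end up with, the arithmetic is simple (a rough Python sketch, ignoring metadata and filesystem overhead):

```python
# Usable capacity of software RAID sets built from equal-sized partitions (MB).
def raid1_mb(part_mb):              # mirror: you get one partition's worth
    return part_mb

def raid5_mb(part_mb, n_disks):     # striping + parity: one partition's worth goes to parity
    return part_mb * (n_disks - 1)

print(raid1_mb(256))        # /boot mirror            ->  256 MB
print(raid5_mb(2048, 3))    # swap RAID5 over 3 disks -> 4096 MB
```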
After that the RAID will show up as 3 partitions:
One with 256 MB; if one of the disks dies, the data is rebuilt onto the spare partition until you replace the broken disk. So the RAID1 stays operable even if one disk dies, and you can still boot from it.
One will have 4096 MB for swap. Two 2048 MB partitions are striped together into 4096 MB (you also get a speed boost if you use NCQ and AHCI, since you can write to both disks "at the same time"), while the third partition is the parity partition. Simplified, it stores the result of a logical operation on the data of the first two, usually an exclusive OR (XOR). The HDDs store bits:
DiskA stores 0, DiskB stores 0, so the parity disk stores 0 as well.
0 XOR 0 = 0
DiskA stores 1, DiskB stores 0, so the parity disk stores 1.
1 XOR 0 = 1
DiskA stores 0, DiskB stores 1, so the parity disk stores 1.
0 XOR 1 = 1
DiskA stores 1, DiskB stores 1, so the parity disk stores 0 as well.
1 XOR 1 = 0
So you can see: if DiskB dies, for instance, the RAID knows what DiskA stored and what the parity disk stored, and by reversing the XOR it can rebuild all the data onto the replacement disk once you swap in a working one.
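The same thing in a few lines of Python, pretending each disk is just a list of bits (a toy sketch of the idea, not how md actually lays out stripes and rotates parity):

```python
# Toy RAID5 parity: the parity bit is the XOR of the data bits,
# so any single lost disk can be rebuilt from the remaining ones.
disk_a = [0, 1, 0, 1]
disk_b = [0, 0, 1, 1]
parity = [a ^ b for a, b in zip(disk_a, disk_b)]    # -> [0, 1, 1, 0]

# DiskB dies: rebuild its bits from DiskA and the parity.
rebuilt_b = [a ^ p for a, p in zip(disk_a, parity)]
assert rebuilt_b == disk_b
print("DiskB rebuilt:", rebuilt_b)
```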
There are even higher-redundancy RAID levels for when more than one disk dies at once, but that's not practical for a home machine.
And the last partition shows up the same way, just with XXXX space for your root filesystem.
Now you set
RAID partition A to be used as ext4 and mounted at /boot
RAID partition B to be used as swap
RAID partition C to be used as ext3 and mounted at /
Done. The rest of the RAID handling is automated and you don't have to care about it again. If a disk dies, you replace it, and when it asks to rebuild you say yes. The speed boost is noticeable, and data loss is impossible as long as only one disk dies at a time.
Some mainboards today offer decent raid controllers, but only a few are capable of raid5.
A hardware RAID controller costs from 20 Euro up to a few thousand. The price usually reflects quality and speed; then again, for home users a cheap one is good enough.
PS) It's better to educate, or else people keep bugging you
You know, the thing with the fish and the fishing?
Give people hardware advice and they'll buy it and ask again; teach them to choose their own hardware and you'll have peace
Yes, OC benefits you overall. You just have to be aware of where it benefits you, and that the benefit is at most linear, usually less.
Overclocking the CPU directly affects things like rendering, and it does so linearly.
Say a Ci7 with 4×3.40 GHz takes 30 minutes to render a frame (pure raytracing, not the render preparation steps that don't use all cores fully).
If you OC it to 4.5 GHz, which should be rather easy, it runs at about 132% performance, and the render will now take about 22.7 minutes.
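The math behind that number, assuming the render is purely CPU-bound and scales linearly with clock:

```python
# Linear scaling of a fully CPU-bound render with clock speed (same IPC, all cores busy).
base_clock, oc_clock = 3.4, 4.5             # GHz
base_time = 30.0                            # minutes per frame at stock
print(oc_clock / base_clock)                # ~1.32x performance
print(base_time * base_clock / oc_clock)    # ~22.7 minutes per frame when OC'd
```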
Games barely use more than 2 cores, so OC is beneficial there too, because those 2 cores are now faster. There was a time when games performed better on a heavily OC'd Core2Duo than on a Core2Quad, simply because the duo's two cores could be clocked higher than the quad's.
For the graphics card, OC solely benefits you in GPGPU work (CUDA/OpenCL). Like with the CPU, that's the one kind of workload that puts a GPU under full load. So it matters if you render with Cycles, for instance, and the image takes 10 minutes versus 7 minutes.
It doesn't really matter whether your game runs with 70 fps or 80 fps. Where it starts to matter is when it's 25 fps versus 31 fps, or when you play at high resolutions with huge textures and high AF/AA levels.
Personally I OC my systems right from the start, to have all the available power right away, not as an act of desperation when the system gets too slow after 2 or 3 years just to squeeze out one more year.
You should be aware, though, that energy consumption grows much faster than linearly (dynamic power scales roughly with frequency times voltage squared, and higher clocks usually need more voltage), while performance grows at best linearly.
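A rough rule of thumb for that, assuming dynamic power dominates (the voltages here are made-up example values, not a recommendation):

```python
# Dynamic power scales roughly with frequency * voltage^2 (a rule of thumb, not an exact model).
def power_factor(f_old, f_new, v_old, v_new):
    return (f_new / f_old) * (v_new / v_old) ** 2

# Example: 3.4 -> 4.5 GHz with a voltage bump from 1.20 V to 1.35 V
print(power_factor(3.4, 4.5, 1.20, 1.35))   # ~1.68x the power for ~1.32x the clock
```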
The additional thermal stress is no issue given proper cooling. It doesn't matter (to me) if a chip lasts 20 years or just 18 due to thermal stress.
Besides that (under load):
My C2Q at stock clocks with the reference cooler ran at ~58°C at the reference voltage of 1.225V.
My C2Q, now OC'd by 33% with an aftermarket cooler, runs at ~42°C at a lowered 1.208V.
My GTX470 at stock clocks with the reference cooler ran at ~90°C at 0.975V.
My GTX470, OC'd by 20% with an aftermarket cooler, runs at ~58°C at the same voltage.
So I dare say that although OC'd, my components will probably last longer now, but the system does consume more power.
Now I am going to lean back, watch Bulldozer, Sandy Bridge-E, the GTX600 and HD7000 series, and see whether my new system will be AMD or Intel and whether I keep the GTX470 or get something new.
The sad truth is, my system is more or less 3 years old now and still sufficient for most tasks, and there's no justifiable reason for me to get a new one. Maybe something breaks soon
Personally I think with the system you chose you should be good for the next 2 years at least, unless there's a breakthrough in semiconductors, and after that it might be enough to upgrade the graphics to a next-gen PCIe 3.0 card. That's why I said you might want to check out a board that already has it; after all, it effectively doubles the bandwidth of PCIe 2.0.
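For reference, the raw numbers behind that doubling (PCIe 2.0 runs at 5 GT/s with 8b/10b encoding, PCIe 3.0 at 8 GT/s with 128b/130b encoding):

```python
# Per-direction bandwidth of a x16 slot: rate * encoding efficiency * 16 lanes / 8 bits per byte.
def x16_gb_per_s(gt_per_s, payload_bits, line_bits):
    return gt_per_s * (payload_bits / line_bits) * 16 / 8

print(x16_gb_per_s(5, 8, 10))      # PCIe 2.0 x16 ->  8.0 GB/s
print(x16_gb_per_s(8, 128, 130))   # PCIe 3.0 x16 -> ~15.75 GB/s
```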