
Thread: Best chipset for Linux software RAID?

  1. #11
    Join Date
    Oct 2007
    Location
    UK
    Posts
    160


    Wait till that controller breaks... you'll be singing another tune ;-)

    I don't see why you'd bother pairing such slow disks with such a nice controller.

    What RAID level are you using?

  2. #12
    Join Date
    Jul 2009
    Posts
    351


    Quote Originally Posted by lordmozilla View Post
    Wait till that controller breaks... you'll be singing another tune ;-)

    I don't see why you'd bother pairing such slow disks with such a nice controller.

    What RAID level are you using?
    Areca 1220 is only $450, a small fraction of the total system cost.

    I have a backup controller on another system, networked over four bonded Gigabit Ethernet links.

    I replaced its tiny fan and heatsink with a real VGA heatsink and it does not rise above room temp.

    With dd I measure a 115 MB/sec transfer rate for these drives, better than a VelociRaptor and excellent for a $50 drive.
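    A measurement along those lines is easy to reproduce. Here's a minimal sketch that runs against a scratch file so it's safe anywhere; to measure an actual drive, point if= at the device (the /dev/sdX name is a placeholder) and add iflag=direct so the page cache doesn't inflate the number:

```shell
# Minimal sketch of a dd read-throughput test (GNU dd assumed).
# This version reads a scratch file so it runs without spare hardware;
# for a real drive, use e.g. if=/dev/sdX (name assumed) plus iflag=direct.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=64 2>/dev/null   # make a 64 MB test file
dd if="$f" of=/dev/null bs=1M 2>&1 | tail -n 1       # prints bytes, seconds, rate
rm -f "$f"
```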

    My zcav results look just like this:

    http://www.coker.com.au/bonnie++/zcav/results.html (see the results for the 1 TB SATA drive).

    RAID 0! I have a fully redundant setup and good backups.

    I've been using computers since the KIM-1. I have seen power supplies, fans, and drives fail all the time. I have never had a card or a motherboard fail, except through my own stupidity. I am paranoid about heat and I always go way oversize on heatsinks. I turn my computers on and off all the time, and I don't want the thermal shock.
    Last edited by frantaylor; 07-07-2009 at 02:48 PM.

  3. #13
    Join Date
    Oct 2007
    Location
    UK
    Posts
    160


    Well, turning computers off all the time is very bad for hard disks. With 7 in RAID 0, your chances of the setup lasting more than a year are tiny. And compared to the cost of an X25-M, or even an X25-E, I don't think your setup is very interesting at all these days.

    Why not go with two X25-Ms in RAID 0? Or do you really need the capacity? Even then, I don't really understand why you would build such a setup in this day and age.

  4. #14
    Join Date
    Jul 2009
    Posts
    351


    Quote Originally Posted by lordmozilla View Post
    Well, turning computers off all the time is very bad for hard disks. With 7 in RAID 0, your chances of the setup lasting more than a year are tiny. And compared to the cost of an X25-M, or even an X25-E, I don't think your setup is very interesting at all these days.

    Why not go with two X25-Ms in RAID 0? Or do you really need the capacity? Even then, I don't really understand why you would build such a setup in this day and age.
    I don't need the capacity, I need the speed. I am simulating entire networks of application services using VMware, and doing interop testing. I often have 10 VMs running at once. There is no way I am going to get over 400 MB/sec throughput out of two drives, and that is what I am getting with this setup.

    The hardware RAID controller makes all the difference. At first I tried software RAID with the same drives, but the performance was only fair. Something funny happens with Linux software RAID when you have a lot of VMs running. I have neither the time nor the inclination to diagnose it. The RAID controller fixes the problem nicely and I can get back to work.

    I have conditioned all the drives and sent many back to the factory. Now I have a nice group of drives that are all totally healthy.

    My previous setup had seven 160 GB drives in software RAID 0. It ran for over 3 years with no failures. I am still using those drives in other machines, and they are all still good.

    The Areca card is not state of the art, but the hardware and the software are stable and solid. I was up and running in minutes with no hassles whatsoever.

    I just looked at those Intel SSDs. Wow, it would break my bank to buy enough of them. They're only 80 GB! I would need 20 of them, an external enclosure, and a huge controller.

    And after yet another look: why pay $315 for 80 GB at 250 MB/sec when you can pay $50 for 500 GB at 115 MB/sec? The math just doesn't work.
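    Spelling that math out as cost per gigabyte, using the list prices quoted above:

```shell
# Cost per gigabyte for the two options under discussion
# ($315 / 80 GB SSD vs. $50 / 500 GB mechanical disk).
awk 'BEGIN {
  printf "X25-M: $%.2f/GB\n", 315 / 80    # -> $3.94/GB
  printf "HDD:   $%.2f/GB\n", 50 / 500    # -> $0.10/GB
}'
```

    Roughly a 40:1 difference in price per gigabyte.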
    Last edited by frantaylor; 07-07-2009 at 07:33 PM.

  5. #15
    Join Date
    Oct 2007
    Location
    UK
    Posts
    160


    You can run X25s from an external RAID controller just like your hard drives.

    I'm not knocking your controller specifically; hardware controllers are better, just annoying when they fail. ;-)

    I'm saying that buying 7200 rpm drives for speed, no matter how many of them, is silly these days, when SSDs easily blow drives like that out of the water. Since they are SATA II, RAIDing them with a good card would put 600 MB/s within reach.

    115 MB/s... two SSDs in hardware RAID 0 will easily beat that. Lower power consumption, lower failure rate, less space, no noise... the list goes on forever.

    Have you even looked at Intel X25 benchmarks?

  6. #16
    Join Date
    Jul 2009
    Posts
    351


    Quote Originally Posted by lordmozilla View Post
    You can run X25s from an external RAID controller just like your hard drives.

    I'm not knocking your controller specifically; hardware controllers are better, just annoying when they fail. ;-)

    I'm saying that buying 7200 rpm drives for speed, no matter how many of them, is silly these days, when SSDs easily blow drives like that out of the water. Since they are SATA II, RAIDing them with a good card would put 600 MB/s within reach.

    115 MB/s... two SSDs in hardware RAID 0 will easily beat that. Lower power consumption, lower failure rate, less space, no noise... the list goes on forever.

    Have you even looked at Intel X25 benchmarks?
    Don't forget: that 115 MB/sec is for ONE $50 drive.

    Yes, I did: 250 MB/sec, very impressive. But I can get roughly the same bandwidth and 12X the capacity for 1/3 the price by using two cheap disks.
    Last edited by frantaylor; 07-07-2009 at 08:38 PM.

  7. #17
    Join Date
    Oct 2007
    Location
    UK
    Posts
    160


    Quote Originally Posted by frantaylor View Post
    Don't forget: that 115 MB/sec is for ONE $50 drive.

    Yes, I did: 250 MB/sec, very impressive. But I can get roughly the same bandwidth and 12X the capacity for 1/3 the price by using two cheap disks.
    I'm not convinced you can get that as sustained output at all. Plus there's the RAID 0 overhead, which is massive with 7 disks, even with a brilliant controller...

    We'll just have to agree to disagree.

    Brendan

  8. #18
    Join Date
    Oct 2007
    Posts
    370


    I use 8x 1 TB disks in RAID 6 on an Intel DG43NB, together with a PCIe x1 SATA controller using the SiI3132 chipset.

    The disks are WD GreenPower 5400 rpm, and as far as sequential performance goes, it does quite well:
    ida:~# dd if=/dev/md0 of=/dev/null bs=5M count=1000 iflag=direct
    5242880000 bytes (5.2 GB) copied, 13.535 s, 387 MB/s

  9. #19
    Join Date
    Oct 2007
    Location
    Under the bridge
    Posts
    2,153


    Quote Originally Posted by frantaylor View Post
    Yes I did. 250 MB/sec. Very impressive. But I can get the same bandwidth and 16X the capacity for 1/3 the price by using two cheap disks.
    Please note that these numbers do not tell the whole story.

    First, the 250 MB/s for the X25 is constant throughout the disk, whereas the 115 MB/s will fall rapidly towards ~70 MB/s as the disk fills up.

    Second, the X25 has an enormous advantage in latency and IOPS - think 1-3 orders of magnitude better performance. This will make a tremendous difference in your case (parallel VMs) - I wouldn't be surprised to see even a *single* X25 drive outperform the RAID configuration in response times *and* transfer speeds under heavy load (e.g. 3 VMs reading/writing full throttle.)

    Third, SSD performance scales almost linearly with their number. Four drives and you've hit the GB/s mark, with latency almost completely unaffected (unlike mechanical drives).

    Finally, these drives should be much more reliable than mechanical disks.

    Of course, the size/price ratio is where it all falls down. SSD prices will probably never match mechanical drives, but they are falling at an impressive rate (the X25 cost more than double this half a year ago). If they keep falling at this rate, they could actually become affordable by the end of next year.

    IIRC, Intel is planning to announce new SSD models this or next week, so hopefully we'll see larger models soon.
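    As a quick sanity check on that scaling point, assuming ~250 MB/s per X25-class drive (the figure quoted in this thread) and near-linear scaling, the aggregate lands around a gigabyte per second at four drives:

```shell
# Aggregate sequential bandwidth for N SSDs, assuming ~250 MB/s each and
# near-linear scaling (both figures are assumptions from the discussion).
awk 'BEGIN {
  for (n = 1; n <= 4; n++)
    printf "%d drive(s): %4d MB/s\n", n, n * 250
}'
```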
    Last edited by BlackStar; 07-08-2009 at 10:01 AM.

  10. #20
    Join Date
    Jul 2009
    Posts
    351


    Quote Originally Posted by BlackStar View Post
    Please note that these numbers do not tell the whole story.

    First, the 250 MB/s for the X25 is constant throughout the disk, whereas the 115 MB/s will fall rapidly towards ~70 MB/s as the disk fills up.

    Second, the X25 has an enormous advantage in latency and IOPS - think 1-3 orders of magnitude better performance. This will make a tremendous difference in your case (parallel VMs) - I wouldn't be surprised to see even a *single* X25 drive outperform the RAID configuration in response times *and* transfer speeds under heavy load (e.g. 3 VMs reading/writing full throttle.)

    Third, SSD performance scales almost linearly with their number. Four drives and you've hit the GB/s mark, with latency almost completely unaffected (unlike mechanical drives).

    Finally, these drives should be much more reliable than mechanical disks.

    Of course, the size/price ratio is where it all falls down. SSD prices will probably never match mechanical drives, but they are falling at an impressive rate (the X25 cost more than double this half a year ago). If they keep falling at this rate, they could actually become affordable by the end of next year.

    IIRC, Intel is planning to announce new SSD models this or next week, so hopefully we'll see larger models soon.
    This is all well and good, but I need ~1.5 TB online, and there's just no way I am going to be able to afford that with SSDs.

    And the problem with a RAID array of SSDs is that you are just moving the bottleneck to some internal bus. More than 2 or 3 in parallel is just a waste: you'll have enormous bandwidth at the SATA connectors, but the PCIe x8 connector on the RAID controller is not getting any faster.
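    That ceiling can be put in rough numbers. Assuming a first-generation PCIe x8 slot at ~250 MB/s usable per lane (an approximation; real protocol overhead varies):

```shell
# Back-of-envelope ceiling for a PCIe 1.x x8 RAID controller, assuming
# ~250 MB/s of usable bandwidth per lane (an approximation).
awk 'BEGIN {
  bus = 8 * 250                       # eight lanes -> ~2000 MB/s
  printf "bus ceiling: ~%d MB/s\n", bus
  printf "saturated by ~%d SSDs at 250 MB/s each\n", bus / 250
}'
```

    So on that assumption, roughly eight X25-class drives would saturate the slot, and anything beyond that buys nothing.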
    Last edited by frantaylor; 07-08-2009 at 11:29 AM.
