Promise SATA300 TX4 SATA 2.0
Phoronix: Promise SATA300 TX4 SATA 2.0
We don't review many disk controllers or hard drives at Phoronix, but we decided to take a quick look at the Promise Technology SATA300 TX4 PCI controller card, which promises to be a cost-effective 4-port Serial ATA 2.0 controller. Among its features are Native Command Queuing and Tagged Command Queuing support, but how does its performance compare to solutions integrated on the motherboard? For this review we tested the Promise SATA300 TX4 with Ubuntu 7.04 Feisty Fawn on an nForce 430 chipset motherboard.
No RAID support is a GOOD THING
Nice review. I'm a recent first-time builder who built before
discovering this site. My motherboard reqs included many (6-8) SATA
ports, when in hindsight I should have aimed instead for 8 DIMM slots,
and bought this board. I may yet rebuild.
I was puzzled by the "No RAID support" comment without elaboration.
That is a GOOD THING. A pointer to Linux software raid would have
helped beginners here.
Linux software raid is vastly superior to any hardware raid in
anything close to this price range.
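For the beginners I mentioned, here is a minimal sketch of what creating an array with mdadm looks like (device names, raid level, and filesystem are just placeholders for illustration, not a recommendation):

    # build a 3-disk raid 5 array from one partition on each drive
    mdadm --create /dev/md0 --level=5 --raid-devices=3 \
        /dev/sda1 /dev/sdb1 /dev/sdc1
    # then treat /dev/md0 like any other block device
    mkfs.ext3 /dev/md0
    mount /dev/md0 /mnt/array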
1. Such hardware raid at this price point, e.g. what would be built
into a motherboard, is widely termed "fake raid" in the Linux
community. It relies mostly on the CPU, just like Linux soft raid,
only with far less flexibility or support.
2. If the hardware fails, chances are that you're hosed unless you can
replicate the exact hardware again. Move soft raid drives to any new
Linux box and the soft raid will be recognized automatically (see the
first sketch after this list).
3. Soft raid stays ahead of hard raid in feature sets, e.g. raid 6
support, ease of adding drives.
4. Soft raid is far more abstract. For example, I'm using the largest
partitions on each of three 750 GB drives for a 1.3 TB raid 5 array,
still leaving partitions free e.g. to spread swap over the same three
drives, rather than on my OS drive. This is optimized for either
compute or file serving; I'll be favoring one at a time, so no
performance conflict. A hardware raid could force me to dedicate
entire drives to the raid array; how likely is it that I would then
have three more drives available for spreading out swap? With this
same flexibility, I plan to write redundant (lose any two) DVD backup
sets, using ordinary loopback-mounted files organized into a raid 6
array (second sketch below).
Hardware raid just locks you into someone else's lack of imagination.
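To make point 2 concrete: because the metadata lives in a superblock on each member disk, reassembly on a new box is one command. A sketch, assuming the drives arrive with their md superblocks intact:

    # assemble every array that can be found (consults /etc/mdadm.conf if present)
    mdadm --assemble --scan
    # or name the members explicitly
    mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1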
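And a sketch of the DVD backup idea from point 4, building raid 6 over loop devices backed by ordinary files (the file count and sizes are illustrative):

    # six DVD-sized backing files
    for i in 0 1 2 3 4 5; do dd if=/dev/zero of=part$i.img bs=1M count=4300; done
    # attach each file to a loop device
    for i in 0 1 2 3 4 5; do losetup /dev/loop$i part$i.img; done
    # raid 6 across the loop devices: any two of the six files can be lost
    mdadm --create /dev/md1 --level=6 --raid-devices=6 /dev/loop[0-5]
    mkfs.ext3 /dev/md1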
Linux software raid barely gets one of my four cores out of idle; as
my break-in test I used my raid array to repeatedly build 30
simultaneous copies of GHC Haskell from source. It worked flawlessly.
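Anyone can check this on their own box; a rough (not at all scientific) way to watch it:

    cat /proc/mdstat   # array state, plus resync/rebuild progress if any
    vmstat 1           # watch the us/sy/id CPU columns while the array is busy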
Software RAID vs Single Drive Performance + CPU Utilization
I second the comments of Syzygies. I have two Abit IC7-G mobos, each of which has 4 on-board SATA ports. Each is running a 4-drive RAID-5 configuration with multiple volumes managed using Linux LVM.
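For anyone curious how that stacks up, the LVM layer sits on top of the md device in roughly this way (the volume group and logical volume names here are made up):

    pvcreate /dev/md0                     # make the array an LVM physical volume
    vgcreate vg_raid /dev/md0             # create a volume group on it
    lvcreate -L 100G -n lv_home vg_raid   # carve out one logical volume
    mkfs.ext3 /dev/vg_raid/lv_home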
I would love to be able to move at least one of these arrays to an old Pentium3 box as a server that I can park somewhere out of my workspace (noise reduction). Up till now, the only option to accomplish this was an expensive "true" hardware RAID controller*, such as 3Ware offers. (Bring money.)
(* True RAID controllers present the OS with one or more logical drives and completely insulate the OS from the physical details of the array. Many also offer hardware-based checksum engines, on-board caching, etc. The "pseudo" RAID offered in motherboard chipsets is a lame attempt at "claiming" RAID support, when the result is really software RAID, and often an inferior implementation at that.) /rant off
So, I would be *very* interested in benchmarks using this controller that cover the following configurations:
RAID-0 (striping, no parity)
RAID-5 (striping with parity)
both while monitoring CPU utilization during the tests.
Oh, and while 'hdparm' is nice and quick, something like 'dbench' yields (IMO) more interesting results.
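Something along these lines would do, I think (client count and device names are guesses, not from the review):

    hdparm -tT /dev/md0   # quick cached/buffered sequential read numbers
    cd /mnt/array
    dbench 10             # 10 simulated clients, reports throughput in MB/sec
    # keep 'vmstat 1' running in another terminal to track CPU utilization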