Kano, why would AHCI mode be needed? My understanding is that it's only for cases where a native driver doesn't exist, and that it provides similar speed to a native driver. At least on my mobo it has the huge con that enabling it adds a ~10 second delay to BIOS boot, and since there's a native SATA driver, I've kept AHCI off.
I agree, currently the only way to get the most out of ATi cards is getting a recent Windows... :/ (could as well blame the whole community for not pulling together a unified future-proof video accel standard; Microsoft's DXVA seems to have gotten much more planning than the Linux equivalents)
Originally Posted by Jeff8086
But yeah, will very likely change (with Gallium3D which'll give not only OpenGL but also VDPAU) but not in very near future.
AHCI is more than a fallback for when a "native driver does not exist". It is a standardized method for detecting and configuring SATA devices, and it also enables hot-plugging and NCQ, both of which may be desirable in a HTPC (swapping of HDs, and recording multiple streams at once).
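As a quick sanity check on a running Linux box, sysfs exposes whether NCQ is actually active: a queue depth above 1 means the drive is queuing commands. The paths below are standard sysfs; the output obviously depends on your hardware.

```shell
# (the kernel log also shows whether the ahci driver bound: dmesg | grep -i ahci)
ncq_drives=0
for qd in /sys/block/sd*/device/queue_depth; do
    [ -e "$qd" ] || continue          # glob may not match on non-SATA boxes
    depth=$(cat "$qd")
    echo "$qd: $depth"
    [ "$depth" -gt 1 ] && ncq_drives=$((ncq_drives + 1))
done
echo "drives with NCQ enabled: $ncq_drives"
```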
Originally Posted by curaga
My front end is probably going to start off as an Asrock ION machine.
It'll decode anything and I have no HD content yet
question for deanjo: I've just got that Biostar board I was talking about, and I have three SATA drives attached and an IDE DVD-RW.
It has many "modes", including Native IDE and AHCI.
I was always under the impression that AHCI is the best to use and highest performance.
(Although SATA channels 1-4 disappear from the BIOS when using AHCI and two of the HDDs appear as IDE drives, the drive on SATA5 appears as a SATA drive.)
Using Linux soft RAID5 on three 500GB WD SATA drives, bonnie++ numbers were all <=60MB/s.
I know RAID5 is not a performance option, but I like the balance it gives you. However, it seems a bit slow.
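For reference, a bonnie++ run against a soft-RAID array might look like this; the mount point and user are placeholders (not from the thread), so the command is only echoed here rather than executed:

```shell
# /mnt/raid5 and the user are assumptions -- adjust to your setup.
cmd="bonnie++ -d /mnt/raid5 -u nobody -s 4096"
# -d: directory on the array to test in
# -u: unprivileged user to run as (bonnie++ won't run as root without it)
# -s: total file size in MB; use ~2x RAM so the page cache can't mask disk speed
echo "would run: $cmd"
```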
Anyway, completely OT but it's my T to go O from.
<=60MB/s looks about right for your drives on soft RAID5 (expect a 10-20MB/s improvement on JBOD in the best case).
What are you going to do on a HTPC that requires >60MB/s anyway?
Running Linux softraid 5, your numbers look about normal, although in my experience with RAID5 on nForce motherboards, the best performance comes not from Linux's softraid but from using the motherboard's fake-RAID setup with dmraid. Write performance for Linux's softraid 5 is pretty dismal. Using the exact same setup options on both, dmraid quite literally slaughtered the software RAID in performance: the same three drives with software RAID would give about ~65MB/s sustained reads and ~30MB/s sustained writes, and switching to the BIOS RAID and dmraid, those numbers would jump to ~105MB/s sustained reads and ~90MB/s sustained writes.
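For anyone wanting to reproduce the comparison, the two setups look roughly like this. Device names are placeholders, and `mdadm --create` wipes the listed partitions, so the commands are only echoed here rather than run:

```shell
# (a) Linux software RAID5, built in the kernel with mdadm
#     (destructive -- wipes the listed partitions; device names are examples):
mdadm_cmd="mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1"

# (b) nForce fake RAID: define the array in the BIOS setup, then let dmraid
#     read that metadata and map the set through device-mapper:
dmraid_scan="dmraid -s"   # list RAID sets described by the BIOS metadata
dmraid_up="dmraid -ay"    # activate all sets as /dev/mapper/* devices

echo "softraid: $mdadm_cmd"
echo "fakeraid: $dmraid_scan && $dmraid_up"
```

Either way, you benchmark the resulting block device identically, which is what makes the read/write numbers above comparable.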
Originally Posted by RoboJ1M
Apparently there are lots and lots of things to tune in RAID5.
blockdev --getra and --setra change the read-ahead.
I (think) I'm looking at ~100MB/s with the read-ahead bumped from 256 to 16384 (not yet sure whether that's KiB, bytes, or MiB; it was just a brief stab before I went to bed).
Apparently this can have a detrimental effect on random reads, although iozone doesn't appear to think so. Not sure yet; lots of numbers hurt my eyes.
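On the units question: blockdev's read-ahead value is in 512-byte sectors, neither KiB nor bytes. The conversion for the numbers above (with /dev/md0 as a placeholder device name):

```shell
# blockdev --getra/--setra work in 512-byte sectors.
# /dev/md0 is a placeholder; the real commands would be:
#   blockdev --getra /dev/md0
#   blockdev --setra 16384 /dev/md0
default_kb=$((256 * 512 / 1024))     # 256 sectors   -> 128 KiB (the usual default)
tuned_kb=$((16384 * 512 / 1024))     # 16384 sectors -> 8192 KiB, i.e. 8 MiB
echo "default: ${default_kb} KiB, tuned: ${tuned_kb} KiB"
```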
Anyway, task one is to learn iozone and how to digest its info barf.
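One possible starting point, using iozone's standard flags (the output filename is just an example); the command is echoed rather than run since the test file sizes are machine-dependent:

```shell
cmd="iozone -a -s 2g -r 64k -R -b results.xls"
# -a: full automatic test mode
# -s: file size (2 GB here; pick something larger than RAM)
# -r: record size
# -R: generate a report; -b: also write it out as a spreadsheet file
echo "would run: $cmd"
```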