Intel 120GB 530 Series SSD Linux Performance
In my recent articles benchmarking RAID 0/1/5/6/10 on Btrfs/EXT4/XFS/F2FS, I've been using four 120GB Intel 530 Series SSDs. I went with these four solid-state drives after getting a deal on them and having been pleased with the numerous Intel SSDs I've used in the past, some of which are still running in a few Linux test systems. But how does the Intel 530 Series SSD compare on Linux to other modern solid-state drives? If you've been eyeing the SSDSC2BW12 SSDs, here are some fresh single-drive SSD benchmarks using Btrfs, compared against drives from OCZ, Corsair, and Samsung.
Probably silly question
but why use a custom kernel? Default kernel performance would mean more to me. I'm neither new to Linux nor a veteran (especially given my love/hate relationship with Ubuntu), but I'm an average user with enough knowledge to get by and no time to spare between work, school, and my two loving booger eaters. Can we have default-kernel numbers please?
(Not trying to be critical just curious.)
This is a "default" kernel in the sense of a vanilla/mainline kernel. It's not the Ubuntu build but the one from their mainline PPA, used for getting the latest mainline kernel support with any recent file-system improvements.
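For anyone wanting to try the same thing, installing a build from the Ubuntu kernel team's mainline archive is just a matter of grabbing the .deb packages; the filenames below are placeholders, not the exact build used in the article:

```shell
# Hedged sketch: installing a mainline kernel from the Ubuntu kernel team's
# mainline build archive (https://kernel.ubuntu.com/~kernel-ppa/mainline/).
# First download the linux-image and linux-headers .debs for the version and
# architecture you want from that page; the wildcards below are placeholders.
sudo dpkg -i linux-headers-*_all.deb \
            linux-headers-*-generic_*_amd64.deb \
            linux-image-*-generic_*_amd64.deb
sudo reboot
```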
Originally Posted by trethlyn
I would consider the Intel DC S3500 if it weren't so overpriced.
I have the Samsung 840 (500GB) and I couldn't be happier.
Originally Posted by JS987
For a desktop I would need something with capacitors, like the Samsung SM843T, but it costs more than the S3500 and isn't available in my town. The 840 Evo and 850 Pro are available, but don't have capacitors.
Originally Posted by nightmarex
My software is mostly CPU limited, which means an SSD can be a waste of money.
What would be an interesting test is "enterprise" "readiness".
"Enterprise" servers usually have SAS controllers, which usually have a terrible, buggy implementation of SATA. Although SAS controllers *must* support SATA (per the spec), they usually manage to make it somehow incompatible.
So here is what I have learned on "enterprise" servers (Dell Blade M610) with SAS controllers, comparing Intel and OCZ:
*Never* use OCZ on fusion-mpt SAS controllers.
OCZ rocks at speed.
Intel sucks at speed.
OCZ SSDs do not wear down (according to smartctl, which is probably bogus).
Intel SSDs would wear out within 3 years in my production environments, according to smartctl.
OCZ drives would be rejected by the SAS controller after some time, and the SSD needed a power cycle to get back to normal (either by power-cycling the server or by reseating the SSD).
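For what it's worth, wear figures like the ones above come from the drives' SMART attributes; checking them is quick, though the attribute names vary by vendor, so treat the ones below as examples rather than a universal list:

```shell
# Dump all SMART attributes from the drive (requires smartmontools and root).
sudo smartctl -A /dev/sda
# Filter for common vendor-specific wear indicators; names are NOT universal:
# e.g. Media_Wearout_Indicator (Intel), Wear_Leveling_Count (Samsung).
sudo smartctl -A /dev/sda | grep -Ei 'wear|life|lbas_written'
```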
I haven't seen any SSD rejections with the Sun Fire X2550 (AHCI instead of SAS), but Suns usually kill themselves for other reasons. Never buy Sun Intel-based hardware.
On my non-enterprise servers (whatever, Intel AHCI), I use a Samsung 840 and a Samsung 840 EVO in RAID 1, with half of it used for bcache-ing a RAID 1 setup of WD Reds.
No problems at all, but I've never had a real load on my non-enterprise servers.
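A setup along those lines can be sketched with bcache's userspace tools; the device names below are assumptions (adjust to your own md layout), and the cache-set UUID comes from the make-bcache output:

```shell
# Hedged sketch of an SSD-cached RAID 1: /dev/md0 is the WD Red RAID 1
# (backing device), /dev/md1p1 is a partition on the SSD RAID 1 (cache device).
sudo make-bcache -B /dev/md0     # format the backing device
sudo make-bcache -C /dev/md1p1   # format the cache device (prints its set UUID)
# Attach the cache set to the new bcache device, then use /dev/bcache0 as usual:
echo <cache-set-uuid> | sudo tee /sys/block/bcache0/bcache/attach
sudo mkfs.ext4 /dev/bcache0
```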
I would add to that:
Originally Posted by Michael
I would never use a default Debian or Ubuntu kernel, as live-build requires those distro kernels to be patched with aufs. If aufs is not available in the kernel, a release-critical bug appears. I hope the overlayfs merged in 3.18 will end that nonsense.
I am not opposed to small patches to the kernel, but this is a pretty major patch. On top of that, I have witnessed distro kernels actually introduce bugs that are not problems in vanilla.
So I consider an Ubuntu or Debian kernel worthy only for installing on new hardware; I do not consider them enterprise-worthy kernels.
To be short: a default kernel does not exist, only vanilla exists, and you have to compile that yourself.
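For reference, the overlayfs that landed in mainline 3.18 does the union-mount job without out-of-tree patches; a minimal mount looks roughly like this (the paths are placeholders):

```shell
# Hedged sketch of an overlayfs union mount (kernel 3.18+, fs type "overlay"):
# lowerdir is the read-only layer, upperdir receives writes, workdir is
# overlayfs-internal scratch space on the same filesystem as upperdir.
mkdir -p /mnt/lower /mnt/upper /mnt/work /mnt/merged
sudo mount -t overlay overlay \
    -o lowerdir=/mnt/lower,upperdir=/mnt/upper,workdir=/mnt/work /mnt/merged
```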