Thanks for sharing the HDD data. I still think I'll go with one if I make the next round of changes. They're just so quiet, and if I heat-pipe my motherboard in a custom all-in-one case I'll probably want the silence. As for those $10 IDE CF disk adapters, I really hope people create RAID-able versions where you plug in, say, four 32GB cards to build your disk. Even with the slow access times, a RAID of those would be fairly nice: 100MB/s to 190MB/s.
Sadly I don't see much good data on CPU (not system) idle power.
has a nice but small chart. Wikipedia has a nice list of total power figures.
I'm estimating that an AMD 5050e would expend ~45W/7W at full/idle.
My old 2.6GHz P4 might be 65W/42W.
Using some performance estimates and actual numbers for part of the server tasks (w/ "sar") I think the CPU (of the class I'm looking at) will see a 4%-6% load. Idle CPU power consumption therefore dominates. For example, if we make the simplifying assumption that the CPU will expend max power 5% of the time and be idle the other 95%, the AMD 5050e would average 8.9W, and the P4 ~43.1W. If the power expended by some hypothetical AMD part was 100W/7W, then the average would be 11.6W. So radically increasing the MAX power makes little difference for my server, since the duty cycle is so low. My goal is to seek out the lowest average power that supplies sufficient performance, so the idle power required is the big issue.
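The duty-cycle math above is simple enough to sketch; the 5% max / 95% idle split is my simplifying assumption, not a measured figure:

```python
# Duty-cycle-weighted average CPU power.
# duty = fraction of time at max power (assumed 5%); rest is idle.
def avg_power(max_w, idle_w, duty=0.05):
    return duty * max_w + (1 - duty) * idle_w

print(avg_power(45, 7))    # AMD 5050e estimate        -> ~8.9 W
print(avg_power(65, 42))   # old P4 estimate           -> ~43.1 W
print(avg_power(100, 7))   # hypothetical 100W/7W part -> ~11.6 W
```

Note how the hypothetical 100W part only adds ~2.7W to the average: with a low duty cycle, the idle term dominates.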
Of course the main goal is cost - with a 5yr estimated server lifespan and my decent cost of power, I expect that 1 watt of average power saving is worth ~$5.50 (minimum) over the life of the server. I can probably save ~34W by using the 5050e in place of an ancient P4, and this alone justifies the price of CPU+mobo+DRAM (ignoring the time cost of money).
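For anyone checking my ~$5.50/watt figure: this works out if the electricity rate is around $0.125/kWh (the rate is my assumption; plug in your own):

```python
# Lifetime cost of one average watt over a 5-year server lifespan.
rate = 0.125                   # assumed $/kWh - substitute your local rate
hours = 5 * 365 * 24           # 5 years -> 43,800 hours
kwh_per_watt = hours / 1000.0  # 1 average watt -> 43.8 kWh over the lifespan
cost_per_watt = kwh_per_watt * rate

print(cost_per_watt)           # ~ $5.48 per average watt
print(34 * cost_per_watt)      # ~ $186 saved by shedding 34 W
```

So the ~34W saving roughly covers a budget CPU+mobo+DRAM combo by itself.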
Well, as I said, there are some, though most distributions keep to x86 and maybe PPC. But of course if you look around you'll find some for the other arches as well. Gentoo especially will probably work everywhere, like NetBSD (which would also be a choice). Gentoo also offers cross-compiling, so you can compile on a big AMD/Intel box for a slower ARM/... embedded system. I think you underestimate the Linux offerings...
But yes, it will be hard to find one that can work as a storage controller. I know of the WD something, just a case with a non-x86 CPU and some storage, with Linux already on it. (But I don't trust WD on hard disks.)
Well, if you can find at least one board with a PCI/PCIe slot, you could buy a controller with lots of IDE or SATA (or mixed?) ports and you should be OK with that. You just need to make sure the Linux kernel driver for the controller will compile/work on your specific arch.
I don't know what kind of throughput you expect from the file server, but 64-128MB could be enough if you don't need a big disk cache.
Mhm, well, but as you said there are some chips around that can do quite a job at computing. From the Chemnitz Linux Days I know that some people were dealing with extremely low-power Linux environments (Prof. Luithardt from Switzerland), and he told me that there are vendors offering complete boards with e.g. an ARM CPU, RAM (slotted or soldered on), network, sometimes even a GPU, RS232 for serial console or via FireWire iirc, and often with an option for CF or SD cards as mass storage. I wanted to get into that topic myself, also with headless stuff that works only via serial console, but alas, since I started working on my chemistry back at university there has been a severe lack of time. And I'm a total starter on headless and/or non-x86 machines.

The problem I have is getting a non-x86 system with sufficient power & mem & peripherals to do the job.
Yep, I saw some. Back in the day it was even just an 80x25 (and other modes) text adapter. But you know that most x86-board vendors have a BIOS POST check for some GPU on ISA, AGP, or PCI/PCIe, and the board will beep and squeak all kinds of errors if there is no GPU card. You would have to look closely or use a coreboot-capable board. Many x86 servers like the IBM X-series have a very low-end video chip on-board because of this. That way you only need an attached display for configuration or diagnostics.
On the HDDs: if you really want solid, stable, rugged drives you will have to invest tons of money in SCSI. Still the best and most robust solution. SCSI was built for 24/7 operation - most IDE/SATA drives are not. I know that Seagate (my preference in HDDs) offers some labeled as 24/7. They publish a lot of specs on their pages, so you might want to check the MTBF etc. Of course these 3.5" drives suck more energy than a 2.5", but the latter are not built for file servers. They're probably meant more for laptops, maybe with a high spin-up/down count.
Thanks for your comments, and especially the power figures, Adarion.
I develop driver/kernel software for embedded systems (Linux & other) of various sorts for a living, so I'm quite familiar with the non-x86 issues. The embedded ARM NAS boards mostly use Marvell chipsets that do not meet my needs. Most of the development boards are shockingly expensive ($3000 for the Freescale MPC8641D, for example), and even though Linux has been ported, it would likely require much driver work to make some off-the-shelf SATA card play with these. There are also more powerful ARM CPUs than the Marvell parts, but these aren't available on any modestly priced boards I'm aware of. Perhaps on a $1500 6U PCI board or a similarly priced VME card. These are not realistic solutions unless some mass-market product contains the features I need.
If you read my initial post you'll see that I am not building a NAS; I'm building a router/server that will maintain a VPN connection and perform DNS forwarding, routing, firewalling, a number of modestly compute-intensive apps, and especially some performance-challenging crypto tasks. These will not fit well in a 128MB DRAM footprint. I think 1GB of DRAM should be sufficient and leave good space for disk and network buffers.
I am not impressed with SCSI reliability claims ... see these ...
The conclusions of the second (Usenix) paper state, "In our data sets, the replacement rates of SATA disks are not worse than the replacement rates of SCSI or FC disks". It was probably once true that SCSI & FC were superior, but today the difference is marginal.
Both papers (in my view) indicate that there are several failure mechanisms not captured in the manufacturers' MTTF calculation. So yes, use the mfgr data, but expect the real failure rate to be higher than this. In the Google paper the range of measured failure rates indicates the best disks have ~500k hour MTTF, the worst (a defective lot) ~67k hr by measurement.
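For reference, the conversion between an annualized failure rate (AFR, as the papers report) and an effective MTTF is just hours-per-year over the failure fraction; the 13% AFR figure is from the paper, the conversion below is mine:

```python
# Convert a measured annualized failure rate (AFR) to an effective MTTF,
# assuming a roughly constant failure rate over the drive's service life.
def afr_to_mttf_hours(afr):
    return 8760 / afr  # 8760 hours per year / fraction failing per year

print(afr_to_mttf_hours(0.13))    # bad lot, 13% AFR -> ~67,000 h
print(afr_to_mttf_hours(0.0175))  # ~1.75% AFR       -> ~500,000 h
```

So the best measured drives (~500k hr) correspond to a bit under 2% AFR, while the bad lot is an order of magnitude worse.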
I've read the data on Seagate's Momentus 7200.4 laptop drive (600k hr MTTF, and low power - mostly under 2.2W while operating). There are a lot of power-thrifty 3.5" drives, like the Seagate 7200.12 1TB, which uses 6.6/5.4/5.0/0.79W during seek/rw/idle/standby (750k hr MTTF). I too prefer Seagates, and they are very good about providing data on their drives.
The 500GB laptop drive costs as much as a good-quality 3.5" 1.5TB drive, so *IF* you need the disk space, then 3x laptop drives will consume about the same power as a decent 1.5TB drive, will cost 3x as much, and together have 1/3rd the reliability of a single laptop drive - which is already inferior to the 3.5" drive. Laptop drives have little or no power advantage on a "per gigabyte" basis.
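The "1/3rd the reliability" claim follows from a simple series model (any one drive failing loses the non-RAID set); the constant-failure-rate assumption is mine:

```python
# Series reliability model: if any one of n drives failing counts as failure,
# the combined MTTF is roughly MTTF/n (assuming independent, constant rates).
laptop_mttf = 600_000          # hours, Momentus 7200.4 spec figure
desktop_mttf = 750_000         # hours, 7200.12 spec figure

print(laptop_mttf / 3)         # 3x laptop drives combined -> ~200,000 h
print(desktop_mttf)            # vs. one 3.5" drive        -> 750,000 h
```

So on a per-gigabyte basis the single 3.5" drive wins on cost, reliability, and roughly ties on power.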
So I can appreciate the use of a laptop drive where the primary goals are low power and a single drive, and cost and reliability aren't design drivers. That's not my situation. Also, I'm not seeking a truly silent system, but I would prefer to offload the cooling to 120mm case fans.
Another approach is the use of a solid state drive. These are currently very pricey, and one 64GB drive will consume ~2W when active, just like a laptop drive. The MTTF is reported at 1500k hours, but the price per gigabyte is currently about 20x that of a 3.5" 1TB rotating drive. These flash-based drives have a limited number of write cycles, and their I/O can be very fast. So they might fill a niche for frequently accessed data - perhaps holding a Linux XIP (execute-in-place) capable root file system mounted mostly read-only to avoid flash wear-out.
Although I object to lower-reliability drives, it must be recognized that even a simple RAID1 improves reliability dramatically - to the "don't care" point even for crummy drives. One of the papers mentions a bad batch of disks with a 13% AFR (~67k hr MTTF). The other paper suggests that failure of one disk in certain RAIDs increases the probability of another RAID disk failing by a factor of 39. Even so, a RAID1 of these terrible disks might still have a combined MTTF of 20 million hours - far better than any single commercial drive. Of course RAID1 means approximately double the power.
I have concerns about the current reliability & MTTF of my legacy disks, since some have many hours (years, actually) of wear; mostly 160GB to 300GB SATAs. I'm thinking of using the oldest drives in a RAID (non-0) config, and using the remainder for non-RAID backup mostly kept in standby. That way if I lose a RAID drive (the most likely case, I think) I can rebuild the RAID with a new disk. If a backup drive fails I can reconstruct a current backup from the live copy.
Note there are still single points of failure - your PC might be hit by lightning - so RAID does not replace an offsite backup scheme, but it nearly eliminates the fear of data loss from a single drive failure.
Last edited by stevea; 05-26-2009 at 02:40 AM.
You are right, but OTOH if I have a couple of disks (say RAID1) only spun up daily to perform an archival backup, then I don't care greatly if this is a fair bit slower.
Doesn't solve the fundamental problem - the embedded parts have too few interfaces. I need a minimum of 4x SATA and 2x Ethernet (one GigE, the other at least 100Mbit), and should really have an IDE port too. I don't see this on embedded cards, but it's easy to find a <$100 AMD mobo with all these and more.