@cynyr: The hardware does support video decoding in hardware in most cases; it's just a question of driver and API support. So 1080p shouldn't be a problem (as long as we get drivers running). They're perfect for sticking to the back of your TV as MythTV clients or other media-player/HTPC boxes: they use very little power, can be hidden behind the TV, and have HDMI output.
And you're right: for people who just use their PCs for surfing and office stuff on Linux, these devices are enough.
Of course, since they have no (or few) SATA ports, these kinds of devices don't do well as NAS boxes or servers. My NAS can't be a weak-CPU device: my data is completely encrypted, and en-/decrypting with AES at 90 MB/s (or more) needs some processing power.
What worries me more, though, is the desktop experience. In a video from two months ago, Eben demonstrated the LXDE desktop on the Raspberry Pi, but it was running unaccelerated X. The experience is suboptimal, to put it mildly. I imagine the built-in graphics accelerator doesn't have a 2D graphics engine and relies on the 3D engine for all drawing, so the 3D engine has to be employed for 2D acceleration. I don't know whether X can be made to do that; maybe we'll need Wayland for that. In any case, it's not at all clear what the 3D situation is either. Will the drivers be open source, for instance? The same goes for the video-decoding acceleration.
Don't get me wrong - I think the Raspberry Pi is a wonderful machine and I plan to get at least two, but I'll use them without a display. Yet for its intended purpose the desktop experience is very important, and information on that is suspiciously missing. Or I may be wrong about that, in which case please provide links. But given the record so far, I'm afraid the drivers will not be open source, which is very, very bad (reference: Intel Poulsbo).
It does appear from the benchmark results that none of the benchmarks are benefiting from NEON. That would give a significant advantage, even at lower clock speeds, for algorithms that can be vectorized. (I'm also not sure to what extent the x86 benchmarks benefit from MMX/SSE, so I'm not quite sure whether, for an apples-to-apples comparison, NEON should be utilized.)
Quote: "HTPC, not enough grunt to get 1080P or even 720P done."
Virtually ALL ARM chips have dedicated video decoder hardware. These, in fact, would make ***EXCELLENT*** HTPCs for precisely this reason. You don't need a massive CPU for video decoding.
Quote: "No flash so not Hulu, or youtube. Running full linux, so no netflix. Desktop for grandma/kids, no flash as it is arm + linux based, no way to play those cheap games from the bin at Best Buy."
You don't have to use a "full linux": Adobe has an Android/ARM version available, Google has a YouTube application, and Netflix has an Android version.
Quote: "Does the arm world have something like PCI-E or even PCI? How about a way to get more than 4 SATA ports and dual GbE LAN that isn't via USB 2.0?"
I don't think that there is any chance of this kind of device running up against server hardware any time soon. For now that will remain dominated by x86_64.
Quote: "Anyways, could someone explain to me what I would do with a Trim-Slice/Pandaboard ES? Looks fun to play with and maybe it would work nicely for a kitchen/embedded computer, but I'm just not getting it."
HTPC is probably the main use for now, as well as low-power "light use" desktop systems.
I looked a bit at the internals of these benchmarks, trying to figure out why the results are so strange and unrealistic, and it looks like a lot of fixes are badly needed. Some comments are in the Pandaboard ES thread: http://phoronix.com/forums/showthrea...096#post246096