Page 3 of 4
Results 21 to 30 of 33

Thread: AMD processors/-based systems power hunger

  1. #21
    Join Date
    Jan 2010
    Location
    Portugal
    Posts
    945

    Default

    I also noticed that you are using DDR3-1600 in the Athlon II system, which should add a few more watts compared to the DDR2 in the Pentium. Other than that, I think you have already covered all the reasons why the Athlon consumes more power.
    BTW, I got my X4 630 stable at 1.23 V. It was stable at 1.22 V, but a few weeks ago I started getting sporadic error messages during boot saying that "overvoltage" had failed, which is probably due to my crappy PSU.

  2. #22
    Join Date
    Nov 2008
    Location
    Germany
    Posts
    5,411

    Default

    Quote Originally Posted by movieman View Post
    No it doesn't, with the possible exception of initial spin-up when booting. My 40GB SSD is actually rated as higher power consumption when writing than my 2TB HDD... of course because it doesn't rotate and have long seek times it spends very little time writing and most of the time idle.
    Maybe you are comparing the best HDD against the cheapest/worst SSD?

    If speed counts, you need a 7200 or 10,000 rpm hard drive to compare within the same speed class.

    In that case the SSD beats the HDD on power consumption, be sure of that.

  3. #23
    Join Date
    Nov 2008
    Location
    Germany
    Posts
    5,411

    Default

    Quote Originally Posted by MU_Engineer View Post
    x264 encoding certainly does use a lot of cores, and so can some file compression programs. Other than those, there aren't too many that you'd run on a desktop that are highly multithreaded. There are a ton of workstation applications like CFD, molecular modeling, 3D rendering, and code compilation that are thread-heavy, but they're not desktop applications. The only game I know of that uses a boatload of CPU cores is Microsoft Flight Simulator. There may be more, but most use one to three "heavy" threads and that's about it.
    I don't care about bad software.

    I care about good software, and the last time I checked the CPU core usage of a raytracing engine, the count went up to 64 threads... in Blender.




    Quote Originally Posted by MU_Engineer View Post
    No. I don't have the game, but I am pretty sure it does not need 12 cores or 64 GB of RAM to run. First of all, the only people that could even run it would be running at a bare minimum an Opteron 6168 with all eight RAM slots filled. That's a $750 chip and 8 GB DIMMs cost $220 a pop. Requiring the user to spend over $3000 on hardware just to play the game is a recipe for nobody buying the game. Secondly, the recommended hardware from the publisher says an A64 4400+ or faster with 2 GB of RAM. That's a far cry from 12 cores and 64 GB.
    I think the real clue is "I don't have the game."
    And you were never a fan of OFP-CTI or ArmA 2 Warfare.
    You have never touched a war game with over 1,600 AIs and 128 human players, with a 10,000 m view distance on a 225 km² map, with the most highly skilled AI in the world.
    "but I am pretty sure it does not need 12 cores or 64 GB of RAM to run."
    Need? Well, it does not need them, but if you want to do what I want to do in the game, you really want that hardware, because you don't want to die in the game.

    In the end, ArmA 2 supports 12 cores and 64 GB of RAM. I don't care about the minimum hardware requirements or the recommended hardware for the single-player missions.

    I only care about the maximum on the CTI/Warfare multiplayer map with 128 players and 1,408 AIs with all settings on max.






    Quote Originally Posted by MU_Engineer View Post
    Why would you do that? The big advantage of Socket G34 systems are their ability to be run in multiprocessor systems and secondly to provide four dies' worth of cores on an EATX/SSI EEB -sized system. A single G34 will be slower than an equivalently-priced dual C32 setup and have no more RAM bandwidth or memory capacity. You can even get dual C32 systems in a standard ATX format (ASUS KCMA-D8), so there's really no reason to go single G34 over dual C32s.
    C32 is the successor to Socket F (1207), my last Opteron system,

    so I really know its weaknesses, and I don't want them again.

    G34 is much better because you save money on the cooling solution: about 50€ per socket compared to a C32 system.

    And a single-socket G34 board is ATX, not EATX, so you can build smaller systems.

    And the G34 Opterons have more speed per watt.



    Quote Originally Posted by MU_Engineer View Post
    Yes, but only an idiot would run that much non-ECC RAM in a system that supports ECC. I guess not having ECC in the RAM would make a desktop user feel right at home, since you can't overclock current Opteron gear and the boards are made to be more reliable than standard desktop gear. Something has to take the place of flaky overclocked CPUs and cheap components causing errors to require frequent reboots, so I guess RAM errors are as good of a reason as any.
    I got zero benefit out of my last ECC system.

    That system also crashed and needed restarts.

    And a non-ECC system works well if you check the RAM from time to time.

    My next system will not have ECC again, be sure of that.



    Quote Originally Posted by MU_Engineer View Post
    Yes, until the game crashes on you because you have a ton of non-ECC RAM in the system and a bit got flipped somewhere, corrupting the game data in that RAM.
    I call you a liar on that point, because my last Opteron crashed even with ECC RAM.

  4. #24
    Join Date
    May 2007
    Location
    Third Rock from the Sun
    Posts
    6,583

    Default

    Quote Originally Posted by Qaridarium View Post
    Opterons are not only used in servers; think about workstations.
    *cough* http://jonpeddie.com/blogs/comments/...m-workstation/ *cough*

  5. #25
    Join Date
    Nov 2008
    Location
    Germany
    Posts
    5,411

    Default

    Quote Originally Posted by deanjo View Post
    Thanks for the link, I really didn't know that.

  6. #26
    Join Date
    Nov 2006
    Location
    /home/$USER
    Posts
    113

    Default

    Quote Originally Posted by Qaridarium View Post
    I don't care about bad software.

    I care about good software, and the last time I checked the CPU core usage of a raytracing engine, the count went up to 64 threads... in Blender.
    And I said that things like 3D rendering (which would include Blender) are more workstation applications than they are desktop applications.


    I think the real clue is "I don't have the game."
    And you were never a fan of OFP-CTI or ArmA 2 Warfare.
    You have never touched a war game with over 1,600 AIs and 128 human players, with a 10,000 m view distance on a 225 km² map, with the most highly skilled AI in the world.
    "but I am pretty sure it does not need 12 cores or 64 GB of RAM to run."
    Need? Well, it does not need them, but if you want to do what I want to do in the game, you really want that hardware, because you don't want to die in the game.

    In the end, ArmA 2 supports 12 cores and 64 GB of RAM. I don't care about the minimum hardware requirements or the recommended hardware for the single-player missions.

    I only care about the maximum on the CTI/Warfare multiplayer map with 128 players and 1,408 AIs with all settings on max.
    The games I do rarely play I play in single player vs. the computer mode, which don't require all that much from a system if you aren't running the absolute newest games or demand to run everything on super-high-ultimate settings. The last online multiplayer game I played was the original Counter-Strike, which ran fine on 1 GHz PIIIs.

    C32 is the successor to Socket F (1207), my last Opteron system,

    so I really know its weaknesses, and I don't want them again.
    And what are the weaknesses other than the old NVIDIA chipset?

    G34 is much better because you save money on the cooling solution: about 50€ per socket compared to a C32 system.
    No need to shell out a bunch of money for C32 heatsinks. You can most likely reuse your Socket F heatsinks on a C32 board, as long as the heatsinks are 3.5" pitch. If they are 4.1" pitch, you can use them on a Socket G34 board. You can also use regular AM2/AM3 desktop heatsinks with C32 systems. Many C32 motherboards include the appropriate mounting brackets; otherwise, find a Socket 754 or 939 mounting bracket on eBay or from a dead board somewhere. (AM2 or AM3 won't work as they have four bolt holes; 754/939 and Socket F/C32 have two bolt holes.)

    And a single-socket G34 board is ATX, not EATX, so you can build smaller systems.
    ASUS's KCMA-D8 dual C32 board is also standard ATX.

    And the G34 Opterons have more speed per watt.
    No, the Opteron 4164 EE should have the most multithreaded performance per watt of the Opteron lineup. It's a 6-core unit at 1.80 GHz with a 35-watt TDP, while the 6164 HE is a 1.70 GHz 12-core with an 85-watt TDP. Two 4164 EEs would have a combined TDP of 70 watts and run 12 cores at 1.80 GHz.
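    As a rough sanity check of the figures above, here is that comparison as arithmetic, treating cores × clock as a crude throughput proxy (an assumption for illustration only; real multithreaded scaling depends on the workload):

```python
# Crude performance-per-watt comparison: cores * GHz as a throughput proxy,
# TDP as the power figure. Clock and TDP numbers are from the post above.
def perf_per_watt(cores, ghz, tdp_watts):
    """Return (cores * GHz) per watt of TDP."""
    return cores * ghz / tdp_watts

# Two Opteron 4164 EEs: 12 cores total at 1.80 GHz, 2 x 35 W = 70 W TDP
dual_4164ee = perf_per_watt(12, 1.80, 70)
# One Opteron 6164 HE: 12 cores at 1.70 GHz, 85 W TDP
single_6164he = perf_per_watt(12, 1.70, 85)

print(f"{dual_4164ee:.3f} vs {single_6164he:.3f}")  # 0.309 vs 0.240
```

    By this crude metric the dual 4164 EE setup comes out roughly 29% ahead per watt.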

    I got zero benefit out of my last ECC system.

    That system also crashed and needed restarts.
    What are you running for an operating system and what kinds of crashes are you talking about? If you're running Windows, that's probably why you are getting crashes and needing to restart all of the time.

    And a non-ECC system works well if you check the RAM from time to time.
    What do you mean by "check the RAM," run Memtest86+ after a reboot? ECC memory is used mostly to detect and correct soft errors that result from bit flipping during RAM operation due to background radiation and such. Cutting the power to the memory during a hard reboot would "fix" the flipped bit and you will see nothing in Memtest86+. The only thing you'll see in Memtest86+ are generally hard errors due to flaky/failing RAM or motherboard. ECC will certainly pick that up too, but you're really looking at two different things there.
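    To illustrate the single-bit correction described above, here is a toy Hamming(7,4) sketch. Real ECC DIMMs use a wider SECDED code (typically 72 bits covering 64 data bits), so this shows the idea, not the actual DRAM implementation:

```python
# Toy Hamming(7,4): 4 data bits protected by 3 parity bits, able to locate
# and fix any single flipped bit -- the same principle ECC memory applies.
def hamming74_encode(d):
    """Encode 4 data bits (list of 0/1) into a 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(codeword):
    """Recompute parity; a nonzero syndrome is the 1-based flipped-bit position."""
    c = list(codeword)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3   # 0 means the codeword is clean
    if pos:
        c[pos - 1] ^= 1          # flip the bad bit back
    return c

word = hamming74_encode([1, 0, 1, 1])
hit = list(word)
hit[4] ^= 1                      # simulate a soft error (one flipped bit)
assert hamming74_correct(hit) == word
```

    A memory controller does the equivalent in hardware on every read: recompute the check bits, and if the syndrome is nonzero, flip the indicated bit back and log a corrected error.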

    I call you a liar on that point, because my last Opteron crashed even with ECC RAM.
    The system stability depends on a lot of things besides RAM. Software and drivers are an obvious culprit, as is the power supply and the noisiness of the power coming from the outlet. You could be running your ECC RAM in Chipkill mode with an 8-hour DRAM scrub, but if you're running Windows Me and powering that system from a $20 cheap Chinese PSU optimistically rated at 300 watts, you're going to be horribly unstable. That's obviously an exaggeration, but you get my point.

  7. #27
    Join Date
    Nov 2008
    Location
    Germany
    Posts
    5,411

    Default

    Quote Originally Posted by deanjo View Post
    Most of the time I read German news, so yes, the German news about that is a little slower: http://www.computerbase.de/news/hard...station-markt/

    Workstation means OpenGL, and OpenGL mostly cannot use more than one thread for feeding graphics to the GPU,
    which means a faster single-threaded CPU wins, which means Intel wins.
    DX11, for example, fixes that: with DX11 you can use more than one thread for feeding graphics to the GPU.

    I don't know the status of OpenGL 4 and multithreaded graphics submission.

    AMD just loses on bad/old software.

    It's just not yet the time for 24/32 cores in a dual-socket workstation system.

  8. #28
    Join Date
    Nov 2008
    Location
    Germany
    Posts
    5,411

    Default

    Quote Originally Posted by MU_Engineer View Post
    And I said that things like 3D rendering (which would include Blender) are more workstation applications than they are desktop applications.
    Really? So I just care about the wrong stuff?

    Quote Originally Posted by MU_Engineer View Post
    The games I do rarely play I play in single player vs. the computer mode, which don't require all that much from a system if you aren't running the absolute newest games or demand to run everything on super-high-ultimate settings. The last online multiplayer game I played was the original Counter-Strike, which ran fine on 1 GHz PIIIs.
    CTI/Warfare was originally a multiplayer map, but over time the AI got good.
    That means you can play CTI in OFP and Warfare in ArmA 2 single-player against the AI, or co-op multiplayer against the AI.
    You don't get the point, because ArmA 2 uses database-backed self-learning AI for the single-player,
    which means you can play single-player with over 1,500 units.

    But for ArmA 2 you need a strong CPU in CTI/Warfare mode even if you play on the lowest settings, because the settings only turn the graphics down, not the AI.


    Quote Originally Posted by MU_Engineer View Post
    And what are the weaknesses other than the old NVIDIA chipset?
    PCIe 1.0, useless SLI, BIOS bugs, a chipset that ran too hot, and multi-socket incompatibility with CPU coolers, because the first CPU blocks the second one's mounting mechanism.

    That means next time I'll buy a single-socket board, and nothing will block my super big fat 1 kg Mugen cooler.


    Quote Originally Posted by MU_Engineer View Post
    No need to shell out a bunch of money for C32 heatsinks. You can most likely reuse your Socket F heatsinks on a C32 board, as long as the heatsinks are 3.5" pitch. If they are 4.1" pitch, you can use them on a Socket G34 board. You can also use regular AM2/AM3 desktop heatsinks with C32 systems. Many C32 motherboards include the appropriate mounting brackets; otherwise, find a Socket 754 or 939 mounting bracket on eBay or from a dead board somewhere. (AM2 or AM3 won't work as they have four bolt holes; 754/939 and Socket F/C32 have two bolt holes.)
    It's funny, because one of my Opteron boards in the past did not come with the "appropriate mounting brackets," and that is not funny if you want a big-block silent cooler in your system. All server coolers are very loud 5,000 rpm coolers... and no one in Germany sells "appropriate mounting brackets."

    One of the boards, a Tyan, died, and I sold the other one (an ASUS) later.

    And hey, all server heatsinks are just BAD BAD BAD BAD! And loud!

    Bad and loud just because they save space, and in my case I have space to spare...

    My Mugen cooler is 160 mm high, and that kind of heatsink can cool an Opteron passively.

    So really, the server heatsinks are so fucked up.


    Quote Originally Posted by MU_Engineer View Post
    ASUS's KCMA-D8 dual C32 board is also standard ATX.
    Wrong??? The German shop sites say EATX, not ATX. And hey, just calculate an example:

    In Germany the cheapest price for that board is 270€.

    The cheapest Supermicro H8SGL-F is 227€.

    That means you save 43€ if you don't buy a C32 dual-socket board!

    An AMD Opteron 4170 at 2.1 GHz costs 170€, and two of them cost 340€.

    Yes, that's 12 cores; scaled to 8 cores that works out to 227€.

    An AMD Opteron 6128 costs 260€, so you win 33€.

    And you save one CPU cooler; a good one costs 50€.

    So in reality you save 140€ if you don't buy a C32 system.


    Quote Originally Posted by MU_Engineer View Post
    No, the Opteron 4164 EE should have the most multithreaded performance per watt of the Opteron lineup. It's a 6-core unit at 1.80 GHz with a 35-watt TDP, while the 6164 HE is a 1.70 GHz 12-core with an 85-watt TDP. Two 4164 EEs would have a combined TDP of 70 watts and run 12 cores at 1.80 GHz.
    Right, but it's not logical, because the 12-core has the same cores in it...

    Maybe those 6-cores are just better-binned dies.


    Quote Originally Posted by MU_Engineer View Post
    What are you running for an operating system and what kinds of crashes are you talking about? If you're running Windows, that's probably why you are getting crashes and needing to restart all of the time.
    I have run Linux for over 6 years now: 3 years with NVIDIA cards and 3 years with ATI cards.

    But yes, I can't recall all of my crashes in detail...

    Quote Originally Posted by MU_Engineer View Post
    What do you mean by "check the RAM," run Memtest86+ after a reboot? ECC memory is used mostly to detect and correct soft errors that result from bit flipping during RAM operation due to background radiation and such. Cutting the power to the memory during a hard reboot would "fix" the flipped bit and you will see nothing in Memtest86+. The only thing you'll see in Memtest86+ are generally hard errors due to flaky/failing RAM or motherboard. ECC will certainly pick that up too, but you're really looking at two different things there.
    Right... but it's my personal feeling that some desktops with non-ECC RAM are just more stable than my ECC PC.


    Quote Originally Posted by MU_Engineer View Post
    The system stability depends on a lot of things besides RAM. Software and drivers are an obvious culprit, as is the power supply and the noisiness of the power coming from the outlet. You could be running your ECC RAM in Chipkill mode with an 8-hour DRAM scrub, but if you're running Windows Me and powering that system from a $20 cheap Chinese PSU optimistically rated at 300 watts, you're going to be horribly unstable. That's obviously an exaggeration, but you get my point.
    I got your point, but you didn't get my point. My point is that the OS/driver and the PSU are more important than the RAM.

    "More important" means I don't have the money to waste on less important stuff.

  9. #29
    Join Date
    Jan 2010
    Location
    Portugal
    Posts
    945

    Default

    Quote Originally Posted by Qaridarium View Post
    Maybe you are comparing the best HDD against the cheapest/worst SSD?
    Where did you get those 20 W figures from? No 7200 rpm desktop hard drive from the last 5 years uses that much power. Typical figures are at most 8-10 W for 3.5" 7200 rpm HDDs. In the world of 2.5" laptop drives, power consumption is already very close to SSDs. Take a look at the Seagate Momentus 5400.6 drives: 0.8 W idle and 2.85 W write power. Something like a Corsair Force SSD has 0.5 W idle and 2 W operating power.
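    To see why those numbers favor the SSD in practice, compare energy (power × time) over a whole task rather than instantaneous draw. A sketch using the Momentus/Corsair figures above, with assumed sustained write rates of 80 MB/s (HDD) and 200 MB/s (SSD); the rates are illustrative, not from the thread:

```python
# Energy over a fixed window: write 1 GB at active power, idle the rest.
def energy_joules(active_w, idle_w, write_mb, rate_mb_s, window_s=60.0):
    """Joules consumed in window_s seconds: busy writing, then idling."""
    busy_s = write_mb / rate_mb_s
    return active_w * busy_s + idle_w * (window_s - busy_s)

hdd = energy_joules(active_w=2.85, idle_w=0.80, write_mb=1024, rate_mb_s=80)
ssd = energy_joules(active_w=2.00, idle_w=0.50, write_mb=1024, rate_mb_s=200)
print(f"HDD {hdd:.1f} J, SSD {ssd:.1f} J")  # HDD 74.2 J, SSD 37.7 J
```

    The faster drive spends less time at its active power and more time idling, so it can win on energy even when the active draws are similar.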

  10. #30
    Join Date
    Nov 2006
    Location
    /home/$USER
    Posts
    113

    Default

    Quote Originally Posted by Qaridarium View Post
    PCIe 1.0, useless SLI, BIOS bugs, a chipset that ran too hot, and multi-socket incompatibility with CPU coolers, because the first CPU blocks the second one's mounting mechanism.

    That means next time I'll buy a single-socket board, and nothing will block my super big fat 1 kg Mugen cooler.

    It's funny, because one of my Opteron boards in the past did not come with the "appropriate mounting brackets," and that is not funny if you want a big-block silent cooler in your system. All server coolers are very loud 5,000 rpm coolers... and no one in Germany sells "appropriate mounting brackets."
    You would still need to check clearances carefully no matter what board you mount that Scythe Mugen on. It's simply an enormous heatsink that the only real reason to get it would be to passively cool the CPUs. There are certainly other heatsinks out there that aren't quite so huge that would work on an Opteron board but are still pretty quiet. You could also look at water cooling as that is quiet, water blocks are small and have few clearance issues, and there are blocks specifically designed to bolt to Socket F/C32 and G34 out there, so you don't need to use the clamp-on heatsink retention brackets.

    You can also look on eBay for retention brackets if your board does not come with one. They cost $4-10 and I'll bet that some of the sellers even ship to Germany.

    And hey, all server heatsinks are just BAD BAD BAD BAD! And loud!

    Bad and loud just because they save space, and in my case I have space to spare...
    Depends on your definition of "loud." If you demand pretty much total and complete silence from your machine (basically an SPL < 20 dB) then yes, they're all loud. All of them will also be louder than your enormous heatsink as well. But most people I've seen with Socket F boards (which would use the same heatsinks as C32) have made some pretty quiet machines out of 92 mm or carefully-selected 120 mm desktop heatsinks. Machines using 2U/3U server heatsinks with PWM-controlled fans 70 mm or larger with their speed controlled by the BIOS are very similar to your typical corporate office PC in noise level.

    My Mugen cooler is 160 mm high, and that kind of heatsink can cool an Opteron passively.

    So really, the server heatsinks are so fucked up.
    You are just trying to use a heatsink that is very far beyond any size and weight specifications of heatsinks designed for that socket. You shouldn't be surprised that you would have trouble getting it to fit. You probably will have trouble mounting that heatsink on 90+% of desktop boards as well.

    Wrong??? The German shop sites say EATX, not ATX. And hey, just calculate an example:
    ASUS says it is a 12" by 10" ATX board on their website. They also do not have the product listed on their German website.

    In Germany the cheapest price for that board is 270€.

    The cheapest Supermicro H8SGL-F is 227€.

    That means you save 43€ if you don't buy a C32 dual-socket board!
    The KCMA-D8 is about $290 over here compared to about $250 for the H8SGL-F.

    An AMD Opteron 4170 at 2.1 GHz costs 170€, and two of them cost 340€.

    Yes, that's 12 cores; scaled to 8 cores that works out to 227€.

    An AMD Opteron 6128 costs 260€, so you win 33€.

    And you save one CPU cooler; a good one costs 50€.

    So in reality you save 140€ if you don't buy a C32 system.
    You can't really just divide the price of a 6-core chip by 2/3 to get a price of a quad-core chip. The closest C32 equivalent to the 6128 would be two Opteron 4122s, which are 2.2 GHz quad-cores. Two of them cost $200, compared to $270 for the 6128. Two 4122s + the KCMA-D8 will run you $490, while a 6128 and an H8SGL will run you $520, so the C32 solution is a little less expensive and a little faster. Yes, it will likely be a wash after you buy heatsinks, but remember that the only heatsinks that will fit on G34 boards are server heatsinks or Koolance's $85 CPU-360 water block. That's it. You can at least use some more reasonably-sized desktop heatsinks on C32 boards that will be quieter than the server heatsinks for G34.
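    The dollar figures above work out as follows (prices and part pairings as stated in the post):

```python
# C32 vs G34 build cost, USD figures from the post above.
c32_board = 290          # ASUS KCMA-D8
c32_cpus = 2 * 100       # two Opteron 4122s at $100 each
g34_board = 250          # Supermicro H8SGL-F
g34_cpu = 270            # one Opteron 6128

c32_total = c32_board + c32_cpus
g34_total = g34_board + g34_cpu
print(c32_total, g34_total)  # 490 520
```

    So the dual-C32 build is $30 cheaper before heatsinks, matching the totals in the post.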


    Right, but it's not logical, because the 12-core has the same cores in it...

    Maybe those 6-cores are just better-binned dies.
    The EE parts do use the "cream of the crop" of the dies, according to an AMD rep that frequents a lot of forums.

    I have run Linux for over 6 years now: 3 years with NVIDIA cards and 3 years with ATI cards.

    But yes, I can't recall all of my crashes in detail...
    I have a similar history and games are the buggiest programs with the highest propensity to lock up Linux systems in my opinion. If they're Windows games being run with WINE, it's even worse. Fortunately most locked-up games or X sessions can be killed with the magic SysRq keys, which dumps you into a text terminal to restart X without rebooting. But they're still pretty awful and apparently you play a lot of Windows games, so I imagine you see pretty frequent glitches and bugs.
