
Thread: Tuxera Claims NTFS Is The Fastest File-System For Linux

  1. #111
    Join Date
    Nov 2008
    Posts
    418

    Default

    "Regarding ext3, it does not really protect your data well."
    Quote Originally Posted by crazycheese View Post
It does, with full journaling; and in the case of ext4, with barriers on, it does. At least at the logical level, down to the physical layer. For the physical layer there is SMART paired with backups.
Maybe you did not read the PhD thesis, but I did. There is actually a lot of research on silent data corruption on ext3 and also on hardware RAID.

    For instance, physics centre CERN did a study on this, and found out that many of their hardware raid Linux storage servers, showed silently corrupted data:
    http://www.zdnet.com/blog/storage/da...n-you-know/191

What CERN did was this: they wrote a special bit pattern to their hardware RAID storage servers, again and again. After three weeks, they checked the entire disk and saw that the bit pattern was no longer correct. Some 1s had become 0s, and vice versa. The Linux servers did not even know this, and reported no errors. This is called silent corruption: neither the hardware nor the OS is aware that some bits have been flipped at random. They believe everything is correct, but it is not.
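CERN's scrubbing test can be sketched roughly like this (a minimal illustration of the idea, not CERN's actual tool; the file name, pattern, and sizes are made up):

```python
import os

PATTERN = bytes([0xAA, 0x55]) * 2048   # one 4 KiB block of a known bit pattern
BLOCKS = 256                           # write/verify 1 MiB in total

def write_pattern(path):
    """Fill a test file with the known pattern and flush it to disk."""
    with open(path, "wb") as f:
        for _ in range(BLOCKS):
            f.write(PATTERN)
        f.flush()
        os.fsync(f.fileno())

def count_flipped_bits(path):
    """Re-read the file and count bits that differ from the pattern."""
    flipped = 0
    with open(path, "rb") as f:
        for _ in range(BLOCKS):
            block = f.read(len(PATTERN))
            for got, want in zip(block, PATTERN):
                flipped += bin(got ^ want).count("1")
    return flipped

write_pattern("scrub.bin")
print(count_flipped_bits("scrub.bin"))
```

Run repeatedly over weeks across a large fleet, a nonzero count from such a scrub is exactly the "silent corruption" described above: no I/O error was ever reported, yet the bits changed.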






    "SMART does not help."
    Quote Originally Posted by crazycheese View Post
I wonder why they would build it in? Self-Monitoring and Analysis. I only need Reallocated Sector Count and Spindle Start Retries to cover it all.
Of course, the FS "could" additionally CRC the data, but that's what happens if you add an encryption system on top.
    Let me ask you two questions:
A) Have you heard about ECC RAM? What is it for? Why do servers use ECC RAM? Do you know why?

B) Have you ever read the specification sheet of a standard enterprise SAS disk? For instance a Cheetah 15,000 rpm disk:
    http://www.seagate.com/docs/pdf/data...etah_15k_7.pdf
    Read the part about "non recoverable errors"

  2. #112
    Join Date
    Apr 2010
    Posts
    1,946

    Default

    Quote Originally Posted by kebabbert View Post
    "Regarding ext3, it does not really protect your data well."

Maybe you did not read the PhD thesis, but I did. There is actually a lot of research on silent data corruption on ext3 and also on hardware RAID.
    WHERE????!

    Quote Originally Posted by www.zdnet.com/blog/storage/data-corruption-is-worse-than-you-know/191 View Post
* Disk errors. They wrote a special 2 GB file to more than 3,000 nodes every 2 hours and read it back checking for errors for 5 weeks. They found 500 errors on 100 nodes.
o Single-bit errors. 10% of disk errors.*1*
o Sector (512 bytes) sized errors. 10% of disk errors.*2*
o 64 KB regions. 80% of disk errors. This one turned out to be a bug in WD disk firmware interacting with 3Ware controller cards *3*, which CERN fixed by updating the firmware in 3,000 drives.
* RAID errors *4* They ran the verify command on 492 RAID systems each week for 4 weeks. The RAID controllers were spec'd at a bit error rate of 1 in 10^14 bits read/written. The good news is that the observed BER was only about a third of the spec'd rate. The bad news is that in reading/writing 2.4 petabytes of data there were some 300 errors.
* Memory errors *5*. Good news: only 3 double-bit errors in 3 months on 1,300 nodes. Bad news: according to the spec there shouldn't have been any. Only double-bit errors can't be corrected.
*1* Platter surface demagnetization errors! SMART detects this.
*2* Firmware errors! Contact or sue the hardware vendor!
*3* Firmware errors! Same!
*4* RAID hardware logic & transfer errors! Same, but for the RAID card/controller/cables!
*5* RAM bit flips due to high density! Use ECC RAM, position RAM correctly - follow the motherboard manufacturer's recommendations, enclose hardware in properly grounded cages!

    Where is LINUX EXTx CORRUPTING YOUR DATA HERE?

Is a filesystem DESIGNED to withstand all those errors? Hell, NO.
It is like blaming Joe from Los Angeles for the Fukushima crisis! He is American, and Americans delivered parts to Japan, so he is responsible for the nuclear meltdown! He is NOT.
What is Joe responsible for? Supporting his family and doing it well! There is no point in giving every single Joe a nuclear physicist's education to control the reactor either!

Projected onto this "analysis": the file system should only do what a filesystem should do - and do it well.

Detect file corruption - ext does data block and journal checksumming. NTFS? I know of a way only via hacks.
Prevent fragmentation - ext does this, designed with this as a priority. NTFS does not do it, hence the "speedups".
Correctly support operating system security requirements - ext does this.
Support file requirements (date/time, names, reservations) - ext does this, and efficiently, unlike NTFS with its MFT growing past 12-50% of partition size, without a sane mechanism to change it.
Maintain consistency over power-downs/cuts - ext does this and can do full data journaling, whereas NTFS journals only metadata.
Bad blocks - not the filesystem's job anymore; that mattered only in the times of floppy disks. Nevertheless, NTFS tries to apply this in the 21st century.
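The checksumming idea in that list can be sketched as follows - an illustration of the general principle (store a CRC next to each block on write, verify it on read, so corruption is at least detected), not ext4's actual on-disk metadata checksum format:

```python
import zlib

def store_block(data: bytes) -> bytes:
    """Prepend a CRC32 to the block, as a data/journal checksum would."""
    crc = zlib.crc32(data)
    return crc.to_bytes(4, "big") + data

def load_block(stored: bytes) -> bytes:
    """Verify the CRC before returning the data; raise on mismatch."""
    crc = int.from_bytes(stored[:4], "big")
    data = stored[4:]
    if zlib.crc32(data) != crc:
        raise IOError("checksum mismatch: silent corruption detected")
    return data

blk = store_block(b"hello world")
assert load_block(blk) == b"hello world"

# Flip one bit in the stored data, as a cosmic ray or bad cable might:
corrupted = bytearray(blk)
corrupted[7] ^= 0x01
try:
    load_block(bytes(corrupted))
except IOError as e:
    print(e)
```

Note this only *detects* corruption; to *repair* it, a filesystem additionally needs a redundant copy or parity to read from.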

And ext is open source! Which means it runs everywhere and has no licensing payments - which means NTFS is GARBAGE.
NTFS is used ONLY and ONLY for legacy reasons. THE WHOLE OF MICROSOFT is built around LEGACY REASONS.
They flood and occupy the market by price dumping, set their own standards, and then they pretty much control EVERYONE.
    THANK YOU CERN, FOR NOT USING MICROCRAP!

    Quote Originally Posted by kebabbert View Post
    For instance, physics centre CERN did a study on this, and found out that many of their hardware raid Linux storage servers, showed silently corrupted data:
    http://www.zdnet.com/blog/storage/da...n-you-know/191

What CERN did was this: they wrote a special bit pattern to their hardware RAID storage servers, again and again. After three weeks, they checked the entire disk and saw that the bit pattern was no longer correct. Some 1s had become 0s, and vice versa. The Linux servers did not even know this, and reported no errors. This is called silent corruption: neither the hardware nor the OS is aware that some bits have been flipped at random. They believe everything is correct, but it is not.
Yes, AND? Is a Linux server supposed to correct hardware failures? Linux does not feature artificial intelligence. Yet.
The guys threw HUGE testing at HUGE-capacity arrays. Of course errors would show up, but of those, none were of Linux or ext origin. Or do you have something else to tell?



    Quote Originally Posted by kebabbert View Post
    "SMART does not help."
Of course it does. It reports when the first physical sector is remapped or when the drive motor starts showing age. Sufficient for a desktop or workstation: you replace the drive with a new one.

    Quote Originally Posted by kebabbert View Post
    Let me ask you two questions:
    A) have you heard about ECC RAM? What is it for? Why does servers use ECC RAM? Do you know why?

    B) Have you ever read the specification sheet of a standard SAS enterprise disk? For instance Cheetah 15000rpm disk:
    http://www.seagate.com/docs/pdf/data...etah_15k_7.pdf
    Read the part about "non recoverable errors"
No, my head is only good for eating with.
Of course I know; it happens that ECC is only available and built for server mainboards, although unofficially some Asus boards seem to support it. Recently, using ECC has started to make sense, with high-density memory modules going to 4 GB and up.
But it is the manufacturer's job to make sure a component does not break within its designed usage scenario.

SATA has many SAS functions in it and is sufficient for desktop usage. SAS is too complex and targets an operating environment not normally seen on a desktop: 24/7 massively parallel data exchange with very limited error-correction time, multi-disk and hot-swap support. For example, you do not do SAS with 1000x 1 GB drives at home; you buy one 1 TB drive instead.

The Cheetah is a good drive. But too slow versus an SSD, too noisy and unreliable versus a normal 7,200 rpm drive. The "non-recoverable errors" figure is a statistical mean; many vendors publish it, I guess it is a legal requirement.
    Last edited by crazycheese; 06-27-2011 at 03:15 PM.

  3. #113

    Default

    Quote Originally Posted by kebabbert View Post
In other words, I would trust neither NTFS nor ext4. There is no research on ext4 that I know of, but that does not prove that ext4 is safe.
    It seems it's not the case in Linux:

    http://blogs.oracle.com/linux/entry/...ption_in_linux

    and this:

    http://www.betanews.com/article/Orac...ion/1228243294

    The 2.6.27 Linux kernel got bolstered today by "block I/O data integrity infrastructure" code which is seen by Oracle, the code's contributor, as a first for any operating system.
    So it seems it was resolved in Linux even before ZFS.
    Last edited by kraftman; 06-27-2011 at 04:52 PM.

  4. #114
    Join Date
    Jul 2008
    Location
    Berlin, Germany
    Posts
    858

    Default

    Quote Originally Posted by kraftman View Post
    So it seems it was resolved in Linux even before ZFS.
    So the "block I/O data integrity infrastructure" automagically resolves all data corruption issues? I think not.

    Quote Originally Posted by crazycheese View Post
    *1* Platter surface demagnetization errors! SMART detect this.
    Of course I know, happens ECC is only available and built for server mainboards, although unofficially some asus boards seems to support it.
    But its manufacturer job to make sure component does not break within its designed usage scenario.
    SMART does not tell you if the sector which you just read is correct or corrupt.
    ECC memory is supported on all AMD CPUs since socket 754 days, and only recently AMD started to screw consumers by dropping it from their Fusion parts. I think the majority of AM2/AM3/+ mobos support it too.
    If you read the CERN article, you will notice that apart from the memory/firmware problem, all components worked within their specified error rates.

  5. #115

    Default

    Quote Originally Posted by chithanh View Post
    So the "block I/O data integrity infrastructure" automagically resolves all data corruption issues? I think not.
It's about silent data corruption.

  6. #116
    Join Date
    Nov 2008
    Posts
    418

    Default

    Quote Originally Posted by kraftman View Post
    It seems it's not the case in Linux:
No, I said something like "I don't know of any RESEARCH on ext4, but lack of research does not prove that ext4 is safe".

You show some Oracle engineers talking about data corruption. You don't show any research. Of course, the developers behind ReiserFS, NTFS, ext3 etc. are also engineers, and they also tried to make ReiserFS, ext3 and NTFS safe. But they failed, according to a PhD thesis. You show similar links: some Oracle engineers saying that they tried to make a filesystem safe. But maybe they also failed?

Again: I don't know of any research on ext4 - but lack of research does not prove ext4 is safe. You need to provide research that shows that ext4 can handle silent corruption. You show some talk of Oracle engineers saying they want to make Linux safe. Just as Reiser said. But he failed; ReiserFS is not safe, according to the PhD thesis.



I agree this looks good for Linux. But I would like to see research on this: have the engineers succeeded, or did they fail? Until I see research (maybe this solution is really bad? Or is it better than ZFS?), I would use either this Oracle solution or ZFS. I would avoid everything else if my data were important.



    Quote Originally Posted by kraftman View Post
    So it seems it was resolved in Linux even before ZFS.
ZFS is much older than this. ZFS was officially announced in 2004 but had been in development for several years before that.

One of your Linux links is from last year, 2010. The other is from almost 2009 (December 2008). Thus, almost half a decade after Sun talked about ZFS and silent corruption, everyone else today is aware of silent corruption and tries to develop solutions to protect against it. But are their solutions as good as ZFS?

There is recent research on ZFS and silent corruption, showing that ZFS protects against all the different silent-corruption scenarios the research team tried to provoke:
    http://www.cs.wisc.edu/wind/Publicat...ion-fast10.pdf
Thus, initial research shows ZFS to be much safer than any other solution, because ZFS caught all artificially injected errors.
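The mechanism that lets ZFS catch injected errors like that is end-to-end checksums combined with redundancy: each block's checksum is stored in its parent, and on a mismatch the block is re-read from a mirror copy and the bad copy rewritten. A toy sketch of that self-healing loop (an illustration of the idea, not ZFS code; all names are invented):

```python
import hashlib

def sha(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def read_with_self_heal(copies, expected):
    """Return the first copy whose checksum matches; rewrite bad copies."""
    good = None
    for c in copies:
        if sha(bytes(c)) == expected:
            good = bytes(c)
            break
    if good is None:
        raise IOError("all copies corrupt: unrecoverable")
    for c in copies:                  # heal any copy that failed the check
        if sha(bytes(c)) != expected:
            c[:] = good
    return good

data = b"important block"
mirror = [bytearray(data), bytearray(data)]
checksum = sha(data)      # kept in the parent block, not next to the data

mirror[0][0] ^= 0xFF      # silently corrupt one half of the mirror
assert read_with_self_heal(mirror, checksum) == data
assert bytes(mirror[0]) == data   # the bad copy was repaired from the good one
```

Keeping the checksum in the parent (rather than next to the block) is what makes the scheme end-to-end: a misdirected or phantom write cannot corrupt both the block and its checksum together.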

I want to see the same kind of research on your Linux links. But there is no such research that I know of. So we have to wait; until then I would use the Linux solution in your links (and hope that it is safe), or I would use ZFS. But it is good that Oracle helps Linux become safer.

  7. #117
    Join Date
    Nov 2008
    Posts
    418

    Default

    Quote Originally Posted by crazycheese View Post
    WHERE????!
I showed you the link that proves that ext3 is not safe. Neither XFS, JFS, ReiserFS nor NTFS is safe. Just read the link I posted for you: in it you can find the PhD thesis that shows that ext3 is not safe.



*1* Platter surface demagnetization errors! SMART detects this.
*2* Firmware errors! Contact or sue the hardware vendor!
*3* Firmware errors! Same!
*4* RAID hardware logic & transfer errors! Same, but for the RAID card/controller/cables!
*5* RAM bit flips due to high density! Use ECC RAM, position RAM correctly - follow the motherboard manufacturer's recommendations, enclose hardware in properly grounded cages!

    Where is LINUX EXTx CORRUPTING YOUR DATA HERE?
You know, there are many more errors than those you listed. The problem is that ext3 does not catch all those errors. For instance *2*: if there are firmware errors, the filesystem should catch this. ext3 does not. The question is: is ext4 safer? We don't know; there is no research on ext4. But it seems that ext4 is safer.



Is a filesystem DESIGNED to withstand all those errors? Hell, NO.
It is like blaming Joe from Los Angeles for the Fukushima crisis! He is American, and Americans delivered parts to Japan, so he is responsible for the nuclear meltdown! He is NOT. What is Joe responsible for? Supporting his family and doing it well! There is no point in giving every single Joe a nuclear physicist's education to control the reactor either!
Yes. ZFS is designed to withstand all those errors, and many more. There is a research team of computer scientists doing research on ZFS. Read the paper in my post above.



    Projected to this "analysis", the file system should only do what filesystem should do - and do it well.
As Jeff Bonwick says: "The job of the file system is to make sure that the data you wrote is intact, and that the data you get from the filesystem is the same and has not been altered. Funny though, most filesystems can not do this." Jeff Bonwick is the lead architect behind ZFS.



Detect file corruption - ext does data block and journal checksumming. NTFS? I know of a way only via hacks.
Prevent fragmentation - ext does this, designed with this as a priority. NTFS does not do it, hence the "speedups".
Correctly support operating system security requirements - ext does this.
Support file requirements (date/time, names, reservations) - ext does this, and efficiently, unlike NTFS with its MFT growing past 12-50% of partition size, without a sane mechanism to change it.
Maintain consistency over power-downs/cuts - ext does this and can do full data journaling, whereas NTFS journals only metadata.
Bad blocks - not the filesystem's job anymore; that mattered only in the times of floppy disks. Nevertheless, NTFS tries to apply this in the 21st century.
But still, research shows that ext3 is not safe. Neither is XFS, nor JFS, nor ReiserFS. So the engineers have not succeeded. Their solution is not safe enough.



The guys threw HUGE testing at HUGE-capacity arrays. Of course errors would show up, but of those, none were of Linux or ext origin. Or do you have something else to tell?
    A safe solution should catch all such errors. ZFS does.



Of course it does. It reports when the first physical sector is remapped or when the drive motor starts showing age. Sufficient for a desktop or workstation: you replace the drive with a new one.
There are cases when SMART is not good enough. For instance, one power supply was bad, and some 1s became 0s and vice versa. No one detected this. Except ZFS. Very quickly, ZFS detected those errors. SMART did not notice.


Of course I know; it happens that ECC is only available and built for server mainboards,
In RAM memory sticks, some 1s might become 0s, and vice versa. There are many reasons: power spikes, cosmic radiation, etc.:
    http://en.wikipedia.org/wiki/Dynamic...ror_correction

The reason we use ECC is that ECC protects against some of these errors. The same errors happen to disk drives. For instance bit rot: after some years a 1 might become a 0, and vice versa. Bugs in firmware, etc. A safe filesystem should catch all these errors and protect your data. Hardware RAID does not protect your data. There is research on that.
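The idea behind ECC can be illustrated with the classic Hamming(7,4) code, which stores 4 data bits with 3 parity bits so that any single flipped bit can be located and corrected. This is a textbook toy for illustration; real ECC DIMMs use wider SECDED codes that also *detect* (but cannot correct) double-bit flips, which is exactly why CERN's double-bit errors above were uncorrectable:

```python
def encode(nibble):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit Hamming codeword."""
    d1, d2, d3, d4 = nibble
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]   # bit positions 1..7

def decode(word):
    """Correct at most one flipped bit, then return the 4 data bits."""
    w = list(word)
    s1 = w[0] ^ w[2] ^ w[4] ^ w[6]        # parity over positions 1,3,5,7
    s2 = w[1] ^ w[2] ^ w[5] ^ w[6]        # parity over positions 2,3,6,7
    s3 = w[3] ^ w[4] ^ w[5] ^ w[6]        # parity over positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3       # position of the flipped bit, or 0
    if syndrome:
        w[syndrome - 1] ^= 1
    return [w[2], w[4], w[5], w[6]]

data = [1, 0, 1, 1]
word = encode(data)
word[4] ^= 1                   # one bit flips, e.g. from a cosmic ray
assert decode(word) == data    # the error is located and corrected
```

The syndrome works because each parity bit covers a distinct subset of positions, so the pattern of failed parity checks spells out the index of the flipped bit in binary.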


    SATA has many SAS functions in it and is sufficient for desktop usage. SAS is too complex and has operating environment not normally seen on desktop, like 24/7 massively parallel data exchange with very limited error correction time, multi-disk and hotswap support. For example, you do not do SAS with 1000x 1Gb drives at home, you buy one 1Tb drive instead.
I am trying to say that even high-end, safe, enterprise SAS disks, which cost a lot, say:
"every 10^16 bits, there will be errors that are not recoverable".
Just read the spec sheet and you will see. Every 10^16 bits, there will be some read/write errors that are neither recoverable nor repairable by the disk. And commodity SATA disks have much higher error rates than high-end enterprise server SAS disks.
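A rough worked example of what such a spec'd unrecoverable bit error rate means in practice (the 2 TB disk size and the two BER figures are illustrative assumptions, not taken from any particular datasheet):

```python
def expected_errors(bytes_read: float, ber: float) -> float:
    """Expected unrecoverable read errors for a given bit error rate."""
    return bytes_read * 8 * ber

TB = 1e12  # one terabyte in bytes

# Enterprise SAS class: 1 unrecoverable error per 10^16 bits read.
print(expected_errors(2 * TB, 1e-16))   # full read of a 2 TB disk: ~0.0016

# Commodity SATA class: often spec'd at 1 per 10^14 bits.
print(expected_errors(2 * TB, 1e-14))   # same read: ~0.16 expected errors
```

So a single full read of a 2 TB commodity disk already carries roughly a 15% chance of at least one unrecoverable error, which is why RAID rebuilds of large arrays are where these spec-sheet numbers start to bite.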

  8. #118
    Join Date
    Mar 2011
    Posts
    113

    Question

There are some Linux fans near a nervous breakdown... [*8
Normal, they use Linux too much... I joke a little, but as shown by the kernel power bug that Phoronix found and got solved, current Linux is far from perfect.
Using things from big corporations like Intel, M$ NTFS, and the drivers they build should be the top priority, at least until Linux is on 80% of all PCs [by now it's 5%]; maybe that will come. Just as there is a choice of distros, there will be a choice of kernels with proprietary patents inside.

  9. #119

    Default

    Quote Originally Posted by kebabbert View Post
    You show some Oracle engineers talking about data corruption. You dont show any research. Of course, developers behind ReiserFS, NTFS, ext3 etc are also engineers, and they also tried to make ReiserFS, ext3 and NTFS safe. But they failed according to a PhD thesis. You show similar links: some Oracle engineers saying that they tried to do a filesystem safe. But maybe they also failed?
If the problem is resolved in the block layer, it should probably make every native Linux file system safe from silent data corruption. If the problem wasn't known before the PhD thesis, it's obvious the file systems had failed in this matter. According to Oracle's blog post, the kernel was patched after the thesis. Without a follow-up thesis, it's a matter of belief and inference.

    Again: I dont know any research on ext4 - but lack of research does not prove ext4 is safe. You need to provide research that shows that ext4 can handle silent corruption. You show some talks about Oracle engineers saying they want to make Linux safe. Just as Reiser said. But he failed, ReiserFS is not safe, according to PhD thesis.
As before: there were some changes aimed at the issue. Check this:

    http://blogs.oracle.com/linux/entry/...ata_corruption

There's a white paper about silent data corruption and Linux.

    I want to see the same kind of research on your Linux links. But there are no research as I know of. So we have to wait, but until then I would use the Linux solution in your links (and hope that the Linux solution is safe), or I would use ZFS. But it is good that Oracle helps Linux to be safer.
    I like this part.

  10. #120

    Default

    Quote Originally Posted by jcgeny View Post
There are some Linux fans near a nervous breakdown... [*8
Normal, they use Linux too much... I joke a little, but as shown by the kernel power bug that Phoronix found and got solved, current Linux is far from perfect.
Using things from big corporations like Intel, M$ NTFS, and the drivers they build should be the top priority, at least until Linux is on 80% of all PCs [by now it's 5%]; maybe that will come. Just as there is a choice of distros, there will be a choice of kernels with proprietary patents inside.
There's a bug mainly because of a messed-up BIOS. NTFS is crap, so I guess nobody will use it on Linux - maybe only when dealing with dual boot, because Windows cannot handle Linux file systems (unless you install some third-party tool). Linux is far from perfect, but Windows is farthest. I can only imagine how many patents winblows violates.
