
Thread: EXT4 File-System Updated For Linux 3.11 Kernel

  1. #1
    Join Date
    Jan 2007
    Posts
    15,437

    Default EXT4 File-System Updated For Linux 3.11 Kernel

    Phoronix: EXT4 File-System Updated For Linux 3.11 Kernel

    Ted Ts'o has already sent in his pull request for EXT4 file-system changes targeting the Linux 3.11 kernel...

    http://www.phoronix.com/vr.php?view=MTQwMTU

  2. #2
    Join Date
    Nov 2011
    Posts
    353

    Default

    I have a personal vendetta against Ts'o. I know he is not the maintainer of badblocks, but he has personally rebuffed hundreds of people who complained that the program is broken and can literally take two days to scan a hard drive. I am now forced to use ddrescue and send its output to /dev/null. What a shame.

  3. #3
    Join Date
    Dec 2008
    Posts
    150

    Default

    badblocks is a fine program, I use it frequently.

    By default, the write-mode test makes four write/read passes, one for each of its test patterns (0xaa, 0x55, 0xff, 0x00). If you don't want that many passes, you can specify fewer test patterns with the -t option.

    With regard to the speed of each pass, I have no difficulty getting it to write and read at the top speed of my HDDs. If you are having trouble with that, you may want to experiment with using 4096 for the block size ( -b ) and increasing the number of blocks tested at a time ( -c ). Also, if you care about speed, never use the non-destructive read-write mode ( -n ) -- always use the write-mode test ( -w ).
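
    For example, a tuned write-mode run might look like this (a sketch based on the options above; the scratch file stands in for a real, unmounted drive, which -w would otherwise wipe):

    ```shell
    # Illustrative only: exercise badblocks' write-mode test against a small
    # scratch file instead of a real drive, so nothing is destroyed.
    # On an actual (unmounted!) drive you would pass e.g. /dev/sdX instead.
    dd if=/dev/zero of=scratch.img bs=1M count=4 status=none

    # -w       destructive write-mode test (four patterns by default)
    # -b 4096  use 4096-byte blocks to match modern sector sizes
    # -c 1024  test 1024 blocks per transfer (the default is 64)
    badblocks -w -b 4096 -c 1024 scratch.img   # prints nothing if no bad blocks

    rm -f scratch.img
    ```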

    BTW, exaggerate much? "personally rebuffed hundreds of people".
    Last edited by jwilliams; 07-03-2013 at 02:37 AM.

  4. #4
    Join Date
    Sep 2007
    Location
    Germany
    Posts
    100

    Default Snapshots

    What happened to snapshots in EXT4?
    IIRC there was some beta/staging code to implement snapshots in EXT4 last year. Where did it go?
    I really would like snapshots in EXT4.

  5. #5
    Join Date
    Nov 2011
    Posts
    353

    Default

    Quote Originally Posted by jwilliams View Post
    badblocks is a fine program, I use it frequently.

    BTW, exaggerate much? "personally rebuffed hundreds of people".
    There is a Debian bug listing with a thousand-plus comments on this issue. They may have fixed it, but it was an absolute abomination on anything bigger than 160GB.

  6. #6
    Join Date
    Oct 2012
    Posts
    148

    Default

    Quote Originally Posted by garegin View Post
    There is a Debian bug listing with a thousand-plus comments on this issue. They may have fixed it, but it was an absolute abomination on anything bigger than 160GB.
    That's interesting; I never had performance problems with badblocks, and I've used it primarily on 1TB+ HDDs, on a few dozen different machines (from netbooks up to enterprise RAID arrays). Sure, the test takes days to run on a 1TB drive, but not because of a problem in badblocks; it is the effect of the widening gap between read/write speed (which goes up linearly) and storage density (which goes up exponentially), plus the fact that badblocks tests the HDD multiple times with different bit patterns (four, to be exact). In other words, I always saw close to the maximum read/write speed of the drive.

  7. #7
    Join Date
    Jan 2013
    Posts
    1

    Default

    Quote Originally Posted by garegin View Post
    I have a personal vendetta against Ts'o. I know he is not the maintainer of badblocks, but he has personally rebuffed hundreds of people who complained that the program is broken and can literally take two days to scan a hard drive. I am now forced to use ddrescue and send its output to /dev/null. What a shame.
    It depends on your disk controller. You might want to tune TLER/CCTL/ERC via smartctl (smartmontools):

    Code:
    smartctl -l scterc,10,10 /dev/disk
    The above will set the read/write retry timeout to 1 second (10 deciseconds). See the manpage of smartctl for details. This option helped me greatly when I was recovering a 1TB drive from (what I assume was) a head crash that damaged a handful of sectors.

  8. #8
    Join Date
    Nov 2011
    Posts
    353

    Default

    I don't think it's that complicated. Other surface scanners work just fine. ddrescue always works at full speed, and it's not even a surface scanner!

  9. #9
    Join Date
    Dec 2008
    Posts
    150

    Default

    Quote Originally Posted by garegin View Post
    I don't think it's that complicated. Other surface scanners work just fine. ddrescue always works at full speed, and it's not even a surface scanner!
    ddrescue does not perform the same job as badblocks. ddrescue cannot write one or more patterns to all blocks on a drive and then read them back to see if the patterns are correct.

    But if you want to only read all the sectors from a drive, skipping over large areas of unreadable sectors, then perhaps ddrescue is a better choice than badblocks, since ddrescue has logic to more quickly get past large groups of bad sectors.
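
    A sketch of using ddrescue as a pure reader, in the spirit of the /dev/null approach mentioned above (the device name, file names, and mapfile name are placeholders of my own):

    ```shell
    # On a real drive you would read from e.g. /dev/sdX and write to
    # /dev/null (with -f, since the output is an existing device file),
    # keeping only the mapfile that records unreadable areas:
    #   ddrescue -f /dev/sdX /dev/null scan.map
    # Demonstrated here on a regular file so it is safe to run:
    printf 'sector data' > source.img
    ddrescue source.img copy.img rescue.map
    cat rescue.map        # the mapfile logs which areas read OK or failed
    rm -f source.img copy.img rescue.map
    ```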

    I use badblocks mostly to test new drives. If a drive has a lot of bad sectors, I do not care about the speed of testing -- I return the drive. badblocks is a tool for testing drives that have few bad sectors. ddrescue is designed to rescue data from drives with a lot of bad sectors. Two different jobs, two different tools.

    With badblocks, you can specify a maximum number of bad blocks ( -e ) before aborting the test. This is useful if you are qualifying drives and your criteria specify failure at a certain number of bad blocks -- no need to continue testing if the drive has already failed.
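
    As a sketch (the threshold of 16 is illustrative, and the scratch file stands in for the drive under test, e.g. /dev/sdX):

    ```shell
    # Abort the destructive write-mode test as soon as 16 bad blocks have
    # been found; a clean exit status means the drive stayed under the limit.
    dd if=/dev/zero of=qualify.img bs=1M count=4 status=none
    badblocks -w -e 16 -b 4096 qualify.img && echo "drive passed"
    rm -f qualify.img
    ```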
