For lots of little writes, yeah -- but also remember that there are fewer blocks in the file to begin with because it's compressed. The worst possible situation would be frequently changing files that don't compress well (or at all)... like if you were editing MP3s on your flash drive.
Originally Posted by curaga
It's actually more suitable for files that are frequently read but infrequently modified. Usually stuff in /usr of an operating system -- so if you're booting an OS off of a flash drive, you probably want to enable transparent compression for /, but not for /home. That way your boot time will be drastically reduced, because you read less data from the slow flash cells and pass less data through the slow USB bus. But in /home you'll have lots of tiny files constantly being modified: GNOME config files, browser cache, and so on. For those, you're right that any change could affect more blocks than necessary and require a lot of rewrites... but I thought there were compression schemes that are "stable" in the sense that changing one block within the file doesn't ripple out and change the entire file? I remember reading that about maybe LZO or the Snappy codec somewhere... but I doubt LZMA is stable, because its focus is on compression ratio first.
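The "stable" property described above basically comes from compressing each fixed-size block independently instead of as one continuous stream: rewriting one block only recompresses that block, and the rest of the file's compressed data is untouched. A minimal Python sketch of the idea (the block size and function names are just for illustration, not any real filesystem's API):

```python
import zlib

BLOCK_SIZE = 4096  # hypothetical block size; real filesystems pick their own

def compress_blocks(data: bytes) -> list:
    """Compress each fixed-size block independently, stream-style ripple avoided."""
    return [zlib.compress(data[i:i + BLOCK_SIZE])
            for i in range(0, len(data), BLOCK_SIZE)]

def update_block(blocks: list, index: int, new_plain: bytes) -> None:
    """Rewriting one block only touches that block's compressed form."""
    blocks[index] = zlib.compress(new_plain)

def decompress_blocks(blocks: list) -> bytes:
    """Reassemble the file by decompressing each block in order."""
    return b"".join(zlib.decompress(b) for b in blocks)
```

The trade-off is that each block is compressed with a fresh dictionary, so the overall ratio is worse than stream compression -- which is why ratio-first formats like LZMA don't work this way.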
Wouldn't it be better to implement an IFS that extends FAT32 to allow files larger than 4GB without changing the filesystem format? It would work similarly to how umsdos works. Call it SuperFAT or something. Any file larger than 4GB would be split into multiple sub-4GB files stored in a SUPER.FAT directory in the root of the filesystem, along with the information the IFS needs to present the split file as one large file in the filesystem hierarchy. You would even be able to retrieve the data without a SuperFAT IFS: all you would have to do is recombine the split parts onto another filesystem that supports files larger than 4GB. Simple, and it has some level of backward compatibility. I'd write up an IFS myself, but I'm not much of a C programmer.