If it wasn't obvious enough: by zfs send/receive backups I of course mean sending those backups to another (preferably offsite) machine.
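A minimal sketch of what that workflow looks like. The pool/dataset names (`tank/data`, `backup/data`) and the host `backuphost` are placeholders, not anything from this thread:

```shell
# Take a snapshot and send the full stream to a remote machine over ssh.
zfs snapshot tank/data@backup-2024-01-01
zfs send tank/data@backup-2024-01-01 | ssh backuphost zfs receive backup/data

# Later, send only the changes since the previous snapshot (incremental).
zfs snapshot tank/data@backup-2024-01-02
zfs send -i tank/data@backup-2024-01-01 tank/data@backup-2024-01-02 \
    | ssh backuphost zfs receive backup/data
```

Because the incremental stream contains only the blocks that changed between the two snapshots, the offsite copy stays current without resending the whole dataset each time.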
Originally Posted by drag
Microsoft has been talking about new and awesome file systems for their 'next' Windows since 1996.
The last time was for 'Longhorn' - now known as Vista.
Yeah, if you take advantage of that to do your backups, then that is perfectly and 100% acceptable. Much better than the old-fashioned 'dump' command for backing up your file system to tape or whatever.
Originally Posted by Goderic
preferably offsite and onsite, of course.
not unless you like data loss and root reinstalls
Originally Posted by kobblestown
Speaking of btrfs (or other filesystems) maturing, I'm thinking perhaps we aren't approaching this sort of stuff the best way. Filesystems are supposed to exhibit a certain degree of reliability, but it isn't clear to me how the current development methods ensure or even assess that.
Given the costs and risks associated with filesystem corruption, along with the lengthy process of ironing bugs out, maybe diving right into implementing an in-kernel filesystem isn't really useful. Performance is secondary to correctness, especially in the early development stages when you can afford to ignore certain aspects of the former.
What I'm saying is that perhaps certain formal verification techniques are cost-effective in this scenario and may allow us to actually say something about reliability (unlike the test of time, as is usually done). For example, we could start by implementing a high-level specification in a theorem prover and proving conceptual correctness, then progressively refine that into a FUSE-based or even in-kernel implementation. Usually that's too time-consuming (but so is waiting out the bugs), and it makes even minor changes enough of a pain that many projects won't consider it. But I think it could be worthwhile in this case, given that the core structure and algorithms can be designed in early on and don't need to change as much.
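As a toy illustration of what that "high-level specification in a theorem prover" step could look like, here is a minimal sketch in Lean 4 (all names and the model are made up for the example, not from any real filesystem project): model the filesystem abstractly as a map from paths to contents, and prove a read-after-write property at that level before any refinement toward an implementation.

```
-- Toy model: a filesystem is a partial map from paths to file contents.
def FS := String → Option (List UInt8)

-- Writing installs new contents at one path, leaving others untouched.
def write (fs : FS) (p : String) (d : List UInt8) : FS :=
  fun q => if q = p then some d else fs q

def read (fs : FS) (p : String) : Option (List UInt8) :=
  fs p

-- Spec-level correctness: reading a just-written path yields the data.
theorem read_after_write (fs : FS) (p : String) (d : List UInt8) :
    read (write fs p d) p = some d := by
  simp [read, write]
```

Real projects in this style (specification plus machine-checked refinement down to code) of course deal with far richer models, including crashes and reordering, but the shape of the argument starts out this simple.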
So at least a certain class of bugs, mainly design errors, can be ruled out with a good degree of certainty. The question is whether we can reasonably extend that to a C implementation and partially model certain things we need about the kernel (e.g. threading, synchronization) without ending up with a proof that's fragile with respect to API changes, since ideally most of this translation should be machine-checked. I'm hoping somebody figures out a compromise or a sane way to apply this process in an existing, large codebase such as Linux, even if only in certain areas.