
Thread: Oracle Plans To Bring DTrace To Linux

  1. #41

    Default

    Quote Originally Posted by kebabbert View Post
    Holy shit. This can not be true?? Jesus. That is really a bad design choice from the BTRFS team. I really hope this is not true, because that will make BTRFS much worse than I ever imagined. Are you sure?
    Well, I don't care about PR bull and crappy, biased benchmarks run against very old Linux systems. A system whose binaries are 30% slower even at the highest optimization level simply can't be fast, can it? The reality is that slowlaris isn't interesting to Oracle; they keep it for the old Sun (thankfully dead) customers. As for btrfs being a 64-bit filesystem, that's true, and it is a very good design choice.

  2. #42
    Join Date
    Nov 2008
    Posts
    418

    Default

    Yes, I know that BTRFS will be the default filesystem for Oracle Linux. That is perfectly in order, because Linux is inferior to Solaris, and BTRFS is inferior to ZFS.

    Larry Ellison said that Linux is for the low end, and Solaris is for the high end. I can't find that link now, but if you want, I can google for it. He also said
    http://www.pcworld.com/article/21256..._to_sparc.html
    Ellison now calls Solaris "the leading OS on the planet,"
    It makes sense to use ZFS for high-end Solaris, and BTRFS for low-end Linux. Of course, if Larry really were serious about making Linux as good as Solaris, then he would relicense ZFS so Linux could use it. But Larry is not relicensing ZFS - why is that? ZFS is better than BTRFS, and ZFS is mature. Larry wants to keep Solaris for the high-end features, and Linux for the low end.

    But I am surprised that DTrace is coming to Linux, because DTrace is much better than anything Linux has. Does Larry really want Linux to be as good as Solaris?

    There is a huge technical post by a famous DTrace contributor, where he compares SystemTap to DTrace. Among other things, he says that SystemTap might crash the server, which makes it unusable on production servers, whereas DTrace is safe and can be used in production. He goes on and on, and it seems that SystemTap has missed some of the main points of having a dynamic instrumentation tool like DTrace.
    http://dtrace.org/blogs/brendan/2011...ing-systemtap/

    But if DTrace comes to Linux, that is good for Linux. Here is the main architect of DTrace, trying out Linux DTrace.
    http://dtrace.org/blogs/ahl/2011/10/...is-not-dtrace/

  3. #43
    Join Date
    Nov 2008
    Posts
    418

    Default

    Quote Originally Posted by simcop2387 View Post
    ZFS will always be able to detect data corruption (I believe), but repair will only happen if you are using one of the many RAID-Z configurations. Admittedly though if you're using ZFS and NOT running with RAID-Z then you really shouldn't be managing servers.
    It is not really correct that you need RAID for ZFS to be able to repair data corruption. You can use a single disk and still get repair. You have to specify "copies=2", which stores all data twice on the single disk. This halves the usable capacity of the disk. Someone said that no filesystem except ZFS can repair data on a single disk.
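    For reference, enabling this is a one-liner (the pool/dataset name "tank/data" below is just an example):

```shell
# Keep two copies of every data block, even on a single-disk pool.
# Note: the property only applies to data written after it is set.
zfs set copies=2 tank/data

# Confirm the property
zfs get copies tank/data
```

    These commands obviously need a live ZFS system to run.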


    Comparing ZFS to EXT4 though is rather unfair. EXT4 was as you say never designed for that protection. A more correct comparison would be to compare UFS to EXT4, as they both serve the same purpose. And before you say that you shouldn't ever use UFS, there are times where the overhead of ZFS shouldn't be bothered with, say when you are running many virtual solaris systems with all of their storage already on ZFS. There's no reason to put ZFS inside of ZFS.
    Absolutely, I agree that it is unfair to compare ZFS to ext4. ZFS is modern and detects and repairs data corruption. ext4 has a legacy design and should be compared to the old UFS. In that case, I don't know which is better. I would not be surprised if ext4 is better.

  4. #44
    Join Date
    Nov 2008
    Posts
    418

    Default

    Quote Originally Posted by kraftman View Post
    This is meaningless. ZFS is not safe by design, because in scenarios when there are long pauses between fs checks it can be corrupted. I showed you this once.
    You did? I missed that. Can you please post it again?

    As for long pauses between fs checks: no, that does not matter. Even if you don't do a ZFS fs check for a year (as I have), ZFS still always detects corruption. Detection is an ongoing process that never stops. If ZFS detects corruption, ZFS automatically repairs it (if you have redundancy, like raid or "copies=2") when you request the data.
    https://blogs.oracle.com/elowe/entry...ves_the_day_ta
    "I've been running over a week now with a faulty setup which is still corrupting data on its way to the disk, and have yet to see a problem with my data, since ZFS handily detects and corrects these errors on the fly."

    This guy has a weak power supply, and ZFS detects corruption in his setup almost immediately. The filesystem he used earlier did not detect these problems; he ran for years without anyone telling him that data corruption was occurring all the time. But ZFS notices the slightest data corruption, tells the user, and automatically repairs the corruption.

    But every once in a while you should traverse everything on disk and check it all, yes. Do an fs check.
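    On ZFS, that periodic whole-pool check is called a scrub; something like this (the pool name "tank" is an example):

```shell
# Read every block in the pool and verify it against its checksum;
# blocks that fail are rewritten from redundancy, if any exists
zpool scrub tank

# Show scrub progress and per-device checksum error counters
zpool status -v tank
```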





    Quote Originally Posted by kraftman View Post
    You can only say ZFS is safer by default. There are ways than can make Ext4 nearly completely safe. Those 'computer science researches' didn't prove this, but they showed it has some mechanisms that can help in fighting silent data corruption.
    If you make ext4 "nearly completely safe", then it has to be heavily modified, and it will end up very similar to ZFS's design in its data-detection aspects. And if there are ways to make ext4 nearly completely safe, why don't they do it?

    The computer science researchers did prove that ZFS detected every artificial error they injected into the disks. ZFS also repaired all the errors when there was redundancy (for instance, raid). Other computer science researchers injected artificial errors into common Linux filesystem disks, and the Linux filesystems did not even detect those errors. How can Linux filesystems repair errors they cannot even detect? Detection is the first step.

    To detect errors, ZFS knows about everything, from RAM down to disk - the whole chain. ZFS has control over everything; ZFS is raid, volume manager and filesystem in one piece of software. This gives ZFS the ability to do end-to-end checksums: "Is the data in RAM identical to what was stored on disk?" That comparison requires ZFS to have control over everything.

    Have you played the whispering game as a kid? You whisper a word to one child, who whispers it to the next, and so on. The last child says the word out loud and everyone giggles, because the starting word and the final word are never the same. You always need to compare the start to the end, to be sure the data is identical. ZFS does this: it compares RAM to disk. End-to-end checksums. This is possible because ZFS is one piece of code doing everything.
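    The principle can be sketched in a few lines of Python (a toy model of end-to-end checksumming, not actual ZFS code):

```python
import hashlib

def store(data: bytes) -> tuple[bytes, str]:
    # Checksum computed at the start of the chain, while the data is in "RAM"
    return data, hashlib.sha256(data).hexdigest()

def read(stored: bytes, checksum: str) -> bytes:
    # Verified at the end of the chain: corruption introduced by ANY layer
    # in between shows up as a mismatch here
    if hashlib.sha256(stored).hexdigest() != checksum:
        raise IOError("end-to-end checksum mismatch: data corrupted in transit")
    return stored

blob, csum = store(b"important data")
assert read(blob, csum) == b"important data"  # intact data passes the check

try:
    read(b"importent data", csum)  # one silently flipped byte on the way
except IOError:
    print("corruption detected")
```

    The point is that only the party holding both ends of the chain can make this comparison.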

    And guess what? Linux has several different layers: one raid layer, one filesystem layer, etc. None of these layers knows about the others. There is no one to do end-to-end checksums; no one can compare RAM to disk, because the Linux layers are independent.

    Maybe you heard about Linux kernel developers mocking ZFS for its "rampant violation of layers"? The reason ZFS can do end-to-end checksums, and Linux can not, is exactly that ZFS violates layers! That is the main point of ZFS. And for this, Linux kernel developers, not understanding anything of ZFS, said ZFS was a piece of shit because it violates layers. Well, because ZFS violates layers - and Linux does not - ZFS is superior. Again Linux kernel developers show their ignorance. And Linux kernel developers themselves complain that the Linux code has low quality - maybe there is a correlation between ignorance and low code quality?

    ZFS does great things because it violates layers. The ZFS dev team realized the need to violate layers, and discusses this issue in several interviews and documents. Any attempt to clone ZFS will need to violate layers, too. Just like BTRFS - which also violates layers.





    Question 1) How did you know what computer science researchers say about ZFS? Have you studied the latest research or what? Which research papers have you read?





    Question 2) You have many times said "It is widely known Oracle wants to kill old, crappy, legacy slowlaris". Can you provide links, or are you just trying to start evil rumours and doing FUD about Solaris? (Which you have confessed earlier).





    I have never seen that Larry Ellison wants to kill Solaris. He says it is the best Unix out there.

    On the other hand, IBM has officially said they are going to kill off AIX, IBM's version of Unix. This is not evil rumour nor FUD from me; here is a link quoting IBM executives. Thus, I am not spreading FUD - I speak the truth and always back up my claims. There is substance in my claims. Check my sources yourself:
    http://www.zdnet.co.uk/news/applicat...e-aix-2129537/
    "The day is approaching when Linux is likely to replace IBM's version of Unix, the company's top software executive said, an indication that the upstart operating system's stature is rising within Big Blue....Asked whether IBM's eventual goal is to replace AIX with Linux, Mills responded, "It's fairly obvious we're fine with that idea...It's the logical successor."





    Well, I don't care about PR bull and crappy, biased benchmarks run against very old Linux systems. A system whose binaries are 30% slower even at the highest optimization level simply can't be fast, can it? The reality is that slowlaris isn't interesting to Oracle; they keep it for the old Sun (thankfully dead) customers. As for btrfs being a 64-bit filesystem, that's true, and it is a very good design choice.
    It does not matter if a filesystem is a bit slower, as long as it is safe. Would you prefer a slower safe filesystem, or a fast filesystem that might corrupt your data?

    And regarding performance of ZFS vs BTRFS. Well, ZFS scales better than BTRFS, and ZFS is faster when you start to use many disks. Here is a benchmark of only 16 SSD disks on BTRFS vs ZFS. And we see that out-of-the-box, ZFS is faster than BTRFS. Sure, BTRFS might be faster on a single disk, but Solaris has always targeted big servers, big scalability - and never really cared for a single disk or quad cores. Solaris is for many disks, many cpus, etc. The more disks you have, the better ZFS will be, and the slower BTRFS will be. Same with cpus.
    http://www.mail-archive.com/linux-bt.../msg05647.html

    Regarding BTRFS being 64 bit, that is actually quite bad. I don't have time to redo the calculations now, but this is the reasoning of the ZFS developers. They reasoned something like this:

    Today, we are storing petabytes of data: CERN is storing petabytes, Facebook, Google, etc. Storing PBs of data requires something like 2^62 bits. In a couple of years all data will have doubled, requiring 2^63 bits, and a few years after that 2^64 bits. Beyond that, 64-bit filesystems will not do; you need a filesystem that can address more than 64 bits. Maybe ZFS would have become a 72-bit filesystem, and a few years later it would have needed to be increased again. Instead, the ZFS developers said, "well, let us use 128 bits, and then we never have to worry again". Thus ZFS is 128 bit and can handle 2^128 bits. 128 bits is enough and will never need to be increased: to fill up 2^128 bits you would need something like all the atoms of the earth. If you covered the entire earth with 4TB disks, stacked 10 meters high everywhere on land and sea, that would be something like 2^100 bits of storage, and you would need several such earths to reach 2^128 bits. Thus, a 128-bit filesystem is all that humankind will ever need. The laws of physics say so.
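    I may be misremembering the exact figures (and bits and bytes get mixed up in these retellings), but the orders of magnitude are easy to sanity-check:

```python
PB = 10**15                 # one petabyte, in bytes

limit64 = 2**64             # what a 64-bit size field can address, in bytes
limit128 = 2**128

# A 64-bit limit is roughly 18,000 PB - big, but exponential growth eats it
print(limit64 // PB)        # 18446

# Number of 4 TB disks needed to hold 2^128 bytes:
# about 8.5e25 - absurdly beyond anything physically buildable
print(limit128 // (4 * 10**12))
```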

    Therefore, some years from now, BTRFS will need to be redesigned to handle more than 64 bits. That is short-sighted, and not future proof. This again shows why BTRFS has some bad design choices and why BTRFS is inferior to ZFS. Heck, even one Red Hat developer said BTRFS is "broken by design".

  5. #45

    Default

    Quote Originally Posted by kebabbert View Post
    You did? I missed that. Can you please post it again?
    No, you didn't miss it and no, I won't post it once again.

    As for long pauses between fs checks: no, that does not matter. Even if you don't do a ZFS fs check for a year (as I have), ZFS still always detects corruption. Detection is an ongoing process that never stops. If ZFS detects corruption, ZFS automatically repairs it (if you have redundancy, like raid or "copies=2") when you request the data.
    That's the problem. You've got to have copies to repair file system. The same can be done with Ext4. There are also bugs in zfs, so your data is not completely safe with it.

    If you make ext4 "nearly completely safe", then it has to be heavily modified, and it will end up very similar to ZFS's design in its data-detection aspects. And if there are ways to make ext4 nearly completely safe, why don't they do it?
    Those who want to make it that safe just use proper system configuration and copies (raid). It's not that the file system itself has to care about data safety. The same goes for detection and recovery.
    And guess what? Linux has several different layers: one raid layer, one filesystem layer, etc. None of these layers knows about the others. There is no one to do end-to-end checksums; no one can compare RAM to disk, because the Linux layers are independent.
    That's not true, and I showed you this once, too. Patches were even sent by Oracle, btw.
    Question 1) How did you know what computer science researchers say about ZFS? Have you studied the latest research or what? Which research papers have you read?
    The one you gave.

    Question 2) You have many times said "It is widely known Oracle wants to kill old, crappy, legacy slowlaris". Can you provide links, or are you just trying to start evil rumours and doing FUD about Solaris? (Which you have confessed earlier).
    You spread FUD about Linux, but I say true things about slowlaris. Links are meaningless in this case, which is obvious, but marketshare, popularity and Oracle's actions clearly show slowlaris is going to end.

    I have never seen that Larry Ellison wants to kill Solaris. He says it is the best Unix out there.
    He says many stupid things. How many unixes are out there, today?

    And regarding performance of ZFS vs BTRFS. Well, ZFS scales better than BTRFS, and ZFS is faster when you start to use many disks. Here is a benchmark of only 16 SSD disks on BTRFS vs ZFS. And we see that out-of-the-box, ZFS is faster than BTRFS. Sure, BTRFS might be faster on a single disk, but Solaris has always targeted big servers, big scalability - and never really cared for a single disk or quad cores. Solaris is for many disks, many cpus, etc. The more disks you have, the better ZFS will be, and the slower BTRFS will be. Same with cpus.
    http://www.mail-archive.com/linux-bt.../msg05647.html
    Such old benchmarks, run against an unstable btrfs version, don't matter at all.

    Regarding BTRFS being 64 bit, that is actually quite bad. I don't have time to redo the calculations now, but this is the reasoning of the ZFS developers. They reasoned something like this:

    Today, we are storing petabytes of data: CERN is storing petabytes, Facebook, Google, etc. Storing PBs of data requires something like 2^62 bits. In a couple of years all data will have doubled, requiring 2^63 bits, and a few years after that 2^64 bits. Beyond that, 64-bit filesystems will not do; you need a filesystem that can address more than 64 bits. Maybe ZFS would have become a 72-bit filesystem, and a few years later it would have needed to be increased again. Instead, the ZFS developers said, "well, let us use 128 bits, and then we never have to worry again". Thus ZFS is 128 bit and can handle 2^128 bits. 128 bits is enough and will never need to be increased: to fill up 2^128 bits you would need something like all the atoms of the earth. If you covered the entire earth with 4TB disks, stacked 10 meters high everywhere on land and sea, that would be something like 2^100 bits of storage, and you would need several such earths to reach 2^128 bits. Thus, a 128-bit filesystem is all that humankind will ever need. The laws of physics say so.
    64 bit is simply enough, and will be enough for a long time. CERN, Google and Facebook run Linux, not slowlaris, so 64 bit is and will be enough (nothing suggests they plan to replace Linux with a system whose binaries are 30% slower).

    Therefore, some years from now, BTRFS will need to be redesigned to handle more than 64 bits. That is short-sighted, and not future proof. This again shows why BTRFS has some bad design choices and why BTRFS is inferior to ZFS. Heck, even one Red Hat developer said BTRFS is "broken by design".
    No, that's simply not true, and this sounds like Sun's FUD. The Red Hat employee was mistaken, and if you had read the discussion you would be aware of this.

  6. #46
    Join Date
    Nov 2008
    Posts
    418

    Default

    Quote Originally Posted by kraftman View Post
    No, you didn't miss it and no, I won't post it once again.
    You have confessed that you FUD sometimes, so you have a track record of making things up.

    How can I know you are telling the truth now? If you cannot post a simple link again, then it looks like FUD. Don't you think?

    I know you have shown links earlier, but none of them were relevant. In one case you showed a link where an old 800MHz SPARC Solaris server was compared to a dual-core 2.4GHz Intel Linux machine, and your link claimed that Linux is faster than Solaris. You thought that link was good and relevant; I say it was not. If you install Linux and Solaris on the same hardware, that is interesting and relevant when we discuss performance of Solaris vs Linux. Thus, your links are not good. This link you showed earlier - can you repost it, or is it as meaningless as your earlier links?



    That's the problem. You've got to have copies to repair file system.
    What is the problem? ZFS needs redundancy (raid or copies=2) to repair corrupted data. Every filesystem works like that; no filesystem can repair corrupted data without redundancy. If you use a single disk with ext4, then ext4 can not repair corrupted data, for the following reasons:
    1) ext4 has no "copies=2" option; ext4 needs another disk to be able to fetch the missing data. ext4 must have two disks or more, whereas ZFS can do it on a single disk.
    2) ext4 has no way of detecting corrupted data, let alone repairing it: it has no checksums on data for detecting corruption.

    So I don't see it as a problem that ZFS can repair corrupted data whereas ext4 can not. What is the problem with ZFS being able to repair corrupted data? Can you explain again?



    The same can be done with Ext4.
    Yes, if you heavily modify ext4, then it can detect and repair corrupted data. Then it will have some mechanism similar to ZFS.



    There are also bugs in zfs, so your data is not completely safe with it.
    This is true; there are still bugs in ZFS. On 31 October just recently, ZFS turned 10 years old, and it still has bugs. It takes decades to find all the bugs in a filesystem. When BTRFS reaches v1.0 in a couple of years, it will take another 10 years before most of its bugs have been ironed out. There have been cases where people lost data with ZFS, yes. So no, your data is not completely safe with ZFS.

    Many people believe that ZFS is the safest alternative on the market today. Not completely safe, but the safest. ext4 has no data corruption detection at all, so it is not safe at all. I fail to see how "ext4 is nearly completely safe" - can you describe how, or are you FUDing again?



    Those who want to make it that safe just use proper system configuration and copies (raid). It's not that the file system itself has to care about data safety. The same goes for detection and recovery.
    Do you believe that using hardware raid gives you a safe storage solution? That is not true, as I have shown you links for earlier (I can repost them if you wish). Hardware raid has no mechanism to detect or repair corrupted data. HW raid does parity calculations to rebuild the array when a disk has crashed, but that is not checksumming against data corruption. HW raid is not built to handle data corruption.
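    The difference is easy to demonstrate with a toy model of RAID-5-style parity (single bytes standing in for whole disks):

```python
# Three data "disks" plus one XOR parity "disk"
a, b, c = 0x11, 0x22, 0x33
parity = a ^ b ^ c

# Case 1: disk b dies outright. RAID knows WHICH disk is gone,
# so it can rebuild it from the survivors.
assert (parity ^ a ^ c) == b

# Case 2: disk b silently flips a bit. Parity no longer matches...
b_bad = b ^ 0x04
assert (a ^ b_bad ^ c) != parity
# ...but the mismatch alone cannot tell which of the four pieces is wrong,
# so there is nothing safe to rewrite. Per-block checksums (as in ZFS)
# identify the bad copy, which is what makes automatic repair possible.
print("parity detects a mismatch but cannot locate it")
```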



    That's no true and I showed you this one, too. Patches were even sent by Oracle btw.
    I suspect you are talking about Oracle's DIF patches (the T10 Data Integrity Field)? DIF is not completely safe either. If you look at the spec sheets of high-end Fibre Channel disks, which use DIF techniques, the FC disks still have problems with data corruption. Have you looked at those spec sheets? I can post them for you, if you wish to see. They say something like "1 error in every 10^16 bits" - how could read errors happen if DIF were completely safe?



    The one you gave.
    I did not understand this. Can you explain again? I know I posted links and you denied those links existed. I posted the links again and again, and all you said was "post the research papers you talk about", which I did, again and again, in several posts. And still you denied the research papers existed. So are you now claiming that those research papers I posted do exist? I don't understand. It seems you accept those papers now? Or are you still denying the research I showed you?



    You spread FUD about Linux,
    If I spread FUD about Linux, can you link to one such FUD post by me? As you know, I only quote Linus T, Andrew M, etc. when they say that Linux is buggy and bloated. That is neither lies nor FUD; it is true. Or do you deny that the Linux kernel devs said this? Do you want me to repost those interviews and links? I can do that if you wish.



    but I say true things about slowlaris. Links are meaningless in this case, which is obvious, but marketshare, popularity and Oracle's actions clearly show slowlaris is going to end.
    If you have a controversial claim, that Oracle wants to kill Solaris, then you should back it up somehow. If people are not agreeing on what you say, you should be able to reason and argue why you are correct. You should not just say "because I say so" - that is FUD.

    Larry has said officially that he is putting far more resources into Solaris and the SPARC cpu than Sun ever did. There will be more developers on Solaris and SPARC than Sun ever had.
    http://news.cnet.com/8301-13505_3-10...in;contentBody
    In other words, Larry says he will bet heavily on Solaris; you say he is going to kill it. This shows that you are not correct on this - Solaris will not be killed. Do you agree that Larry is not interested in killing Solaris? Why would Larry say "...Solaris is overwhelmingly the best open systems operating system on the planet" if he wants to kill it? No, you are not correct on this. I have shown you numerous links where Larry praises Solaris, and still you say that Larry is going to kill Solaris. Why? Isn't what you are doing pure FUD and trolling?
    http://cuddletech.com/blog/?p=279



    He says many stupid things. How many unixes are out there, today?
    Yes, Larry has said stupid things. But he is also one of the richest men on earth, so not everything he says is stupid? You, on the other hand, lie and FUD a lot. You have even confessed that you FUD. And some of the things you have said are... not the brightest things men have said.



    Such old benchmarks, run against an unstable btrfs version, don't matter at all.
    Yes, it matters, because you said that 64-bit BTRFS is faster than ZFS. This link shows that you are not correct: BTRFS is not faster than ZFS. I have proved you wrong. Even though BTRFS uses 64 bits, it scales badly and is slow when using many disks. BTRFS might be fast on a single disk, because its developers mainly target desktops, not big servers.



    64 bit is simply enough, and will be enough for a long time. CERN, Google and Facebook run Linux, not slowlaris, so 64 bit is and will be enough (nothing suggests they plan to replace Linux with a system whose binaries are 30% slower).
    Yes, 64 bits will suffice for many years still. But 64 bits are not future proof. CERN today stores 4 petabytes on ZFS for long-term storage on tier 1 and 2. In a couple of years CERN will pass what 64 bits can hold, and then CERN will need more. Maybe that is one of the reasons CERN prefers ZFS?
    https://blogs.oracle.com/simons/entr..._science_means
    "Having conducted testing and analysis of ZFS, it is felt that the combination of ZFS and Solaris solves the critical data integrity issues that have been seen with other approaches. They feel the problem has been solved completely with the use of this technology. There is currently about one Petabyte of Thumper storage deployed across Tier1 and Tier2 sites. That number is expected to rise to approximately four Petabytes by the end of this summer."



    No, that's simply not true, and this sounds like Sun's FUD. The Red Hat employee was mistaken, and if you had read the discussion you would be aware of this.
    Well, it is true that CERN will need more than 64-bit filesystems when their large LHC collider, which cost many billions, becomes active. When the LHC starts functioning in a couple of years, it will start to generate HUGE amounts of data, according to CERN - many petabytes.

    If CERN uses 64 bits, then CERN needs to split the data so that no pool goes beyond 2^64 bits. So CERN would have one data pool of 2^64 bits, another data pool of 2^64 bits, and so on. With several data pools it becomes difficult to examine all the data together. It is better to use one single data pool that holds everything, because then CERN can run all calculations without having to split them up. Thus, it is true that BTRFS will not be able to handle big scenarios, which means BTRFS will need to be redesigned to use more than 64 bits. So, yes, I speak the truth.

    Regarding the RedHat developer, he said that BTRFS has some issues. That is true, and I do not lie nor FUD about this. Do you want to see the post where he writes this?
    Last edited by kebabbert; 11-06-2011 at 07:05 PM.

  7. #47
    Join Date
    Nov 2011
    Posts
    3

    Default

    Kebab: ZFS cannot deal with in-RAM corruption - by design. Deal with it.

    And you use Larry Ellison's quote "Solaris is overwhelmingly the best open systems operating system on the planet" for evaluating Solaris' technical status. Am sure you don't have a day job.. and don't wanna know details of your night job.
    Last edited by bluetomato; 11-10-2011 at 12:28 PM.

  8. #48
    Join Date
    Nov 2008
    Posts
    418

    Default

    Quote Originally Posted by bluetomato View Post
    Kebab: ZFS cannot deal with in-RAM corruption - by design. Deal with it.
    Yes, I know that ZFS does not repair data corruption in RAM, nor in the CPU's registers, nor in the graphics card, nor on the bus, etc. So? Do you expect a FILE system to repair data in the graphics card's RAM? Or in the CPU's registers? Or in RAM? What kind of filesystem could repair data corruption everywhere in a server? No such filesystem will ever exist. It is the responsibility of the filesystem to correctly handle data stored on disk, the responsibility of the RAM to handle RAM errors, the responsibility of the GPU to handle errors in the GPU, etc.

    I hope you don't believe that any Linux filesystem, such as BTRFS, corrects errors in the GPU or in RAM? Is there ANY Linux filesystem that correctly saves data to disk? Research shows that the most common Linux filesystems are not safe. There is no research on BTRFS yet, but Kraftman says it is in beta, so BTRFS should be ruled out. (I showed benchmarks of ZFS vs BTRFS on SSD disks, and Kraftman ruled that benchmark out because BTRFS is not done nor stable yet.)



    And you use Larry Ellison's quote "Solaris is overwhelmingly the best open systems operating system on the planet" for evaluating Solaris' technical status.
    Yes, I don't deny it. It is dark outside right now. I am upgrading to the latest Solaris 11 right now. Etc. What was your point with such a declarative statement?

    MY point in quoting Larry Ellison on this is that "some people" say Larry wants to kill off Solaris because it is slow ("slowlaris"). Well, it seems that Larry thinks Solaris is the best, and I have shown benchmarks where Solaris holds several world records today. So Larry does bet on Solaris, and Solaris is the fastest in the world on some benchmarks. So I have disproved "some people", yes?



    Am sure you don't have a day job.. and don't wanna know details of your night job.
    Oh, I work in finance, at a large, world-famous company you have heard of. I have a double degree: a Master's in computer science and another in math. Yes, I do have a day job. I don't have a night job, though. Do you work at night or during the day?
    Last edited by kebabbert; 11-10-2011 at 03:22 PM.

  9. #49
    Join Date
    Nov 2011
    Posts
    301

    Default Re Disk capacities...

    SGI says this about XFS (http://oss.sgi.com/projects/xfs/):
    XFS is a full 64-bit filesystem, and thus is capable of handling filesystems as large as a million terabytes.

    2^63 bytes ≈ 9 x 10^18 bytes = 9 exabytes
    CERN expects to need, per kebabbert's link, 57 petabytes of disk and 43 petabytes of tape storage (100 petabytes total).
    That's WELL under the limit of 64bit FS capacity (1/80), per SGI's numbers.

    XFS is one of the things CERN has gone to great lengths to support/use - it ships with SL (Scientific Linux) by default.
    Hence the reference.
    (Plus it's supposedly the fastest FS for huge data files.)
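    Treating those limits as byte counts, the arithmetic is easy to check:

```python
PB = 10**15
EB = 10**18

fs_limit = 2**63           # the 64-bit limit used above, in bytes
cern_need = 100 * PB       # 57 PB disk + 43 PB tape

print(fs_limit // EB)          # 9 exabytes, matching the figure above
print(fs_limit // cern_need)   # CERN's 100 PB fits about 90 times over,
                               # the same ballpark as the 1/80 quoted
```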

  10. #50
    Join Date
    Nov 2008
    Posts
    418

    Default

    Quote Originally Posted by Ibidem View Post
    SGI says this about XFS (http://oss.sgi.com/projects/xfs/):


    CERN expects to need, per kebabbert's link, 57 petabytes of disk and 43 petabytes of tape storage (100 petabytes total).
    That's WELL under the limit of 64bit FS capacity (1/80), per SGI's numbers.

    XFS is one of the things CERN has gone to great lengths to support/use--it ships with SL by default.
    Hence the reference.
    (Plus it's supposedly the fastest FS for huge data files.)
    Maybe you have missed all the links I posted showing that CERN is worried about data corruption? I also showed a link showing that XFS does not protect against data corruption. Thus, XFS would not be a good choice.

    Also, I have read that some vendor (is it Red Hat?) only supports raid sets up to 16TB with XFS. Is this true or false?
