Message ID: 1451185407-11422-1-git-send-email-mail@christoph.anton.mitterer.name (mailing list archive)
State: New, archived
On Sun, 2015-12-27 at 04:03 +0100, Christoph Anton Mitterer wrote:
> -WARNING: defragmenting with kernels up to 2.6.37 will unlink COW-ed
Perhaps someone can also check the above.
I was looking through the git history, but couldn't find anything wrt
2.6.37...
The commits I've basically searched for in the non-stable repo were:
38c227d87c49ad5d173cb5d4374d49acec6a495d (adding the ref-link aware defrag)
8101c8dbf6243ba517aab58d69bf1bc37d8b7b9c (removing it)
But maybe I've missed something.
Also, the wiki mentioned it for 3.13; I changed that:
https://btrfs.wiki.kernel.org/index.php?title=Changelog&action=historysubmit&diff=29765&oldid=29697
Please correct if wrong.
HTH,
Chris
Christoph Anton Mitterer posted on Sun, 27 Dec 2015 04:03:27 +0100 as excerpted:

[Rewrapped here but all added lines.]

> +WARNING: Defragmenting with Linux kernel versions < 3.9 or ≥ 3.14-rc2 as well as
> +with Linux stable kernel versions ≥ 3.10.31, ≥ 3.12.12 or ≥ 3.13.4 will break up
> +the ref-links of CoW data (for example files copied with `cp --reflink`,
> +snapshots or de-duplicated data).
> +This may cause considerable increase of space usage depending on the broken up
> +ref-links.

Thanks. I had looked at that a few times and thought it needed updating, but I think it hadn't reached my pain threshold yet[1], so I hadn't yet posted about it. Glad it reached someone's pain threshold. =:^)

---
[1] Pain threshold: Or more like, I was always doing something else at the time, which is probably everybody else's excuse too. But by contrast it can be noted that I posted right away when I noticed the mkfs.btrfs manpage totally lost raid1 mode with one update, because I use it, regardless of what else I was doing. I guess that must have hit my pain threshold...
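The version windows in the warning quoted above can be expressed as a small predicate. This is a hedged illustration only, not code from btrfs-progs or the kernel; the function name and argument convention are invented here:

```python
def defrag_breaks_reflinks(major, minor, patch=0, rc=None):
    """True if defragmenting on this kernel breaks up ref-links,
    per the warning text above (an illustration, not a real API)."""
    v = (major, minor)
    # Ref-link-aware defrag was present in mainline only from 3.9
    # until it was removed again in 3.14-rc2.
    if v < (3, 9):
        return True
    if v > (3, 14) or (v == (3, 14) and (rc is None or rc >= 2)):
        return True
    # Stable branches reverted it starting at these point releases.
    reverted_in = {(3, 10): 31, (3, 12): 12, (3, 13): 4}
    if v in reverted_in and patch >= reverted_in[v]:
        return True
    return False
```

So, for example, 3.13.3 would still keep ref-links intact while 3.13.4 would break them up, matching the "≥ 3.13.4" boundary in the warning.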
On Sun, 2015-12-27 at 07:09 +0000, Duncan wrote:
> raid1 mode
I wonder when that reaches my pain threshold... and I submit a patch
that renames it "notreallyraid1" in all places ;-)
Cheers,
Chris.
On Mon, Dec 28, 2015 at 01:50:09AM +0100, Christoph Anton Mitterer wrote:
> On Sun, 2015-12-27 at 07:09 +0000, Duncan wrote:
>> raid1 mode
> I wonder when that reaches my pain threshold... and I submit a patch
> that renames it "notreallyraid1" in all places ;-)

Isn't this an FAQ already? There is already a patch to rename the RAID modes. It's been sitting in the progs patch queue for about 2 years, because none of the senior devs has acked it yet (since it's a big user-visible change).

Hugo.
On Mon, 2015-12-28 at 01:58 +0000, Hugo Mills wrote:
> Isn't this an FAQ already? There is already a patch to rename the
> RAID modes. It's been sitting in the progs patch queue for about 2
> years, because none of the senior devs has acked it yet (since it's a
> big user-visible change).

Uhm... yeah, it's a bit invasive... but that happens when such improper naming is done in the first place :-/

It's similar to when tools wrongly or ambiguously use SI prefixes, instead of proper kB, MB, GB, etc. (for base 1000) respectively KiB, MiB, GiB, etc. (for base 1024). Especially just using K, M, G is simply evil and should lead to public punishment ;-)

I'm also not really fond of what btrfs took over from LVM, namely kKmMgGtTpPeE,... it's ambiguous or at least unclean as well...

Best would be probably if we don't use "raid" as names at all (or just as aliases for the actual canonical names), but rather describe what's actually... E.g.:
classic RAID1 = mirror
btrfs RAID1 = dup2 or something similar like clone2, replica2
classic RAID0 = striped
RAID5/6 = parityN

Cheers,
Chris.
Christoph Anton Mitterer posted on Mon, 28 Dec 2015 01:50:09 +0100 as excerpted:

> On Sun, 2015-12-27 at 07:09 +0000, Duncan wrote:
>> raid1 mode
> I wonder when that reaches my pain threshold... and I submit a patch
> that renames it "notreallyraid1" in all places ;-)

I've seen two responses to that, both correct, AFAIK.

1) Btrfs very specifically and deliberately uses *lowercase* raidN in part to make that distinction, as the btrfs variants are chunk-level (and designed so that at some point in the future they can be subvolume and/or file level), not device-level (and at that future point, not necessarily filesystem level either). As we've seen in discussion in other threads, for raid10 in particular, that makes a profound difference in robustness in the multi-device failure case.

2) Regarding btrfs raid1 and raid10's current very specific two-way-mirroring in particular, limiting to two-way-mirroring in the 3+ devices case is well within established definitions and historic usage. Apparently, the N-devices = N-way-mirroring usage is relatively new, arguably first popularized by Linux mdraid, after which various hardware raid suppliers also implemented it due to competitive pressure. But only two-way-mirroring is required by the RAID-1 definition.

Even were that not the case, point #1, btrfs' very specific use of *lowercase* raid1, still covers the two-way-limitation case just as well as it covers the chunk-level case.

That said, that the limited pair-mirroring btrfs implements even in the 3+ device case still meets formal RAID-1 definitions was originally news to me as well, however well I might now accept the fact. But once my earlier naive assumptions were corrected, the remaining clarification issues fell below my pain threshold. But for those for whom it's still very close to their pain threshold, due to the above, a patch effectively doing s/raid1/notreallyraid1/g is unlikely to be accepted.

Much more likely to be accepted would be a patch to the btrfs-balance and mkfs.btrfs manpages adding a note, preferably accounting for the raid10 situation as well, explaining that btrfs raid (lowercase) isn't RAID (uppercase) in the traditional sense, that it's chunk-scope not device-scope and that this has implications for, for instance, robustness in the raid10 multi-device failure case, and that both raid1 and raid10 are (currently) limited to two-way-mirroring.

Meanwhile, for anyone considering writing that patch, I'd also strongly recommend that the two-way-mirroring wording is separated out, at least onto its own lines if not a separate paragraph, so it can be cleanly deleted and/or modified once N-way-mirroring is introduced as a feature, without having to rewrite the chunk-level and raid10 bit as well.
On Mon, 2015-12-28 at 02:51 +0000, Duncan wrote:
> 1) Btrfs very specifically and deliberately uses *lowercase* raidN
> in part to make that distinction, as the btrfs variants are chunk-
> level (and designed so that at some point in the future they can be
> subvolume and/or file level), not device-level (and at that future
> point, not necessarily filesystem level either).

I guess no "normal" user would expect or understand that lower/upper case would imply any distinction.

> 2) Regarding btrfs raid1 and raid10's current very specific two-way-
> mirroring in particular, limiting to two-way-mirroring in the 3+
> devices case is well within established definitions and historic
> usage. Apparently, the N-devices = N-way-mirroring usage is relatively
> new, arguably first popularized by Linux mdraid, after which various
> hardware raid suppliers also implemented it due to competitive
> pressure. But only two-way-mirroring is required by the RAID-1
> definition.

No, this is not true.

This http://www.eecs.berkeley.edu/Pubs/TechRpts/1987/CSD-87-391.pdf is the original paper on RAID. Chapter 7 describes RAID1 and clearly says "all disks are duplicated" as well as "Level 1 RAID has only one data disk".

I wouldn't know any single case of a HW RAID controller (and we've had quite a few of them here at the Tier2) or other software implementation where RAID1 had another meaning than "N disks, N mirrors".

> Even were that not the case, point #1, btrfs' very specific use of
> *lowercase* raid1, still covers the two-way-limitation case just as
> well as it covers the chunk-level case.

Hmm... that wouldn't change anything, IMHO. Saying "lower case RAID is something different than upper case RAID" would be just a bit... uhm... weird.

Actually, btrfs doing it at the chunk level (while RAID is at the device level) proves my point that "raid" or "RAID" or any other lower/upper case combination shouldn't be used at all.

Cheers,
Chris.
Christoph Anton Mitterer posted on Mon, 28 Dec 2015 04:03:05 +0100 as excerpted:

> On Mon, 2015-12-28 at 02:51 +0000, Duncan wrote:
>> 1) Btrfs very specifically and deliberately uses *lowercase* raidN in
>> part to make that distinction, as the btrfs variants are chunk-level
>> (and designed so that at some point in the future they can be subvolume
>> and/or file level), not device-level (and at that future point, not
>> necessarily filesystem level either).
> I guess no "normal" user would expect or understand that lower/upper
> case would imply any distinction.

I /could/ argue the case based on the definition of the "normal" in "normal user", but I won't, as in any case I agree with you at least to the extent that a better explanation of the details should eventually be found both on the wiki (where it is arguably already covered in the sysadmin's and multiple devices pages) and in the btrfs-balance and mkfs.btrfs manpages (where it remains uncovered).

>> 2) Regarding btrfs raid1 and raid10's current very specific two-way-
>> mirroring in particular, limiting to two-way-mirroring in the 3+
>> devices case is well within established definitions and historic usage.
>> Apparently, the N-devices = N-way-mirroring usage is relatively new,
>> arguably first popularized by Linux mdraid, after which various
>> hardware raid suppliers also implemented it due to competitive
>> pressure. But only two-way-mirroring is required by the RAID-1
>> definition.
> No, this is not true.
>
> This http://www.eecs.berkeley.edu/Pubs/TechRpts/1987/CSD-87-391.pdf is
> the original paper on RAID.
> Chapter 7 describes RAID1 and clearly says "all disks are
> duplicated" as well as "Level 1 RAID has only one data disk".

Kudos for digging up the reference. =:^)

Never-the-less, I (and others from which I got the position) believe your interpretation is arguably in error. More precisely...
1) In the context of the Level 1 RAID discussed in chapter 7, from earlier in the paper, in chapter 6, introducing RAID, on page six of the paper, which is page 8 of the PDF (quotes here between the >>>>> and <<<<< demarcs, [...] indicating elision, as traditional):

>>>>>
Reliability: Our basic approach will be to break the arrays into reliability groups, with each group having extra "check" disks containing redundant information. [...] Here are some other terms that we use: D = total number of disks with data (not including the extra check disks); G = number of data disks in a group (not including the extra check disks); [...] C = number of check disks in a group;
<<<<<

That's the context: disks grouped for reliability, with data and check disks in a group, but multiple such groups.

Then later in the paper, in the First Level RAID discussion in chapter 7, starting on page 9 of the paper, page 11 of the pdf:

>>>>>
Mirrored disks are a traditional approach for improving reliability of magnetic disks. This is the most expensive option since all disks are duplicated (G=1 and C=1), and every write to a data disk is also a write to a check disk.
<<<<<

With the definitions and context above, we see that the "(G=1 and C=1)" defines First Level RAID as exactly one data disk and one check disk in a reliability group, with multiple such groups.

So yes, it has "only one data disk"... in a defined context where that's per group, with exactly one check disk as well, with multiple groups, such that each write to a group writes to exactly one data disk and one check disk, but a full write may be to many groups.

This can be further seen by examining Table II on page 10 of the paper (12 of the pdf), where the total number of disks is declared to be 2D (twice the number of data disks, based on the above definition of D), and the usable storage capacity to be 50%.
Further, in the commentary on the same page: "Since a Level 1 RAID has only one data disk in its group, we assume that the large transfer requires the same number of disks acting in concert as found in groups of the higher level RAIDs: 10 to 25 disks." Again, that emphasizes the per-group aspect of the G=1, C=1 definition, and the fact that there are many such groups in the deployment.

Finally: "Duplicating all disks can mean doubling the cost of the database system or using only 50% of the disk storage capacity." Again, very clearly pair-mirroring, with many such pair-mirrors in the array. Which, other than the per-chunk rather than per-disk granularity, is _exactly_ what btrfs does.

It would actually seem that the N-way-mirroring, where N=number-of-devices, usage of so-called raid1 is out of kilter with the original definition, not btrfs' very specific two-way-mirroring, regardless of the number of devices, which is actually very close to the original definition of two devices per group, many such groups in an array.

Tho I'll certainly agree that in today's usage, RAID-1 certainly /incorporates/ the N-way-mirroring usage, and would even agree that, within my rather limited exposure at least, it's the more common usage. But that doesn't make it the original usage, nor does it mean that there's no room in today's broader definition for the original usage, which then must remain as valid as the broader usage, today.

So other than the per-chunk scope, btrfs raid1 would indeed seem to be real RAID-1.

Never-the-less, given the broader usage today, there's definitely a need for some word of explanation in the mkfs.btrfs and btrfs-balance manpages. I'll agree there, but then I never disagreed with that in the first place, and indeed, that was my opinion from when I myself thought pair-mirroring wasn't proper raid1 -- that much hasn't changed.

Meanwhile, I've actually quoted about 50% of the original paper's raid1 discussion in the above.
The Level 1 RAID discussion is actually quite short, under a double-spaced page in the original paper, which itself is only 26 pdf pages long, including two pages of title and blank page at the beginning (thus the pdf page numbering being two pages higher than the paper's page numbering), and two-plus pages of acknowledgments, references and appendix at the end, so only 22 pages of well-spaced actual content. Those who haven't clicked thru to actually read it may be interested in doing so. Here it is again for convenience. =:^)

http://www.eecs.berkeley.edu/Pubs/TechRpts/1987/CSD-87-391.pdf

> I wouldn't know any single case of a HW RAID controller (and we've had
> quite a few of them here at the Tier2) or other software implementation
> where RAID1 had another meaning than "N disks, N mirrors".

That may be. I'm sure you have more experience with it than I do. But that doesn't change the original definition, or mean that usage consistent with that original definition is incorrect, even if uncommon today.

>> Even were that not the case, point #1, btrfs' very specific use of
>> *lowercase* raid1, still covers the two-way-limitation case just as
>> well as it covers the chunk-level case.
> Hmm... that wouldn't change anything, IMHO. Saying "lower case RAID is
> something different than upper case RAID" would be just a bit... uhm...
> weird.
>
> Actually, btrfs doing it at the chunk level (while RAID is at the
> device level) proves my point that "raid" or "RAID" or any other
> lower/upper case combination shouldn't be used at all.

I don't actually disagree with you there. Weird it is, agreed.

But it's also the case, at least currently, and based on what Hugo said about a patch to change the terminology being in limbo for two years, during which the currently used terminology has become even more entrenched as btrfs is widely deployed in distro installations now (even if it isn't entirely stable yet), that it's unlikely to change.
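Using the paper's own D/G/C definitions as quoted above, the Table II figures for Level 1 RAID fall out of a two-line calculation. This is a sketch for illustration; the function and variable names are mine, following the paper's symbols:

```python
def raid_group_overhead(D, G, C):
    """Total disks and usable fraction for D data disks arranged in
    reliability groups of G data + C check disks (symbols as in the
    Patterson/Gibson/Katz paper quoted above)."""
    groups = D // G              # number of reliability groups
    total = groups * (G + C)     # each group carries its check disks too
    return total, D / total      # usable fraction = data disks / all disks

# Level 1 RAID (mirroring): G=1, C=1 -> 2D disks total, 50% usable,
# matching Table II; e.g. D=10 data disks need 20 disks in all.
```

The per-group framing is the whole point of the argument above: "only one data disk" holds within each G=1, C=1 group, while an array is built from many such groups.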
The best that could be done at this point is make raid1 an alias for something else, but even then, I'd guess the raid1 terminology would continue pretty much unabated, since it's already widely used and well entrenched in the various google engines as well as the archives for this list.
Hugo Mills posted on Mon, 28 Dec 2015 01:58:07 +0000 as excerpted:

> On Mon, Dec 28, 2015 at 01:50:09AM +0100, Christoph Anton Mitterer
> wrote:
>> On Sun, 2015-12-27 at 07:09 +0000, Duncan wrote:
>>> raid1 mode
>> I wonder when that reaches my pain threshold... and I submit a patch
>> that renames it "notreallyraid1" in all places ;-)
>
> Isn't this an FAQ already? There is already a patch to rename the
> RAID modes. It's been sitting in the progs patch queue for about 2
> years, because none of the senior devs has acked it yet (since it's a
> big user-visible change).

I don't see it in the FAQ, but I see hints on both the sysadmin's guide and the usecases pages.

(Either the wiki or firefox seems to be having certificate problems ATM and all I'm getting is an "OCSP response has an invalid sig" error. But the resurrect-this-page extension to the rescue: click resurrect via google and I get it. Links has no problem loading the page, but lynx does, so it's not just firefox.)

UseCases: The first section is RAID, and the first question there is on creating a raid1 mirror in btrfs. It has this to say at the end of the answer:

>>>>>
NOTE This does not do the 'usual thing' for 3 or more drives. Until "N-Way" (traditional) RAID-1 is implemented: Loss of more than one drive might crash the array. For now, RAID-1 means 'one copy of what's important exists on two of the drives in the array no matter how many drives there may be in it'.
<<<<<

SysadminGuide: The second section is data usage and allocation. The first subsection there is RAID and data replication. The first paragraph there is:

>>>>>
Btrfs's "RAID" implementation bears only passing resemblance to traditional RAID implementations. Instead, btrfs replicates data on a per-chunk basis. If the filesystem is configured to use "RAID-1", for example, chunks are allocated in pairs, with each chunk of the pair being taken from a different block device. Data written to such a chunk pair will be duplicated across both chunks.
<<<<<

The multi-device page has little theoretical discussion, only discussing current status and having a bunch of specific commandline examples.
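The SysadminGuide paragraph quoted above (raid1 chunks allocated in pairs, each copy on a different device) can be mimicked with a toy allocator. This is a hedged sketch, not btrfs's actual chunk allocator; the most-free-device heuristic here only approximates the kernel's policy:

```python
def allocate_raid1_chunk(free, chunk=1):
    """Place one raid1 chunk (two copies) on the two devices with the
    most free space; mutate `free` (device name -> free units) and
    return the chosen device names. Toy model, not btrfs's allocator."""
    devs = sorted(free, key=free.get, reverse=True)[:2]
    if len(devs) < 2 or free[devs[1]] < chunk:
        raise RuntimeError("ENOSPC: raid1 needs free space on two devices")
    for d in devs:
        free[d] -= chunk
    return set(devs)
```

With three equal devices this yields 50% usable space, and every chunk's two copies land on different devices, never three copies, matching the "two of the drives no matter how many" wording from the UseCases page.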
diff --git a/Documentation/btrfs-filesystem.asciidoc b/Documentation/btrfs-filesystem.asciidoc
index 31cd51b..600bbac 100644
--- a/Documentation/btrfs-filesystem.asciidoc
+++ b/Documentation/btrfs-filesystem.asciidoc
@@ -55,6 +55,13 @@ if the free space is too fragmented.
 Use 0 to take the kernel default, which is 256kB but may change in the future.
 You can also turn on compression in defragment operations.
 +
+WARNING: Defragmenting with Linux kernel versions < 3.9 or ≥ 3.14-rc2 as well as
+with Linux stable kernel versions ≥ 3.10.31, ≥ 3.12.12 or ≥ 3.13.4 will break up
+the ref-links of CoW data (for example files copied with `cp --reflink`,
+snapshots or de-duplicated data).
+This may cause considerable increase of space usage depending on the broken up
+ref-links.
++
 `Options`
 +
 -v::::
@@ -79,10 +86,6 @@ target extent size, do not touch extents bigger than <size>
 For <start>, <len>, <size> it is possible to append units designator: \'K', \'M',
 \'G', \'T', \'P', or \'E', which represent KiB, MiB, GiB, TiB, PiB, or EiB,
 respectively. Case does not matter.
-+
-WARNING: defragmenting with kernels up to 2.6.37 will unlink COW-ed copies of data,
-don't use it if you use snapshots, have de-duplicated your data or made
-copies with `cp --reflink`.
 
 *label* [<dev>|<mountpoint>] [<newlabel>]::
 Show or update the label of a filesystem.
diff --git a/Documentation/btrfs-mount.asciidoc b/Documentation/btrfs-mount.asciidoc
index 39215a8..d364594 100644
--- a/Documentation/btrfs-mount.asciidoc
+++ b/Documentation/btrfs-mount.asciidoc
@@ -26,6 +26,13 @@ MOUNT OPTIONS
 Auto defragmentation detects small random writes into files and queue them up
 for the defrag process. Works best for small files;
 Not well suited for large database workloads.
 +
++
+WARNING: Defragmenting with Linux kernel versions < 3.9 or ≥ 3.14-rc2 as
+well as with Linux stable kernel versions ≥ 3.10.31, ≥ 3.12.12 or
+≥ 3.13.4 will break up the ref-links of CoW data (for example files
+copied with `cp --reflink`, snapshots or de-duplicated data).
+This may cause considerable increase of space usage depending on the
+broken up ref-links.
 
 *check_int*::
 *check_int_data*::
In btrfs-filesystem(8), improved the documentation of snapshot-unaware defragmentation and included the exact kernel version numbers being affected as well as the possible effects.
No longer use the word "unlink", which is easily understood as "deleting a file".
Moved the warning closer to the beginning of the "defragment" subcommand's documentation, where it's more visible to readers.
Added the same warning to the "autodefrag" option of btrfs-mount(5).

Signed-off-by: Christoph Anton Mitterer <mail@christoph.anton.mitterer.name>
---
 Documentation/btrfs-filesystem.asciidoc | 11 +++++++----
 Documentation/btrfs-mount.asciidoc      |  7 +++++++
 2 files changed, 14 insertions(+), 4 deletions(-)