Message ID | 20220609105343.13591-2-lhenriques@suse.de (mailing list archive)
State      | New, archived
Series     | Two xattrs-related fixes for ceph
Hi Luís,

On Thu, 9 Jun 2022 11:53:42 +0100, Luís Henriques wrote:

> CephFS doesn't have a maximum xattr size. Instead, it imposes a maximum
> size for the full set of xattrs names+values, which by default is 64K.
>
> This patch fixes the max_attrval_size so that it is slightly < 64K in
> order to accommodate any already existing xattrs in the file.
>
> Signed-off-by: Luís Henriques <lhenriques@suse.de>
> ---
>  tests/generic/020 | 10 +++++++++-
>  1 file changed, 9 insertions(+), 1 deletion(-)
>
> diff --git a/tests/generic/020 b/tests/generic/020
> index d8648e96286e..76f13220fe85 100755
> --- a/tests/generic/020
> +++ b/tests/generic/020
> @@ -128,7 +128,7 @@ _attr_get_max()
>  	pvfs2)
>  		max_attrval_size=8192
>  		;;
> -	xfs|udf|9p|ceph)
> +	xfs|udf|9p)
>  		max_attrval_size=65536
>  		;;
>  	bcachefs)
> @@ -139,6 +139,14 @@ _attr_get_max()
>  		# the underlying filesystem, so just use the lowest value above.
>  		max_attrval_size=1024
>  		;;
> +	ceph)
> +		# CephFS does not have a maximum value for attributes. Instead,
> +		# it imposes a maximum size for the full set of xattrs
> +		# names+values, which by default is 64K. Set this to a value
> +		# that is slightly smaller than 64K so that it can accommodate
> +		# already existing xattrs.
> +		max_attrval_size=65000
> +		;;

I take it a more exact calculation would be something like:
(64K - $max_attrval_namelen - sizeof(user.snrub="fish2\012"))?

Perhaps you could calculate this on the fly for CephFS by passing in the
filename and subtracting the `getfattr -d $filename` results... That
said, it'd probably get a bit ugly, especially if encoding needs to be
taken into account.

Reviewed-by: David Disseldorp <ddiss@suse.de>

Cheers, David
David Disseldorp <ddiss@suse.de> writes:

> Hi Luís,
>
> On Thu, 9 Jun 2022 11:53:42 +0100, Luís Henriques wrote:
>
>> CephFS doesn't have a maximum xattr size. Instead, it imposes a maximum
>> size for the full set of xattrs names+values, which by default is 64K.
>>
>> This patch fixes the max_attrval_size so that it is slightly < 64K in
>> order to accommodate any already existing xattrs in the file.
>>
>> Signed-off-by: Luís Henriques <lhenriques@suse.de>
>> ---
>>  tests/generic/020 | 10 +++++++++-
>>  1 file changed, 9 insertions(+), 1 deletion(-)
>>
>> diff --git a/tests/generic/020 b/tests/generic/020
>> index d8648e96286e..76f13220fe85 100755
>> --- a/tests/generic/020
>> +++ b/tests/generic/020
>> @@ -128,7 +128,7 @@ _attr_get_max()
>>  	pvfs2)
>>  		max_attrval_size=8192
>>  		;;
>> -	xfs|udf|9p|ceph)
>> +	xfs|udf|9p)
>>  		max_attrval_size=65536
>>  		;;
>>  	bcachefs)
>> @@ -139,6 +139,14 @@ _attr_get_max()
>>  		# the underlying filesystem, so just use the lowest value above.
>>  		max_attrval_size=1024
>>  		;;
>> +	ceph)
>> +		# CephFS does not have a maximum value for attributes. Instead,
>> +		# it imposes a maximum size for the full set of xattrs
>> +		# names+values, which by default is 64K. Set this to a value
>> +		# that is slightly smaller than 64K so that it can accommodate
>> +		# already existing xattrs.
>> +		max_attrval_size=65000
>> +		;;
>
> I take it a more exact calculation would be something like:
> (64K - $max_attrval_namelen - sizeof(user.snrub="fish2\012"))?
>
> Perhaps you could calculate this on the fly for CephFS by passing in the
> filename and subtracting the `getfattr -d $filename` results... That
> said, it'd probably get a bit ugly, especially if encoding needs to be
> taken into account.

In fact, this is *exactly* what I had before Dave suggested to keep it
simple. After moving the code back into common/attr, here's how the
generic code would look:

+	ceph)
+		# CephFS does have a limit for the whole set of names+values
+		# attributes in a file. Thus, it is necessary to get the sizes
+		# of all names and values already existing and subtract them
+		# from the (default) maximum, which is 64k.
+		local len=0
+		while read line; do
+			# skip 1st line
+			[ "$line" != "${line#'#'}" ] && continue
+			n=$(echo $line | awk -F"=0x" '{print $1}')
+			v=$(echo $line | awk -F"=0x" '{print $2}')
+			nlen=${#n}
+			vlen=${#v}
+			# total is the sum of the name len and the value len
+			# divided by 2 because we're dumping them in hex format
+			t=$(($nlen + $vlen / 2))
+			len=$(($len + $t))
+		done <<< $(_getfattr -d -e hex $file 2> /dev/null)
+		echo $((65536 - $max_attrval_namelen - $len))
+		;;

so... yeah, I'm not particularly gifted on shell, it could probably be
done in more clever/cleaner ways. Anyway, I'm open to revisit this if
this is the preferred solution.

> Reviewed-by: David Disseldorp <ddiss@suse.de>

Thanks David. (And sorry! I completely forgot to include you on CC as I
had promised.)

Cheers,
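[Editor's note: a standalone sketch of the same name+value length arithmetic
as the proposed ceph case above, condensed into a single awk pass. The
`getfattr -d -e hex` dump is simulated here (the file name and xattr value
are made up) so the numbers can be checked outside of fstests; in the real
test the dump would come from the file under test via the `_getfattr` helper.]

```shell
#!/bin/bash
# Simulated `getfattr -d -e hex` output: a "# file:" header line
# followed by name=0x<hexvalue> pairs.
dump='# file: testfile
user.snrub=0x66697368320a'

used=$(echo "$dump" | awk -F'=0x' '
	/^#/ { next }		# skip the "# file:" header line
	NF == 2 {
		# name length plus value length; the hex dump encodes
		# two characters per byte, hence the division by 2
		total += length($1) + length($2) / 2
	}
	END { print total + 0 }')

# "user.snrub" is 10 bytes of name; 0x66697368320a is 6 bytes of value
echo "$used"	# 16
```

The remaining budget would then be `$((65536 - max_attrval_namelen - used))`,
matching the echo at the end of the proposed ceph case.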
On Thu, 09 Jun 2022 15:54:15 +0100, Luís Henriques wrote:

> David Disseldorp <ddiss@suse.de> writes:
...
> > I take it a more exact calculation would be something like:
> > (64K - $max_attrval_namelen - sizeof(user.snrub="fish2\012"))?
> >
> > Perhaps you could calculate this on the fly for CephFS by passing in the
> > filename and subtracting the `getfattr -d $filename` results... That
> > said, it'd probably get a bit ugly, especially if encoding needs to be
> > taken into account.
>
> In fact, this is *exactly* what I had before Dave suggested to keep it
> simple.

Arg, sorry I missed your previous round.

> After moving the code back into common/attr, here's how the
> generic code would look:
>
> +	ceph)
> +		# CephFS does have a limit for the whole set of names+values
> +		# attributes in a file. Thus, it is necessary to get the sizes
> +		# of all names and values already existing and subtract them
> +		# from the (default) maximum, which is 64k.
> +		local len=0
> +		while read line; do
> +			# skip 1st line
> +			[ "$line" != "${line#'#'}" ] && continue
> +			n=$(echo $line | awk -F"=0x" '{print $1}')
> +			v=$(echo $line | awk -F"=0x" '{print $2}')
> +			nlen=${#n}
> +			vlen=${#v}
> +			# total is the sum of the name len and the value len
> +			# divided by 2 because we're dumping them in hex format
> +			t=$(($nlen + $vlen / 2))
> +			len=$(($len + $t))
> +		done <<< $(_getfattr -d -e hex $file 2> /dev/null)
> +		echo $((65536 - $max_attrval_namelen - $len))
> +		;;
>
> so... yeah, I'm not particularly gifted on shell, it could probably be
> done in more clever/cleaner ways. Anyway, I'm open to revisit this if
> this is the preferred solution.

hmm, I was hoping something like...
  (( 65536 - $max_attrval_namelen - $(getfattr -d $file | _filter | wc -c) ))
would be possible, but getfattr output does make it a bit too messy.

Cheers, David
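[Editor's note: a quick illustration of why the `wc -c` one-liner above is
messy. The `getfattr -d` text output is simulated (made-up file name and
value) since the real thing needs a mounted filesystem; the point is only
that the dump carries formatting overhead beyond the stored name+value bytes.]

```shell
#!/bin/bash
# The plain-text getfattr dump includes the "# file:" header, the '=',
# the surrounding quotes and the newlines, so piping it through `wc -c`
# counts more than the bytes actually stored as xattr names+values.
dump='# file: testfile
user.snrub="fish2"'

text_bytes=$(printf '%s\n' "$dump" | wc -c)
xattr_bytes=16		# 10 bytes of name + 6 bytes of value ("fish2\n")

echo "$text_bytes vs $xattr_bytes"
```

So without a filter that strips the framing (and handles value encoding),
the subtraction would under-estimate the remaining xattr budget.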
On 6/9/22 10:21 PM, David Disseldorp wrote:
> Hi Luís,
>
> On Thu, 9 Jun 2022 11:53:42 +0100, Luís Henriques wrote:
>
>> CephFS doesn't have a maximum xattr size. Instead, it imposes a maximum
>> size for the full set of xattrs names+values, which by default is 64K.
>>
>> This patch fixes the max_attrval_size so that it is slightly < 64K in
>> order to accommodate any already existing xattrs in the file.
>>
>> Signed-off-by: Luís Henriques <lhenriques@suse.de>
>> ---
>>  tests/generic/020 | 10 +++++++++-
>>  1 file changed, 9 insertions(+), 1 deletion(-)
>>
>> diff --git a/tests/generic/020 b/tests/generic/020
>> index d8648e96286e..76f13220fe85 100755
>> --- a/tests/generic/020
>> +++ b/tests/generic/020
>> @@ -128,7 +128,7 @@ _attr_get_max()
>>  	pvfs2)
>>  		max_attrval_size=8192
>>  		;;
>> -	xfs|udf|9p|ceph)
>> +	xfs|udf|9p)
>>  		max_attrval_size=65536
>>  		;;
>>  	bcachefs)
>> @@ -139,6 +139,14 @@ _attr_get_max()
>>  		# the underlying filesystem, so just use the lowest value above.
>>  		max_attrval_size=1024
>>  		;;
>> +	ceph)
>> +		# CephFS does not have a maximum value for attributes. Instead,
>> +		# it imposes a maximum size for the full set of xattrs
>> +		# names+values, which by default is 64K. Set this to a value
>> +		# that is slightly smaller than 64K so that it can accommodate
>> +		# already existing xattrs.
>> +		max_attrval_size=65000
>> +		;;
>
> I take it a more exact calculation would be something like:
> (64K - $max_attrval_namelen - sizeof(user.snrub="fish2\012"))?

Yeah, something like this looks better to me. I am afraid that without
reaching the real max size we couldn't test the real bugs out from ceph,
such as the bug you fixed in the ceph Locker.cc code.

> Perhaps you could calculate this on the fly for CephFS by passing in the
> filename and subtracting the `getfattr -d $filename` results... That
> said, it'd probably get a bit ugly, especially if encoding needs to be
> taken into account.
>
> Reviewed-by: David Disseldorp <ddiss@suse.de>
>
> Cheers, David
David Disseldorp <ddiss@suse.de> writes:

> On Thu, 09 Jun 2022 15:54:15 +0100, Luís Henriques wrote:
>
>> David Disseldorp <ddiss@suse.de> writes:
> ...
>> > I take it a more exact calculation would be something like:
>> > (64K - $max_attrval_namelen - sizeof(user.snrub="fish2\012"))?
>> >
>> > Perhaps you could calculate this on the fly for CephFS by passing in the
>> > filename and subtracting the `getfattr -d $filename` results... That
>> > said, it'd probably get a bit ugly, especially if encoding needs to be
>> > taken into account.
>>
>> In fact, this is *exactly* what I had before Dave suggested to keep it
>> simple.
>
> Arg, sorry I missed your previous round.
>
>> After moving the code back into common/attr, here's how the
>> generic code would look:
>>
>> +	ceph)
>> +		# CephFS does have a limit for the whole set of names+values
>> +		# attributes in a file. Thus, it is necessary to get the sizes
>> +		# of all names and values already existing and subtract them
>> +		# from the (default) maximum, which is 64k.
>> +		local len=0
>> +		while read line; do
>> +			# skip 1st line
>> +			[ "$line" != "${line#'#'}" ] && continue
>> +			n=$(echo $line | awk -F"=0x" '{print $1}')
>> +			v=$(echo $line | awk -F"=0x" '{print $2}')
>> +			nlen=${#n}
>> +			vlen=${#v}
>> +			# total is the sum of the name len and the value len
>> +			# divided by 2 because we're dumping them in hex format
>> +			t=$(($nlen + $vlen / 2))
>> +			len=$(($len + $t))
>> +		done <<< $(_getfattr -d -e hex $file 2> /dev/null)
>> +		echo $((65536 - $max_attrval_namelen - $len))
>> +		;;
>>
>> so... yeah, I'm not particularly gifted on shell, it could probably be
>> done in more clever/cleaner ways. Anyway, I'm open to revisit this if
>> this is the preferred solution.
>
> hmm, I was hoping something like...
> (( 65536 - $max_attrval_namelen - $(getfattr -d $file | _filter | wc -c) ))
> would be possible, but getfattr output does make it a bit too messy.

Yeah, also we must decode the attributes as hex, otherwise we'll miss
non-string values. Anyway, I'll see if I find something better.

Thanks, David.

Cheers,
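[Editor's note: a minimal sketch of the point about non-string values. With
`-e hex` a binary value's exact byte count falls straight out of the encoding
length, which a line-oriented text dump cannot guarantee; the value bytes
below are made up.]

```shell
#!/bin/bash
# A 4-byte binary xattr value containing a NUL, a high byte and a
# newline -- bytes that shells and line-oriented tools would mangle in
# a plain text dump, but that a hex dump measures exactly.
hexval="00ff0a41"			# NUL, 0xff, '\n', 'A'
nbytes=$(( ${#hexval} / 2 ))		# two hex chars per byte
echo "$nbytes"	# 4
```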
Xiubo Li <xiubli@redhat.com> writes:

> On 6/9/22 10:21 PM, David Disseldorp wrote:
>> Hi Luís,
>>
>> On Thu, 9 Jun 2022 11:53:42 +0100, Luís Henriques wrote:
>>
>>> CephFS doesn't have a maximum xattr size. Instead, it imposes a maximum
>>> size for the full set of xattrs names+values, which by default is 64K.
>>>
>>> This patch fixes the max_attrval_size so that it is slightly < 64K in
>>> order to accommodate any already existing xattrs in the file.
>>>
>>> Signed-off-by: Luís Henriques <lhenriques@suse.de>
>>> ---
>>>  tests/generic/020 | 10 +++++++++-
>>>  1 file changed, 9 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/tests/generic/020 b/tests/generic/020
>>> index d8648e96286e..76f13220fe85 100755
>>> --- a/tests/generic/020
>>> +++ b/tests/generic/020
>>> @@ -128,7 +128,7 @@ _attr_get_max()
>>>  	pvfs2)
>>>  		max_attrval_size=8192
>>>  		;;
>>> -	xfs|udf|9p|ceph)
>>> +	xfs|udf|9p)
>>>  		max_attrval_size=65536
>>>  		;;
>>>  	bcachefs)
>>> @@ -139,6 +139,14 @@ _attr_get_max()
>>>  		# the underlying filesystem, so just use the lowest value above.
>>>  		max_attrval_size=1024
>>>  		;;
>>> +	ceph)
>>> +		# CephFS does not have a maximum value for attributes. Instead,
>>> +		# it imposes a maximum size for the full set of xattrs
>>> +		# names+values, which by default is 64K. Set this to a value
>>> +		# that is slightly smaller than 64K so that it can accommodate
>>> +		# already existing xattrs.
>>> +		max_attrval_size=65000
>>> +		;;
>> I take it a more exact calculation would be something like:
>> (64K - $max_attrval_namelen - sizeof(user.snrub="fish2\012"))?
>
> Yeah, something like this looks better to me.

Right, it could be hard-coded. But we'd need to take into account that
the attribute value may not be ASCII. That's why my initial attempt to
fix this was to decode everything in hex.

> I am afraid that without reaching the real max size we couldn't test
> the real bugs out from ceph, such as the bug you fixed in the ceph
> Locker.cc code.

OK, I'll change this to use the exact value.

Thanks, Xiubo.

Cheers,
diff --git a/tests/generic/020 b/tests/generic/020
index d8648e96286e..76f13220fe85 100755
--- a/tests/generic/020
+++ b/tests/generic/020
@@ -128,7 +128,7 @@ _attr_get_max()
 	pvfs2)
 		max_attrval_size=8192
 		;;
-	xfs|udf|9p|ceph)
+	xfs|udf|9p)
 		max_attrval_size=65536
 		;;
 	bcachefs)
@@ -139,6 +139,14 @@ _attr_get_max()
 		# the underlying filesystem, so just use the lowest value above.
 		max_attrval_size=1024
 		;;
+	ceph)
+		# CephFS does not have a maximum value for attributes. Instead,
+		# it imposes a maximum size for the full set of xattrs
+		# names+values, which by default is 64K. Set this to a value
+		# that is slightly smaller than 64K so that it can accommodate
+		# already existing xattrs.
+		max_attrval_size=65000
+		;;
 	*)
 		# Assume max ~1 block of attrs
 		BLOCK_SIZE=`_get_block_size $TEST_DIR`
CephFS doesn't have a maximum xattr size. Instead, it imposes a maximum
size for the full set of xattrs names+values, which by default is 64K.

This patch fixes the max_attrval_size so that it is slightly < 64K in
order to accommodate any already existing xattrs in the file.

Signed-off-by: Luís Henriques <lhenriques@suse.de>
---
 tests/generic/020 | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)