Message ID | 20210524155941.16376-2-rf@opensource.cirrus.com (mailing list archive)
---|---
State | Superseded
Series | Fix truncation warnings from building test_scanf.c
Context | Check | Description
---|---|---
netdev/tree_selection | success | Not a local patch
On 24/05/2021 17.59, Richard Fitzgerald wrote:
> sparse was producing warnings of the form:
>
> sparse: cast truncates bits from constant value (ffff0001 becomes 1)
>
> The problem was that value_representable_in_type() compared unsigned types
> against type_min(). But type_min() is only valid for signed types because
> it is calculating the value -type_max() - 1.

... and casts that to (T), so it does produce 0 as it should. E.g. for
T==unsigned char, we get

#define type_min(T) ((T)((T)-type_max(T)-(T)1))
	(T)((T)-255 - (T)1)
	(T)(-256)

which is 0 of type unsigned char.

> The minimum value of an
> unsigned is obviously 0, so only type_max() need be tested.

That part is true.

But type_min and type_max have been carefully created to produce values
of the appropriate type that actually represent the minimum/maximum
representable in that type, without invoking UB. If this program doesn't
produce the expected results for you, I'd be very interested in knowing
your compiler version:

#include <stdio.h>

#define is_signed_type(type) (((type)(-1)) < (type)1)
#define __type_half_max(type) ((type)1 << (8*sizeof(type) - 1 - is_signed_type(type)))
#define type_max(T) ((T)((__type_half_max(T) - 1) + __type_half_max(T)))
#define type_min(T) ((T)((T)-type_max(T)-(T)1))

int main(int argc, char *argv[])
{
#define p(T, PT, fmt) do { \
	PT vmin = type_min(T); \
	PT vmax = type_max(T); \
	printf("min(%s) = "fmt", max(%s) = "fmt"\n", #T, vmin, #T, vmax); \
} while (0)

	p(_Bool, int, "%d");
	p(unsigned char, int, "%d");
	p(signed char, int, "%d");
	p(unsigned int, unsigned int, "%u");
	p(unsigned long long, unsigned long long, "%llu");
	p(signed long long, signed long long, "%lld");

	return 0;
}

> lib/test_scanf.c | 13 ++++++-------
> 1 file changed, 6 insertions(+), 7 deletions(-)
>
> diff --git a/lib/test_scanf.c b/lib/test_scanf.c
> index 8d577aec6c28..48ff5747a4da 100644
> --- a/lib/test_scanf.c
> +++ b/lib/test_scanf.c
> @@ -187,8 +187,8 @@ static const unsigned long long numbers[] __initconst = {
> #define value_representable_in_type(T, val) \
> (is_signed_type(T) \
>  ? ((long long)(val) >= type_min(T)) && ((long long)(val) <= type_max(T)) \
> - : ((unsigned long long)(val) >= type_min(T)) && \
> -   ((unsigned long long)(val) <= type_max(T)))
> + : ((unsigned long long)(val) <= type_max(T)))

With or without this, these tests are tautological when T is "long long"
or "unsigned long long". I don't know if that is intended. But it won't,
say, exclude ~0ULL if that is in the numbers[] array from being treated
as fitting in a "long long".

Rasmus
On 25/05/2021 10:55, Rasmus Villemoes wrote:
> On 24/05/2021 17.59, Richard Fitzgerald wrote:
>> sparse was producing warnings of the form:
>>
>> sparse: cast truncates bits from constant value (ffff0001 becomes 1)
>>
>> The problem was that value_representable_in_type() compared unsigned types
>> against type_min(). But type_min() is only valid for signed types because
>> it is calculating the value -type_max() - 1.

Ok, I see I was wrong about that. It does in fact work safely. Do you
want me to update the commit message to remove this?

> ... and casts that to (T), so it does produce 0 as it should. E.g. for
> T==unsigned char, we get
>
> #define type_min(T) ((T)((T)-type_max(T)-(T)1))
> 	(T)((T)-255 - (T)1)
> 	(T)(-256)

sparse warns about those truncating casts.

> which is 0 of type unsigned char.
>
>> The minimum value of an
>> unsigned is obviously 0, so only type_max() need be tested.
>
> That part is true.
>
> But type_min and type_max have been carefully created to produce values
> of the appropriate type that actually represent the minimum/maximum
> representable in that type, without invoking UB.
> If this program doesn't
> produce the expected results for you, I'd be very interested in knowing
> your compiler version:

From the kernel test robot report:

    compiler: m68k-linux-gcc (GCC) 9.3.0
    reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # apt-get install sparse
        # sparse version: v0.6.3-341-g8af24329-dirty
        # https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/?id=50f530e176eac808e64416732e54c0686ce2c39b
        git remote add linux-next https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git
        git fetch --no-tags linux-next master
        git checkout 50f530e176eac808e64416732e54c0686ce2c39b
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross

I get the same warnings with Linaro GCC 7.5-2019.12 and sparse
v0.6.3-184-g1b896707.

> #include <stdio.h>
>
> #define is_signed_type(type) (((type)(-1)) < (type)1)
> #define __type_half_max(type) ((type)1 << (8*sizeof(type) - 1 - is_signed_type(type)))
> #define type_max(T) ((T)((__type_half_max(T) - 1) + __type_half_max(T)))
> #define type_min(T) ((T)((T)-type_max(T)-(T)1))
>
> int main(int argc, char *argv[])
> {
> #define p(T, PT, fmt) do { \
> 	PT vmin = type_min(T); \
> 	PT vmax = type_max(T); \
> 	printf("min(%s) = "fmt", max(%s) = "fmt"\n", #T, vmin, #T, vmax); \
> } while (0)
>
> 	p(_Bool, int, "%d");
> 	p(unsigned char, int, "%d");
> 	p(signed char, int, "%d");
> 	p(unsigned int, unsigned int, "%u");
> 	p(unsigned long long, unsigned long long, "%llu");
> 	p(signed long long, signed long long, "%lld");
>
> 	return 0;
> }

>> lib/test_scanf.c | 13 ++++++-------
>> 1 file changed, 6 insertions(+), 7 deletions(-)
>>
>> diff --git a/lib/test_scanf.c b/lib/test_scanf.c
>> index 8d577aec6c28..48ff5747a4da 100644
>> --- a/lib/test_scanf.c
>> +++ b/lib/test_scanf.c
>> @@ -187,8 +187,8 @@ static const unsigned long long numbers[] __initconst = {
>> #define value_representable_in_type(T, val) \
>> (is_signed_type(T) \
>>  ? ((long long)(val) >= type_min(T)) && ((long long)(val) <= type_max(T)) \
>> - : ((unsigned long long)(val) >= type_min(T)) && \
>> -   ((unsigned long long)(val) <= type_max(T)))
>> + : ((unsigned long long)(val) <= type_max(T)))
>
> With or without this, these tests are tautological when T is "long long"
> or "unsigned long long". I don't know if that is intended. But it won't,
> say, exclude ~0ULL if that is in the numbers[] array from being treated
> as fitting in a "long long".

I don't entirely understand your comment. But the point of the test is
to exclude values that can't be represented by a type shorter than
long long or unsigned long long.

> Rasmus
On 25/05/2021 12.10, Richard Fitzgerald wrote:
> On 25/05/2021 10:55, Rasmus Villemoes wrote:
>> On 24/05/2021 17.59, Richard Fitzgerald wrote:
>>> sparse was producing warnings of the form:
>>>
>>> sparse: cast truncates bits from constant value (ffff0001 becomes 1)
>>>
>>> The problem was that value_representable_in_type() compared unsigned
>>> types against type_min(). But type_min() is only valid for signed
>>> types because it is calculating the value -type_max() - 1.
>
> Ok, I see I was wrong about that. It does in fact work safely. Do you
> want me to update the commit message to remove this?

Well, it was the "is only valid for signed types" I reacted to, so yes,
please reword.

>> ... and casts that to (T), so it does produce 0 as it should. E.g. for
>> T==unsigned char, we get
>>
>> #define type_min(T) ((T)((T)-type_max(T)-(T)1))
>> 	(T)((T)-255 - (T)1)
>> 	(T)(-256)
>
> sparse warns about those truncating casts.

That's sad. As the comments and commit log indicate, I was very careful
to avoid gcc complaining, even with various -Wfoo that are not normally
enabled in a kernel build. I think sparse is wrong here. Cc += Luc.

>>> diff --git a/lib/test_scanf.c b/lib/test_scanf.c
>>> index 8d577aec6c28..48ff5747a4da 100644
>>> --- a/lib/test_scanf.c
>>> +++ b/lib/test_scanf.c
>>> @@ -187,8 +187,8 @@ static const unsigned long long numbers[] __initconst = {
>>> #define value_representable_in_type(T, val) \
>>> (is_signed_type(T) \
>>>  ? ((long long)(val) >= type_min(T)) && ((long long)(val) <= type_max(T)) \
>>> - : ((unsigned long long)(val) >= type_min(T)) && \
>>> -   ((unsigned long long)(val) <= type_max(T)))
>>> + : ((unsigned long long)(val) <= type_max(T)))
>>
>> With or without this, these tests are tautological when T is "long long"
>> or "unsigned long long". I don't know if that is intended. But it won't,
>> say, exclude ~0ULL if that is in the numbers[] array from being treated
>> as fitting in a "long long".
>
> I don't entirely understand your comment. But the point of the test is
> to exclude values that can't be represented by a type shorter than
> long long or unsigned long long.

Right. But ~0ULL aka 0xffffffffffffffffULL is in that numbers[] array,
and that value cannot be represented in a "long long". Yet the test
still proceeds to do a test with it: AFAICT it first sprints it with
"%lld", then reads it back with "%lld". The first will produce -1, which
of course does fit, and the test case passes. I was just wondering if
this is really intended.

Rasmus
On Tue, May 25, 2021 at 12:30:02PM +0200, Rasmus Villemoes wrote:
> On 25/05/2021 12.10, Richard Fitzgerald wrote:
> > On 25/05/2021 10:55, Rasmus Villemoes wrote:
> >> On 24/05/2021 17.59, Richard Fitzgerald wrote:
> >>> sparse was producing warnings of the form:
> >>>
> >>> sparse: cast truncates bits from constant value (ffff0001 becomes 1)
> >>>
> >>> The problem was that value_representable_in_type() compared unsigned
> >>> types against type_min(). But type_min() is only valid for signed
> >>> types because it is calculating the value -type_max() - 1.
> >
> > Ok, I see I was wrong about that. It does in fact work safely. Do you
> > want me to update the commit message to remove this?
>
> Well, it was the "is only valid for signed types" I reacted to, so yes,
> please reword.
>
> >> ... and casts that to (T), so it does produce 0 as it should. E.g. for
> >> T==unsigned char, we get
> >>
> >> #define type_min(T) ((T)((T)-type_max(T)-(T)1))
> >> 	(T)((T)-255 - (T)1)
> >> 	(T)(-256)
> >
> > sparse warns about those truncating casts.
>
> That's sad. As the comments and commit log indicate, I was very careful
> to avoid gcc complaining, even with various -Wfoo that are not normally
> enabled in a kernel build. I think sparse is wrong here. Cc += Luc.

Well, there is a cast and it effectively truncates the upper bits of the
constant, so sparse is kinda right but ... months ago I once investigated
these warnings and in all cases but one the use of the cast was legit.
Most of them were for:

1) a 32-bit constant that was (via some macro) split as two 16-bit
   constants which were then written to some 16-bit HW registers. The
   problem would not happen if the macro used an AND mask instead of a
   cast, but it seems that people tend to prefer the cast. I think it's
   the wrong choice but eh.

2) some generic macro that does things like:
	#define macro(size, value) \
		switch (size) { \
		case 1: ... (u8)value; \
		case 2: ... (u16)value; \
		...
	x = macro(sizeof(int), 0xffff0001);
   So, each time the macro is used for 32-bit, the code still contains a
   cast of the value to some smaller type, even if all uses are OK. The
   problem here is that these warnings are issued by sparse well before
   it can know that the code is dead, and by the time it knows, these
   casts are already eliminated.

I'm sure this warning can sometimes catch a real problem but most of the
time it doesn't; they're just false warnings. I think it would be best
to disable this warning by default, but IIRC this has already been
discussed (years ago) and there was some opposition. Maybe enabling it
only at W=2 or something. I dunno.

-- Luc
diff --git a/lib/test_scanf.c b/lib/test_scanf.c
index 8d577aec6c28..48ff5747a4da 100644
--- a/lib/test_scanf.c
+++ b/lib/test_scanf.c
@@ -187,8 +187,8 @@ static const unsigned long long numbers[] __initconst = {
 #define value_representable_in_type(T, val)				\
 (is_signed_type(T)							\
 ? ((long long)(val) >= type_min(T)) && ((long long)(val) <= type_max(T)) \
-: ((unsigned long long)(val) >= type_min(T)) &&			\
-  ((unsigned long long)(val) <= type_max(T)))
+: ((unsigned long long)(val) <= type_max(T)))
+
 
 #define test_one_number(T, gen_fmt, scan_fmt, val, fn)			\
 do {									\
@@ -204,12 +204,11 @@ do {									\
 	int i;								\
 									\
 	for (i = 0; i < ARRAY_SIZE(numbers); i++) {			\
-		if (!value_representable_in_type(T, numbers[i]))	\
-			continue;					\
-									\
-		test_one_number(T, gen_fmt, scan_fmt, numbers[i], fn);	\
+		if (value_representable_in_type(T, numbers[i]))		\
+			test_one_number(T, gen_fmt, scan_fmt,		\
+					numbers[i], fn);		\
 									\
-		if (is_signed_type(T))					\
+		if (value_representable_in_type(T, -numbers[i]))	\
 			test_one_number(T, gen_fmt, scan_fmt,		\
 					-numbers[i], fn);		\
 	}								\
sparse was producing warnings of the form:

 sparse: cast truncates bits from constant value (ffff0001 becomes 1)

The problem was that value_representable_in_type() compared unsigned types
against type_min(). But type_min() is only valid for signed types because
it is calculating the value -type_max() - 1. The minimum value of an
unsigned is obviously 0, so only type_max() need be tested.

This patch also takes the opportunity to clean up the implementation of
simple_numbers_loop() to use a common pattern for the positive and
negative test.

Reported-by: kernel test robot <lkp@intel.com>
Fixes: 17aadada455d ("lib: test_scanf: Add tests for sscanf number conversion")
Signed-off-by: Richard Fitzgerald <rf@opensource.cirrus.com>
---
 lib/test_scanf.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)