| Message ID | 20231009151026.66145-4-aleksander.lobakin@intel.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | ip_tunnel: convert __be16 tunnel flags to bitmaps |
On Mon, Oct 09, 2023 at 05:10:15PM +0200, Alexander Lobakin wrote:
> Since commit b03fc1173c0c ("bitops: let optimize out non-atomic bitops
> on compile-time constants"), the compilers are able to expand inline
> bitmap operations to compile-time initializers when possible.
> However, during the round of replacement if-__set-else-__clear with
> __assign_bit() as per Andy's advice, bloat-o-meter showed +1024 bytes
> difference in object code size for one module (even one function),
> where the pattern:
>
>	DECLARE_BITMAP(foo) = { }; // on the stack, zeroed
>
>	if (a)
>		__set_bit(const_bit_num, foo);
>	if (b)
>		__set_bit(another_const_bit_num, foo);
>	...
>
> is heavily used, although there should be no difference: the bitmap is
> zeroed, so the second half of __assign_bit() should be compiled-out as
> a no-op.
> I either missed the fact that __assign_bit() has bitmap pointer marked
> as `volatile` (as we usually do for bitmaps) or was hoping that the

No, we usually don't. Atomic ops on individual bits are a notable
exception for bitmaps, as the comment for generic_test_bit() says, for
example:

	/*
	 * Unlike the bitops with the '__' prefix above, this one *is* atomic,
	 * so `volatile` must always stay here with no cast-aways. See
	 * `Documentation/atomic_bitops.txt` for the details.
	 */

For non-atomic single-bit operations and all multi-bit ops, volatile is
useless, and generic___test_and_set_bit() in the same file casts the
*addr to a plain 'unsigned long *'.

> compilers would at least try to look past the `volatile` for
> __always_inline functions. Anyhow, due to that attribute, the compilers
> were always compiling the whole expression and none of the mentioned
> compile-time optimizations were working.
>
> Convert __assign_bit() to a macro since it's a very simple if-else and
> all of the checks are performed inside __set_bit() and __clear_bit(),
> thus that wrapper has to be as transparent as possible. After that
> change, despite it showing only -20 bytes of change for vmlinux (since
> it's still relatively unpopular), no drastic code size changes happen
> when replacing if-set-else-clear for onstack bitmaps with
> __assign_bit(), meaning the compiler now expands them to the actual
> operations with all the expected optimizations.
>
> Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
> Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
> ---
>  include/linux/bitops.h | 10 ++--------
>  1 file changed, 2 insertions(+), 8 deletions(-)
>
> diff --git a/include/linux/bitops.h b/include/linux/bitops.h
> index e0cd09eb91cd..f98f4fd1047f 100644
> --- a/include/linux/bitops.h
> +++ b/include/linux/bitops.h
> @@ -284,14 +284,8 @@ static __always_inline void assign_bit(long nr, volatile unsigned long *addr,
>  		clear_bit(nr, addr);
>  }
>
> -static __always_inline void __assign_bit(long nr, volatile unsigned long *addr,
> -					 bool value)
> -{
> -	if (value)
> -		__set_bit(nr, addr);
> -	else
> -		__clear_bit(nr, addr);
> -}
> +#define __assign_bit(nr, addr, value)					\
> +	((value) ? __set_bit(nr, addr) : __clear_bit(nr, addr))

Can you protect nr and addr with braces just as well?
Can you convert the atomic version too, to keep them synchronized?

>
> /**
>  * __ptr_set_bit - Set bit in a pointer's value
> --
> 2.41.0
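To make the `volatile` point above concrete, here is a minimal userspace analog (not kernel code; the helper names are invented for illustration): an access through a volatile-qualified pointer has to be emitted even when it is provably a no-op, while the plain-pointer version can be folded away, which is roughly the effect the commit message describes for the `__clear_bit()` half of the old inline on a known-zero on-stack bitmap.

```c
/*
 * Simplified userspace analog, not kernel code; helper names are made up.
 */
static inline void clear_bit_volatile(volatile unsigned long *addr, int nr)
{
	/* volatile access: must be emitted even if it changes nothing */
	*addr &= ~(1UL << nr);
}

static inline void clear_bit_plain(unsigned long *addr, int nr)
{
	/* plain access: folded away when the compiler can prove it's a no-op */
	*addr &= ~(1UL << nr);
}

unsigned long demo(void)
{
	unsigned long map = 0;		/* known-zero "bitmap" on the stack */

	clear_bit_volatile(&map, 3);	/* load/store still generated */
	clear_bit_plain(&map, 3);	/* compiles to nothing */

	return map;
}
```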
From: Yury Norov <yury.norov@gmail.com>
Date: Mon, 9 Oct 2023 09:18:40 -0700

> On Mon, Oct 09, 2023 at 05:10:15PM +0200, Alexander Lobakin wrote:

[...]

>> -static __always_inline void __assign_bit(long nr, volatile unsigned long *addr,
>> -					 bool value)
>> -{
>> -	if (value)
>> -		__set_bit(nr, addr);
>> -	else
>> -		__clear_bit(nr, addr);
>> -}
>> +#define __assign_bit(nr, addr, value)					\
>> +	((value) ? __set_bit(nr, addr) : __clear_bit(nr, addr))
>
> Can you protect nr and addr with braces just as well?
> Can you convert the atomic version too, to keep them synchronized?

+ for both. I didn't convert assign_bit() as I thought it wouldn't give
any optimization improvements, but yeah, let the compiler decide.

>
>>
>> /**
>>  * __ptr_set_bit - Set bit in a pointer's value
>> --
>> 2.41.0

Thanks,
Olek
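With the parenthesization asked for above and the atomic variant converted as well, the two helpers in include/linux/bitops.h could end up looking roughly like the sketch below; this is only a guess at the follow-up revision, not the actual next version from the series.

```c
/* Hypothetical follow-up sketch, not the posted v-next patch. */
#define assign_bit(nr, addr, value)					\
	((value) ? set_bit((nr), (addr)) : clear_bit((nr), (addr)))

#define __assign_bit(nr, addr, value)					\
	((value) ? __set_bit((nr), (addr)) : __clear_bit((nr), (addr)))
```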
```diff
diff --git a/include/linux/bitops.h b/include/linux/bitops.h
index e0cd09eb91cd..f98f4fd1047f 100644
--- a/include/linux/bitops.h
+++ b/include/linux/bitops.h
@@ -284,14 +284,8 @@ static __always_inline void assign_bit(long nr, volatile unsigned long *addr,
 		clear_bit(nr, addr);
 }
 
-static __always_inline void __assign_bit(long nr, volatile unsigned long *addr,
-					 bool value)
-{
-	if (value)
-		__set_bit(nr, addr);
-	else
-		__clear_bit(nr, addr);
-}
+#define __assign_bit(nr, addr, value)					\
+	((value) ? __set_bit(nr, addr) : __clear_bit(nr, addr))
 
 /**
  * __ptr_set_bit - Set bit in a pointer's value
```
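As a usage sketch, the on-stack pattern from the commit message can then go through `__assign_bit()` without the size penalty; the function, bit numbers, and parameters below are invented for illustration and are not taken from the series.

```c
#include <linux/bitmap.h>
#include <linux/bitops.h>

/* Illustrative only: names and bit numbers are made up. */
static void fill_flags(unsigned long *dst, bool a, bool b)
{
	DECLARE_BITMAP(foo, BITS_PER_LONG) = { };	/* on the stack, zeroed */

	/*
	 * With __assign_bit() expanding directly to __set_bit()/__clear_bit(),
	 * the __clear_bit() halves should compile out on the zeroed bitmap;
	 * when 'a' and 'b' are compile-time constants, the whole bitmap can
	 * fold into a constant initializer.
	 */
	__assign_bit(3, foo, a);
	__assign_bit(7, foo, b);

	bitmap_copy(dst, foo, BITS_PER_LONG);
}
```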