
[10/82] locking/atomic/x86: Silence intentional wrapping addition

Message ID 20240123002814.1396804-10-keescook@chromium.org (mailing list archive)
State Changes Requested
Headers show
Series overflow: Refactor open-coded arithmetic wrap-around | expand

Commit Message

Kees Cook Jan. 23, 2024, 12:26 a.m. UTC
Annotate atomic_add_return() to avoid signed overflow instrumentation.
It is expected to wrap around.
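[Editor's note: for context, the wrap-around being annotated is ordinary two's-complement addition. A minimal user-space sketch of the intended semantics — illustrative only, not kernel code; the cast back to int is implementation-defined before C23 but behaves as two's-complement wrap on gcc and clang:]

```c
#include <limits.h>

/*
 * Illustrative wrap-around signed addition: performing the add in
 * unsigned arithmetic is well-defined C, and the result converted
 * back to int wraps on gcc/clang. This mirrors what the atomic op
 * does at the machine level, which is why the kernel code wants the
 * signed-overflow sanitizer silenced here.
 */
static int wrapping_add_int(int a, int b)
{
	return (int)((unsigned int)a + (unsigned int)b);
}
```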

Cc: Will Deacon <will@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: x86@kernel.org
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/x86/include/asm/atomic.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Mark Rutland Jan. 23, 2024, 9:27 a.m. UTC | #1
On Mon, Jan 22, 2024 at 04:26:45PM -0800, Kees Cook wrote:
> Annotate atomic_add_return() to avoid signed overflow instrumentation.
> It is expected to wrap around.
> 
> [...]
> 
> diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h
> index 55a55ec04350..4120cdd87da8 100644
> --- a/arch/x86/include/asm/atomic.h
> +++ b/arch/x86/include/asm/atomic.h
> @@ -80,7 +80,7 @@ static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v)
>  }
>  #define arch_atomic_add_negative arch_atomic_add_negative
>  
> -static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
> +static __always_inline __signed_wrap int arch_atomic_add_return(int i, atomic_t *v)
>  {
>  	return i + xadd(&v->counter, i);
>  }

I think that here (and in the arm64 patch) it'd be better to use add_wrap() on
the specific statement, i.e. have:

static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
{
	return add_wrap(i, xadd(&v->counter, i));
}

... since otherwise the annotation could apply to the '+' or something else
(e.g. if the xadd() part is a special macro), and the annotation might
unexpectedly hide things if we add other statements here in future.

Mark.
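[Editor's note: the add_wrap() helper Mark refers to comes from Kees's series; a hypothetical sketch of how such a helper could be built, assuming compiler support for __builtin_add_overflow (gcc/clang) — the builtin always stores the wrapped result and is not instrumented by the signed-overflow sanitizer. Names here are illustrative, not the kernel's actual definition:]

```c
#include <limits.h>

/*
 * Sketch of an add_wrap()-style helper: __builtin_add_overflow
 * computes the two's-complement-wrapped sum and stores it in *sum
 * whether or not the addition overflows; its boolean return value
 * (did it overflow?) is deliberately ignored here.
 */
static inline int add_wrap(int a, int b)
{
	int sum;

	(void)__builtin_add_overflow(a, b, &sum);
	return sum;
}
```

Confining the wrap annotation to the single expression, as Mark suggests, keeps the sanitizer armed for any other arithmetic later added to the function body.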

> -- 
> 2.34.1
>
Kees Cook Jan. 23, 2024, 9:54 p.m. UTC | #2
On Tue, Jan 23, 2024 at 09:27:26AM +0000, Mark Rutland wrote:
> On Mon, Jan 22, 2024 at 04:26:45PM -0800, Kees Cook wrote:
> > Annotate atomic_add_return() to avoid signed overflow instrumentation.
> > It is expected to wrap around.
> > 
> > [...]
> > 
> > diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h
> > index 55a55ec04350..4120cdd87da8 100644
> > --- a/arch/x86/include/asm/atomic.h
> > +++ b/arch/x86/include/asm/atomic.h
> > @@ -80,7 +80,7 @@ static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v)
> >  }
> >  #define arch_atomic_add_negative arch_atomic_add_negative
> >  
> > -static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
> > +static __always_inline __signed_wrap int arch_atomic_add_return(int i, atomic_t *v)
> >  {
> >  	return i + xadd(&v->counter, i);
> >  }
> 
> I think that here (and in the arm64 patch) it'd be better to use add_wrap() on
> the specific statement, i.e. have:
> 
> static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
> {
> 	return add_wrap(i, xadd(&v->counter, i));
> }
> 
> ... since otherwise the annotation could apply to the '+' or something else
> (e.g. if the xadd() part is a special macro), and the annotation might
> unexpectedly hide things if we add other statements here in future.

Okay, sure, I can do that. I may have some header inclusion problems,
but I'll give it a shot.
Patch

diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h
index 55a55ec04350..4120cdd87da8 100644
--- a/arch/x86/include/asm/atomic.h
+++ b/arch/x86/include/asm/atomic.h
@@ -80,7 +80,7 @@ static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v)
 }
 #define arch_atomic_add_negative arch_atomic_add_negative
 
-static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
+static __always_inline __signed_wrap int arch_atomic_add_return(int i, atomic_t *v)
 {
 	return i + xadd(&v->counter, i);
 }