[RESEND,1/3] x86: Add task_struct flag to force SIGBUS on MCE

Message ID 20240723144752.1478226-1-andrew.zaborowski@intel.com (mailing list archive)
State New
Series [RESEND,1/3] x86: Add task_struct flag to force SIGBUS on MCE

Commit Message

Andrew Zaborowski July 23, 2024, 2:47 p.m. UTC
Uncorrected memory errors for user pages are signaled to processes
using SIGBUS or, if the error happens in a syscall, an error retval
from the syscall.  The SIGBUS is documented in
Documentation/mm/hwpoison.rst#failure-recovery-modes

But there are corner cases where we cannot or don't want to return a
plain error from the syscall.  Subsequent commits cover two such cases:
execve and rseq.  Current code, in both places, will kill the task with a
SIGSEGV on error.  While not explicitly stated, it can be argued that it
should be a SIGBUS, for consistency and for the benefit of the userspace
signal handlers.  Even if the process cannot handle the signal, perhaps
the parent process can.  This was the case in the scenario that
motivated this patch.

In both cases, the architecture's exception handler (MCE handler on x86)
will queue a call to memory_failure().  This doesn't work because the
syscall-specific code sees the -EFAULT and terminates the task before
the queued work runs.

To fix this: 1. let pending work run in the error cases in both places.

And 2. on MCE, ensure memory_failure() is passed MF_ACTION_REQUIRED so
that the SIGBUS is queued.  Normally, when the MCE hits in a syscall,
a fixup of the return IP and a call to kill_me_never() are what we want.
But in this case it's necessary to queue kill_me_maybe(), which sets
MF_ACTION_REQUIRED, the flag checked by memory_failure().

To do this the syscall code will set current->kill_on_efault, a new
task_struct flag.  That flag is checked in
arch/x86/kernel/cpu/mce/core.c:do_machine_check().

Note: the flag is not x86 specific even if only x86 handling is being
added here.  The definition could be guarded by #ifdef
CONFIG_MEMORY_FAILURE, but it would then need set/clear utilities.
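
As an illustration (not part of this patch: dst, uaddr and len are
hypothetical, while task_work_pending()/task_work_run() are existing
kernel APIs), a syscall path that must not return a plain -EFAULT
could use the flag like this:

	current->kill_on_efault = 1;
	if (copy_from_user(dst, uaddr, len)) {
		current->kill_on_efault = 0;
		/* Runs kill_me_maybe() -> memory_failure(), which sends
		 * the SIGBUS before the caller terminates the task. */
		if (task_work_pending(current))
			task_work_run();
		return -EFAULT;
	}
	current->kill_on_efault = 0;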

Signed-off-by: Andrew Zaborowski <andrew.zaborowski@intel.com>
---
Resending through an SMTP server that won't add the company footer.

This is a v2 of
https://lore.kernel.org/linux-mm/20240501015340.3014724-1-andrew.zaborowski@intel.com/
In v1, the existing flag current->in_execve was reused instead
of adding a new one.  Kees Cook commented in
https://lore.kernel.org/linux-mm/202405010915.465AF19@keescook/ that
current->in_execve is going away.  Lacking a better idea and seeing
that execve() and rseq() would benefit from using a common mechanism, I
decided to add this new flag.

Perhaps, with a better name, current->kill_on_efault could replace
bprm->point_of_no_return to offset the pain of having this extra flag.
---
 arch/x86/kernel/cpu/mce/core.c | 18 +++++++++++++++++-
 include/linux/sched.h          |  2 ++
 2 files changed, 19 insertions(+), 1 deletion(-)

Comments

Kees Cook Aug. 6, 2024, 4:36 a.m. UTC | #1
On Tue, Jul 23, 2024 at 04:47:50PM +0200, Andrew Zaborowski wrote:
> [...]
> ---
>  arch/x86/kernel/cpu/mce/core.c | 18 +++++++++++++++++-

Since this touches arch/x86/, can an x86 maintainer review this? I can
carry this via the execve tree...

-Kees

Borislav Petkov Aug. 6, 2024, 8:35 a.m. UTC | #2
On August 6, 2024 7:36:40 AM GMT+03:00, Kees Cook <kees@kernel.org> wrote:
>Since this touches arch/x86/, can an x86 maintainer review this? I can
>carry this via the execve tree...

No, we can't until the smoke from the handwaving clears:

>> While not explicitly stated, it can be argued that it
>> should be a SIGBUS, for consistency and for the benefit of the userspace
>> signal handlers.  Even if the process cannot handle the signal, perhaps
>> the parent process can.  This was the case in the scenario that
>> motivated this patch.

I have no clue what that is trying to tell me.
Andrew Zaborowski Aug. 8, 2024, 12:01 a.m. UTC | #3
[Sorry if I'm breaking threading]

Borislav Petkov <bp@alien8.de> wrote:
> On August 6, 2024 7:36:40 AM GMT+03:00, Kees Cook <kees@kernel.org> wrote:
> >> While not explicitly stated, it can be argued that it
> >> should be a SIGBUS, for consistency and for the benefit of the userspace
> >> signal handlers.  Even if the process cannot handle the signal, perhaps
> >> the parent process can.  This was the case in the scenario that
> >> motivated this patch.
>
> I have no clue what that is trying to tell me.

Documentation/mm/hwpoison.rst#failure-recovery-modes documents the
SIGBUS-on-memory-failure behavior:

       Send SIGBUS when the application runs into the corrupted page.

There may be other docs that specify this behavior but I didn't find
them.  To me this implies that when userspace code directly accesses
poisoned memory it should receive a SIGBUS, but I'm not sure if the
wording also implies a SIGBUS on a kernel access on behalf of
userspace, i.e. in a syscall.  Hence why I said "not explicitly stated".

Now I'm not sure it matters.  Logically if a program wants to handle
memory errors, or its parent process wants to know the child was
killed because of a HW problem, it probably doesn't care whether the
page was accessed directly or indirectly, so it expects a SIGBUS
always.  Tony Luck also seemed to agree this was the expected behavior
when commenting on this on a different forum.
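
For reference, a minimal userspace sketch of the handler such a
program would install.  BUS_MCEERR_AR/BUS_MCEERR_AO and si_addr are
the documented siginfo interface; the handler name and exit policy
are illustrative:

	#define _GNU_SOURCE
	#include <signal.h>
	#include <stdio.h>
	#include <unistd.h>

	static void on_sigbus(int sig, siginfo_t *si, void *ctx)
	{
		/* AR: synchronous access to poison; AO: async notification */
		if (si->si_code == BUS_MCEERR_AR || si->si_code == BUS_MCEERR_AO)
			fprintf(stderr, "hwpoison at %p\n", si->si_addr);
			/* (fprintf is not async-signal-safe; fine for a demo) */
		_exit(1);
	}

	int main(void)
	{
		struct sigaction sa = {
			.sa_sigaction = on_sigbus,
			.sa_flags = SA_SIGINFO,
		};

		sigaction(SIGBUS, &sa, NULL);
		/* ... run workload; touching a poisoned page now reaches
		 * on_sigbus() instead of the default SIGBUS death ... */
		return 0;
	}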

Best regards
Borislav Petkov Aug. 8, 2024, 2:56 p.m. UTC | #4
On Thu, Aug 08, 2024 at 02:01:43AM +0200, Andrew Zaborowski wrote:
> Now I'm not sure it matters.

I'm not sure it matters either. You're adding all that code and a
task_struct member just because the kernel sends SIGBUS on a memory
failure. Oh well.

How is that more beneficial for the overall recovery strategy than
killing the current task? IOW, what is the real, practical advantage of
this and why do we want to support it indefinitely?

Thx.
Andrew Zaborowski Aug. 9, 2024, 1:22 a.m. UTC | #5
On Thu, 8 Aug 2024 at 16:55, Borislav Petkov <bp@alien8.de> wrote:
> I'm not sure it matters either. You're adding all that code and
> task_struct member just because the kernel sends SIGBUS on a memory
> failure. Oh well.
>
> How is that more beneficial for the overall recovery strategy than
> killing the current task? IOW, what is the real, practical advantage of
> this and why do we want to support it indefinitely?

I don't have a "real world" use case; we hit these two bugs in HW
testing.  Qemu relies on the SIGBUS logic, but the execve and rseq
cases cannot be recovered from; the main benefit of sending the
correct signal is perhaps information to the user.

If this cannot be fixed then ideally it should be documented.

As for "all that code", the memory failure handling code is of a
certain size and this is a comparatively tiny fix for a tiny issue.

Best regards
Borislav Petkov Aug. 9, 2024, 8:34 a.m. UTC | #6
On Fri, Aug 09, 2024 at 03:22:19AM +0200, Andrew Zaborowski wrote:
> I don't have a "real world" use case, we hit these two bugs in HW
> testing.

You inject MCEs or what testing do you mean here?

In what pages? I presume user...

So instead of the process getting killed, you want to return SIGBUS
because, "hey caller, your process encountered an MCE while being
attempted to be executed"?

> Qemu relies on the SIGBUS logic, but the execve and rseq
> cases cannot be recovered from; the main benefit of sending the
> correct signal is perhaps information to the user.

You will have that info in the logs - we're usually very loud when we
get an MCE...

> If this cannot be fixed then ideally it should be documented.

I'm not convinced at all that jumping through hoops you're doing, is
worth the effort.

> As for "all that code", the memory failure handling code is of a
> certain size and this is a comparatively tiny fix for a tiny issue.

No, I didn't say anything about the memory failure code - it is about
supporting that obscure use case and the additional logic you're adding
to the #MC handler which looks like a real mess already and us having to
support that use case indefinitely.

So why does it matter if a process which is being executed and gets an
MCE beyond the point of no return absolutely needs to return SIGBUS vs
it getting killed and you still get an MCE logged on the machine, in
either case?

I mean, I would understand it when the parent process can do something
meaningful about it but if not, why does it matter at all?

Thx.
Andrew Zaborowski Aug. 10, 2024, 1:20 a.m. UTC | #7
Borislav Petkov <bp@alien8.de> wrote:
> So instead of the process getting killed, you want to return SIGBUS
> because, "hey caller, your process encountered an MCE while being
> attempted to be executed"?

The tests could be changed to expect the SIGSEGV, but in this case it
seemed that the test was good and the kernel was misbehaving.  One of
the authors of the MCE handling code confirmed that.

>
> > Qemu relies on the SIGBUS logic, but the execve and rseq
> > cases cannot be recovered from; the main benefit of sending the
> > correct signal is perhaps information to the user.
>
> You will have that info in the logs - we're usually very loud when we
> get an MCE...

True, though that's hard to link to a specific process crash.  It's
also hard to extract the page address in the process's address space
from that, although I don't think there's a current use case.

>
> > If this cannot be fixed then ideally it should be documented.
>
> I'm not convinced at all that jumping through hoops you're doing, is
> worth the effort.

That could be, again this could be fixed in the documentation instead.

>
> > As for "all that code", the memory failure handling code is of a
> > certain size and this is a comparatively tiny fix for a tiny issue.
>
> No, I didn't say anything about the memory failure code - it is about

I was replying to your comment about the size of the change.

> supporting that obscure use case and the additional logic you're adding
> to the #MC handler which looks like a real mess already and us having to
> support that use case indefinitely.

Supporting something generally includes supporting the common and the
obscure cases.  From the user's point of view the kernel is committed
to supporting these scenarios indefinitely, or until the deprecation of
the SIGBUS-on-memory-error logic, and simply has a bug.

>
> So why does it matter if a process which is being executed and gets an
> MCE beyond the point of no return absolutely needs to return SIGBUS vs
> it getting killed and you still get an MCE logged on the machine, in
> either case?

A SIGSEGV strongly implies a problem with the program being run, not
with a specific instance of it.  A SIGBUS may not be the program's
fault, as in this case.

In these tests the workload was simply relaunched on a SIGBUS which
sounded fair to me.  A qemu VM could similarly be restarted on an
unrecoverable MCE in a page that doesn't belong to the VM but to qemu
itself.
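
For illustration, a sketch of such a relaunching supervisor
(hypothetical code, assuming the relaunch-on-SIGBUS policy described
above):

	#include <signal.h>
	#include <stdio.h>
	#include <sys/wait.h>
	#include <unistd.h>

	/* Relaunch only when the child died from a hardware error
	 * (SIGBUS), not from a program bug (SIGSEGV). */
	static int run_once(char **argv)
	{
		int status;
		pid_t pid = fork();

		if (pid == 0) {
			execv(argv[0], argv);
			_exit(127);
		}
		if (waitpid(pid, &status, 0) < 0)
			return -1;
		return WIFSIGNALED(status) ? WTERMSIG(status) : 0;
	}

	int main(int argc, char **argv)
	{
		if (argc < 2)
			return 1;
		while (run_once(&argv[1]) == SIGBUS)
			fprintf(stderr, "memory error, relaunching\n");
		return 0;
	}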

Best regards
Borislav Petkov Aug. 10, 2024, 3:21 a.m. UTC | #8
On Sat, Aug 10, 2024 at 03:20:10AM +0200, Andrew Zaborowski wrote:
> True, though that's hard to link to a specific process crash.

The process name when the MCE gets reported is *actually* there in the
splat: current->comm.

> I was replying to your comment about the size of the change.

My comment was about the code you're adding:

 arch/x86/kernel/cpu/mce/core.c | 18 +++++++++++++++++-
 fs/exec.c                      | 15 ++++++++++++---
 include/linux/sched.h          |  2 ++
 kernel/rseq.c                  | 25 +++++++++++++++++++++----
 4 files changed, 52 insertions(+), 8 deletions(-)

If it is in drivers, I don't care. But it is in main paths and for
a questionable use case.

> Supporting something generally includes supporting the common and the
> obscure cases.

Bullshit. We certainly won't support some obscure use cases just
because.

> From the user's point of view the kernel is committed to supporting
> these scenarios indefinitely, or until the deprecation of the
> SIGBUS-on-memory-error logic, and simply has a bug.

And lemme repeat my question:

So why does it matter if a process which is being executed and gets an
MCE beyond the point of no return absolutely needs to return SIGBUS vs
it getting killed and you still get an MCE logged on the machine, in
either case?

Bug which is seen by whom or what?

If a process dies, it dies.

> In these tests the workload was simply relaunched on a SIGBUS which
> sounded fair to me.  A qemu VM could similarly be restarted on an
> unrecoverable MCE in a page that doesn't belong to the VM but to qemu
> itself.

If an MCE hits at that particular point once in a blue moon, I don't
care. If it is a special use case where you inject an MCE right then and
there to test recovery actions, then that's perhaps a different story.

Usually, a lot of things can be done. As long as there's a valid use
case to support. But since you hesitate to explain what exactly we're
supporting, I can't help you.
Andrew Zaborowski Aug. 10, 2024, 3:55 a.m. UTC | #9
On Sat, 10 Aug 2024 at 05:21, Borislav Petkov <bp@alien8.de> wrote:
> On Sat, Aug 10, 2024 at 03:20:10AM +0200, Andrew Zaborowski wrote:
> > True, though that's hard to link to a specific process crash.
>
> The process name when the MCE gets reported is *actually* there in the
> splat: current->comm.

That's the current process.  The list of processes to be signalled is
determined later and not in a simple way.

>
> > Supporting something generally includes supporting the common and the
> > obscure cases.
>
> Bullshit. We certainly won't support some obscure use cases just
> because.

It's simple reliability: if you support something only sometimes, no
one can rely on it, at least not without a deep analysis of their
kernel code snapshot.

>
> > From the user's point of view the kernel has been committed to
> > supporting these scenarios indefinitely or until the deprecation of
> > the SIGBUS-on-memory-error logic, and simply has a bug.
>
> And lemme repeat my question:
>
> So why does it matter if a process which is being executed and gets an
> MCE beyond the point of no return absolutely needs to return SIGBUS vs
> it getting killed and you still get an MCE logged on the machine, in
> either case?
>
> Bug which is seen by whom or what?

In the case I know of, by the parent process: it bases a decision on
the signal number and on the expected behavior from the kernel, even
if that behavior is not unambiguously documented.

Like I said, it can be worked around in userspace; my change doesn't
*enable* the use case.

Best regards
Borislav Petkov Aug. 10, 2024, 9:25 a.m. UTC | #10
So,

to sum up the thread here:

1. Arguably, tasks which have encountered a memory error should get sent
   a SIGBUS.

2. if a valid use case appears, the proper fix should be not to sprinkle
   special-handling code in random syscalls in a whack-a-mole fashion
   but to note somewhere in the task struct that the task has
   encountered a memory error and then *override* the signal sent to it
   with a SIGBUS.

   I.e., this should be a generic solution, if anything.
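
For concreteness, a rough sketch of that generic approach (entirely
hypothetical: the field name and hook placement are illustrative, not
a reviewed design):

	/* set once, e.g. from the #MC / memory_failure() path: */
	current->mem_err = 1;

	/* in generic fatal-signal delivery, e.g. in get_signal(): */
	if (unlikely(current->mem_err) && sig_fatal(current, signr))
		signr = SIGBUS;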

Thx.

Patch

diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index ad0623b65..13f2ace3d 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -1611,7 +1611,7 @@ noinstr void do_machine_check(struct pt_regs *regs)
 			if (p)
 				SetPageHWPoison(p);
 		}
-	} else {
+	} else if (!current->kill_on_efault) {
 		/*
 		 * Handle an MCE which has happened in kernel space but from
 		 * which the kernel can recover: ex_has_fault_handler() has
@@ -1628,6 +1628,22 @@ noinstr void do_machine_check(struct pt_regs *regs)
 
 		if (m.kflags & MCE_IN_KERNEL_COPYIN)
 			queue_task_work(&m, msg, kill_me_never);
+	} else {
+		/*
+		 * Even with recovery code extra handling is required when
+		 * we're not returning to userspace after error (e.g. in
+		 * execve() beyond the point of no return) to ensure that
+		 * a SIGBUS is delivered.
+		 */
+		if (m.kflags & MCE_IN_KERNEL_RECOV) {
+			if (!fixup_exception(regs, X86_TRAP_MC, 0, 0))
+				mce_panic("Failed kernel mode recovery", &m, msg);
+		}
+
+		if (!mce_usable_address(&m))
+			queue_task_work(&m, msg, kill_me_now);
+		else
+			queue_task_work(&m, msg, kill_me_maybe);
 	}
 
 out:
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 61591ac6e..0cde1ba11 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -975,6 +975,8 @@ struct task_struct {
 	/* delay due to memory thrashing */
 	unsigned                        in_thrashing:1;
 #endif
+	/* Kill task on user memory access error */
+	unsigned                        kill_on_efault:1;
 
 	unsigned long			atomic_flags; /* Flags requiring atomic access. */