
[v9,2/2] ACPI: APEI: handle synchronous exceptions in task work

Message ID 20231007072818.58951-3-xueshuai@linux.alibaba.com (mailing list archive)
State New

Commit Message

Shuai Xue Oct. 7, 2023, 7:28 a.m. UTC
Hardware errors could be signaled by asynchronous interrupt, e.g. when an
error is detected by a background scrubber, or signaled by synchronous
exception, e.g. when an uncorrected error is consumed. Both synchronous and
asynchronous errors are queued and handled by a dedicated kthread in a
workqueue.

commit 7f17b4a121d0 ("ACPI: APEI: Kick the memory_failure() queue for
synchronous errors") keeps track of whether memory_failure() work was
queued, and makes task_work pending to flush out the workqueue so that the
work for a synchronous error is processed before returning to user-space.
The trick ensures that the corrupted page is unmapped and poisoned. After
returning to user-space, the task restarts at the current instruction,
which triggers a page fault in which the kernel sends SIGBUS to the current
process due to VM_FAULT_HWPOISON.

However, the memory failure recovery for hwpoison-aware mechanisms does not
work as expected. For example, hwpoison-aware user-space processes like
QEMU register their customized SIGBUS handler and enable early kill mode by
setting PF_MCE_EARLY at initialization. Then the kernel will directly
notify the process by sending a SIGBUS signal in memory failure, but with
the wrong si_code: the user-space process is accessing the corrupt memory
location, but its memory failure work is handled in a kthread context, so
kill_proc() sends SIGBUS with the BUS_MCEERR_AO si_code to that process
instead of BUS_MCEERR_AR.

To this end, separate synchronous and asynchronous error handling into
different paths like X86 platform does:

- valid synchronous errors: queue a task_work to synchronously send SIGBUS
  before ret_to_user.
- valid asynchronous errors: queue a work into workqueue to asynchronously
  handle memory failure.
- abnormal branches such as invalid PA, unexpected severity, no memory
  failure config support, invalid GUID section, OOM, etc.: if no memory
  failure work is queued for a synchronous error, do a force kill by
  sending SIGBUS to the current task.

Then, for valid synchronous errors, the current context in memory_failure()
belongs exactly to the task consuming the poison data, and it will send
SIGBUS with the proper si_code.

Signed-off-by: Shuai Xue <xueshuai@linux.alibaba.com>
Tested-by: Ma Wupeng <mawupeng1@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Xiaofei Tan <tanxiaofei@huawei.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 arch/x86/kernel/cpu/mce/core.c |  9 +---
 drivers/acpi/apei/ghes.c       | 84 +++++++++++++++++++++-------------
 include/acpi/ghes.h            |  3 --
 include/linux/mm.h             |  1 -
 mm/memory-failure.c            | 22 +++------
 5 files changed, 59 insertions(+), 60 deletions(-)

Comments

James Morse Nov. 30, 2023, 5:39 p.m. UTC | #1
Hi Shuai,

On 07/10/2023 08:28, Shuai Xue wrote:
> Hardware errors could be signaled by synchronous interrupt,

I'm struggling with 'synchronous interrupt'. Do you mean arm64's 'precise' (all
instructions before the exception were executed, and none after)?
Otherwise, surely any interrupt from a background scrubber is inherently asynchronous!


> e.g.  when an
> error is detected by a background scrubber, or signaled by synchronous
> exception, e.g. when an uncorrected error is consumed. Both synchronous and
> asynchronous error are queued and handled by a dedicated kthread in
> workqueue.
> 
> commit 7f17b4a121d0 ("ACPI: APEI: Kick the memory_failure() queue for
> synchronous errors") keep track of whether memory_failure() work was
> queued, and make task_work pending to flush out the workqueue so that the
> work for synchronous error is processed before returning to user-space.

It does it regardless: if user-space was interrupted by APEI, any work queued as a result
of that should be completed before we go back to user-space. Otherwise we can bounce
between user-space and firmware, with the kernel only running the APEI code, and never
making progress.


> The trick ensures that the corrupted page is unmapped and poisoned. And
> after returning to user-space, the task starts at current instruction which
> triggering a page fault in which kernel will send SIGBUS to current process
> due to VM_FAULT_HWPOISON.
> 
> However, the memory failure recovery for hwpoison-aware mechanisms does not
> work as expected. For example, hwpoison-aware user-space processes like
> QEMU register their customized SIGBUS handler and enable early kill mode by
> seting PF_MCE_EARLY at initialization. Then the kernel will directy notify

(setting, directly)

> the process by sending a SIGBUS signal in memory failure with wrong

> si_code: the actual user-space process accessing the corrupt memory
> location, but its memory failure work is handled in a kthread context, so
> it will send SIGBUS with BUS_MCEERR_AO si_code to the actual user-space
> process instead of BUS_MCEERR_AR in kill_proc().

This is hard to parse, "the user-space process is accessing"? (dropping 'actual' and
adding 'is')


Wasn't this behaviour fixed by the previous patch?

What problem are you fixing here?


> To this end, separate synchronous and asynchronous error handling into
> different paths like X86 platform does:
> 
> - valid synchronous errors: queue a task_work to synchronously send SIGBUS
>   before ret_to_user.

> - valid asynchronous errors: queue a work into workqueue to asynchronously
>   handle memory failure.

Why? The signal issue was fixed by the previous patch. Why delay the handling of a
poisoned memory location further?


> - abnormal branches such as invalid PA, unexpected severity, no memory
>   failure config support, invalid GUID section, OOM, etc.

... do what?


> Then for valid synchronous errors, the current context in memory failure is
> exactly belongs to the task consuming poison data and it will send SIBBUS
> with proper si_code.


> diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
> index 6f35f724cc14..1675ff77033d 100644
> --- a/arch/x86/kernel/cpu/mce/core.c
> +++ b/arch/x86/kernel/cpu/mce/core.c
> @@ -1334,17 +1334,10 @@ static void kill_me_maybe(struct callback_head *cb)
>  		return;
>  	}
>  
> -	/*
> -	 * -EHWPOISON from memory_failure() means that it already sent SIGBUS
> -	 * to the current process with the proper error info,
> -	 * -EOPNOTSUPP means hwpoison_filter() filtered the error event,
> -	 *
> -	 * In both cases, no further processing is required.
> -	 */
>  	if (ret == -EHWPOISON || ret == -EOPNOTSUPP)
>  		return;
>  
> -	pr_err("Memory error not recovered");
> +	pr_err("Sending SIGBUS to current task due to memory error not recovered");
>  	kill_me_now(cb);
>  }
>  

I'm not sure how this hunk is relevant to the commit message.


> diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
> index 88178aa6222d..014401a65ed5 100644
> --- a/drivers/acpi/apei/ghes.c
> +++ b/drivers/acpi/apei/ghes.c
> @@ -484,6 +497,18 @@ static bool ghes_do_memory_failure(u64 physical_addr, int flags)
>  		return false;
>  	}
>  
> +	if (flags == MF_ACTION_REQUIRED && current->mm) {
> +		twcb = kmalloc(sizeof(*twcb), GFP_ATOMIC);
> +		if (!twcb)
> +			return false;

Yuck - New failure modes! This is why the existing code always has this memory allocated
in struct ghes_estatus_node.


> +		twcb->pfn = pfn;
> +		twcb->flags = flags;
> +		init_task_work(&twcb->twork, memory_failure_cb);
> +		task_work_add(current, &twcb->twork, TWA_RESUME);
> +		return true;
> +	}
> +
>  	memory_failure_queue(pfn, flags);
>  	return true;
>  }

[..]

> @@ -696,7 +721,14 @@ static bool ghes_do_proc(struct ghes *ghes,
>  		}
>  	}
>  
> -	return queued;
> +	/*
> +	 * If no memory failure work is queued for abnormal synchronous
> +	 * errors, do a force kill.
> +	 */
> +	if (sync && !queued) {
> +		pr_err("Sending SIGBUS to current task due to memory error not recovered");
> +		force_sig(SIGBUS);
> +	}
>  }

I think this is a lot of churn, and this hunk is the only meaningful change in
behaviour. Can you explain how this happens?


Wouldn't it be simpler to split ghes_kick_task_work() to have a sync/async version?
The synchronous version can unconditionally force_sig_mceerr(BUS_MCEERR_AR, ...) after
memory_failure_queue_kick() - but that still means memory_failure() is unable to disappear
errors that it fixed - see MF_RECOVERED.



> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index 4d6e43c88489..0d02f8a0b556 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -2161,9 +2161,12 @@ static int memory_failure_dev_pagemap(unsigned long pfn, int flags,
>   * Must run in process context (e.g. a work queue) with interrupts
>   * enabled and no spinlocks held.
>   *
> - * Return: 0 for successfully handled the memory error,
> - *         -EOPNOTSUPP for hwpoison_filter() filtered the error event,
> - *         < 0(except -EOPNOTSUPP) on failure.
> + * Return values:
> + *   0             - success
> + *   -EOPNOTSUPP   - hwpoison_filter() filtered the error event.
> + *   -EHWPOISON    - sent SIGBUS to the current process with the proper
> + *                   error info by kill_accessing_process().
> + *   other negative values - failure
>   */
>  int memory_failure(unsigned long pfn, int flags)
>  {

I'm not sure how this hunk is relevant to the commit message.


Thanks,

James
Shuai Xue Dec. 1, 2023, 7:03 a.m. UTC | #2
On 2023/12/1 01:39, James Morse wrote:
> Hi Shuai,
> 
> On 07/10/2023 08:28, Shuai Xue wrote:
>> Hardware errors could be signaled by synchronous interrupt,
> 
> I'm struggling with 'synchronous interrupt'. Do you mean arm64's 'precise' (all
> instructions before the exception were executed, and none after).
> Otherwise, surely any interrupt from a background scrubber is inherently asynchronous!
> 

I am sorry, this is a typo. I mean asynchronous interrupt.

> 
>> e.g.  when an
>> error is detected by a background scrubber, or signaled by synchronous
>> exception, e.g. when an uncorrected error is consumed. Both synchronous and
>> asynchronous error are queued and handled by a dedicated kthread in
>> workqueue.
>>
>> commit 7f17b4a121d0 ("ACPI: APEI: Kick the memory_failure() queue for
>> synchronous errors") keep track of whether memory_failure() work was
>> queued, and make task_work pending to flush out the workqueue so that the
>> work for synchronous error is processed before returning to user-space.
> 
> It does it regardless, if user-space was interrupted by APEI any work queued as a result
> of that should be completed before we go back to user-space. Otherwise we can bounce
> between user-space and firmware, with the kernel only running the APEI code, and never
> making progress.
> 

Agreed.

> 
>> The trick ensures that the corrupted page is unmapped and poisoned. And
>> after returning to user-space, the task starts at current instruction which
>> triggering a page fault in which kernel will send SIGBUS to current process
>> due to VM_FAULT_HWPOISON.
>>
>> However, the memory failure recovery for hwpoison-aware mechanisms does not
>> work as expected. For example, hwpoison-aware user-space processes like
>> QEMU register their customized SIGBUS handler and enable early kill mode by
>> seting PF_MCE_EARLY at initialization. Then the kernel will directly notify
> 
> (setting, directly)

Thank you. Will fix it.

> 
>> the process by sending a SIGBUS signal in memory failure with wrong
> 
>> si_code: the actual user-space process accessing the corrupt memory
>> location, but its memory failure work is handled in a kthread context, so
>> it will send SIGBUS with BUS_MCEERR_AO si_code to the actual user-space
>> process instead of BUS_MCEERR_AR in kill_proc().
> 
> This is hard to parse, "the user-space process is accessing"? (dropping 'actual' and
> adding 'is')

Will fix it.


> 
> 
> Wasn't this behaviour fixed by the previous patch?
> 
> What problem are you fixing here?


Nope. memory_failure() runs in a kthread context, not in the user-space
process which is consuming the poison data.


    // kill_proc() in memory-failure.c

	if ((flags & MF_ACTION_REQUIRED) && (t == current))
		ret = force_sig_mceerr(BUS_MCEERR_AR,
				 (void __user *)tk->addr, addr_lsb);
	else
		ret = send_sig_mceerr(BUS_MCEERR_AO, (void __user *)tk->addr,
				      addr_lsb, t);

So, even if we queue memory_failure() with the MF_ACTION_REQUIRED flag as in
the previous patch, it will still send a SIGBUS with BUS_MCEERR_AO from the
else branch of kill_proc(), because current is the kthread rather than the
task that consumed the poison.

> 
> 
>> To this end, separate synchronous and asynchronous error handling into
>> different paths like X86 platform does:
>>
>> - valid synchronous errors: queue a task_work to synchronously send SIGBUS
>>   before ret_to_user.
> 
>> - valid asynchronous errors: queue a work into workqueue to asynchronously
>>   handle memory failure.
> 
> Why? The signal issue was fixed by the previous patch. Why delay the handling of a
> poisoned memory location further?

The signal issue is not fixed completely. See my reply above.

> 
> 
>> - abnormal branches such as invalid PA, unexpected severity, no memory
>>   failure config support, invalid GUID section, OOM, etc.
> 
> ... do what?

If no memory failure work is queued for abnormal errors, do a force kill.
I will also add this to the commit log.

> 
> 
>> Then for valid synchronous errors, the current context in memory failure is
>> exactly belongs to the task consuming poison data and it will send SIBBUS
>> with proper si_code.
> 
> 
>> diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
>> index 6f35f724cc14..1675ff77033d 100644
>> --- a/arch/x86/kernel/cpu/mce/core.c
>> +++ b/arch/x86/kernel/cpu/mce/core.c
>> @@ -1334,17 +1334,10 @@ static void kill_me_maybe(struct callback_head *cb)
>>  		return;
>>  	}
>>  
>> -	/*
>> -	 * -EHWPOISON from memory_failure() means that it already sent SIGBUS
>> -	 * to the current process with the proper error info,
>> -	 * -EOPNOTSUPP means hwpoison_filter() filtered the error event,
>> -	 *
>> -	 * In both cases, no further processing is required.
>> -	 */
>>  	if (ret == -EHWPOISON || ret == -EOPNOTSUPP)
>>  		return;
>>  
>> -	pr_err("Memory error not recovered");
>> +	pr_err("Sending SIGBUS to current task due to memory error not recovered");
>>  	kill_me_now(cb);
>>  }
>>  
> 
> I'm not sure how this hunk is relevant to the commit message.

I handle the memory_failure() error codes in the arm64 call site,
memory_failure_cb(), with some comments, similar to the x86 call site
kill_me_maybe(). I moved those two comment blocks to the function
declaration, following review comments from Kefeng.

I should split this into a separate patch. Will do it in the next version.

> 
> 
>> diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
>> index 88178aa6222d..014401a65ed5 100644
>> --- a/drivers/acpi/apei/ghes.c
>> +++ b/drivers/acpi/apei/ghes.c
>> @@ -484,6 +497,18 @@ static bool ghes_do_memory_failure(u64 physical_addr, int flags)
>>  		return false;
>>  	}
>>  
>> +	if (flags == MF_ACTION_REQUIRED && current->mm) {
>> +		twcb = kmalloc(sizeof(*twcb), GFP_ATOMIC);
>> +		if (!twcb)
>> +			return false;
> 
> Yuck - New failure modes! This is why the existing code always has this memory allocated
> in struct ghes_estatus_node.

Are you suggesting moving the fields of struct sync_task_work into struct
ghes_estatus_node and using ghes_estatus_node here? Or we could simply
allocate struct sync_task_work with gen_pool_alloc() from ghes_estatus_pool,
e.g. something like the sketch below.

> 
> 
>> +		twcb->pfn = pfn;
>> +		twcb->flags = flags;
>> +		init_task_work(&twcb->twork, memory_failure_cb);
>> +		task_work_add(current, &twcb->twork, TWA_RESUME);
>> +		return true;
>> +	}
>> +
>>  	memory_failure_queue(pfn, flags);
>>  	return true;
>>  }
> 
> [..]
> 
>> @@ -696,7 +721,14 @@ static bool ghes_do_proc(struct ghes *ghes,
>>  		}
>>  	}
>>  
>> -	return queued;
>> +	/*
>> +	 * If no memory failure work is queued for abnormal synchronous
>> +	 * errors, do a force kill.
>> +	 */
>> +	if (sync && !queued) {
>> +		pr_err("Sending SIGBUS to current task due to memory error not recovered");
>> +		force_sig(SIGBUS);
>> +	}
>>  }
> 
> I think this is a lot of churn, and this hunk is the the only meaningful change in
> behaviour. Can you explain how this happens?

For example:
- invalid GUID section in ghes_do_proc()
- CPER_MEM_VALID_PA is not set, or unexpected severity, in
  ghes_handle_memory_failure()
- CONFIG_ACPI_APEI_MEMORY_FAILURE is not enabled, or !pfn_valid(pfn), in
  ghes_do_memory_failure()

> 
> 
> Wouldn't it be simpler to split ghes_kick_task_work() to have a sync/async version.
> The synchronous version can unconditionally force_sig_mceerr(BUS_MCEERR_AR, ...) after
> memory_failure_queue_kick() - but that still means memory_failure() is unable to disappear
> errors that it fixed - see MF_RECOVERED.

Sorry, I don't think so. Unconditionally sending a SIGBUS is not a good
choice. For example, if a synchronous memory error is detected on a clean
instruction page, the kernel can recover it transparently and no signal
should be sent.

    ./einj_mem_uc instr
    [168522.751671] Memory failure: 0x89dedd: corrupted page was clean: dropped without side effects
    [168522.751679] Memory failure: 0x89dedd: recovery action for clean LRU page: Recovered

With this patch set, the instr case behaves consistently on both the arm64 and x86 platforms.

The complex page error_states are handled in memory_failure(). IMHO, we
should leave this part to it.

> 
>> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
>> index 4d6e43c88489..0d02f8a0b556 100644
>> --- a/mm/memory-failure.c
>> +++ b/mm/memory-failure.c
>> @@ -2161,9 +2161,12 @@ static int memory_failure_dev_pagemap(unsigned long pfn, int flags,
>>   * Must run in process context (e.g. a work queue) with interrupts
>>   * enabled and no spinlocks held.
>>   *
>> - * Return: 0 for successfully handled the memory error,
>> - *         -EOPNOTSUPP for hwpoison_filter() filtered the error event,
>> - *         < 0(except -EOPNOTSUPP) on failure.
>> + * Return values:
>> + *   0             - success
>> + *   -EOPNOTSUPP   - hwpoison_filter() filtered the error event.
>> + *   -EHWPOISON    - sent SIGBUS to the current process with the proper
>> + *                   error info by kill_accessing_process().
>> + *   other negative values - failure
>>   */
>>  int memory_failure(unsigned long pfn, int flags)
>>  {
> 
> I'm not sure how this hunk is relevant to the commit message.


As mentioned, I will split this into a separate patch.

> 
> 
> Thanks,
> 
> James


Thank you for valuable comments.
Best Regards,
Shuai

Patch

diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 6f35f724cc14..1675ff77033d 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -1334,17 +1334,10 @@  static void kill_me_maybe(struct callback_head *cb)
 		return;
 	}
 
-	/*
-	 * -EHWPOISON from memory_failure() means that it already sent SIGBUS
-	 * to the current process with the proper error info,
-	 * -EOPNOTSUPP means hwpoison_filter() filtered the error event,
-	 *
-	 * In both cases, no further processing is required.
-	 */
 	if (ret == -EHWPOISON || ret == -EOPNOTSUPP)
 		return;
 
-	pr_err("Memory error not recovered");
+	pr_err("Sending SIGBUS to current task due to memory error not recovered");
 	kill_me_now(cb);
 }
 
diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
index 88178aa6222d..014401a65ed5 100644
--- a/drivers/acpi/apei/ghes.c
+++ b/drivers/acpi/apei/ghes.c
@@ -450,28 +450,41 @@  static void ghes_clear_estatus(struct ghes *ghes,
 }
 
 /*
- * Called as task_work before returning to user-space.
- * Ensure any queued work has been done before we return to the context that
- * triggered the notification.
+ * struct sync_task_work - for synchronous RAS event
+ *
+ * @twork:                callback_head for task work
+ * @pfn:                  page frame number of corrupted page
+ * @flags:                fine tune action taken
+ *
+ * Structure to pass task work to be handled before
+ * ret_to_user via task_work_add().
  */
-static void ghes_kick_task_work(struct callback_head *head)
+struct sync_task_work {
+	struct callback_head twork;
+	u64 pfn;
+	int flags;
+};
+
+static void memory_failure_cb(struct callback_head *twork)
 {
-	struct acpi_hest_generic_status *estatus;
-	struct ghes_estatus_node *estatus_node;
-	u32 node_len;
+	int ret;
+	struct sync_task_work *twcb =
+		container_of(twork, struct sync_task_work, twork);
 
-	estatus_node = container_of(head, struct ghes_estatus_node, task_work);
-	if (IS_ENABLED(CONFIG_ACPI_APEI_MEMORY_FAILURE))
-		memory_failure_queue_kick(estatus_node->task_work_cpu);
+	ret = memory_failure(twcb->pfn, twcb->flags);
+	kfree(twcb);
 
-	estatus = GHES_ESTATUS_FROM_NODE(estatus_node);
-	node_len = GHES_ESTATUS_NODE_LEN(cper_estatus_len(estatus));
-	gen_pool_free(ghes_estatus_pool, (unsigned long)estatus_node, node_len);
+	if (!ret || ret == -EHWPOISON || ret == -EOPNOTSUPP)
+		return;
+
+	pr_err("Sending SIGBUS to current task due to memory error not recovered");
+	force_sig(SIGBUS);
 }
 
 static bool ghes_do_memory_failure(u64 physical_addr, int flags)
 {
 	unsigned long pfn;
+	struct sync_task_work *twcb;
 
 	if (!IS_ENABLED(CONFIG_ACPI_APEI_MEMORY_FAILURE))
 		return false;
@@ -484,6 +497,18 @@  static bool ghes_do_memory_failure(u64 physical_addr, int flags)
 		return false;
 	}
 
+	if (flags == MF_ACTION_REQUIRED && current->mm) {
+		twcb = kmalloc(sizeof(*twcb), GFP_ATOMIC);
+		if (!twcb)
+			return false;
+
+		twcb->pfn = pfn;
+		twcb->flags = flags;
+		init_task_work(&twcb->twork, memory_failure_cb);
+		task_work_add(current, &twcb->twork, TWA_RESUME);
+		return true;
+	}
+
 	memory_failure_queue(pfn, flags);
 	return true;
 }
@@ -652,7 +677,7 @@  static void ghes_defer_non_standard_event(struct acpi_hest_generic_data *gdata,
 	schedule_work(&entry->work);
 }
 
-static bool ghes_do_proc(struct ghes *ghes,
+static void ghes_do_proc(struct ghes *ghes,
 			 const struct acpi_hest_generic_status *estatus)
 {
 	int sev, sec_sev;
@@ -696,7 +721,14 @@  static bool ghes_do_proc(struct ghes *ghes,
 		}
 	}
 
-	return queued;
+	/*
+	 * If no memory failure work is queued for abnormal synchronous
+	 * errors, do a force kill.
+	 */
+	if (sync && !queued) {
+		pr_err("Sending SIGBUS to current task due to memory error not recovered");
+		force_sig(SIGBUS);
+	}
 }
 
 static void __ghes_print_estatus(const char *pfx,
@@ -998,9 +1030,7 @@  static void ghes_proc_in_irq(struct irq_work *irq_work)
 	struct ghes_estatus_node *estatus_node;
 	struct acpi_hest_generic *generic;
 	struct acpi_hest_generic_status *estatus;
-	bool task_work_pending;
 	u32 len, node_len;
-	int ret;
 
 	llnode = llist_del_all(&ghes_estatus_llist);
 	/*
@@ -1015,25 +1045,16 @@  static void ghes_proc_in_irq(struct irq_work *irq_work)
 		estatus = GHES_ESTATUS_FROM_NODE(estatus_node);
 		len = cper_estatus_len(estatus);
 		node_len = GHES_ESTATUS_NODE_LEN(len);
-		task_work_pending = ghes_do_proc(estatus_node->ghes, estatus);
+
+		ghes_do_proc(estatus_node->ghes, estatus);
+
 		if (!ghes_estatus_cached(estatus)) {
 			generic = estatus_node->generic;
 			if (ghes_print_estatus(NULL, generic, estatus))
 				ghes_estatus_cache_add(generic, estatus);
 		}
-
-		if (task_work_pending && current->mm) {
-			estatus_node->task_work.func = ghes_kick_task_work;
-			estatus_node->task_work_cpu = smp_processor_id();
-			ret = task_work_add(current, &estatus_node->task_work,
-					    TWA_RESUME);
-			if (ret)
-				estatus_node->task_work.func = NULL;
-		}
-
-		if (!estatus_node->task_work.func)
-			gen_pool_free(ghes_estatus_pool,
-				      (unsigned long)estatus_node, node_len);
+		gen_pool_free(ghes_estatus_pool, (unsigned long)estatus_node,
+			      node_len);
 
 		llnode = next;
 	}
@@ -1094,7 +1115,6 @@  static int ghes_in_nmi_queue_one_entry(struct ghes *ghes,
 
 	estatus_node->ghes = ghes;
 	estatus_node->generic = ghes->generic;
-	estatus_node->task_work.func = NULL;
 	estatus = GHES_ESTATUS_FROM_NODE(estatus_node);
 
 	if (__ghes_read_estatus(estatus, buf_paddr, fixmap_idx, len)) {
diff --git a/include/acpi/ghes.h b/include/acpi/ghes.h
index 3c8bba9f1114..e5e0c308d27f 100644
--- a/include/acpi/ghes.h
+++ b/include/acpi/ghes.h
@@ -35,9 +35,6 @@  struct ghes_estatus_node {
 	struct llist_node llnode;
 	struct acpi_hest_generic *generic;
 	struct ghes *ghes;
-
-	int task_work_cpu;
-	struct callback_head task_work;
 };
 
 struct ghes_estatus_cache {
diff --git a/include/linux/mm.h b/include/linux/mm.h
index bf5d0b1b16f4..3ce9e4371659 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3835,7 +3835,6 @@  enum mf_flags {
 int mf_dax_kill_procs(struct address_space *mapping, pgoff_t index,
 		      unsigned long count, int mf_flags);
 extern int memory_failure(unsigned long pfn, int flags);
-extern void memory_failure_queue_kick(int cpu);
 extern int unpoison_memory(unsigned long pfn);
 extern void shake_page(struct page *p);
 extern atomic_long_t num_poisoned_pages __read_mostly;
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 4d6e43c88489..0d02f8a0b556 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -2161,9 +2161,12 @@  static int memory_failure_dev_pagemap(unsigned long pfn, int flags,
  * Must run in process context (e.g. a work queue) with interrupts
  * enabled and no spinlocks held.
  *
- * Return: 0 for successfully handled the memory error,
- *         -EOPNOTSUPP for hwpoison_filter() filtered the error event,
- *         < 0(except -EOPNOTSUPP) on failure.
+ * Return values:
+ *   0             - success
+ *   -EOPNOTSUPP   - hwpoison_filter() filtered the error event.
+ *   -EHWPOISON    - sent SIGBUS to the current process with the proper
+ *                   error info by kill_accessing_process().
+ *   other negative values - failure
  */
 int memory_failure(unsigned long pfn, int flags)
 {
@@ -2445,19 +2448,6 @@  static void memory_failure_work_func(struct work_struct *work)
 	}
 }
 
-/*
- * Process memory_failure work queued on the specified CPU.
- * Used to avoid return-to-userspace racing with the memory_failure workqueue.
- */
-void memory_failure_queue_kick(int cpu)
-{
-	struct memory_failure_cpu *mf_cpu;
-
-	mf_cpu = &per_cpu(memory_failure_cpu, cpu);
-	cancel_work_sync(&mf_cpu->work);
-	memory_failure_work_func(&mf_cpu->work);
-}
-
 static int __init memory_failure_init(void)
 {
 	struct memory_failure_cpu *mf_cpu;