
[02/10] hexagon: fix livelock in uaccess

Message ID Y9l0LyAA3zAGeT51@ZenIV (mailing list archive)
State Not Applicable
Series [01/10] alpha: fix livelock in uaccess

Checks

Context Check Description
conchuod/cover_letter warning Series does not have a cover letter
conchuod/tree_selection success Guessed tree name to be for-next
conchuod/fixes_present success Fixes tag not required for -next series
conchuod/maintainers_pattern success MAINTAINERS pattern errors before the patch: 13 and now 13
conchuod/verify_signedoff success Signed-off-by tag matches author and committer
conchuod/kdoc success Errors and warnings before: 0 this patch: 0
conchuod/build_rv64_clang_allmodconfig success Errors and warnings before: 0 this patch: 0
conchuod/module_param success Was 0 now: 0
conchuod/build_rv64_gcc_allmodconfig success Errors and warnings before: 0 this patch: 0
conchuod/alphanumeric_selects success Out of order selects before the patch: 57 and now 57
conchuod/build_rv32_defconfig success Build OK
conchuod/dtb_warn_rv64 success Errors and warnings before: 2 this patch: 2
conchuod/header_inline success No static functions without inline keyword in header files
conchuod/checkpatch success total: 0 errors, 0 warnings, 0 checks, 12 lines checked
conchuod/source_inline success Was 0 now: 0
conchuod/build_rv64_nommu_k210_defconfig success Build OK
conchuod/verify_fixes success No Fixes tag
conchuod/build_rv64_nommu_virt_defconfig success Build OK

Commit Message

Al Viro Jan. 31, 2023, 8:03 p.m. UTC
hexagon equivalent of 26178ec11ef3 "x86: mm: consolidate VM_FAULT_RETRY handling"

If e.g. get_user() triggers a page fault and a fatal signal is caught, we might
end up with handle_mm_fault() returning VM_FAULT_RETRY without having done
anything to the page tables.  In such a case we must *not* return to the
faulting insn - that would repeat the entire thing without making any progress;
what we need instead is to treat it as a failed (user) memory access.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
---
 arch/hexagon/mm/vm_fault.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)
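
For context, the helper this depends on is fault_signal_pending() from
include/linux/mm.h.  Paraphrased below from kernels of that era (not a verbatim
copy; the exact form may differ by release), it returns true precisely when
handle_mm_fault() abandoned the fault with VM_FAULT_RETRY because of a pending
signal - i.e. when nothing has been done to the page tables yet:

	/* Paraphrase of fault_signal_pending() from include/linux/mm.h. */
	static inline bool fault_signal_pending(vm_fault_t fault_flags,
						struct pt_regs *regs)
	{
		/*
		 * Only the VM_FAULT_RETRY case qualifies: the fault was abandoned
		 * (no PTE installed, mmap lock already released) because a fatal
		 * signal is pending, or any signal is pending on a user-mode fault.
		 */
		return unlikely((fault_flags & VM_FAULT_RETRY) &&
				(fatal_signal_pending(current) ||
				 (user_mode(regs) && signal_pending(current))));
	}

This is why simply returning is only safe for a user-mode fault: signal delivery
takes over, and a fatal signal means the faulting insn never runs again.  A
kernel-mode access such as get_user() has no signal delivery to fall back on,
so it has to take the exception-fixup path instead.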

Comments

Brian Cain Feb. 10, 2023, 2:59 a.m. UTC | #1
> -----Original Message-----
> From: Al Viro <viro@ftp.linux.org.uk> On Behalf Of Al Viro
> Sent: Tuesday, January 31, 2023 2:04 PM
> To: linux-arch@vger.kernel.org
> Cc: linux-alpha@vger.kernel.org; linux-ia64@vger.kernel.org; linux-
> hexagon@vger.kernel.org; linux-m68k@lists.linux-m68k.org; Michal Simek
> <monstr@monstr.eu>; Dinh Nguyen <dinguyen@kernel.org>;
> openrisc@lists.librecores.org; linux-parisc@vger.kernel.org; linux-
> riscv@lists.infradead.org; sparclinux@vger.kernel.org; Linus Torvalds
> <torvalds@linux-foundation.org>
> Subject: [PATCH 02/10] hexagon: fix livelock in uaccess
> 
> hexagon equivalent of 26178ec11ef3 "x86: mm: consolidate VM_FAULT_RETRY handling"
> 
> If e.g. get_user() triggers a page fault and a fatal signal is caught, we might
> end up with handle_mm_fault() returning VM_FAULT_RETRY without having done
> anything to the page tables.  In such a case we must *not* return to the
> faulting insn - that would repeat the entire thing without making any progress;
> what we need instead is to treat it as a failed (user) memory access.
> 
> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
> ---
>  arch/hexagon/mm/vm_fault.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/hexagon/mm/vm_fault.c b/arch/hexagon/mm/vm_fault.c
> index f73c7cbfe326..4b578d02fd01 100644
> --- a/arch/hexagon/mm/vm_fault.c
> +++ b/arch/hexagon/mm/vm_fault.c
> @@ -93,8 +93,11 @@ void do_page_fault(unsigned long address, long cause,
> struct pt_regs *regs)
> 
>         fault = handle_mm_fault(vma, address, flags, regs);
> 
> -       if (fault_signal_pending(fault, regs))
> +       if (fault_signal_pending(fault, regs)) {
> +               if (!user_mode(regs))
> +                       goto no_context;
>                 return;
> +       }
> 
>         /* The fault is fully completed (including releasing mmap lock) */
>         if (fault & VM_FAULT_COMPLETED)
> --
> 2.30.2

Acked-by: Brian Cain <bcain@quicinc.com>

Patch

diff --git a/arch/hexagon/mm/vm_fault.c b/arch/hexagon/mm/vm_fault.c
index f73c7cbfe326..4b578d02fd01 100644
--- a/arch/hexagon/mm/vm_fault.c
+++ b/arch/hexagon/mm/vm_fault.c
@@ -93,8 +93,11 @@ void do_page_fault(unsigned long address, long cause, struct pt_regs *regs)
 
 	fault = handle_mm_fault(vma, address, flags, regs);
 
-	if (fault_signal_pending(fault, regs))
+	if (fault_signal_pending(fault, regs)) {
+		if (!user_mode(regs))
+			goto no_context;
 		return;
+	}
 
 	/* The fault is fully completed (including releasing mmap lock) */
 	if (fault & VM_FAULT_COMPLETED)
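
For illustration, here is how the touched region of do_page_fault() reads once
the hunk above is applied (context lines taken from the diff, comments added
here).  The no_context label is the handler's pre-existing kernel-fault path:
in the usual pattern it searches the exception table for the faulting kernel
instruction, so that a get_user()/put_user() lands in its fixup stub and
returns -EFAULT instead of re-executing the access; with no fixup entry, the
kernel fault is treated as fatal.

	fault = handle_mm_fault(vma, address, flags, regs);

	if (fault_signal_pending(fault, regs)) {
		/*
		 * handle_mm_fault() gave up with VM_FAULT_RETRY because of a
		 * pending signal: no PTE was installed, so re-running the
		 * faulting insn would fault again, forever.
		 */
		if (!user_mode(regs))
			goto no_context;	/* kernel access: exception-table fixup */
		return;				/* user access: signal delivery handles it */
	}

	/* The fault is fully completed (including releasing mmap lock) */
	if (fault & VM_FAULT_COMPLETED)
		return;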