From patchwork Tue Jan 31 20:07:17 2023
From: Al Viro
Date: Tue, 31 Jan 2023 20:07:17 +0000
To: linux-arch@vger.kernel.org
Cc: linux-alpha@vger.kernel.org, linux-ia64@vger.kernel.org,
	linux-hexagon@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	Michal Simek, Dinh Nguyen, openrisc@lists.librecores.org,
	linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org,
	sparclinux@vger.kernel.org, Linus Torvalds
Subject: [PATCH 10/10] sparc: fix livelock in uaccess

sparc equivalent of 26178ec11ef3 "x86: mm: consolidate VM_FAULT_RETRY handling"

If e.g. get_user() triggers a page fault and a fatal signal is caught, we
might end up with handle_mm_fault() returning VM_FAULT_RETRY and not doing
anything to the page tables.  In such a case we must *not* return to the
faulting insn - that would repeat the entire thing without making any
progress; what we need instead is to treat that as a failed (user) memory
access.
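Roughly, the flow after this change becomes (a condensed sketch, not the
literal sparc code - fault_signal_pending() and handle_mm_fault() are the
real kernel helpers, while the user_mode() test stands in for the
arch-specific from_user/TSTATE_PRIV checks below):

	fault = handle_mm_fault(vma, address, flags, regs);

	if (fault_signal_pending(fault, regs)) {
		/*
		 * A fatal signal interrupted the fault; handle_mm_fault()
		 * may have returned VM_FAULT_RETRY without touching the
		 * page tables, so re-executing the faulting insn would
		 * just trap again.  For a kernel-mode access, run the
		 * exception-table fixup so that e.g. get_user() fails
		 * with -EFAULT instead of livelocking.
		 */
		if (!user_mode(regs))
			goto no_context;	/* kernel: extable fixup */
		return;				/* user: signal delivery kills the task */
	}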
Signed-off-by: Al Viro
---
 arch/sparc/mm/fault_32.c | 5 ++++-
 arch/sparc/mm/fault_64.c | 7 ++++++-
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/arch/sparc/mm/fault_32.c b/arch/sparc/mm/fault_32.c
index 91259f291c54..179295b14664 100644
--- a/arch/sparc/mm/fault_32.c
+++ b/arch/sparc/mm/fault_32.c
@@ -187,8 +187,11 @@ asmlinkage void do_sparc_fault(struct pt_regs *regs, int text_fault, int write,
 	 */
 	fault = handle_mm_fault(vma, address, flags, regs);
 
-	if (fault_signal_pending(fault, regs))
+	if (fault_signal_pending(fault, regs)) {
+		if (!from_user)
+			goto no_context;
 		return;
+	}
 
 	/* The fault is fully completed (including releasing mmap lock) */
 	if (fault & VM_FAULT_COMPLETED)
diff --git a/arch/sparc/mm/fault_64.c b/arch/sparc/mm/fault_64.c
index 4acc12eafbf5..d91305de694c 100644
--- a/arch/sparc/mm/fault_64.c
+++ b/arch/sparc/mm/fault_64.c
@@ -424,8 +424,13 @@ asmlinkage void __kprobes do_sparc64_fault(struct pt_regs *regs)
 
 	fault = handle_mm_fault(vma, address, flags, regs);
 
-	if (fault_signal_pending(fault, regs))
+	if (fault_signal_pending(fault, regs)) {
+		if (regs->tstate & TSTATE_PRIV) {
+			insn = get_fault_insn(regs, insn);
+			goto handle_kernel_fault;
+		}
 		goto exit_exception;
+	}
 
 	/* The fault is fully completed (including releasing mmap lock) */
 	if (fault & VM_FAULT_COMPLETED)