From patchwork Tue Jan 31 20:05:17 2023
X-Patchwork-Submitter: Al Viro
X-Patchwork-Id: 13123353
Date: Tue, 31 Jan 2023 20:05:17 +0000
From: Al Viro
To: linux-arch@vger.kernel.org
Cc: linux-alpha@vger.kernel.org, linux-ia64@vger.kernel.org,
	linux-hexagon@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	Michal Simek, Dinh Nguyen, openrisc@lists.librecores.org,
	linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org,
	sparclinux@vger.kernel.org, Linus Torvalds
Subject: [PATCH 05/10] microblaze: fix livelock in uaccess

The microblaze equivalent of commit 26178ec11ef3 ("x86: mm: consolidate
VM_FAULT_RETRY handling").

If e.g. get_user() triggers a page fault and a fatal signal is caught, we
might end up with handle_mm_fault() returning VM_FAULT_RETRY without having
done anything to the page tables.  In such a case we must *not* return to
the faulting insn - that would repeat the entire thing without making any
progress; what we need instead is to treat it as a failed (user) memory
access.

Signed-off-by: Al Viro
---
 arch/microblaze/mm/fault.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/microblaze/mm/fault.c b/arch/microblaze/mm/fault.c
index 5c40c3ebe52f..32d9717039ba 100644
--- a/arch/microblaze/mm/fault.c
+++ b/arch/microblaze/mm/fault.c
@@ -219,8 +219,11 @@ void do_page_fault(struct pt_regs *regs, unsigned long address,
 	 */
 	fault = handle_mm_fault(vma, address, flags, regs);
 
-	if (fault_signal_pending(fault, regs))
+	if (fault_signal_pending(fault, regs)) {
+		if (!user_mode(regs))
+			goto no_context;
 		return;
+	}
 
 	/* The fault is fully completed (including releasing mmap lock) */
 	if (fault & VM_FAULT_COMPLETED)
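
[Editorial note, not part of the patch: the helper tested in the hunk above
is what makes the !user_mode() check necessary.  A minimal sketch of
fault_signal_pending(), assuming the definition that include/linux/mm.h
carried around the kernel this series targets - check your tree for the
exact text:]

	/* include/linux/mm.h (sketch) */
	static inline bool fault_signal_pending(vm_fault_t fault_flags,
						struct pt_regs *regs)
	{
		/*
		 * True when handle_mm_fault() gave up with VM_FAULT_RETRY
		 * because a fatal signal (or, for a user-mode fault, any
		 * signal) is pending.  Nothing was done to the page tables,
		 * so returning to the faulting insn would just fault again.
		 */
		return unlikely((fault_flags & VM_FAULT_RETRY) &&
				(fatal_signal_pending(current) ||
				 (user_mode(regs) && signal_pending(current))));
	}

[When that fires for a kernel-mode access (e.g. inside get_user()), simply
returning would re-execute the faulting insn and livelock; routing it through
no_context instead hands it to the arch's kernel-fault path (normally the
exception-table fixup), treating it as a failed (user) memory access - which
is exactly what the hunk above does.]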