From patchwork Sat Sep 21 13:50:52 2019
X-Patchwork-Submitter: Justin He
X-Patchwork-Id: 11155491
From: Jia He
To: Catalin Marinas, Will Deacon, Mark Rutland, James Morse, Marc Zyngier,
    Matthew Wilcox, "Kirill A. Shutemov", linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, Suzuki Poulose
Cc: Ralph Campbell, Jia He, Anshuman Khandual, Alex Van Brunt, Kaly Xin,
    Jérôme Glisse, Punit Agrawal, hejianet@gmail.com, Andrew Morton, nd@arm.com,
    Robin Murphy, Thomas Gleixner
Subject: [PATCH v8 1/3] arm64: cpufeature: introduce helper cpu_has_hw_af()
Date: Sat, 21 Sep 2019 21:50:52 +0800
Message-Id: <20190921135054.142360-2-justin.he@arm.com>
In-Reply-To: <20190921135054.142360-1-justin.he@arm.com>
References: <20190921135054.142360-1-justin.he@arm.com>

We unconditionally set the HW_AFDBM capability and only enable it on
CPUs which really have the feature. But sometimes we need to know
whether this CPU has the hardware Access Flag (AF) capability on its
own. So decouple AF from DBM with a new helper, cpu_has_hw_af().

Reported-by: kbuild test robot
Suggested-by: Suzuki Poulose
Signed-off-by: Jia He
Reviewed-by: Catalin Marinas
---
 arch/arm64/include/asm/cpufeature.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index c96ffa4722d3..46caf934ba4e 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -667,6 +667,16 @@ static inline u32 id_aa64mmfr0_parange_to_phys_shift(int parange)
 	default: return CONFIG_ARM64_PA_BITS;
 	}
 }
+
+/* Decouple AF from AFDBM. */
+static inline bool cpu_has_hw_af(void)
+{
+	if (IS_ENABLED(CONFIG_ARM64_HW_AFDBM))
+		return read_cpuid(ID_AA64MMFR1_EL1) & 0xf;
+
+	return false;
+}
+
 #endif /* __ASSEMBLY__ */
 
 #endif
From patchwork Sat Sep 21 13:50:53 2019
X-Patchwork-Submitter: Justin He
X-Patchwork-Id: 11155493
From: Jia He
To: Catalin Marinas, Will Deacon, Mark Rutland, James Morse, Marc Zyngier,
    Matthew Wilcox, "Kirill A. Shutemov", linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, Suzuki Poulose
Cc: Ralph Campbell, Jia He, Anshuman Khandual, Alex Van Brunt, Kaly Xin,
    Jérôme Glisse, Punit Agrawal, hejianet@gmail.com, Andrew Morton, nd@arm.com,
    Robin Murphy, Thomas Gleixner
Subject: [PATCH v8 2/3] arm64: mm: implement arch_faults_on_old_pte() on arm64
Date: Sat, 21 Sep 2019 21:50:53 +0800
Message-Id: <20190921135054.142360-3-justin.he@arm.com>
In-Reply-To: <20190921135054.142360-1-justin.he@arm.com>
References: <20190921135054.142360-1-justin.he@arm.com>

On arm64 without hardware Access Flag, copying from user will fail
because the pte is old and cannot be marked young. So we always end up
with a zeroed page after fork() + CoW for pfn mappings: we don't always
have a hardware-managed access flag on arm64.

Hence implement arch_faults_on_old_pte() on arm64 to indicate that
accessing an old pte may cause a page fault.

Signed-off-by: Jia He
Reviewed-by: Catalin Marinas
---
 arch/arm64/include/asm/pgtable.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index e09760ece844..4a9939615e41 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -868,6 +868,18 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
 #define phys_to_ttbr(addr)	(addr)
 #endif
 
+/*
+ * On arm64 without hardware Access Flag, copying from user will fail because
+ * the pte is old and cannot be marked young. So we always end up with zeroed
+ * page after fork() + CoW for pfn mappings. We don't always have a
+ * hardware-managed access flag on arm64.
+ */
+static inline bool arch_faults_on_old_pte(void)
+{
+	return !cpu_has_hw_af();
+}
+#define arch_faults_on_old_pte	arch_faults_on_old_pte
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* __ASM_PGTABLE_H */
From patchwork Sat Sep 21 13:50:54 2019
X-Patchwork-Submitter: Justin He
X-Patchwork-Id: 11155495
From: Jia He
To: Catalin Marinas, Will Deacon, Mark Rutland, James Morse, Marc Zyngier,
    Matthew Wilcox, "Kirill A. Shutemov", linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, Suzuki Poulose
Cc: Ralph Campbell, Jia He, Anshuman Khandual, Alex Van Brunt, Kaly Xin,
    Jérôme Glisse, Punit Agrawal, hejianet@gmail.com, Andrew Morton, nd@arm.com,
    Robin Murphy, Thomas Gleixner
Subject: [PATCH v8 3/3] mm: fix double page fault on arm64 if PTE_AF is cleared
Date: Sat, 21 Sep 2019 21:50:54 +0800
Message-Id: <20190921135054.142360-4-justin.he@arm.com>
In-Reply-To: <20190921135054.142360-1-justin.he@arm.com>
References: <20190921135054.142360-1-justin.he@arm.com>

When we tested the pmdk unit test [1] vmmalloc_fork TEST1 in an arm64
guest, there was a double page fault in __copy_from_user_inatomic() of
cow_user_page().

The call trace below is from arm64 do_page_fault, for debugging purposes:

[ 110.016195] Call trace:
[ 110.016826]  do_page_fault+0x5a4/0x690
[ 110.017812]  do_mem_abort+0x50/0xb0
[ 110.018726]  el1_da+0x20/0xc4
[ 110.019492]  __arch_copy_from_user+0x180/0x280
[ 110.020646]  do_wp_page+0xb0/0x860
[ 110.021517]  __handle_mm_fault+0x994/0x1338
[ 110.022606]  handle_mm_fault+0xe8/0x180
[ 110.023584]  do_page_fault+0x240/0x690
[ 110.024535]  do_mem_abort+0x50/0xb0
[ 110.025423]  el0_da+0x20/0x24

The pte info before __copy_from_user_inatomic is (PTE_AF is cleared):

[ffff9b007000] pgd=000000023d4f8003, pud=000000023da9b003,
               pmd=000000023d4b3003, pte=360000298607bd3

As told by Catalin: "On arm64 without hardware Access Flag, copying from
user will fail because the pte is old and cannot be marked young. So we
always end up with zeroed page after fork() + CoW for pfn mappings. We
don't always have a hardware-managed access flag on arm64."

This patch fixes it by calling pte_mkyoung(). Also, the parameters of
cow_user_page() are changed because vmf should be passed to it.

Add a WARN_ON_ONCE when __copy_from_user_inatomic() returns an error, in
case there is some obscure use-case (suggested by Kirill).

[1] https://github.com/pmem/pmdk/tree/master/src/test/vmmalloc_fork

Reported-by: Yibo Cai
Signed-off-by: Jia He
Reviewed-by: Matthew Wilcox (Oracle)
Acked-by: Kirill A. Shutemov
---
 mm/memory.c | 67 ++++++++++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 61 insertions(+), 6 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index e2bb51b6242e..ae09b070b04d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -118,6 +118,13 @@ int randomize_va_space __read_mostly = 2;
 #endif
 
+#ifndef arch_faults_on_old_pte
+static inline bool arch_faults_on_old_pte(void)
+{
+	return false;
+}
+#endif
+
 static int __init disable_randmaps(char *s)
 {
 	randomize_va_space = 0;
@@ -2140,8 +2147,13 @@ static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
 	return same;
 }
 
-static inline void cow_user_page(struct page *dst, struct page *src, unsigned long va, struct vm_area_struct *vma)
+static inline bool cow_user_page(struct page *dst, struct page *src,
+				 struct vm_fault *vmf)
 {
+	struct vm_area_struct *vma = vmf->vma;
+	struct mm_struct *mm = vma->vm_mm;
+	unsigned long addr = vmf->address;
+
 	debug_dma_assert_idle(src);
 
 	/*
@@ -2151,21 +2163,53 @@ static inline void cow_user_page(struct page *dst, struct page *src, unsigned lo
 	 * fails, we just zero-fill it. Live with it.
 	 */
 	if (unlikely(!src)) {
-		void *kaddr = kmap_atomic(dst);
-		void __user *uaddr = (void __user *)(va & PAGE_MASK);
+		void *kaddr;
+		pte_t entry;
+		void __user *uaddr = (void __user *)(addr & PAGE_MASK);
 
+		/* On architectures with software "accessed" bits, we would
+		 * take a double page fault, so mark it accessed here.
+		 */
+		if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte)) {
+			vmf->pte = pte_offset_map_lock(mm, vmf->pmd, addr,
+						       &vmf->ptl);
+			if (likely(pte_same(*vmf->pte, vmf->orig_pte))) {
+				entry = pte_mkyoung(vmf->orig_pte);
+				if (ptep_set_access_flags(vma, addr,
+							  vmf->pte, entry, 0))
+					update_mmu_cache(vma, addr, vmf->pte);
+			} else {
+				/* Other thread has already handled the fault
+				 * and we don't need to do anything. If it's
+				 * not the case, the fault will be triggered
+				 * again on the same address.
+				 */
+				pte_unmap_unlock(vmf->pte, vmf->ptl);
+				return false;
+			}
+			pte_unmap_unlock(vmf->pte, vmf->ptl);
+		}
+
+		kaddr = kmap_atomic(dst);
+
 		/*
 		 * This really shouldn't fail, because the page is there
 		 * in the page tables. But it might just be unreadable,
 		 * in which case we just give up and fill the result with
 		 * zeroes.
 		 */
-		if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE))
+		if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE)) {
+			/* Give a warn in case there can be some obscure
+			 * use-case
+			 */
+			WARN_ON_ONCE(1);
 			clear_page(kaddr);
+		}
+
 		kunmap_atomic(kaddr);
 		flush_dcache_page(dst);
 	} else
-		copy_user_highpage(dst, src, va, vma);
+		copy_user_highpage(dst, src, addr, vma);
+
+	return true;
 }
 
 static gfp_t __get_fault_gfp_mask(struct vm_area_struct *vma)
@@ -2318,7 +2362,18 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 				vmf->address);
 		if (!new_page)
 			goto oom;
-		cow_user_page(new_page, old_page, vmf->address, vma);
+
+		if (!cow_user_page(new_page, old_page, vmf)) {
+			/* COW failed, if the fault was solved by other,
+			 * it's fine. If not, userspace would re-fault on
+			 * the same address and we will handle the fault
+			 * from the second attempt.
+			 */
+			put_page(new_page);
+			if (old_page)
+				put_page(old_page);
+			return 0;
+		}
 	}
 
 	if (mem_cgroup_try_charge_delay(new_page, mm, GFP_KERNEL, &memcg, false))
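For readability outside the diff context, the core of the new cow_user_page()
logic can be restated as a standalone sketch. The function name is
hypothetical and the body simply mirrors the hunk above: take the page-table
lock, re-check the pte, mark it young, and bail out if another thread has
already handled the fault.

/* Hypothetical restatement of the new logic, not a separate change:
 * make an old pte young under the ptl before the atomic user copy, so
 * the copy does not fault on CPUs without hardware AF. Returns false
 * if the pte changed under us and the caller should let userspace
 * re-fault instead.
 */
static bool make_pte_young_for_copy(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	unsigned long addr = vmf->address;
	pte_t entry;

	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, addr, &vmf->ptl);
	if (!pte_same(*vmf->pte, vmf->orig_pte)) {
		/* Another thread raced us; the fault is (or will be) handled. */
		pte_unmap_unlock(vmf->pte, vmf->ptl);
		return false;
	}

	entry = pte_mkyoung(vmf->orig_pte);
	if (ptep_set_access_flags(vma, addr, vmf->pte, entry, 0))
		update_mmu_cache(vma, addr, vmf->pte);

	pte_unmap_unlock(vmf->pte, vmf->ptl);
	return true;
}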