From patchwork Fri Sep 20 13:54:35 2019
X-Patchwork-Submitter: Jia He
X-Patchwork-Id: 11154491
From: Jia He
To: Catalin Marinas, Will Deacon, Mark Rutland, James Morse, Marc Zyngier,
    Matthew Wilcox, "Kirill A. Shutemov",
Shutemov" , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Suzuki Poulose Subject: [PATCH v7 1/3] arm64: cpufeature: introduce helper cpu_has_hw_af() Date: Fri, 20 Sep 2019 21:54:35 +0800 Message-Id: <20190920135437.25622-2-justin.he@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190920135437.25622-1-justin.he@arm.com> References: <20190920135437.25622-1-justin.he@arm.com> X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190920_065458_561528_40C35EA4 X-CRM114-Status: GOOD ( 10.99 ) X-Spam-Score: 0.0 (/) X-Spam-Report: SpamAssassin version 3.4.2 on bombadil.infradead.org summary: Content analysis details: (0.0 points) pts rule name description ---- ---------------------- -------------------------------------------------- -0.0 SPF_PASS SPF: sender matches SPF record 0.0 SPF_HELO_NONE SPF: HELO does not publish an SPF Record X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Ralph Campbell , Jia He , Anshuman Khandual , Alex Van Brunt , Kaly Xin , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Punit Agrawal , hejianet@gmail.com, Andrew Morton , nd@arm.com, Robin Murphy , Thomas Gleixner MIME-Version: 1.0 Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org We unconditionally set the HW_AFDBM capability and only enable it on CPUs which really have the feature. But sometimes we need to know whether this cpu has the capability of HW AF. So decouple AF from DBM by new helper cpu_has_hw_af(). Reported-by: kbuild test robot Suggested-by: Suzuki Poulose Signed-off-by: Jia He --- arch/arm64/include/asm/cpufeature.h | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h index c96ffa4722d3..46caf934ba4e 100644 --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -667,6 +667,16 @@ static inline u32 id_aa64mmfr0_parange_to_phys_shift(int parange) default: return CONFIG_ARM64_PA_BITS; } } + +/* Decouple AF from AFDBM. 
+static inline bool cpu_has_hw_af(void)
+{
+	if (IS_ENABLED(CONFIG_ARM64_HW_AFDBM))
+		return read_cpuid(ID_AA64MMFR1_EL1) & 0xf;
+
+	return false;
+}
+
 #endif /* __ASSEMBLY__ */
 
 #endif

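The "& 0xf" in cpu_has_hw_af() above extracts ID_AA64MMFR1_EL1.HAFDBS
(bits [3:0]): a value of 1 means hardware Access Flag updates are supported,
and 2 (or higher in later architecture revisions) additionally means hardware
dirty-bit management. Below is a minimal stand-alone sketch of that decoding;
the constants and helper names are illustrative, not taken from the kernel.

#include <stdbool.h>
#include <stdint.h>

#define MMFR1_HAFDBS_SHIFT	0
#define MMFR1_HAFDBS_MASK	0xfULL

/* 0: no HW AF/DBM, 1: HW AF only, >= 2: HW AF and DBM */
static bool hw_af_supported(uint64_t mmfr1)
{
	return ((mmfr1 >> MMFR1_HAFDBS_SHIFT) & MMFR1_HAFDBS_MASK) != 0;
}

static bool hw_dbm_supported(uint64_t mmfr1)
{
	return ((mmfr1 >> MMFR1_HAFDBS_SHIFT) & MMFR1_HAFDBS_MASK) >= 2;
}

int main(void)
{
	uint64_t mmfr1 = 0x2;	/* example register value: AF and DBM present */

	return (hw_af_supported(mmfr1) && hw_dbm_supported(mmfr1)) ? 0 : 1;
}
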
Shutemov" , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Suzuki Poulose Subject: [PATCH v7 2/3] arm64: mm: implement arch_faults_on_old_pte() on arm64 Date: Fri, 20 Sep 2019 21:54:36 +0800 Message-Id: <20190920135437.25622-3-justin.he@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190920135437.25622-1-justin.he@arm.com> References: <20190920135437.25622-1-justin.he@arm.com> X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190920_065505_647023_8CE7931B X-CRM114-Status: GOOD ( 10.46 ) X-Spam-Score: 0.0 (/) X-Spam-Report: SpamAssassin version 3.4.2 on bombadil.infradead.org summary: Content analysis details: (0.0 points) pts rule name description ---- ---------------------- -------------------------------------------------- -0.0 SPF_PASS SPF: sender matches SPF record 0.0 SPF_HELO_NONE SPF: HELO does not publish an SPF Record X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Ralph Campbell , Jia He , Anshuman Khandual , Alex Van Brunt , Kaly Xin , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Punit Agrawal , hejianet@gmail.com, Andrew Morton , nd@arm.com, Robin Murphy , Thomas Gleixner MIME-Version: 1.0 Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org On arm64 without hardware Access Flag, copying fromuser will fail because the pte is old and cannot be marked young. So we always end up with zeroed page after fork() + CoW for pfn mappings. we don't always have a hardware-managed access flag on arm64. Hence implement arch_faults_on_old_pte on arm64 to indicate that it might cause page fault when accessing old pte. Signed-off-by: Jia He --- arch/arm64/include/asm/pgtable.h | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index e09760ece844..4a9939615e41 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -868,6 +868,18 @@ static inline void update_mmu_cache(struct vm_area_struct *vma, #define phys_to_ttbr(addr) (addr) #endif +/* + * On arm64 without hardware Access Flag, copying fromuser will fail because + * the pte is old and cannot be marked young. So we always end up with zeroed + * page after fork() + CoW for pfn mappings. we don't always have a + * hardware-managed access flag on arm64. 
+ */
+static inline bool arch_faults_on_old_pte(void)
+{
+	return !cpu_has_hw_af();
+}
+#define arch_faults_on_old_pte arch_faults_on_old_pte
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* __ASM_PGTABLE_H */

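The trailing "#define arch_faults_on_old_pte arch_faults_on_old_pte" is what
lets generic code detect that an architecture override exists: an #ifndef
check only succeeds when no override was provided. A minimal self-contained
sketch of that idiom follows; the "arch"/"generic" split in the comments is
illustrative rather than the real kernel file layout, but it mirrors the
#ifndef fallback that patch 3/3 adds to mm/memory.c.

#include <stdbool.h>

/* "Arch" half, as in the pgtable.h hunk above: define, then #define the name. */
static inline bool arch_faults_on_old_pte(void)
{
	return true;	/* pretend this CPU has no hardware Access Flag */
}
#define arch_faults_on_old_pte arch_faults_on_old_pte

/* "Generic" half, as in mm/memory.c: the fallback is only compiled when no
 * architecture override was seen before this point.
 */
#ifndef arch_faults_on_old_pte
static inline bool arch_faults_on_old_pte(void)
{
	return false;
}
#endif

int main(void)
{
	/* Returns 0 here because the "arch" override above is picked up. */
	return arch_faults_on_old_pte() ? 0 : 1;
}
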
Shutemov" , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Suzuki Poulose Subject: [PATCH v7 3/3] mm: fix double page fault on arm64 if PTE_AF is cleared Date: Fri, 20 Sep 2019 21:54:37 +0800 Message-Id: <20190920135437.25622-4-justin.he@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190920135437.25622-1-justin.he@arm.com> References: <20190920135437.25622-1-justin.he@arm.com> X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190920_065510_280595_39761FB7 X-CRM114-Status: GOOD ( 18.53 ) X-Spam-Score: 0.0 (/) X-Spam-Report: SpamAssassin version 3.4.2 on bombadil.infradead.org summary: Content analysis details: (0.0 points) pts rule name description ---- ---------------------- -------------------------------------------------- -0.0 SPF_PASS SPF: sender matches SPF record 0.0 SPF_HELO_NONE SPF: HELO does not publish an SPF Record X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Ralph Campbell , Jia He , Anshuman Khandual , Alex Van Brunt , Kaly Xin , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Punit Agrawal , hejianet@gmail.com, Andrew Morton , nd@arm.com, Robin Murphy , Thomas Gleixner MIME-Version: 1.0 Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org When we tested pmdk unit test [1] vmmalloc_fork TEST1 in arm64 guest, there will be a double page fault in __copy_from_user_inatomic of cow_user_page. Below call trace is from arm64 do_page_fault for debugging purpose [ 110.016195] Call trace: [ 110.016826] do_page_fault+0x5a4/0x690 [ 110.017812] do_mem_abort+0x50/0xb0 [ 110.018726] el1_da+0x20/0xc4 [ 110.019492] __arch_copy_from_user+0x180/0x280 [ 110.020646] do_wp_page+0xb0/0x860 [ 110.021517] __handle_mm_fault+0x994/0x1338 [ 110.022606] handle_mm_fault+0xe8/0x180 [ 110.023584] do_page_fault+0x240/0x690 [ 110.024535] do_mem_abort+0x50/0xb0 [ 110.025423] el0_da+0x20/0x24 The pte info before __copy_from_user_inatomic is (PTE_AF is cleared): [ffff9b007000] pgd=000000023d4f8003, pud=000000023da9b003, pmd=000000023d4b3003, pte=360000298607bd3 As told by Catalin: "On arm64 without hardware Access Flag, copying from user will fail because the pte is old and cannot be marked young. So we always end up with zeroed page after fork() + CoW for pfn mappings. we don't always have a hardware-managed access flag on arm64." This patch fix it by calling pte_mkyoung. Also, the parameter is changed because vmf should be passed to cow_user_page() Add a WARN_ON_ONCE when __copy_from_user_inatomic() returns error in case there can be some obscure use-case.(by Kirill) [1] https://github.com/pmem/pmdk/tree/master/src/test/vmmalloc_fork Reported-by: Yibo Cai Signed-off-by: Jia He Acked-by: Kirill A. 
---
 mm/memory.c | 67 ++++++++++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 61 insertions(+), 6 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index e2bb51b6242e..3e39e40fee87 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -118,6 +118,13 @@ int randomize_va_space __read_mostly =
 			2;
 #endif
 
+#ifndef arch_faults_on_old_pte
+static inline bool arch_faults_on_old_pte(void)
+{
+	return false;
+}
+#endif
+
 static int __init disable_randmaps(char *s)
 {
 	randomize_va_space = 0;
@@ -2140,8 +2147,13 @@ static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
 	return same;
 }
 
-static inline void cow_user_page(struct page *dst, struct page *src, unsigned long va, struct vm_area_struct *vma)
+static inline int cow_user_page(struct page *dst, struct page *src,
+				struct vm_fault *vmf)
 {
+	struct vm_area_struct *vma = vmf->vma;
+	struct mm_struct *mm = vma->vm_mm;
+	unsigned long addr = vmf->address;
+
 	debug_dma_assert_idle(src);
 
 	/*
@@ -2151,21 +2163,53 @@ static inline void cow_user_page(struct page *dst, struct page *src, unsigned lo
 	 * fails, we just zero-fill it. Live with it.
 	 */
 	if (unlikely(!src)) {
-		void *kaddr = kmap_atomic(dst);
-		void __user *uaddr = (void __user *)(va & PAGE_MASK);
+		void *kaddr;
+		pte_t entry;
+		void __user *uaddr = (void __user *)(addr & PAGE_MASK);
 
+		/* On architectures with software "accessed" bits, we would
+		 * take a double page fault, so mark it accessed here.
+		 */
+		if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte)) {
+			vmf->pte = pte_offset_map_lock(mm, vmf->pmd, addr,
+						       &vmf->ptl);
+			if (likely(pte_same(*vmf->pte, vmf->orig_pte))) {
+				entry = pte_mkyoung(vmf->orig_pte);
+				if (ptep_set_access_flags(vma, addr,
+							  vmf->pte, entry, 0))
+					update_mmu_cache(vma, addr, vmf->pte);
+			} else {
+				/* Other thread has already handled the fault
+				 * and we don't need to do anything. If it's
+				 * not the case, the fault will be triggered
+				 * again on the same address.
+				 */
+				pte_unmap_unlock(vmf->pte, vmf->ptl);
+				return -1;
+			}
+			pte_unmap_unlock(vmf->pte, vmf->ptl);
+		}
+
+		kaddr = kmap_atomic(dst);
 		/*
 		 * This really shouldn't fail, because the page is there
 		 * in the page tables. But it might just be unreadable,
 		 * in which case we just give up and fill the result with
 		 * zeroes.
 		 */
-		if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE))
+		if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE)) {
+			/* Give a warn in case there can be some obscure
+			 * use-case
+			 */
+			WARN_ON_ONCE(1);
 			clear_page(kaddr);
+		}
 		kunmap_atomic(kaddr);
 		flush_dcache_page(dst);
 	} else
-		copy_user_highpage(dst, src, va, vma);
+		copy_user_highpage(dst, src, addr, vma);
+
+	return 0;
 }
 
 static gfp_t __get_fault_gfp_mask(struct vm_area_struct *vma)
@@ -2318,7 +2362,18 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 				vmf->address);
 		if (!new_page)
 			goto oom;
-		cow_user_page(new_page, old_page, vmf->address, vma);
+
+		if (cow_user_page(new_page, old_page, vmf)) {
+			/* COW failed, if the fault was solved by other,
+			 * it's fine. If not, userspace would re-fault on
+			 * the same address and we will handle the fault
+			 * from the second attempt.
+			 */
+			put_page(new_page);
+			if (old_page)
+				put_page(old_page);
+			return 0;
+		}
 	}
 
 	if (mem_cgroup_try_charge_delay(new_page, mm, GFP_KERNEL, &memcg, false))
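
To relate the fix back to the symptom described in the commit message, here
is a conceptual user-space sketch of the scenario: fork() plus CoW over a
MAP_PRIVATE mapping of a file on a DAX filesystem, the kind of pfn mapping
the pmdk vmmalloc_fork test exercises. It only illustrates the description
above; it is not the pmdk test and not a guaranteed reproducer, the file path
is a placeholder, and whether the pte is actually left old depends on the DAX
fault path of the kernel under test.

#include <assert.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	/* Placeholder path: a file with known contents on a DAX-mounted fs. */
	int fd = open("/mnt/pmem/testfile", O_RDWR);
	char *p;

	if (fd < 0)
		return 1;
	p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
	if (p == MAP_FAILED)
		return 1;

	char first = p[0];	/* read fault installs a read-only pfn pte */

	if (fork() == 0) {
		p[1] = 'x';	/* write fault: CoW copy via cow_user_page() */
		/* Without the fix, the in-atomic user copy faults on the old
		 * pte, the copy is skipped and the new page is zero-filled,
		 * so the byte read before fork() is lost.
		 */
		assert(p[0] == first);
		_exit(0);
	}
	wait(NULL);
	return 0;
}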