From patchwork Thu Jan 23 07:55:12 2020
X-Patchwork-Submitter: Xuefeng Wang
X-Patchwork-Id: 11346821
From: Xuefeng Wang <wxf.wang@hisilicon.com>
Subject: [PATCH v2 1/2] mm: add helpers pmdp_modify_prot_start/commit
Date: Thu, 23 Jan 2020 15:55:12 +0800
Message-ID: <20200123075514.15142-2-wxf.wang@hisilicon.com>
In-Reply-To: <20200123075514.15142-1-wxf.wang@hisilicon.com>
References: <20200123075514.15142-1-wxf.wang@hisilicon.com>
X-Mailer: git-send-email 2.17.1

Introduce the helpers pmdp_modify_prot_start/commit to abstract the pmdp
modify-protection transaction, mirroring the existing
ptep_modify_prot_start/commit helpers. Converting change_huge_pmd() to the
new helpers is functionally unchanged.

Signed-off-by: Xuefeng Wang <wxf.wang@hisilicon.com>
Signed-off-by: Chen Zhou <chenzhou10@huawei.com>
---
 include/asm-generic/pgtable.h | 35 +++++++++++++++++++++++++++++++++++
 mm/huge_memory.c              | 19 ++++++++-----------
 2 files changed, 43 insertions(+), 11 deletions(-)

-- 
2.17.1

diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index 798ea36a0549..e81bd58a9170 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -673,6 +673,41 @@ static inline void ptep_modify_prot_commit(struct vm_area_struct *vma,
 	__ptep_modify_prot_commit(vma, addr, ptep, pte);
 }
 #endif /* __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION */
+
+static inline pmd_t __pmdp_modify_prot_start(struct vm_area_struct *vma,
+					     unsigned long addr,
+					     pmd_t *pmdp)
+{
+	return pmdp_invalidate(vma, addr, pmdp);
+}
+
+static inline void __pmdp_modify_prot_commit(struct vm_area_struct *vma,
+					     unsigned long addr,
+					     pmd_t *pmdp, pmd_t pmd)
+{
+	set_pmd_at(vma->vm_mm, addr, pmdp, pmd);
+}
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+#ifndef __HAVE_ARCH_PMDP_MODIFY_PROT_TRANSACTION
+static inline pmd_t pmdp_modify_prot_start(struct vm_area_struct *vma,
+					    unsigned long addr,
+					    pmd_t *pmdp)
+{
+	return __pmdp_modify_prot_start(vma, addr, pmdp);
+}
+#endif /* __HAVE_ARCH_PMDP_MODIFY_PROT_TRANSACTION */
+
+/*
+ * Commit an update to a pmd.
+ */
+static inline void pmdp_modify_prot_commit(struct vm_area_struct *vma,
+					   unsigned long addr,
+					   pmd_t *pmdp, pmd_t old_pmd, pmd_t pmd)
+{
+	__pmdp_modify_prot_commit(vma, addr, pmdp, pmd);
+}
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 #endif /* CONFIG_MMU */
 
 /*
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a88093213674..53515a3c91dd 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1933,9 +1933,8 @@ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
 int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 		unsigned long addr, pgprot_t newprot, int prot_numa)
 {
-	struct mm_struct *mm = vma->vm_mm;
 	spinlock_t *ptl;
-	pmd_t entry;
+	pmd_t pmdnt, oldpmd;
 	bool preserve_write;
 	int ret;
 
@@ -1961,7 +1960,7 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 			newpmd = swp_entry_to_pmd(entry);
 			if (pmd_swp_soft_dirty(*pmd))
 				newpmd = pmd_swp_mksoft_dirty(newpmd);
-			set_pmd_at(mm, addr, pmd, newpmd);
+			set_pmd_at(vma->vm_mm, addr, pmd, newpmd);
 		}
 		goto unlock;
 	}
@@ -1995,18 +1994,16 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 	 *
 	 * The race makes MADV_DONTNEED miss the huge pmd and don't clear it
 	 * which may break userspace.
-	 *
-	 * pmdp_invalidate() is required to make sure we don't miss
-	 * dirty/young flags set by hardware.
 	 */
-	entry = pmdp_invalidate(vma, addr, pmd);
-	entry = pmd_modify(entry, newprot);
+	oldpmd = pmdp_modify_prot_start(vma, addr, pmd);
+	pmdnt = pmd_modify(oldpmd, newprot);
 	if (preserve_write)
-		entry = pmd_mk_savedwrite(entry);
+		pmdnt = pmd_mk_savedwrite(pmdnt);
+	pmdp_modify_prot_commit(vma, addr, pmd, oldpmd, pmdnt);
+
 	ret = HPAGE_PMD_NR;
-	set_pmd_at(mm, addr, pmd, entry);
-	BUG_ON(vma_is_anonymous(vma) && !preserve_write && pmd_write(entry));
+	BUG_ON(vma_is_anonymous(vma) && !preserve_write && pmd_write(pmdnt));
 unlock:
 	spin_unlock(ptl);
 	return ret;
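
The new interface is a three-step transaction: start returns the old pmd and
tears down the live entry, the caller computes the new value, and commit
installs it. A minimal sketch of the intended call pattern, mirroring the
change_huge_pmd() conversion above (example_change_pmd_prot is a hypothetical
name used only for illustration, not part of the patch):

static void example_change_pmd_prot(struct vm_area_struct *vma,
				    unsigned long addr, pmd_t *pmdp,
				    pgprot_t newprot)
{
	pmd_t old_pmd, new_pmd;

	/*
	 * Start the transaction: the old pmd value is returned; the generic
	 * version invalidates the entry, an architecture override may clear
	 * it instead.
	 */
	old_pmd = pmdp_modify_prot_start(vma, addr, pmdp);

	/* Build the new entry from the old one. */
	new_pmd = pmd_modify(old_pmd, newprot);

	/*
	 * Commit: install the new entry. The old value is passed along so an
	 * architecture override can consult it if needed.
	 */
	pmdp_modify_prot_commit(vma, addr, pmdp, old_pmd, new_pmd);
}
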
From patchwork Thu Jan 23 07:55:13 2020
X-Patchwork-Submitter: Xuefeng Wang
X-Patchwork-Id: 11346823
From: Xuefeng Wang <wxf.wang@hisilicon.com>
Subject: [PATCH v2 2/2] arm64: mm: rework the pmd protection changing flow
Date: Thu, 23 Jan 2020 15:55:13 +0800
Message-ID: <20200123075514.15142-3-wxf.wang@hisilicon.com>
In-Reply-To: <20200123075514.15142-1-wxf.wang@hisilicon.com>
References: <20200123075514.15142-1-wxf.wang@hisilicon.com>
X-Mailer: git-send-email 2.17.1

On a KunPeng920 board, when changing the permissions of a large memory region
backed by hugepages, pmdp_invalidate() accounts for about 65% of the profile
of a JIT tool. The kernel flushes the TLB twice: the first flush happens in
pmdp_invalidate(), the second at the end of change_protection_range(). The
first flush is not necessary if the hardware can change the pmd atomically:
atomically switching the pmd to zero prevents the hardware from updating the
entry asynchronously, and the second TLB flush is enough to make the new
entry visible. So rework the flow and remove the first pmdp_invalidate().

Add pmdp_modify_prot_start() on arm64, which uses pmdp_huge_get_and_clear()
to fetch the old pmd and zero the entry, preventing races with any hardware
updates. After the rework, mprotect() gets a 3 to 13 times performance gain
for ranges from 64M to 512M:

4K granule/THP on:
memory size (M)    64     128    256    320    448    512
pre-patch          0.77   1.40   2.64   3.23   4.49   5.10
post-patch         0.20   0.23   0.28   0.31   0.37   0.39

Signed-off-by: Xuefeng Wang <wxf.wang@hisilicon.com>
Signed-off-by: Chen Zhou <chenzhou10@huawei.com>
---
 arch/arm64/include/asm/pgtable.h | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

-- 
2.17.1

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index cd5de0e40bfa..bccdaa5bd5f2 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -769,6 +769,20 @@ static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+#define __HAVE_ARCH_PMDP_MODIFY_PROT_TRANSACTION
+static inline pmd_t pmdp_modify_prot_start(struct vm_area_struct *vma,
+					   unsigned long addr,
+					   pmd_t *pmdp)
+{
+	/*
+	 * Atomically clear the pmd, preventing the hardware from updating
+	 * it asynchronously.
+	 */
+	return pmdp_huge_get_and_clear(vma->vm_mm, addr, pmdp);
+}
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+
 /*
  * ptep_set_wrprotect - mark read-only while trasferring potential hardware
  * dirty status (PTE_DBM && !PTE_RDONLY) to the software PTE_DIRTY bit.
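
The numbers above describe timing mprotect() over large THP-backed anonymous
mappings. A rough userspace sketch of that kind of measurement follows; the
region size, MADV_HUGEPAGE hint and timing method are assumptions, not taken
from the patch:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

int main(void)
{
	size_t len = 512UL << 20;	/* 512M, the largest case in the table */
	struct timespec t0, t1;

	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	/* Ask for transparent hugepages so change_huge_pmd() is exercised. */
	madvise(p, len, MADV_HUGEPAGE);
	memset(p, 1, len);		/* fault the region in */

	clock_gettime(CLOCK_MONOTONIC, &t0);
	if (mprotect(p, len, PROT_READ)) {	/* one change_huge_pmd() per pmd */
		perror("mprotect");
		return 1;
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("mprotect took %.6f s\n",
	       (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
	munmap(p, len);
	return 0;
}

With the patch applied, the single TLB flush issued at the end of
change_protection_range() covers all the cleared pmds, which is where the
gain shown in the table comes from.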