From patchwork Thu Oct 11 03:52:43 2018
X-Patchwork-Submitter: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
X-Patchwork-Id: 10635681
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
To: akpm@linux-foundation.org, mpe@ellerman.id.au, benh@kernel.crashing.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	"Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Subject: [PATCH 1/5] mm: Update ptep_modify_prot_start/commit to take vm_area_struct as arg
Date: Thu, 11 Oct 2018 09:22:43 +0530
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20181011035247.30687-1-aneesh.kumar@linux.ibm.com>
References: <20181011035247.30687-1-aneesh.kumar@linux.ibm.com>
Message-Id: <20181011035247.30687-2-aneesh.kumar@linux.ibm.com>

Some architectures may want to call flush_tlb_range() from these helpers.
Pass the vm_area_struct down instead of just the mm_struct so that they
have the information needed to do so.
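
The sketch below is editorial illustration only, not part of the patch: a
minimal, hypothetical per-architecture override showing why the helpers now
take a vm_area_struct. flush_tlb_range() requires a vma, which the old
mm_struct-only signature could not supply; the function body and the
single-page flush range are assumptions for illustration, not the
implementation added later in this series.

/*
 * Hypothetical arch override (illustration only, not from this patch).
 * With the vma-based signature the helper can flush the TLB itself,
 * because flush_tlb_range() needs a struct vm_area_struct.
 */
#include <linux/mm.h>
#include <asm/tlbflush.h>

pte_t ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr,
			     pte_t *ptep)
{
	/* Clear the pte so hardware stops updating it behind our back. */
	pte_t old = ptep_get_and_clear(vma->vm_mm, addr, ptep);

	/* Only possible now that the helper receives the vma. */
	flush_tlb_range(vma, addr, addr + PAGE_SIZE);

	return old;
}

The generic fallback in include/asm-generic/pgtable.h (in the diff below)
keeps the old behaviour and simply derives the mm via vma->vm_mm.
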
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 arch/s390/include/asm/pgtable.h | 4 ++--
 arch/s390/mm/pgtable.c          | 6 ++++--
 arch/x86/include/asm/paravirt.h | 7 +++++--
 fs/proc/task_mmu.c              | 4 ++--
 include/asm-generic/pgtable.h   | 8 ++++----
 mm/memory.c                     | 4 ++--
 mm/mprotect.c                   | 4 ++--
 7 files changed, 21 insertions(+), 16 deletions(-)

diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index 0e7cb0dc9c33..8e7f26dfedc6 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -1035,8 +1035,8 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 }
 
 #define __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION
-pte_t ptep_modify_prot_start(struct mm_struct *, unsigned long, pte_t *);
-void ptep_modify_prot_commit(struct mm_struct *, unsigned long, pte_t *, pte_t);
+pte_t ptep_modify_prot_start(struct vm_area_struct *, unsigned long, pte_t *);
+void ptep_modify_prot_commit(struct vm_area_struct *, unsigned long, pte_t *, pte_t);
 
 #define __HAVE_ARCH_PTEP_CLEAR_FLUSH
 static inline pte_t ptep_clear_flush(struct vm_area_struct *vma,
diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
index f2cc7da473e4..29c0a21cd34a 100644
--- a/arch/s390/mm/pgtable.c
+++ b/arch/s390/mm/pgtable.c
@@ -301,12 +301,13 @@ pte_t ptep_xchg_lazy(struct mm_struct *mm, unsigned long addr,
 }
 EXPORT_SYMBOL(ptep_xchg_lazy);
 
-pte_t ptep_modify_prot_start(struct mm_struct *mm, unsigned long addr,
+pte_t ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr,
 			     pte_t *ptep)
 {
 	pgste_t pgste;
 	pte_t old;
 	int nodat;
+	struct mm_struct *mm = vma->vm_mm;
 
 	preempt_disable();
 	pgste = ptep_xchg_start(mm, addr, ptep);
@@ -320,10 +321,11 @@ pte_t ptep_modify_prot_start(struct mm_struct *mm, unsigned long addr,
 }
 EXPORT_SYMBOL(ptep_modify_prot_start);
 
-void ptep_modify_prot_commit(struct mm_struct *mm, unsigned long addr,
+void ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr,
 			     pte_t *ptep, pte_t pte)
 {
 	pgste_t pgste;
+	struct mm_struct *mm = vma->vm_mm;
 
 	if (!MACHINE_HAS_NX)
 		pte_val(pte) &= ~_PAGE_NOEXEC;
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index e375d4266b53..c5d203a51e50 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -421,10 +421,11 @@ static inline pgdval_t pgd_val(pgd_t pgd)
 }
 
 #define __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION
-static inline pte_t ptep_modify_prot_start(struct mm_struct *mm, unsigned long addr,
+static inline pte_t ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr,
 					   pte_t *ptep)
 {
 	pteval_t ret;
+	struct mm_struct *mm = vma->vm_mm;
 
 	ret = PVOP_CALL3(pteval_t, pv_mmu_ops.ptep_modify_prot_start,
 			 mm, addr, ptep);
@@ -432,9 +433,11 @@ static inline pte_t ptep_modify_prot_start(struct mm_struct *mm, unsigned long a
 	return (pte_t) { .pte = ret };
 }
 
-static inline void ptep_modify_prot_commit(struct mm_struct *mm, unsigned long addr,
+static inline void ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr,
 					   pte_t *ptep, pte_t pte)
 {
+	struct mm_struct *mm = vma->vm_mm;
+
 	if (sizeof(pteval_t) > sizeof(long))
 		/* 5 arg words */
 		pv_mmu_ops.ptep_modify_prot_commit(mm, addr, ptep, pte);
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 5ea1d64cb0b4..229df16e7ad0 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -938,10 +938,10 @@ static inline void clear_soft_dirty(struct vm_area_struct *vma,
 	pte_t ptent = *pte;
 
 	if (pte_present(ptent)) {
-		ptent = ptep_modify_prot_start(vma->vm_mm, addr, pte);
+		ptent = ptep_modify_prot_start(vma, addr, pte);
 		ptent = pte_wrprotect(ptent);
 		ptent = pte_clear_soft_dirty(ptent);
-		ptep_modify_prot_commit(vma->vm_mm, addr, pte, ptent);
+		ptep_modify_prot_commit(vma, addr, pte, ptent);
 	} else if (is_swap_pte(ptent)) {
 		ptent = pte_swp_clear_soft_dirty(ptent);
 		set_pte_at(vma->vm_mm, addr, pte, ptent);
diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index 88ebc6102c7c..021b94cd3260 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -606,22 +606,22 @@ static inline void __ptep_modify_prot_commit(struct mm_struct *mm,
  * queue the update to be done at some later time. The update must be
  * actually committed before the pte lock is released, however.
  */
-static inline pte_t ptep_modify_prot_start(struct mm_struct *mm,
+static inline pte_t ptep_modify_prot_start(struct vm_area_struct *vma,
 					   unsigned long addr,
 					   pte_t *ptep)
 {
-	return __ptep_modify_prot_start(mm, addr, ptep);
+	return __ptep_modify_prot_start(vma->vm_mm, addr, ptep);
 }
 
 /*
  * Commit an update to a pte, leaving any hardware-controlled bits in
  * the PTE unmodified.
  */
-static inline void ptep_modify_prot_commit(struct mm_struct *mm,
+static inline void ptep_modify_prot_commit(struct vm_area_struct *vma,
 					   unsigned long addr,
 					   pte_t *ptep, pte_t pte)
 {
-	__ptep_modify_prot_commit(mm, addr, ptep, pte);
+	__ptep_modify_prot_commit(vma->vm_mm, addr, ptep, pte);
 }
 #endif /* __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION */
 #endif /* CONFIG_MMU */
diff --git a/mm/memory.c b/mm/memory.c
index c467102a5cbc..261d30f51499 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3806,12 +3806,12 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	 * Make it present again, Depending on how arch implementes non
 	 * accessible ptes, some can allow access by kernel mode.
 	 */
-	pte = ptep_modify_prot_start(vma->vm_mm, vmf->address, vmf->pte);
+	pte = ptep_modify_prot_start(vma, vmf->address, vmf->pte);
 	pte = pte_modify(pte, vma->vm_page_prot);
 	pte = pte_mkyoung(pte);
 	if (was_writable)
 		pte = pte_mkwrite(pte);
-	ptep_modify_prot_commit(vma->vm_mm, vmf->address, vmf->pte, pte);
+	ptep_modify_prot_commit(vma, vmf->address, vmf->pte, pte);
 	update_mmu_cache(vma, vmf->address, vmf->pte);
 
 	page = vm_normal_page(vma, vmf->address, pte);
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 6d331620b9e5..a301d4c83d3c 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -110,7 +110,7 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 				continue;
 			}
 
-			ptent = ptep_modify_prot_start(mm, addr, pte);
+			ptent = ptep_modify_prot_start(vma, addr, pte);
 			ptent = pte_modify(ptent, newprot);
 			if (preserve_write)
 				ptent = pte_mk_savedwrite(ptent);
@@ -121,7 +121,7 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 			    !(vma->vm_flags & VM_SOFTDIRTY))) {
 				ptent = pte_mkwrite(ptent);
 			}
-			ptep_modify_prot_commit(mm, addr, pte, ptent);
+			ptep_modify_prot_commit(vma, addr, pte, ptent);
 			pages++;
 		} else if (IS_ENABLED(CONFIG_MIGRATION)) {
 			swp_entry_t entry = pte_to_swp_entry(oldpte);