From patchwork Thu Feb 23 00:57:51 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Axel Rasmussen
X-Patchwork-Id: 13149746
Date: Wed, 22 Feb 2023 16:57:51 -0800
In-Reply-To: <20230223005754.2700663-1-axelrasmussen@google.com>
References: <20230223005754.2700663-1-axelrasmussen@google.com>
X-Mailer: git-send-email 2.39.2.637.g21b0678d19-goog
Message-ID: <20230223005754.2700663-3-axelrasmussen@google.com>
Subject: [PATCH v2 2/5] mm: userfaultfd: don't pass around both mm and vma
From: Axel Rasmussen
To: Alexander Viro, Andrew Morton, Hugh Dickins, Jan Kara, "Liam R. Howlett",
 Matthew Wilcox, Mike Kravetz, Mike Rapoport, Muchun Song, Nadav Amit,
 Peter Xu, Shuah Khan
Cc: James Houghton, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, linux-kselftest@vger.kernel.org, Axel Rasmussen
Quite a few userfaultfd functions took both mm and vma pointers as arguments.
Since the mm is trivially accessible via vma->vm_mm, there's no reason to
pass both; it just needlessly extends the already long argument list.

Get rid of the mm pointer, where possible, to shorten the argument list.
Signed-off-by: Axel Rasmussen
---
 fs/userfaultfd.c              |  2 +-
 include/linux/hugetlb.h       |  5 ++-
 include/linux/shmem_fs.h      |  4 +--
 include/linux/userfaultfd_k.h |  4 +--
 mm/hugetlb.c                  |  9 +++--
 mm/shmem.c                    |  7 ++--
 mm/userfaultfd.c              | 66 ++++++++++++++++-------------------
 7 files changed, 45 insertions(+), 52 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index c08a26ae77d6..a95f6aaef76b 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -1637,7 +1637,7 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
 
 		/* Reset ptes for the whole vma range if wr-protected */
 		if (userfaultfd_wp(vma))
-			uffd_wp_range(mm, vma, start, vma_end - start, false);
+			uffd_wp_range(vma, start, vma_end - start, false);
 
 		new_flags = vma->vm_flags & ~__VM_UFFD_FLAGS;
 		prev = vma_merge(mm, prev, start, vma_end, new_flags,
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 3c389b74e02d..d3fc104aab78 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -157,7 +157,7 @@ unsigned long hugetlb_total_pages(void);
 vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 			unsigned long address, unsigned int flags);
 #ifdef CONFIG_USERFAULTFD
-int hugetlb_mfill_atomic_pte(struct mm_struct *dst_mm, pte_t *dst_pte,
+int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
			     struct vm_area_struct *dst_vma,
			     unsigned long dst_addr,
			     unsigned long src_addr,
@@ -355,8 +355,7 @@ static inline void hugetlb_free_pgd_range(struct mmu_gather *tlb,
 }
 
 #ifdef CONFIG_USERFAULTFD
-static inline int hugetlb_mfill_atomic_pte(struct mm_struct *dst_mm,
-					   pte_t *dst_pte,
+static inline int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
					   struct vm_area_struct *dst_vma,
					   unsigned long dst_addr,
					   unsigned long src_addr,
diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index d500ea967dc7..2a0b1dc0460f 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -149,14 +149,14 @@ extern void shmem_uncharge(struct inode *inode, long pages);
 #ifdef CONFIG_USERFAULTFD
 #ifdef CONFIG_SHMEM
-extern int shmem_mfill_atomic_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
+extern int shmem_mfill_atomic_pte(pmd_t *dst_pmd,
				  struct vm_area_struct *dst_vma,
				  unsigned long dst_addr,
				  unsigned long src_addr,
				  bool zeropage, bool wp_copy,
				  struct page **pagep);
 #else /* !CONFIG_SHMEM */
-#define shmem_mfill_atomic_pte(dst_mm, dst_pmd, dst_vma, dst_addr, \
+#define shmem_mfill_atomic_pte(dst_pmd, dst_vma, dst_addr, \
			       src_addr, zeropage, wp_copy, pagep) ({ BUG(); 0; })
 #endif /* CONFIG_SHMEM */
 #endif /* CONFIG_USERFAULTFD */
diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index 6c5ad5d4aa06..c6c23408d300 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -56,7 +56,7 @@ enum mcopy_atomic_mode {
 	MCOPY_ATOMIC_CONTINUE,
 };
 
-extern int mfill_atomic_install_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
+extern int mfill_atomic_install_pte(pmd_t *dst_pmd,
				    struct vm_area_struct *dst_vma,
				    unsigned long dst_addr, struct page *page,
				    bool newly_allocated, bool wp_copy);
@@ -73,7 +73,7 @@ extern ssize_t mfill_atomic_continue(struct mm_struct *dst_mm, unsigned long dst
 extern int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
			       unsigned long len, bool enable_wp,
			       atomic_t *mmap_changing);
-extern void uffd_wp_range(struct mm_struct *dst_mm, struct vm_area_struct *vma,
+extern void uffd_wp_range(struct vm_area_struct *vma,
			  unsigned long start, unsigned long len, bool enable_wp);
 
 /* mm helpers */
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 915a390442e7..0afd2ed8ad39 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6162,8 +6162,7 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
  * Used by userfaultfd UFFDIO_* ioctls. Based on userfaultfd's mfill_atomic_pte
  * with modifications for hugetlb pages.
  */
-int hugetlb_mfill_atomic_pte(struct mm_struct *dst_mm,
-			     pte_t *dst_pte,
+int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
			     struct vm_area_struct *dst_vma,
			     unsigned long dst_addr,
			     unsigned long src_addr,
@@ -6282,7 +6281,7 @@ int hugetlb_mfill_atomic_pte(struct mm_struct *dst_mm,
		page_in_pagecache = true;
	}
 
-	ptl = huge_pte_lock(h, dst_mm, dst_pte);
+	ptl = huge_pte_lock(h, dst_vma->vm_mm, dst_pte);
 
	ret = -EIO;
	if (PageHWPoison(page))
@@ -6324,9 +6323,9 @@ int hugetlb_mfill_atomic_pte(struct mm_struct *dst_mm,
	if (wp_copy)
		_dst_pte = huge_pte_mkuffd_wp(_dst_pte);
 
-	set_huge_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);
+	set_huge_pte_at(dst_vma->vm_mm, dst_addr, dst_pte, _dst_pte);
 
-	hugetlb_count_add(pages_per_huge_page(h), dst_mm);
+	hugetlb_count_add(pages_per_huge_page(h), dst_vma->vm_mm);
 
	/* No need to invalidate - it was non-present before */
	update_mmu_cache(dst_vma, dst_addr, dst_pte);
diff --git a/mm/shmem.c b/mm/shmem.c
index 41f82c5a5e28..cc03c61190eb 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2398,8 +2398,7 @@ static struct inode *shmem_get_inode(struct mnt_idmap *idmap, struct super_block
 }
 
 #ifdef CONFIG_USERFAULTFD
-int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
-			   pmd_t *dst_pmd,
+int shmem_mfill_atomic_pte(pmd_t *dst_pmd,
			   struct vm_area_struct *dst_vma,
			   unsigned long dst_addr,
			   unsigned long src_addr,
@@ -2489,11 +2488,11 @@ int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
		goto out_release;
 
	ret = shmem_add_to_page_cache(folio, mapping, pgoff, NULL,
-				      gfp & GFP_RECLAIM_MASK, dst_mm);
+				      gfp & GFP_RECLAIM_MASK, dst_vma->vm_mm);
	if (ret)
		goto out_release;
 
-	ret = mfill_atomic_install_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
+	ret = mfill_atomic_install_pte(dst_pmd, dst_vma, dst_addr,
				       &folio->page, true, wp_copy);
	if (ret)
		goto out_delete_from_cache;
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 3980e1b7b7f8..4bf5c97c665a 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -55,7 +55,7 @@ struct vm_area_struct *find_dst_vma(struct mm_struct *dst_mm,
  * This function handles both MCOPY_ATOMIC_NORMAL and _CONTINUE for both shmem
  * and anon, and for both shared and private VMAs.
  */
-int mfill_atomic_install_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
+int mfill_atomic_install_pte(pmd_t *dst_pmd,
			     struct vm_area_struct *dst_vma,
			     unsigned long dst_addr, struct page *page,
			     bool newly_allocated, bool wp_copy)
@@ -93,7 +93,7 @@ int mfill_atomic_install_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
	 */
		_dst_pte = pte_wrprotect(_dst_pte);
 
-	dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl);
+	dst_pte = pte_offset_map_lock(dst_vma->vm_mm, dst_pmd, dst_addr, &ptl);
 
	if (vma_is_shmem(dst_vma)) {
		/* serialize against truncate with the page table lock */
@@ -129,9 +129,9 @@ int mfill_atomic_install_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
	 * Must happen after rmap, as mm_counter() checks mapping (via
	 * PageAnon()), which is set by __page_set_anon_rmap().
	 */
-	inc_mm_counter(dst_mm, mm_counter(page));
+	inc_mm_counter(dst_vma->vm_mm, mm_counter(page));
 
-	set_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);
+	set_pte_at(dst_vma->vm_mm, dst_addr, dst_pte, _dst_pte);
 
	/* No need to invalidate - it was non-present before */
	update_mmu_cache(dst_vma, dst_addr, dst_pte);
@@ -141,8 +141,7 @@ int mfill_atomic_install_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
	return ret;
 }
 
-static int mfill_atomic_pte_copy(struct mm_struct *dst_mm,
-				 pmd_t *dst_pmd,
+static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
				 struct vm_area_struct *dst_vma,
				 unsigned long dst_addr,
				 unsigned long src_addr,
@@ -204,10 +203,10 @@ static int mfill_atomic_pte_copy(struct mm_struct *dst_mm,
	__SetPageUptodate(page);
 
	ret = -ENOMEM;
-	if (mem_cgroup_charge(page_folio(page), dst_mm, GFP_KERNEL))
+	if (mem_cgroup_charge(page_folio(page), dst_vma->vm_mm, GFP_KERNEL))
		goto out_release;
 
-	ret = mfill_atomic_install_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
+	ret = mfill_atomic_install_pte(dst_pmd, dst_vma, dst_addr,
				       page, true, wp_copy);
	if (ret)
		goto out_release;
@@ -218,8 +217,7 @@ static int mfill_atomic_pte_copy(struct mm_struct *dst_mm,
	goto out;
 }
 
-static int mfill_atomic_pte_zeropage(struct mm_struct *dst_mm,
-				     pmd_t *dst_pmd,
+static int mfill_atomic_pte_zeropage(pmd_t *dst_pmd,
				     struct vm_area_struct *dst_vma,
				     unsigned long dst_addr)
 {
@@ -231,7 +229,7 @@ static int mfill_atomic_pte_zeropage(struct mm_struct *dst_mm,
	_dst_pte = pte_mkspecial(pfn_pte(my_zero_pfn(dst_addr),
					 dst_vma->vm_page_prot));
-	dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl);
+	dst_pte = pte_offset_map_lock(dst_vma->vm_mm, dst_pmd, dst_addr, &ptl);
	if (dst_vma->vm_file) {
		/* the shmem MAP_PRIVATE case requires checking the i_size */
		inode = dst_vma->vm_file->f_inode;
@@ -244,7 +242,7 @@ static int mfill_atomic_pte_zeropage(struct mm_struct *dst_mm,
	ret = -EEXIST;
	if (!pte_none(*dst_pte))
		goto out_unlock;
-	set_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);
+	set_pte_at(dst_vma->vm_mm, dst_addr, dst_pte, _dst_pte);
	/* No need to invalidate - it was non-present before */
	update_mmu_cache(dst_vma, dst_addr, dst_pte);
	ret = 0;
@@ -254,8 +252,7 @@ static int mfill_atomic_pte_zeropage(struct mm_struct *dst_mm,
 }
 
 /* Handles UFFDIO_CONTINUE for all shmem VMAs (shared or private). */
-static int mfill_atomic_pte_continue(struct mm_struct *dst_mm,
-				     pmd_t *dst_pmd,
+static int mfill_atomic_pte_continue(pmd_t *dst_pmd,
				     struct vm_area_struct *dst_vma,
				     unsigned long dst_addr,
				     bool wp_copy)
@@ -283,7 +280,7 @@ static int mfill_atomic_pte_continue(struct mm_struct *dst_mm,
		goto out_release;
	}
 
-	ret = mfill_atomic_install_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
+	ret = mfill_atomic_install_pte(dst_pmd, dst_vma, dst_addr,
				       page, false, wp_copy);
	if (ret)
		goto out_release;
@@ -324,7 +321,7 @@ static pmd_t *mm_alloc_pmd(struct mm_struct *mm, unsigned long address)
  * mfill_atomic processing for HUGETLB vmas. Note that this routine is
  * called with mmap_lock held, it will release mmap_lock before returning.
  */
-static __always_inline ssize_t mfill_atomic_hugetlb(struct mm_struct *dst_mm,
+static __always_inline ssize_t mfill_atomic_hugetlb(
					      struct vm_area_struct *dst_vma,
					      unsigned long dst_start,
					      unsigned long src_start,
@@ -332,6 +329,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(struct mm_struct *dst_mm,
					      enum mcopy_atomic_mode mode,
					      bool wp_copy)
 {
+	struct mm_struct *dst_mm = dst_vma->vm_mm;
	int vm_shared = dst_vma->vm_flags & VM_SHARED;
	ssize_t err;
	pte_t *dst_pte;
@@ -425,7 +423,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(struct mm_struct *dst_mm,
			goto out_unlock;
		}
 
-		err = hugetlb_mfill_atomic_pte(dst_mm, dst_pte, dst_vma,
+		err = hugetlb_mfill_atomic_pte(dst_pte, dst_vma,
					       dst_addr, src_addr, mode, &page,
					       wp_copy);
@@ -477,17 +475,15 @@ static __always_inline ssize_t mfill_atomic_hugetlb(struct mm_struct *dst_mm,
 }
 #else /* !CONFIG_HUGETLB_PAGE */
 /* fail at build time if gcc attempts to use this */
-extern ssize_t mfill_atomic_hugetlb(struct mm_struct *dst_mm,
-				    struct vm_area_struct *dst_vma,
-				    unsigned long dst_start,
-				    unsigned long src_start,
-				    unsigned long len,
-				    enum mcopy_atomic_mode mode,
-				    bool wp_copy);
+extern ssize_t mfill_atomic_hugetlb(struct vm_area_struct *dst_vma,
+				    unsigned long dst_start,
+				    unsigned long src_start,
+				    unsigned long len,
+				    enum mcopy_atomic_mode mode,
+				    bool wp_copy);
 #endif /* CONFIG_HUGETLB_PAGE */
 
-static __always_inline ssize_t mfill_atomic_pte(struct mm_struct *dst_mm,
-						pmd_t *dst_pmd,
+static __always_inline ssize_t mfill_atomic_pte(pmd_t *dst_pmd,
						struct vm_area_struct *dst_vma,
						unsigned long dst_addr,
						unsigned long src_addr,
@@ -498,7 +494,7 @@ static __always_inline ssize_t mfill_atomic_pte(struct mm_struct *dst_mm,
	ssize_t err;
 
	if (mode == MCOPY_ATOMIC_CONTINUE) {
-		return mfill_atomic_pte_continue(dst_mm, dst_pmd, dst_vma,
+		return mfill_atomic_pte_continue(dst_pmd, dst_vma,
						 dst_addr, wp_copy);
	}
@@ -514,14 +510,14 @@ static __always_inline ssize_t mfill_atomic_pte(struct mm_struct *dst_mm,
	 */
	if (!(dst_vma->vm_flags & VM_SHARED)) {
		if (mode == MCOPY_ATOMIC_NORMAL)
-			err = mfill_atomic_pte_copy(dst_mm, dst_pmd, dst_vma,
+			err = mfill_atomic_pte_copy(dst_pmd, dst_vma,
						    dst_addr, src_addr, page,
						    wp_copy);
		else
-			err = mfill_atomic_pte_zeropage(dst_mm, dst_pmd,
+			err = mfill_atomic_pte_zeropage(dst_pmd,
							dst_vma, dst_addr);
	} else {
-		err = shmem_mfill_atomic_pte(dst_mm, dst_pmd, dst_vma,
+		err = shmem_mfill_atomic_pte(dst_pmd, dst_vma,
					     dst_addr, src_addr,
					     mode != MCOPY_ATOMIC_NORMAL,
					     wp_copy, page);
@@ -602,7 +598,7 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
	 * If this is a HUGETLB vma, pass off to appropriate routine
	 */
	if (is_vm_hugetlb_page(dst_vma))
-		return mfill_atomic_hugetlb(dst_mm, dst_vma, dst_start,
+		return mfill_atomic_hugetlb(dst_vma, dst_start,
					    src_start, len, mcopy_mode,
					    wp_copy);
@@ -655,7 +651,7 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
		BUG_ON(pmd_none(*dst_pmd));
		BUG_ON(pmd_trans_huge(*dst_pmd));
 
-		err = mfill_atomic_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
+		err = mfill_atomic_pte(dst_pmd, dst_vma, dst_addr,
				       src_addr, &page, mcopy_mode, wp_copy);
		cond_resched();
@@ -724,7 +720,7 @@ ssize_t mfill_atomic_continue(struct mm_struct *dst_mm, unsigned long start,
			      mmap_changing, 0);
 }
 
-void uffd_wp_range(struct mm_struct *dst_mm, struct vm_area_struct *dst_vma,
+void uffd_wp_range(struct vm_area_struct *dst_vma,
		   unsigned long start, unsigned long len, bool enable_wp)
 {
	struct mmu_gather tlb;
@@ -735,7 +731,7 @@ void uffd_wp_range(struct mm_struct *dst_mm, struct vm_area_struct *dst_vma,
	else
		newprot = vm_get_page_prot(dst_vma->vm_flags);
 
-	tlb_gather_mmu(&tlb, dst_mm);
+	tlb_gather_mmu(&tlb, dst_vma->vm_mm);
	change_protection(&tlb, dst_vma, start, start + len, newprot,
			  enable_wp ? MM_CP_UFFD_WP : MM_CP_UFFD_WP_RESOLVE);
	tlb_finish_mmu(&tlb);
@@ -786,7 +782,7 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
		goto out_unlock;
	}
 
-	uffd_wp_range(dst_mm, dst_vma, start, len, enable_wp);
+	uffd_wp_range(dst_vma, start, len, enable_wp);
	err = 0;
 out_unlock: