From patchwork Wed Feb 14 20:44:29 2024
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13557007
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand, Andrew Morton, Matthew Wilcox,
    Ryan Roberts, Catalin Marinas, Yin Fengwei, Michal Hocko, Will Deacon,
    "Aneesh Kumar K.V", Nick Piggin, Peter Zijlstra, Michael Ellerman,
    Christophe Leroy, "Naveen N. Rao", Heiko Carstens, Vasily Gorbik,
    Alexander Gordeev, Christian Borntraeger, Sven Schnelle, Arnd Bergmann,
    linux-arch@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    linux-s390@vger.kernel.org
Subject: [PATCH v3 04/10] mm/memory: factor out zapping folio pte into zap_present_folio_pte()
Date: Wed, 14 Feb 2024 21:44:29 +0100
Message-ID: <20240214204435.167852-5-david@redhat.com>
In-Reply-To: <20240214204435.167852-1-david@redhat.com>
References: <20240214204435.167852-1-david@redhat.com>
MIME-Version: 1.0

Let's prepare for further changes by factoring the zapping of a present
folio PTE out of zap_present_pte() into a separate function,
zap_present_folio_pte().
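For illustration only (not part of the applied diff): after this change,
zap_present_pte() merely resolves the page/folio and hands all folio-specific
work to the new helper. A simplified sketch of the resulting call structure,
with the exact code in the diff below:

	/*
	 * Illustrative sketch: the caller keeps the "no page" path and the
	 * should_zap_folio() check, and delegates the rest to the helper.
	 */
	static inline void zap_present_pte(struct mmu_gather *tlb,
			struct vm_area_struct *vma, pte_t *pte, pte_t ptent,
			unsigned long addr, struct zap_details *details,
			int *rss, bool *force_flush, bool *force_break)
	{
		struct page *page = vm_normal_page(vma, addr, ptent);
		struct folio *folio;

		if (!page) {
			/* No folio involved: zap the PTE directly. */
			...
			return;
		}

		folio = page_folio(page);
		if (unlikely(!should_zap_folio(details, folio)))
			return;
		/* All folio-specific zapping now lives in the helper. */
		zap_present_folio_pte(tlb, vma, folio, page, pte, ptent, addr,
				      details, rss, force_flush, force_break);
	}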
Reviewed-by: Ryan Roberts
Signed-off-by: David Hildenbrand
---
 mm/memory.c | 53 ++++++++++++++++++++++++++++++++---------------------
 1 file changed, 32 insertions(+), 21 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 7a3ebb6e5909..a3efc4da258a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1528,30 +1528,14 @@ zap_install_uffd_wp_if_needed(struct vm_area_struct *vma,
 	pte_install_uffd_wp_if_needed(vma, addr, pte, pteval);
 }
 
-static inline void zap_present_pte(struct mmu_gather *tlb,
-		struct vm_area_struct *vma, pte_t *pte, pte_t ptent,
-		unsigned long addr, struct zap_details *details,
-		int *rss, bool *force_flush, bool *force_break)
+static inline void zap_present_folio_pte(struct mmu_gather *tlb,
+		struct vm_area_struct *vma, struct folio *folio,
+		struct page *page, pte_t *pte, pte_t ptent, unsigned long addr,
+		struct zap_details *details, int *rss, bool *force_flush,
+		bool *force_break)
 {
 	struct mm_struct *mm = tlb->mm;
 	bool delay_rmap = false;
-	struct folio *folio;
-	struct page *page;
-
-	page = vm_normal_page(vma, addr, ptent);
-	if (!page) {
-		/* We don't need up-to-date accessed/dirty bits. */
-		ptep_get_and_clear_full(mm, addr, pte, tlb->fullmm);
-		arch_check_zapped_pte(vma, ptent);
-		tlb_remove_tlb_entry(tlb, pte, addr);
-		VM_WARN_ON_ONCE(userfaultfd_wp(vma));
-		ksm_might_unmap_zero_page(mm, ptent);
-		return;
-	}
-
-	folio = page_folio(page);
-	if (unlikely(!should_zap_folio(details, folio)))
-		return;
 
 	if (!folio_test_anon(folio)) {
 		ptent = ptep_get_and_clear_full(mm, addr, pte, tlb->fullmm);
@@ -1586,6 +1570,33 @@ static inline void zap_present_pte(struct mmu_gather *tlb,
 	}
 }
 
+static inline void zap_present_pte(struct mmu_gather *tlb,
+		struct vm_area_struct *vma, pte_t *pte, pte_t ptent,
+		unsigned long addr, struct zap_details *details,
+		int *rss, bool *force_flush, bool *force_break)
+{
+	struct mm_struct *mm = tlb->mm;
+	struct folio *folio;
+	struct page *page;
+
+	page = vm_normal_page(vma, addr, ptent);
+	if (!page) {
+		/* We don't need up-to-date accessed/dirty bits. */
+		ptep_get_and_clear_full(mm, addr, pte, tlb->fullmm);
+		arch_check_zapped_pte(vma, ptent);
+		tlb_remove_tlb_entry(tlb, pte, addr);
+		VM_WARN_ON_ONCE(userfaultfd_wp(vma));
+		ksm_might_unmap_zero_page(mm, ptent);
+		return;
+	}
+
+	folio = page_folio(page);
+	if (unlikely(!should_zap_folio(details, folio)))
+		return;
+	zap_present_folio_pte(tlb, vma, folio, page, pte, ptent, addr, details,
+			      rss, force_flush, force_break);
+}
+
 static unsigned long zap_pte_range(struct mmu_gather *tlb,
 				struct vm_area_struct *vma, pmd_t *pmd,
 				unsigned long addr, unsigned long end,