From patchwork Tue Mar 8 21:34:11 2022
X-Patchwork-Submitter: Zach O'Keefe
X-Patchwork-Id: 12774397
Date: Tue, 8 Mar 2022 13:34:11 -0800
In-Reply-To: <20220308213417.1407042-1-zokeefe@google.com>
Message-Id: <20220308213417.1407042-9-zokeefe@google.com>
References: <20220308213417.1407042-1-zokeefe@google.com>
X-Mailer: git-send-email 2.35.1.616.g0bdcbb4464-goog
Subject: [RFC PATCH 08/14] mm/thp: add madv_thp_vm_flags to __transparent_hugepage_enabled()
From: "Zach O'Keefe"
To: Alex Shi, David Hildenbrand, David Rientjes, Michal Hocko,
 Pasha Tatashin, SeongJae Park, Song Liu, Vlastimil Babka, Zi Yan,
 linux-mm@kvack.org
Cc: Andrea Arcangeli, Andrew Morton, Arnd Bergmann, Axel Rasmussen,
 Chris Kennelly, Chris Zankel, Helge Deller, Hugh Dickins,
 Ivan Kokshaysky, "James E.J. Bottomley", Jens Axboe,
 "Kirill A. Shutemov", Matthew Wilcox, Matt Turner, Max Filippov,
 Miaohe Lin, Minchan Kim, Patrick Xia, Pavel Begunkov, Peter Xu,
 Richard Henderson, Thomas Bogendoerfer, Yang Shi, "Zach O'Keefe"

Later in the series, in madvise collapse context, we will want to
optionally ignore MADV_NOHUGEPAGE. However, we'd also like to
standardize on __transparent_hugepage_enabled() for determining anon
THP eligibility.

Add a new argument to __transparent_hugepage_enabled() which represents
the vma flags to be used instead of those in vma->vm_flags for the
VM_[NO]HUGEPAGE checks. That is, checks inside
__transparent_hugepage_enabled() which previously didn't care about
madvise settings, such as the DAX check or the stack check, are
unaffected.

Signed-off-by: Zach O'Keefe
---
 include/linux/huge_mm.h | 14 ++++++++++----
 mm/huge_memory.c        |  2 +-
 mm/memory.c             |  6 ++++--
 3 files changed, 15 insertions(+), 7 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 2999190adc22..fd905b0b2c71 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -143,8 +143,13 @@ static inline bool transhuge_vma_enabled(struct vm_area_struct *vma,
 /*
  * to be used on vmas which are known to support THP.
  * Use transparent_hugepage_active otherwise
+ *
+ * madv_thp_vm_flags are used instead of vma->vm_flags for VM_NOHUGEPAGE
+ * and VM_HUGEPAGE. Principal use is ignoring VM_NOHUGEPAGE when in madvise
+ * collapse context.
  */
-static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma)
+static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma,
+						  unsigned long madv_thp_vm_flags)
 {
 
 	/*
@@ -153,7 +158,7 @@ static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma)
 	if (transparent_hugepage_flags & (1 << TRANSPARENT_HUGEPAGE_NEVER_DAX))
 		return false;
 
-	if (!transhuge_vma_enabled(vma, vma->vm_flags))
+	if (!transhuge_vma_enabled(vma, madv_thp_vm_flags))
 		return false;
 
 	if (vma_is_temporary_stack(vma))
@@ -167,7 +172,7 @@ static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma)
 
 	if (transparent_hugepage_flags &
 				(1 << TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG))
-		return !!(vma->vm_flags & VM_HUGEPAGE);
+		return !!(madv_thp_vm_flags & VM_HUGEPAGE);
 
 	return false;
 }
@@ -316,7 +321,8 @@ static inline bool folio_test_pmd_mappable(struct folio *folio)
 	return false;
 }
 
-static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma)
+static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma,
+						  unsigned long madv_thp_vm_flags)
 {
 	return false;
 }
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 3557aabe86fe..25b7590b9846 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -83,7 +83,7 @@ bool transparent_hugepage_active(struct vm_area_struct *vma)
 	if (!transhuge_vma_suitable(vma, addr))
 		return false;
 	if (vma_is_anonymous(vma))
-		return __transparent_hugepage_enabled(vma);
+		return __transparent_hugepage_enabled(vma, vma->vm_flags);
 	if (vma_is_shmem(vma))
 		return shmem_huge_enabled(vma);
 	if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS))
diff --git a/mm/memory.c b/mm/memory.c
index 4499cf09c21f..a6f2a8a20329 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4695,7 +4695,8 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 	if (!vmf.pud)
 		return VM_FAULT_OOM;
 retry_pud:
-	if (pud_none(*vmf.pud) && __transparent_hugepage_enabled(vma)) {
+	if (pud_none(*vmf.pud) &&
+	    __transparent_hugepage_enabled(vma, vma->vm_flags)) {
 		ret = create_huge_pud(&vmf);
 		if (!(ret & VM_FAULT_FALLBACK))
 			return ret;
@@ -4726,7 +4727,8 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 	if (pud_trans_unstable(vmf.pud))
 		goto retry_pud;
 
-	if (pmd_none(*vmf.pmd) && __transparent_hugepage_enabled(vma)) {
+	if (pmd_none(*vmf.pmd) &&
+	    __transparent_hugepage_enabled(vma, vma->vm_flags)) {
 		ret = create_huge_pmd(&vmf);
 		if (!(ret & VM_FAULT_FALLBACK))
 			return ret;