From patchwork Wed Jul 6 23:59:28 2022
X-Patchwork-Submitter: Zach O'Keefe
X-Patchwork-Id: 12908934
Date: Wed, 6 Jul 2022 16:59:28 -0700
In-Reply-To: <20220706235936.2197195-1-zokeefe@google.com>
Message-Id: <20220706235936.2197195-11-zokeefe@google.com>
References: <20220706235936.2197195-1-zokeefe@google.com>
Subject: [mm-unstable v7 10/18] mm/khugepaged:
	rename prefix of shared collapse functions
From: "Zach O'Keefe"
To: Alex Shi, David Hildenbrand, David Rientjes, Matthew Wilcox,
	Michal Hocko, Pasha Tatashin, Peter Xu, Rongwei Wang,
	SeongJae Park, Song Liu, Vlastimil Babka, Yang Shi, Zi Yan,
	linux-mm@kvack.org
Cc: Andrea Arcangeli, Andrew Morton, Arnd Bergmann, Axel Rasmussen,
	Chris Kennelly, Chris Zankel, Helge Deller, Hugh Dickins,
	Ivan Kokshaysky, "James E.J. Bottomley", Jens Axboe,
	"Kirill A. Shutemov", Matt Turner, Max Filippov, Miaohe Lin,
	Minchan Kim, Patrick Xia, Pavel Begunkov, Thomas Bogendoerfer,
	"Zach O'Keefe"

The following functions are shared between khugepaged and madvise
collapse contexts. Replace the "khugepaged_" prefix with the generic
"hpage_collapse_" prefix in such cases:

khugepaged_test_exit() -> hpage_collapse_test_exit()
khugepaged_scan_abort() -> hpage_collapse_scan_abort()
khugepaged_scan_pmd() -> hpage_collapse_scan_pmd()
khugepaged_find_target_node() -> hpage_collapse_find_target_node()
khugepaged_alloc_page() -> hpage_collapse_alloc_page()

The kernel ABI (e.g. the huge_memory:mm_khugepaged_scan_pmd tracepoint)
is unaltered.
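[Editor's note: to illustrate the naming convention this patch adopts,
here is a minimal standalone C sketch, not kernel code. Only the
"hpage_collapse_" prefix, the collapse_control struct, and its
is_khugepaged/last_target_node fields come from the patch; the toy
abort policy and scan_one_node() helper are invented for illustration.
Shared helpers carry the generic prefix, while the caller's context
travels in the control struct rather than in the function names.]

#include <stdbool.h>
#include <stdio.h>

/* Modeled on mm/khugepaged.c's struct collapse_control: one control
 * block serves both callers; is_khugepaged records the context. */
struct collapse_control {
	bool is_khugepaged;
	int last_target_node;
};

/* Shared helper: the generic "hpage_collapse_" prefix signals that it
 * is valid from either context. (Toy policy only; the real function
 * weighs per-node scan counts in node_load[].) */
static bool hpage_collapse_scan_abort(int nid, struct collapse_control *cc)
{
	return cc->last_target_node != -1 && nid != cc->last_target_node;
}

/* Hypothetical caller: either context uses the same shared helper. */
static void scan_one_node(struct collapse_control *cc, int nid)
{
	if (!hpage_collapse_scan_abort(nid, cc))
		printf("%s: scanning node %d\n",
		       cc->is_khugepaged ? "khugepaged" : "madvise_collapse",
		       nid);
}

int main(void)
{
	struct collapse_control khugepaged_cc = {
		.is_khugepaged = true, .last_target_node = -1 };
	struct collapse_control madvise_cc = {
		.is_khugepaged = false, .last_target_node = -1 };

	scan_one_node(&khugepaged_cc, 0);
	scan_one_node(&madvise_cc, 0);
	return 0;
}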
Signed-off-by: Zach O'Keefe
Reviewed-by: Yang Shi
---
 mm/khugepaged.c | 68 +++++++++++++++++++++++++------------------------
 1 file changed, 35 insertions(+), 33 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 2b2d832e44f2..e0d00180512c 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -94,7 +94,7 @@ struct collapse_control {
 	/* Num pages scanned per node */
 	int node_load[MAX_NUMNODES];
 
-	/* Last target selected in khugepaged_find_target_node() */
+	/* Last target selected in hpage_collapse_find_target_node() */
 	int last_target_node;
 };
 
@@ -438,7 +438,7 @@ static void insert_to_mm_slots_hash(struct mm_struct *mm,
 	hash_add(mm_slots_hash, &mm_slot->hash, (long)mm);
 }
 
-static inline int khugepaged_test_exit(struct mm_struct *mm)
+static inline int hpage_collapse_test_exit(struct mm_struct *mm)
 {
 	return atomic_read(&mm->mm_users) == 0;
 }
@@ -453,7 +453,7 @@ void __khugepaged_enter(struct mm_struct *mm)
 		return;
 
 	/* __khugepaged_exit() must not run from under us */
-	VM_BUG_ON_MM(khugepaged_test_exit(mm), mm);
+	VM_BUG_ON_MM(hpage_collapse_test_exit(mm), mm);
 	if (unlikely(test_and_set_bit(MMF_VM_HUGEPAGE, &mm->flags))) {
 		free_mm_slot(mm_slot);
 		return;
@@ -505,11 +505,10 @@ void __khugepaged_exit(struct mm_struct *mm)
 	} else if (mm_slot) {
 		/*
 		 * This is required to serialize against
-		 * khugepaged_test_exit() (which is guaranteed to run
-		 * under mmap sem read mode). Stop here (after we
-		 * return all pagetables will be destroyed) until
-		 * khugepaged has finished working on the pagetables
-		 * under the mmap_lock.
+		 * hpage_collapse_test_exit() (which is guaranteed to run
+		 * under mmap sem read mode). Stop here (after we return all
+		 * pagetables will be destroyed) until khugepaged has finished
+		 * working on the pagetables under the mmap_lock.
 		 */
 		mmap_write_lock(mm);
 		mmap_write_unlock(mm);
@@ -754,13 +753,12 @@ static void khugepaged_alloc_sleep(void)
 	remove_wait_queue(&khugepaged_wait, &wait);
 }
 
-
 struct collapse_control khugepaged_collapse_control = {
 	.is_khugepaged = true,
 	.last_target_node = NUMA_NO_NODE,
 };
 
-static bool khugepaged_scan_abort(int nid, struct collapse_control *cc)
+static bool hpage_collapse_scan_abort(int nid, struct collapse_control *cc)
 {
 	int i;
 
@@ -795,7 +793,7 @@ static inline gfp_t alloc_hugepage_khugepaged_gfpmask(void)
 }
 
 #ifdef CONFIG_NUMA
-static int khugepaged_find_target_node(struct collapse_control *cc)
+static int hpage_collapse_find_target_node(struct collapse_control *cc)
 {
 	int nid, target_node = 0, max_value = 0;
 
@@ -819,13 +817,13 @@ static int khugepaged_find_target_node(struct collapse_control *cc)
 	return target_node;
 }
 #else
-static int khugepaged_find_target_node(struct collapse_control *cc)
+static int hpage_collapse_find_target_node(struct collapse_control *cc)
 {
 	return 0;
 }
 #endif
 
-static bool khugepaged_alloc_page(struct page **hpage, gfp_t gfp, int node)
+static bool hpage_collapse_alloc_page(struct page **hpage, gfp_t gfp, int node)
 {
 	*hpage = __alloc_pages_node(node, gfp, HPAGE_PMD_ORDER);
 	if (unlikely(!*hpage)) {
@@ -850,7 +848,7 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
 {
 	struct vm_area_struct *vma;
 
-	if (unlikely(khugepaged_test_exit(mm)))
+	if (unlikely(hpage_collapse_test_exit(mm)))
 		return SCAN_ANY_PROCESS;
 
 	*vmap = vma = find_vma(mm, address);
@@ -913,7 +911,7 @@ static int check_pmd_still_valid(struct mm_struct *mm,
 
 /*
  * Bring missing pages in from swap, to complete THP collapse.
- * Only done if khugepaged_scan_pmd believes it is worthwhile.
+ * Only done if hpage_collapse_scan_pmd believes it is worthwhile.
  *
  * Called and returns without pte mapped or spinlocks held.
  * Note that if false is returned, mmap_lock will be released.
@@ -978,9 +976,9 @@ static int alloc_charge_hpage(struct page **hpage, struct mm_struct *mm,
 	/* Only allocate from the target node */
 	gfp_t gfp = (cc->is_khugepaged ? alloc_hugepage_khugepaged_gfpmask() :
 		     GFP_TRANSHUGE) | __GFP_THISNODE;
-	int node = khugepaged_find_target_node(cc);
+	int node = hpage_collapse_find_target_node(cc);
 
-	if (!khugepaged_alloc_page(hpage, gfp, node))
+	if (!hpage_collapse_alloc_page(hpage, gfp, node))
 		return SCAN_ALLOC_HUGE_PAGE_FAIL;
 	if (unlikely(mem_cgroup_charge(page_folio(*hpage), mm, gfp)))
 		return SCAN_CGROUP_CHARGE_FAIL;
@@ -1140,9 +1138,10 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	return result;
 }
 
-static int khugepaged_scan_pmd(struct mm_struct *mm, struct vm_area_struct *vma,
-			       unsigned long address, bool *mmap_locked,
-			       struct collapse_control *cc)
+static int hpage_collapse_scan_pmd(struct mm_struct *mm,
+				   struct vm_area_struct *vma,
+				   unsigned long address, bool *mmap_locked,
+				   struct collapse_control *cc)
 {
 	pmd_t *pmd;
 	pte_t *pte, *_pte;
@@ -1234,7 +1233,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm, struct vm_area_struct *vma,
 		 * hit record.
 		 */
 		node = page_to_nid(page);
-		if (khugepaged_scan_abort(node, cc)) {
+		if (hpage_collapse_scan_abort(node, cc)) {
 			result = SCAN_SCAN_ABORT;
 			goto out_unmap;
 		}
@@ -1313,7 +1312,7 @@ static void collect_mm_slot(struct mm_slot *mm_slot)
 
 	lockdep_assert_held(&khugepaged_mm_lock);
 
-	if (khugepaged_test_exit(mm)) {
+	if (hpage_collapse_test_exit(mm)) {
 		/* free mm_slot */
 		hash_del(&mm_slot->hash);
 		list_del(&mm_slot->mm_node);
@@ -1486,7 +1485,7 @@ static void khugepaged_collapse_pte_mapped_thps(struct mm_slot *mm_slot)
 	if (!mmap_write_trylock(mm))
 		return;
 
-	if (unlikely(khugepaged_test_exit(mm)))
+	if (unlikely(hpage_collapse_test_exit(mm)))
 		goto out;
 
 	for (i = 0; i < mm_slot->nr_pte_mapped_thp; i++)
@@ -1548,7 +1547,8 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
 			 * it'll always mapped in small page size for uffd-wp
 			 * registered ranges.
 			 */
-			if (!khugepaged_test_exit(mm) && !userfaultfd_wp(vma))
+			if (!hpage_collapse_test_exit(mm) &&
+			    !userfaultfd_wp(vma))
 				collapse_and_free_pmd(mm, vma, addr, pmd);
 			mmap_write_unlock(mm);
 		} else {
@@ -1975,7 +1975,7 @@ static int khugepaged_scan_file(struct mm_struct *mm, struct file *file,
 		}
 
 		node = page_to_nid(page);
-		if (khugepaged_scan_abort(node, cc)) {
+		if (hpage_collapse_scan_abort(node, cc)) {
 			result = SCAN_SCAN_ABORT;
 			break;
 		}
@@ -2069,7 +2069,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
 		goto breakouterloop_mmap_lock;
 
 	progress++;
-	if (unlikely(khugepaged_test_exit(mm)))
+	if (unlikely(hpage_collapse_test_exit(mm)))
 		goto breakouterloop;
 
 	address = khugepaged_scan.address;
@@ -2078,7 +2078,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
 		unsigned long hstart, hend;
 
 		cond_resched();
-		if (unlikely(khugepaged_test_exit(mm))) {
+		if (unlikely(hpage_collapse_test_exit(mm))) {
 			progress++;
 			break;
 		}
@@ -2099,7 +2099,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
 			bool mmap_locked = true;
 
 			cond_resched();
-			if (unlikely(khugepaged_test_exit(mm)))
+			if (unlikely(hpage_collapse_test_exit(mm)))
 				goto breakouterloop;
 
 			VM_BUG_ON(khugepaged_scan.address < hstart ||
@@ -2116,9 +2116,10 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
 				mmap_locked = false;
 				fput(file);
 			} else {
-				*result = khugepaged_scan_pmd(mm, vma,
-							      khugepaged_scan.address,
-							      &mmap_locked, cc);
+				*result = hpage_collapse_scan_pmd(mm, vma,
+								  khugepaged_scan.address,
+								  &mmap_locked,
+								  cc);
 			}
 			if (*result == SCAN_SUCCEED)
 				++khugepaged_pages_collapsed;
@@ -2148,7 +2149,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
 	 * Release the current mm_slot if this mm is about to die, or
 	 * if we scanned all vmas of this mm.
	 */
-	if (khugepaged_test_exit(mm) || !vma) {
+	if (hpage_collapse_test_exit(mm) || !vma) {
 		/*
 		 * Make sure that if mm_users is reaching zero while
 		 * khugepaged runs here, khugepaged_exit will find
@@ -2432,7 +2433,8 @@ int madvise_collapse(struct vm_area_struct *vma, struct vm_area_struct **prev,
 		}
 		mmap_assert_locked(mm);
 		memset(cc->node_load, 0, sizeof(cc->node_load));
-		result = khugepaged_scan_pmd(mm, vma, addr, &mmap_locked, cc);
+		result = hpage_collapse_scan_pmd(mm, vma, addr, &mmap_locked,
						 cc);
 		if (!mmap_locked)
 			*prev = NULL;	/* Tell caller we dropped mmap_lock */
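[Editor's note: for readers unfamiliar with the madvise collapse context
that these renamed helpers now also serve, below is a minimal userspace
sketch of exercising it. MADV_COLLAPSE is introduced elsewhere in this
series; the fallback value of 25 and the 2MiB PMD size are assumptions
for a kernel with the series applied on x86-64, not facts from this
patch.]

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

#ifndef MADV_COLLAPSE
#define MADV_COLLAPSE 25	/* assumed uapi value from this series */
#endif

int main(void)
{
	size_t len = 2UL << 20;	/* one PMD-sized (2MiB) region, x86-64 */
	void *buf = aligned_alloc(len, len);

	if (!buf)
		return 1;
	memset(buf, 1, len);	/* fault the range in as small pages */

	/* Ask the kernel to collapse the range into a huge page; this
	 * fails (e.g. EINVAL) on kernels without MADV_COLLAPSE. */
	if (madvise(buf, len, MADV_COLLAPSE))
		perror("madvise(MADV_COLLAPSE)");

	free(buf);
	return 0;
}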