From patchwork Mon Nov 27 08:46:45 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Muchun Song <songmuchun@bytedance.com>
X-Patchwork-Id: 13469267
From: Muchun Song <songmuchun@bytedance.com>
To: mike.kravetz@oracle.com, muchun.song@linux.dev, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH 4/4] mm: hugetlb_vmemmap: convert page to folio
Date: Mon, 27 Nov 2023 16:46:45 +0800
Message-Id: <20231127084645.27017-5-songmuchun@bytedance.com>
X-Mailer: git-send-email 2.39.3 (Apple Git-145)
In-Reply-To: <20231127084645.27017-1-songmuchun@bytedance.com>
References: <20231127084645.27017-1-songmuchun@bytedance.com>
There are still some places that have not been converted to folio, so
convert all of them to folio. This patch also does some trivial
cleanups to fix code style problems.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
---
(A brief before/after sketch of the conversion pattern is appended
after the diff.)

 mm/hugetlb_vmemmap.c | 51 ++++++++++++++++++++++----------------------
 1 file changed, 25 insertions(+), 26 deletions(-)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index ce920ca6c90ee..54f388aa361fb 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -280,7 +280,7 @@ static void vmemmap_restore_pte(pte_t *pte, unsigned long addr,
  * Return: %0 on success, negative error code otherwise.
  */
 static int vmemmap_remap_split(unsigned long start, unsigned long end,
-				unsigned long reuse)
+			       unsigned long reuse)
 {
 	int ret;
 	struct vmemmap_remap_walk walk = {
@@ -447,14 +447,14 @@ EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
 static bool vmemmap_optimize_enabled =
 	IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON);
 core_param(hugetlb_free_vmemmap, vmemmap_optimize_enabled, bool, 0);
 
-static int __hugetlb_vmemmap_restore_folio(const struct hstate *h, struct folio *folio, unsigned long flags)
+static int __hugetlb_vmemmap_restore_folio(const struct hstate *h,
+					   struct folio *folio, unsigned long flags)
 {
 	int ret;
-	struct page *head = &folio->page;
-	unsigned long vmemmap_start = (unsigned long)head, vmemmap_end;
+	unsigned long vmemmap_start = (unsigned long)&folio->page, vmemmap_end;
 	unsigned long vmemmap_reuse;
 
-	VM_WARN_ON_ONCE(!PageHuge(head));
+	VM_WARN_ON_ONCE_FOLIO(!folio_test_hugetlb(folio), folio);
 	if (!folio_test_hugetlb_vmemmap_optimized(folio))
 		return 0;
 
@@ -517,7 +517,7 @@ long hugetlb_vmemmap_restore_folios(const struct hstate *h,
 	list_for_each_entry_safe(folio, t_folio, folio_list, lru) {
 		if (folio_test_hugetlb_vmemmap_optimized(folio)) {
 			ret = __hugetlb_vmemmap_restore_folio(h, folio,
-						VMEMMAP_REMAP_NO_TLB_FLUSH);
+							      VMEMMAP_REMAP_NO_TLB_FLUSH);
 			if (ret)
 				break;
 			restored++;
@@ -535,9 +535,9 @@ long hugetlb_vmemmap_restore_folios(const struct hstate *h,
 }
 
 /* Return true iff a HugeTLB whose vmemmap should and can be optimized. */
-static bool vmemmap_should_optimize(const struct hstate *h, const struct page *head)
+static bool vmemmap_should_optimize_folio(const struct hstate *h, struct folio *folio)
 {
-	if (HPageVmemmapOptimized((struct page *)head))
+	if (folio_test_hugetlb_vmemmap_optimized(folio))
 		return false;
 
 	if (!READ_ONCE(vmemmap_optimize_enabled))
@@ -550,17 +550,16 @@ static bool vmemmap_should_optimize(const struct hstate *h, const struct page *h
 }
 
 static int __hugetlb_vmemmap_optimize_folio(const struct hstate *h,
-					struct folio *folio,
-					struct list_head *vmemmap_pages,
-					unsigned long flags)
+					    struct folio *folio,
+					    struct list_head *vmemmap_pages,
+					    unsigned long flags)
 {
 	int ret = 0;
-	struct page *head = &folio->page;
-	unsigned long vmemmap_start = (unsigned long)head, vmemmap_end;
+	unsigned long vmemmap_start = (unsigned long)&folio->page, vmemmap_end;
 	unsigned long vmemmap_reuse;
 
-	VM_WARN_ON_ONCE(!PageHuge(head));
-	if (!vmemmap_should_optimize(h, head))
+	VM_WARN_ON_ONCE_FOLIO(!folio_test_hugetlb(folio), folio);
+	if (!vmemmap_should_optimize_folio(h, folio))
 		return ret;
 
 	static_branch_inc(&hugetlb_optimize_vmemmap_key);
@@ -588,7 +587,7 @@ static int __hugetlb_vmemmap_optimize_folio(const struct hstate *h,
 	 * the caller.
 	 */
 	ret = vmemmap_remap_free(vmemmap_start, vmemmap_end, vmemmap_reuse,
-					vmemmap_pages, flags);
+				 vmemmap_pages, flags);
 	if (ret) {
 		static_branch_dec(&hugetlb_optimize_vmemmap_key);
 		folio_clear_hugetlb_vmemmap_optimized(folio);
@@ -615,12 +614,12 @@ void hugetlb_vmemmap_optimize_folio(const struct hstate *h, struct folio *folio)
 	free_vmemmap_page_list(&vmemmap_pages);
 }
 
-static int hugetlb_vmemmap_split(const struct hstate *h, struct page *head)
+static int hugetlb_vmemmap_split_folio(const struct hstate *h, struct folio *folio)
 {
-	unsigned long vmemmap_start = (unsigned long)head, vmemmap_end;
+	unsigned long vmemmap_start = (unsigned long)&folio->page, vmemmap_end;
 	unsigned long vmemmap_reuse;
 
-	if (!vmemmap_should_optimize(h, head))
+	if (!vmemmap_should_optimize_folio(h, folio))
 		return 0;
 
 	vmemmap_end = vmemmap_start + hugetlb_vmemmap_size(h);
@@ -640,7 +639,7 @@ void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_l
 	LIST_HEAD(vmemmap_pages);
 
 	list_for_each_entry(folio, folio_list, lru) {
-		int ret = hugetlb_vmemmap_split(h, &folio->page);
+		int ret = hugetlb_vmemmap_split_folio(h, folio);
 
 		/*
 		 * Spliting the PMD requires allocating a page, thus lets fail
@@ -655,9 +654,10 @@ void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_l
 		flush_tlb_all();
 
 	list_for_each_entry(folio, folio_list, lru) {
-		int ret = __hugetlb_vmemmap_optimize_folio(h, folio,
-							   &vmemmap_pages,
-							   VMEMMAP_REMAP_NO_TLB_FLUSH);
+		int ret;
+
+		ret = __hugetlb_vmemmap_optimize_folio(h, folio, &vmemmap_pages,
+						       VMEMMAP_REMAP_NO_TLB_FLUSH);
 
 		/*
 		 * Pages to be freed may have been accumulated. If we
@@ -671,9 +671,8 @@ void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_l
 			flush_tlb_all();
 			free_vmemmap_page_list(&vmemmap_pages);
 			INIT_LIST_HEAD(&vmemmap_pages);
-			__hugetlb_vmemmap_optimize_folio(h, folio,
-							 &vmemmap_pages,
-							 VMEMMAP_REMAP_NO_TLB_FLUSH);
+			__hugetlb_vmemmap_optimize_folio(h, folio, &vmemmap_pages,
+							 VMEMMAP_REMAP_NO_TLB_FLUSH);
 		}
 	}
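
The conversion pattern above is mechanical: every head-page helper is
replaced by its folio counterpart, and the vmemmap start address is
taken from the folio's embedded first struct page. A minimal
before/after sketch of that pattern, using only helpers that appear in
the diff (illustrative, not part of the patch):

	/* Before: checks expressed on the head page. */
	struct page *head = &folio->page;

	VM_WARN_ON_ONCE(!PageHuge(head));
	if (HPageVmemmapOptimized(head))
		return false;

	/* After: the same checks expressed on the folio itself. */
	VM_WARN_ON_ONCE_FOLIO(!folio_test_hugetlb(folio), folio);
	if (folio_test_hugetlb_vmemmap_optimized(folio))
		return false;

Because a folio is overlaid on its head page, (unsigned long)&folio->page
evaluates to the same address as the old (unsigned long)head, so the
vmemmap_start computation is unchanged in behavior.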