From patchwork Thu Nov 17 21:02:53 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sidhartha Kumar <sidhartha.kumar@oracle.com>
X-Patchwork-Id: 13047295
From: Sidhartha Kumar <sidhartha.kumar@oracle.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org,
	songmuchun@bytedance.com, mike.kravetz@oracle.com, willy@infradead.org,
	almasrymina@google.com, linmiaohe@huawei.com, hughd@google.com,
	Sidhartha Kumar <sidhartha.kumar@oracle.com>
Subject: [PATCH mm-unstable v2 05/10] mm/hugetlb: convert update_and_free_page() to folios
Date: Thu, 17 Nov 2022 13:02:53 -0800
Message-Id: <20221117210258.12732-6-sidhartha.kumar@oracle.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20221117210258.12732-1-sidhartha.kumar@oracle.com>
References: <20221117210258.12732-1-sidhartha.kumar@oracle.com>

Make more progress on converting the free_huge_page() destructor to
operate on folios by converting update_and_free_page() to folios.

Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 mm/hugetlb.c | 30 ++++++++++++++++--------------
 1 file changed, 16 insertions(+), 14 deletions(-)
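Callers that still hold only a struct page (update_and_free_pages_bulk()
below, for instance) obtain the folio with page_folio() before calling the
new helper. A minimal sketch of the resulting calling convention follows;
example_free_path is a hypothetical caller used purely for illustration,
not code from this patch:

	static void example_free_path(struct hstate *h, struct page *page)
	{
		/* page_folio() is a pointer conversion; it takes no reference */
		struct folio *folio = page_folio(page);

		/*
		 * atomic == false: the folio may be freed (and its vmemmap
		 * restored) synchronously. atomic == true defers a
		 * vmemmap-optimized folio to the free_hpage_work workqueue,
		 * avoiding GFP_ATOMIC vmemmap allocations.
		 */
		update_and_free_hugetlb_folio(h, folio, false);
	}
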
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d9604c0dac54..80301fab56d8 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1478,7 +1478,7 @@ static void __remove_hugetlb_folio(struct hstate *h, struct folio *folio,
 	 * apply.
 	 *
 	 * This handles the case where more than one ref is held when and
-	 * after update_and_free_page is called.
+	 * after update_and_free_hugetlb_folio is called.
 	 *
 	 * In the case of demote we do not ref count the page as it will soon
 	 * be turned into a page of smaller size.
@@ -1609,7 +1609,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
 }
 
 /*
- * As update_and_free_page() can be called under any context, so we cannot
+ * As update_and_free_hugetlb_folio() can be called under any context, we cannot
  * use GFP_KERNEL to allocate vmemmap pages. However, we can defer the
  * actual freeing in a workqueue to prevent from using GFP_ATOMIC to allocate
  * the vmemmap pages.
@@ -1657,11 +1657,11 @@ static inline void flush_free_hpage_work(struct hstate *h)
 		flush_work(&free_hpage_work);
 }
 
-static void update_and_free_page(struct hstate *h, struct page *page,
+static void update_and_free_hugetlb_folio(struct hstate *h, struct folio *folio,
 				 bool atomic)
 {
-	if (!HPageVmemmapOptimized(page) || !atomic) {
-		__update_and_free_page(h, page);
+	if (!folio_test_hugetlb_vmemmap_optimized(folio) || !atomic) {
+		__update_and_free_page(h, &folio->page);
 		return;
 	}
 
@@ -1672,16 +1672,18 @@ static void update_and_free_page(struct hstate *h, struct page *page,
 	 * empty. Otherwise, schedule_work() had been called but the workfn
 	 * hasn't retrieved the list yet.
 	 */
-	if (llist_add((struct llist_node *)&page->mapping, &hpage_freelist))
+	if (llist_add((struct llist_node *)&folio->mapping, &hpage_freelist))
 		schedule_work(&free_hpage_work);
 }
 
 static void update_and_free_pages_bulk(struct hstate *h, struct list_head *list)
 {
 	struct page *page, *t_page;
+	struct folio *folio;
 
 	list_for_each_entry_safe(page, t_page, list, lru) {
-		update_and_free_page(h, page, false);
+		folio = page_folio(page);
+		update_and_free_hugetlb_folio(h, folio, false);
 		cond_resched();
 	}
 }
@@ -1751,12 +1753,12 @@ void free_huge_page(struct page *page)
 	if (folio_test_hugetlb_temporary(folio)) {
 		remove_hugetlb_folio(h, folio, false);
 		spin_unlock_irqrestore(&hugetlb_lock, flags);
-		update_and_free_page(h, page, true);
+		update_and_free_hugetlb_folio(h, folio, true);
 	} else if (h->surplus_huge_pages_node[nid]) {
 		/* remove the page from active list */
 		remove_hugetlb_folio(h, folio, true);
 		spin_unlock_irqrestore(&hugetlb_lock, flags);
-		update_and_free_page(h, page, true);
+		update_and_free_hugetlb_folio(h, folio, true);
 	} else {
 		arch_clear_hugepage_flags(page);
 		enqueue_huge_page(h, page);
@@ -2170,8 +2172,8 @@ int dissolve_free_huge_page(struct page *page)
 		spin_unlock_irq(&hugetlb_lock);
 
 		/*
-		 * Normally update_and_free_page will allocate required vmemmmap
-		 * before freeing the page. update_and_free_page will fail to
+		 * Normally update_and_free_hugetlb_folio will allocate required vmemmap
+		 * before freeing the page. update_and_free_hugetlb_folio will fail to
 		 * free the page if it can not allocate required vmemmap. We
 		 * need to adjust max_huge_pages if the page is not freed.
 		 * Attempt to allocate vmemmmap here so that we can take
@@ -2179,7 +2181,7 @@
 		 */
 		rc = hugetlb_vmemmap_restore(h, &folio->page);
 		if (!rc) {
-			update_and_free_page(h, &folio->page, false);
+			update_and_free_hugetlb_folio(h, folio, false);
 		} else {
 			spin_lock_irq(&hugetlb_lock);
 			add_hugetlb_page(h, &folio->page, false);
@@ -2816,7 +2818,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
 		 * Pages have been replaced, we can safely free the old one.
 		 */
 		spin_unlock_irq(&hugetlb_lock);
-		update_and_free_page(h, old_page, false);
+		update_and_free_hugetlb_folio(h, old_folio, false);
 	}
 
 	return ret;
@@ -2825,7 +2827,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
 	spin_unlock_irq(&hugetlb_lock);
 	/* Page has a zero ref count, but needs a ref to be freed */
 	folio_ref_unfreeze(new_folio, 1);
-	update_and_free_page(h, new_page, false);
+	update_and_free_hugetlb_folio(h, new_folio, false);
 
 	return ret;
 }
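
The deferred path works from any context because queuing needs no
allocation: the folio's otherwise-unused ->mapping field doubles as the
llist_node, and llist_add() returns true only when the list was empty, so
the worker is scheduled once per batch rather than once per item. A
standalone sketch of that pattern under generic, illustrative names
(pending_free, free_worker and defer_free are not hugetlb symbols):

	#include <linux/llist.h>
	#include <linux/workqueue.h>

	static LLIST_HEAD(pending_free);	/* lock-free stack of deferred items */

	static void free_worker(struct work_struct *work)
	{
		/* Runs in process context, so GFP_KERNEL allocation is allowed. */
		struct llist_node *node = llist_del_all(&pending_free);

		/* ... walk 'node' and free each entry ... */
	}
	static DECLARE_WORK(free_work, free_worker);

	static void defer_free(struct llist_node *node)
	{
		/* True only when the list was empty: one schedule per batch. */
		if (llist_add(node, &pending_free))
			schedule_work(&free_work);
	}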