From patchwork Tue Nov 15 21:22:08 2022
X-Patchwork-Submitter: Sid Kumar
X-Patchwork-Id: 13044189
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com,
    mike.kravetz@oracle.com, willy@infradead.org, almasrymina@google.com,
    linmiaohe@huawei.com, hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable 01/10] mm: add folio dtor and order setter functions
Date: Tue, 15 Nov 2022 13:22:08 -0800
Message-Id: <20221115212217.19539-2-sidhartha.kumar@oracle.com>
In-Reply-To: <20221115212217.19539-1-sidhartha.kumar@oracle.com>
References: <20221115212217.19539-1-sidhartha.kumar@oracle.com>

Add folio equivalents for set_compound_order() and set_compound_page_dtor().

Also remove the extra newlines introduced by "mm/hugetlb: convert
move_hugetlb_state() to folios" and "mm/hugetlb_cgroup: convert
hugetlb_cgroup_uncharge_page() to folios".
Suggested-by: Mike Kravetz
Suggested-by: Muchun Song
Signed-off-by: Sidhartha Kumar
---
 include/linux/mm.h | 16 ++++++++++++++++
 mm/hugetlb.c       |  4 +---
 2 files changed, 17 insertions(+), 3 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 982f2607180b..068686110729 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -895,6 +895,13 @@ static inline void set_compound_page_dtor(struct page *page,
         page[1].compound_dtor = compound_dtor;
 }

+static inline void folio_set_compound_dtor(struct folio *folio,
+                enum compound_dtor_id compound_dtor)
+{
+        VM_BUG_ON_FOLIO(compound_dtor >= NR_COMPOUND_DTORS, folio);
+        folio->_folio_dtor = compound_dtor;
+}
+
 void destroy_large_folio(struct folio *folio);

 static inline int head_compound_pincount(struct page *head)
@@ -910,6 +917,15 @@ static inline void set_compound_order(struct page *page, unsigned int order)
 #endif
 }

+static inline void folio_set_compound_order(struct folio *folio,
+                unsigned int order)
+{
+        folio->_folio_order = order;
+#ifdef CONFIG_64BIT
+        folio->_folio_nr_pages = order ? 1U << order : 0;
+#endif
+}
+
 /* Returns the number of pages in this potentially compound page. */
 static inline unsigned long compound_nr(struct page *page)
 {
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index f786993f92d0..1acde3b8251e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1771,7 +1771,7 @@ static void __prep_new_hugetlb_folio(struct hstate *h, struct folio *folio)
 {
         hugetlb_vmemmap_optimize(h, &folio->page);
         INIT_LIST_HEAD(&folio->lru);
-        folio->_folio_dtor = HUGETLB_PAGE_DTOR;
+        folio_set_compound_dtor(folio, HUGETLB_PAGE_DTOR);
         hugetlb_set_folio_subpool(folio, NULL);
         set_hugetlb_cgroup(folio, NULL);
         set_hugetlb_cgroup_rsvd(folio, NULL);
@@ -2927,7 +2927,6 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
          * a reservation exists for the allocation.
          */
         page = dequeue_huge_page_vma(h, vma, addr, avoid_reserve, gbl_chg);
-
         if (!page) {
                 spin_unlock_irq(&hugetlb_lock);
                 page = alloc_buddy_huge_page_with_mpol(h, vma, addr);
@@ -7317,7 +7316,6 @@ void move_hugetlb_state(struct folio *old_folio, struct folio *new_folio, int re
         int old_nid = folio_nid(old_folio);
         int new_nid = folio_nid(new_folio);

-
         folio_set_hugetlb_temporary(old_folio);
         folio_clear_hugetlb_temporary(new_folio);
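For illustration only (not part of the patch): a minimal sketch of how a
caller could use the two helpers added above when initializing a compound
folio it has just allocated. prep_demo_folio() and its choice of
COMPOUND_PAGE_DTOR are hypothetical, used only for this example.

#include <linux/mm.h>

/* Sketch: record destructor and order on a freshly prepared folio. */
static void prep_demo_folio(struct folio *folio, unsigned int order)
{
        /* Folio analogue of set_compound_page_dtor(); stores the dtor id. */
        folio_set_compound_dtor(folio, COMPOUND_PAGE_DTOR);

        /*
         * Folio analogue of set_compound_order(); on CONFIG_64BIT kernels
         * this also fills _folio_nr_pages, matching the helper above.
         */
        folio_set_compound_order(folio, order);
}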
From patchwork Tue Nov 15 21:22:09 2022
X-Patchwork-Submitter: Sid Kumar
X-Patchwork-Id: 13044191
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com,
    mike.kravetz@oracle.com, willy@infradead.org, almasrymina@google.com,
    linmiaohe@huawei.com, hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable 02/10] mm/hugetlb: convert
 destroy_compound_gigantic_page() to folios
Date: Tue, 15 Nov 2022 13:22:09 -0800
Message-Id: <20221115212217.19539-3-sidhartha.kumar@oracle.com>
In-Reply-To: <20221115212217.19539-1-sidhartha.kumar@oracle.com>
References: <20221115212217.19539-1-sidhartha.kumar@oracle.com>

Convert page operations within __destroy_compound_gigantic_page() to the
corresponding folio operations.
Signed-off-by: Sidhartha Kumar
---
 mm/hugetlb.c | 41 ++++++++++++++++++++---------------------
 1 file changed, 20 insertions(+), 21 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 1acde3b8251e..cf52cd0d571e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1317,42 +1317,39 @@ static int hstate_next_node_to_free(struct hstate *h, nodemask_t *nodes_allowed)
                 nr_nodes--)

 /* used to demote non-gigantic_huge pages as well */
-static void __destroy_compound_gigantic_page(struct page *page,
+static void __destroy_compound_gigantic_folio(struct folio *folio,
                                         unsigned int order, bool demote)
 {
         int i;
         int nr_pages = 1 << order;
         struct page *p;

-        atomic_set(compound_mapcount_ptr(page), 0);
-        atomic_set(compound_pincount_ptr(page), 0);
+        atomic_set(folio_mapcount_ptr(folio), 0);
+        atomic_set(folio_pincount_ptr(folio), 0);

         for (i = 1; i < nr_pages; i++) {
-                p = nth_page(page, i);
+                p = folio_page(folio, i);
                 p->mapping = NULL;
                 clear_compound_head(p);
                 if (!demote)
                         set_page_refcounted(p);
         }

-        set_compound_order(page, 0);
-#ifdef CONFIG_64BIT
-        page[1].compound_nr = 0;
-#endif
-        __ClearPageHead(page);
+        folio_set_compound_order(folio, 0);
+        folio_clear_head(folio);
 }

-static void destroy_compound_hugetlb_page_for_demote(struct page *page,
+static void destroy_compound_hugetlb_folio_for_demote(struct folio *folio,
                                         unsigned int order)
 {
-        __destroy_compound_gigantic_page(page, order, true);
+        __destroy_compound_gigantic_folio(folio, order, true);
 }

 #ifdef CONFIG_ARCH_HAS_GIGANTIC_PAGE
-static void destroy_compound_gigantic_page(struct page *page,
+static void destroy_compound_gigantic_folio(struct folio *folio,
                                         unsigned int order)
 {
-        __destroy_compound_gigantic_page(page, order, false);
+        __destroy_compound_gigantic_folio(folio, order, false);
 }

 static void free_gigantic_page(struct page *page, unsigned int order)
@@ -1421,7 +1418,7 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
         return NULL;
 }
 static inline void free_gigantic_page(struct page *page, unsigned int order) { }
-static inline void destroy_compound_gigantic_page(struct page *page,
+static inline void destroy_compound_gigantic_folio(struct folio *folio,
                                                 unsigned int order) { }
 #endif

@@ -1468,8 +1465,8 @@ static void __remove_hugetlb_page(struct hstate *h, struct page *page,
          *
          * For gigantic pages set the destructor to the null dtor. This
          * destructor will never be called. Before freeing the gigantic
-         * page destroy_compound_gigantic_page will turn the compound page
-         * into a simple group of pages. After this the destructor does not
+         * page destroy_compound_gigantic_folio will turn the folio into a
+         * simple group of pages. After this the destructor does not
          * apply.
          *
          * This handles the case where more than one ref is held when and
@@ -1550,6 +1547,7 @@ static void add_hugetlb_page(struct hstate *h, struct page *page,
 static void __update_and_free_page(struct hstate *h, struct page *page)
 {
         int i;
+        struct folio *folio = page_folio(page);
         struct page *subpage;

         if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
                 return;
@@ -1578,8 +1576,8 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
          * Move PageHWPoison flag from head page to the raw error pages,
          * which makes any healthy subpages reusable.
          */
-        if (unlikely(PageHWPoison(page)))
-                hugetlb_clear_page_hwpoison(page);
+        if (unlikely(folio_test_hwpoison(folio)))
+                hugetlb_clear_page_hwpoison(&folio->page);

         for (i = 0; i < pages_per_huge_page(h); i++) {
                 subpage = nth_page(page, i);
@@ -1595,7 +1593,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
          */
         if (hstate_is_gigantic(h) ||
             hugetlb_cma_page(page, huge_page_order(h))) {
-                destroy_compound_gigantic_page(page, huge_page_order(h));
+                destroy_compound_gigantic_folio(folio, huge_page_order(h));
                 free_gigantic_page(page, huge_page_order(h));
         } else {
                 __free_pages(page, huge_page_order(h));
@@ -3426,6 +3424,7 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)
 {
         int i, nid = page_to_nid(page);
         struct hstate *target_hstate;
+        struct folio *folio = page_folio(page);
         struct page *subpage;
         int rc = 0;

@@ -3444,10 +3443,10 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)
         }

         /*
-         * Use destroy_compound_hugetlb_page_for_demote for all huge page
+         * Use destroy_compound_hugetlb_folio_for_demote for all huge page
          * sizes as it will not ref count pages.
          */
-        destroy_compound_hugetlb_page_for_demote(page, huge_page_order(h));
+        destroy_compound_hugetlb_folio_for_demote(folio, huge_page_order(h));

         /*
          * Taking target hstate mutex synchronizes with set_max_huge_pages.
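For illustration only (not part of the patch): a sketch of the calling
convention inside mm/hugetlb.c after this conversion. A caller that still
holds a head page converts it once with page_folio() and then stays in
folio space; demo_destroy_gigantic() is a hypothetical wrapper name.

/* Sketch: bridge from a head page to the new folio-based destroy path. */
static void demo_destroy_gigantic(struct page *head, unsigned int order)
{
        struct folio *folio = page_folio(head);

        /* File-local folio variant introduced by this patch. */
        destroy_compound_gigantic_folio(folio, order);
}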
From patchwork Tue Nov 15 21:22:10 2022
X-Patchwork-Submitter: Sid Kumar
X-Patchwork-Id: 13044188
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com,
    mike.kravetz@oracle.com, willy@infradead.org, almasrymina@google.com,
    linmiaohe@huawei.com, hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable 03/10] mm/hugetlb: convert
 dissolve_free_huge_page() to folios
Date: Tue, 15 Nov 2022 13:22:10 -0800
Message-Id: <20221115212217.19539-4-sidhartha.kumar@oracle.com>
In-Reply-To: <20221115212217.19539-1-sidhartha.kumar@oracle.com>
References: <20221115212217.19539-1-sidhartha.kumar@oracle.com>
Removes compound_head() call by using a folio rather than a head page.

Signed-off-by: Sidhartha Kumar
---
 mm/hugetlb.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index cf52cd0d571e..19657f990900 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2116,21 +2116,21 @@ static struct page *remove_pool_huge_page(struct hstate *h,
 int dissolve_free_huge_page(struct page *page)
 {
         int rc = -EBUSY;
+        struct folio *folio = page_folio(page);

 retry:
         /* Not to disrupt normal path by vainly holding hugetlb_lock */
-        if (!PageHuge(page))
+        if (!folio_test_hugetlb(folio))
                 return 0;

         spin_lock_irq(&hugetlb_lock);
-        if (!PageHuge(page)) {
+        if (!folio_test_hugetlb(folio)) {
                 rc = 0;
                 goto out;
         }

-        if (!page_count(page)) {
-                struct page *head = compound_head(page);
-                struct hstate *h = page_hstate(head);
+        if (!folio_ref_count(folio)) {
+                struct hstate *h = folio_hstate(folio);
                 if (!available_huge_pages(h))
                         goto out;

@@ -2138,7 +2138,7 @@ int dissolve_free_huge_page(struct page *page)
                  * We should make sure that the page is already on the free list
                  * when it is dissolved.
                  */
-                if (unlikely(!HPageFreed(head))) {
+                if (unlikely(!folio_test_hugetlb_freed(folio))) {
                         spin_unlock_irq(&hugetlb_lock);
                         cond_resched();

@@ -2153,7 +2153,7 @@ int dissolve_free_huge_page(struct page *page)
                         goto retry;
                 }

-                remove_hugetlb_page(h, head, false);
+                remove_hugetlb_page(h, &folio->page, false);
                 h->max_huge_pages--;
                 spin_unlock_irq(&hugetlb_lock);

@@ -2165,12 +2165,12 @@ int dissolve_free_huge_page(struct page *page)
                  * Attempt to allocate vmemmmap here so that we can take
                  * appropriate action on failure.
                  */
-                rc = hugetlb_vmemmap_restore(h, head);
+                rc = hugetlb_vmemmap_restore(h, &folio->page);
                 if (!rc) {
-                        update_and_free_page(h, head, false);
+                        update_and_free_page(h, &folio->page, false);
                 } else {
                         spin_lock_irq(&hugetlb_lock);
-                        add_hugetlb_page(h, head, false);
+                        add_hugetlb_page(h, &folio->page, false);
                         h->max_huge_pages++;
                         spin_unlock_irq(&hugetlb_lock);
                 }
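For illustration only (not part of the patch): a sketch of the check pattern
this conversion produces, resolving the head page once via page_folio() and
then using folio-based tests. demo_is_free_hugetlb() is a hypothetical helper
name used only for this example; the real code performs these checks under
hugetlb_lock.

/* Sketch: one compound_head() lookup, then folio-based tests. */
static bool demo_is_free_hugetlb(struct page *page)
{
        struct folio *folio = page_folio(page);

        if (!folio_test_hugetlb(folio))
                return false;

        /* A dissolvable hugetlb folio sits on the free list unreferenced. */
        return folio_ref_count(folio) == 0 &&
               folio_test_hugetlb_freed(folio);
}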
From patchwork Tue Nov 15 21:22:11 2022
X-Patchwork-Submitter: Sid Kumar
X-Patchwork-Id: 13044193
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com,
    mike.kravetz@oracle.com, willy@infradead.org, almasrymina@google.com,
    linmiaohe@huawei.com, hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable 04/10] mm/hugetlb: convert remove_hugetlb_page()
 to folios
Date: Tue, 15 Nov 2022 13:22:11 -0800
Message-Id: <20221115212217.19539-5-sidhartha.kumar@oracle.com>
In-Reply-To: <20221115212217.19539-1-sidhartha.kumar@oracle.com>
References: <20221115212217.19539-1-sidhartha.kumar@oracle.com>

Removes page_folio() call by converting callers to directly pass a folio
into __remove_hugetlb_page().
Signed-off-by: Sidhartha Kumar
---
 mm/hugetlb.c | 48 +++++++++++++++++++++++++-----------------------
 1 file changed, 25 insertions(+), 23 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 19657f990900..7804ba51a7b8 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1423,19 +1423,18 @@ static inline void destroy_compound_gigantic_folio(struct folio *folio,
 #endif

 /*
- * Remove hugetlb page from lists, and update dtor so that page appears
+ * Remove hugetlb folio from lists, and update dtor so that the folio appears
  * as just a compound page.
  *
- * A reference is held on the page, except in the case of demote.
+ * A reference is held on the folio, except in the case of demote.
  *
  * Must be called with hugetlb lock held.
  */
-static void __remove_hugetlb_page(struct hstate *h, struct page *page,
+static void __remove_hugetlb_folio(struct hstate *h, struct folio *folio,
                                                         bool adjust_surplus,
                                                         bool demote)
 {
-        int nid = page_to_nid(page);
-        struct folio *folio = page_folio(page);
+        int nid = folio_nid(folio);

         VM_BUG_ON_FOLIO(hugetlb_cgroup_from_folio(folio), folio);
         VM_BUG_ON_FOLIO(hugetlb_cgroup_from_folio_rsvd(folio), folio);
@@ -1444,9 +1443,9 @@ static void __remove_hugetlb_page(struct hstate *h, struct page *page,
         if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
                 return;

-        list_del(&page->lru);
+        list_del(&folio->lru);

-        if (HPageFreed(page)) {
+        if (folio_test_hugetlb_freed(folio)) {
                 h->free_huge_pages--;
                 h->free_huge_pages_node[nid]--;
         }
@@ -1476,26 +1475,26 @@ static void __remove_hugetlb_page(struct hstate *h, struct page *page,
          * be turned into a page of smaller size.
          */
         if (!demote)
-                set_page_refcounted(page);
+                folio_ref_unfreeze(folio, 1);
         if (hstate_is_gigantic(h))
-                set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
+                folio_set_compound_dtor(folio, NULL_COMPOUND_DTOR);
         else
-                set_compound_page_dtor(page, COMPOUND_PAGE_DTOR);
+                folio_set_compound_dtor(folio, COMPOUND_PAGE_DTOR);

         h->nr_huge_pages--;
         h->nr_huge_pages_node[nid]--;
 }

-static void remove_hugetlb_page(struct hstate *h, struct page *page,
+static void remove_hugetlb_folio(struct hstate *h, struct folio *folio,
                                                         bool adjust_surplus)
 {
-        __remove_hugetlb_page(h, page, adjust_surplus, false);
+        __remove_hugetlb_folio(h, folio, adjust_surplus, false);
 }

-static void remove_hugetlb_page_for_demote(struct hstate *h, struct page *page,
+static void remove_hugetlb_folio_for_demote(struct hstate *h, struct folio *folio,
                                                         bool adjust_surplus)
 {
-        __remove_hugetlb_page(h, page, adjust_surplus, true);
+        __remove_hugetlb_folio(h, folio, adjust_surplus, true);
 }

 static void add_hugetlb_page(struct hstate *h, struct page *page,
@@ -1630,8 +1629,9 @@ static void free_hpage_workfn(struct work_struct *work)
                 /*
                  * The VM_BUG_ON_PAGE(!PageHuge(page), page) in page_hstate()
                  * is going to trigger because a previous call to
-                 * remove_hugetlb_page() will set_compound_page_dtor(page,
-                 * NULL_COMPOUND_DTOR), so do not use page_hstate() directly.
+                 * remove_hugetlb_folio() will call folio_set_compound_dtor
+                 * (folio, NULL_COMPOUND_DTOR), so do not use page_hstate()
+                 * directly.
                  */
                 h = size_to_hstate(page_size(page));

@@ -1740,12 +1740,12 @@ void free_huge_page(struct page *page)
                 h->resv_huge_pages++;

         if (folio_test_hugetlb_temporary(folio)) {
-                remove_hugetlb_page(h, page, false);
+                remove_hugetlb_folio(h, folio, false);
                 spin_unlock_irqrestore(&hugetlb_lock, flags);
                 update_and_free_page(h, page, true);
         } else if (h->surplus_huge_pages_node[nid]) {
                 /* remove the page from active list */
-                remove_hugetlb_page(h, page, true);
+                remove_hugetlb_folio(h, folio, true);
                 spin_unlock_irqrestore(&hugetlb_lock, flags);
                 update_and_free_page(h, page, true);
         } else {
@@ -2080,6 +2080,7 @@ static struct page *remove_pool_huge_page(struct hstate *h,
 {
         int nr_nodes, node;
         struct page *page = NULL;
+        struct folio *folio;

         lockdep_assert_held(&hugetlb_lock);
         for_each_node_mask_to_free(h, nr_nodes, node, nodes_allowed) {
@@ -2091,7 +2092,8 @@ static struct page *remove_pool_huge_page(struct hstate *h,
                     !list_empty(&h->hugepage_freelists[node])) {
                         page = list_entry(h->hugepage_freelists[node].next,
                                           struct page, lru);
-                        remove_hugetlb_page(h, page, acct_surplus);
+                        folio = page_folio(page);
+                        remove_hugetlb_folio(h, folio, acct_surplus);
                         break;
                 }
         }
@@ -2153,7 +2155,7 @@ int dissolve_free_huge_page(struct page *page)
                         goto retry;
                 }

-                remove_hugetlb_page(h, &folio->page, false);
+                remove_hugetlb_folio(h, folio, false);
                 h->max_huge_pages--;
                 spin_unlock_irq(&hugetlb_lock);

@@ -2792,7 +2794,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
          * and enqueue_huge_page() for new_page. The counters will remain
          * stable since this happens under the lock.
          */
-        remove_hugetlb_page(h, old_page, false);
+        remove_hugetlb_folio(h, old_folio, false);

         /*
          * Ref count on new page is already zero as it was dropped
@@ -3219,7 +3221,7 @@ static void try_to_free_low(struct hstate *h, unsigned long count,
                                 goto out;
                         if (PageHighMem(page))
                                 continue;
-                        remove_hugetlb_page(h, page, false);
+                        remove_hugetlb_folio(h, page_folio(page), false);
                         list_add(&page->lru, &page_list);
                 }
         }
@@ -3430,7 +3432,7 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)

         target_hstate = size_to_hstate(PAGE_SIZE << h->demote_order);

-        remove_hugetlb_page_for_demote(h, page, false);
+        remove_hugetlb_folio_for_demote(h, folio, false);
         spin_unlock_irq(&hugetlb_lock);

         rc = hugetlb_vmemmap_restore(h, page);
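For illustration only (not part of the patch): a sketch of the caller-side
pattern after this change, condensed from the hunks above. Callers that only
have a struct page convert it at the boundary; callers that already track a
folio pass it straight through. demo_remove_free_hugetlb() is a hypothetical
name used only here.

/* Sketch: convert once at the boundary, then call the folio variant. */
static void demo_remove_free_hugetlb(struct hstate *h, struct page *page)
{
        struct folio *folio = page_folio(page);

        lockdep_assert_held(&hugetlb_lock);
        /* File-local helper renamed by this patch. */
        remove_hugetlb_folio(h, folio, false);
}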
From patchwork Tue Nov 15 21:22:12 2022
X-Patchwork-Submitter: Sid Kumar
X-Patchwork-Id: 13044192
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com,
    mike.kravetz@oracle.com, willy@infradead.org, almasrymina@google.com,
    linmiaohe@huawei.com, hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable 05/10] mm/hugetlb: convert update_and_free_page()
 to folios
Date: Tue, 15 Nov 2022 13:22:12 -0800
Message-Id: <20221115212217.19539-6-sidhartha.kumar@oracle.com>
In-Reply-To: <20221115212217.19539-1-sidhartha.kumar@oracle.com>
References: <20221115212217.19539-1-sidhartha.kumar@oracle.com>
Make more progress on converting the free_huge_page() destructor to operate
on folios by converting update_and_free_page() to folios.

Signed-off-by: Sidhartha Kumar
---
 mm/hugetlb.c | 30 ++++++++++++++++--------------
 1 file changed, 16 insertions(+), 14 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 7804ba51a7b8..660ae46e741b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1469,7 +1469,7 @@ static void __remove_hugetlb_folio(struct hstate *h, struct folio *folio,
          * apply.
          *
          * This handles the case where more than one ref is held when and
-         * after update_and_free_page is called.
+         * after update_and_free_hugetlb_folio is called.
          *
          * In the case of demote we do not ref count the page as it will soon
          * be turned into a page of smaller size.
@@ -1600,7 +1600,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
 }

 /*
- * As update_and_free_page() can be called under any context, so we cannot
+ * As update_and_free_hugetlb_folio() can be called under any context, so we cannot
  * use GFP_KERNEL to allocate vmemmap pages. However, we can defer the
  * actual freeing in a workqueue to prevent from using GFP_ATOMIC to allocate
  * the vmemmap pages.
@@ -1648,11 +1648,11 @@ static inline void flush_free_hpage_work(struct hstate *h)
                 flush_work(&free_hpage_work);
 }

-static void update_and_free_page(struct hstate *h, struct page *page,
+static void update_and_free_hugetlb_folio(struct hstate *h, struct folio *folio,
                                  bool atomic)
 {
-        if (!HPageVmemmapOptimized(page) || !atomic) {
-                __update_and_free_page(h, page);
+        if (!folio_test_hugetlb_vmemmap_optimized(folio) || !atomic) {
+                __update_and_free_page(h, &folio->page);
                 return;
         }

@@ -1663,16 +1663,18 @@ static void update_and_free_page(struct hstate *h, struct page *page,
          * empty. Otherwise, schedule_work() had been called but the workfn
          * hasn't retrieved the list yet.
          */
-        if (llist_add((struct llist_node *)&page->mapping, &hpage_freelist))
+        if (llist_add((struct llist_node *)&folio->mapping, &hpage_freelist))
                 schedule_work(&free_hpage_work);
 }

 static void update_and_free_pages_bulk(struct hstate *h, struct list_head *list)
 {
         struct page *page, *t_page;
+        struct folio *folio;

         list_for_each_entry_safe(page, t_page, list, lru) {
-                update_and_free_page(h, page, false);
+                folio = page_folio(page);
+                update_and_free_hugetlb_folio(h, folio, false);
                 cond_resched();
         }
 }
@@ -1742,12 +1744,12 @@ void free_huge_page(struct page *page)
         if (folio_test_hugetlb_temporary(folio)) {
                 remove_hugetlb_folio(h, folio, false);
                 spin_unlock_irqrestore(&hugetlb_lock, flags);
-                update_and_free_page(h, page, true);
+                update_and_free_hugetlb_folio(h, folio, true);
         } else if (h->surplus_huge_pages_node[nid]) {
                 /* remove the page from active list */
                 remove_hugetlb_folio(h, folio, true);
                 spin_unlock_irqrestore(&hugetlb_lock, flags);
-                update_and_free_page(h, page, true);
+                update_and_free_hugetlb_folio(h, folio, true);
         } else {
                 arch_clear_hugepage_flags(page);
                 enqueue_huge_page(h, page);
@@ -2160,8 +2162,8 @@ int dissolve_free_huge_page(struct page *page)
                 spin_unlock_irq(&hugetlb_lock);

                 /*
-                 * Normally update_and_free_page will allocate required vmemmmap
-                 * before freeing the page. update_and_free_page will fail to
+                 * Normally update_and_free_hugtlb_folio will allocate required vmemmmap
+                 * before freeing the page. update_and_free_hugtlb_folio will fail to
                  * free the page if it can not allocate required vmemmap. We
                  * need to adjust max_huge_pages if the page is not freed.
                  * Attempt to allocate vmemmmap here so that we can take
@@ -2169,7 +2171,7 @@ int dissolve_free_huge_page(struct page *page)
                  */
                 rc = hugetlb_vmemmap_restore(h, &folio->page);
                 if (!rc) {
-                        update_and_free_page(h, &folio->page, false);
+                        update_and_free_hugetlb_folio(h, folio, false);
                 } else {
                         spin_lock_irq(&hugetlb_lock);
                         add_hugetlb_page(h, &folio->page, false);
@@ -2807,7 +2809,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
          * Pages have been replaced, we can safely free the old one.
          */
         spin_unlock_irq(&hugetlb_lock);
-        update_and_free_page(h, old_page, false);
+        update_and_free_hugetlb_folio(h, old_folio, false);
 }

         return ret;
@@ -2816,7 +2818,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
         spin_unlock_irq(&hugetlb_lock);
         /* Page has a zero ref count, but needs a ref to be freed */
         folio_ref_unfreeze(new_folio, 1);
-        update_and_free_page(h, new_page, false);
+        update_and_free_hugetlb_folio(h, new_folio, false);

         return ret;
 }
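For illustration only (not part of the patch): a sketch of how a freeing path
looks once update_and_free_hugetlb_folio() exists, mirroring the
free_huge_page() hunk above. demo_free_hugetlb_folio() is a hypothetical
name; passing atomic == true lets the helper defer the work to the workqueue
when vmemmap pages cannot safely be allocated in the current context.

/* Sketch: remove under the lock, then free (possibly deferred). */
static void demo_free_hugetlb_folio(struct hstate *h, struct folio *folio)
{
        unsigned long flags;

        spin_lock_irqsave(&hugetlb_lock, flags);
        remove_hugetlb_folio(h, folio, false);
        spin_unlock_irqrestore(&hugetlb_lock, flags);

        /* atomic == true: may punt the actual free to free_hpage_workfn(). */
        update_and_free_hugetlb_folio(h, folio, true);
}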
From patchwork Tue Nov 15 21:22:13 2022
X-Patchwork-Submitter: Sid Kumar
X-Patchwork-Id: 13044258
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com,
    mike.kravetz@oracle.com, willy@infradead.org, almasrymina@google.com,
    linmiaohe@huawei.com, hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable 06/10] mm/hugetlb: convert add_hugetlb_page() to
 folios and add hugetlb_cma_folio()
Date: Tue, 15 Nov 2022 13:22:13 -0800
Message-Id: <20221115212217.19539-7-sidhartha.kumar@oracle.com>
In-Reply-To: <20221115212217.19539-1-sidhartha.kumar@oracle.com>
References: <20221115212217.19539-1-sidhartha.kumar@oracle.com>

Convert add_hugetlb_page() to take in a folio, and also convert
hugetlb_cma_page() to take in a folio.
 mm/hugetlb.c | 40 ++++++++++++++++++++--------------------
 1 file changed, 20 insertions(+), 20 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 660ae46e741b..7382c162dbcd 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -53,13 +53,13 @@ struct hstate hstates[HUGE_MAX_HSTATE];
 #ifdef CONFIG_CMA
 static struct cma *hugetlb_cma[MAX_NUMNODES];
 static unsigned long hugetlb_cma_size_in_node[MAX_NUMNODES] __initdata;
-static bool hugetlb_cma_page(struct page *page, unsigned int order)
+static bool hugetlb_cma_folio(struct folio *folio, unsigned int order)
 {
-	return cma_pages_valid(hugetlb_cma[page_to_nid(page)], page,
+	return cma_pages_valid(hugetlb_cma[folio_nid(folio)], &folio->page,
 				1 << order);
 }
 #else
-static bool hugetlb_cma_page(struct page *page, unsigned int order)
+static bool hugetlb_cma_folio(struct folio *folio, unsigned int order)
 {
 	return false;
 }
@@ -1497,17 +1497,17 @@ static void remove_hugetlb_folio_for_demote(struct hstate *h, struct folio *foli
 	__remove_hugetlb_folio(h, folio, adjust_surplus, true);
 }
-static void add_hugetlb_page(struct hstate *h, struct page *page,
+static void add_hugetlb_folio(struct hstate *h, struct folio *folio,
 			     bool adjust_surplus)
 {
 	int zeroed;
-	int nid = page_to_nid(page);
+	int nid = folio_nid(folio);
-	VM_BUG_ON_PAGE(!HPageVmemmapOptimized(page), page);
+	VM_BUG_ON_FOLIO(!folio_test_hugetlb_vmemmap_optimized(folio), folio);
 	lockdep_assert_held(&hugetlb_lock);
-	INIT_LIST_HEAD(&page->lru);
+	INIT_LIST_HEAD(&folio->lru);
 	h->nr_huge_pages++;
 	h->nr_huge_pages_node[nid]++;
@@ -1516,21 +1516,21 @@ static void add_hugetlb_page(struct hstate *h, struct page *page,
 		h->surplus_huge_pages_node[nid]++;
 	}
-	set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
-	set_page_private(page, 0);
+	folio_set_compound_dtor(folio, HUGETLB_PAGE_DTOR);
+	folio_change_private(folio, 0);
 	/*
 	 * We have to set HPageVmemmapOptimized again as above
-	 * set_page_private(page, 0) cleared it.
+	 * folio_change_private(folio, 0) cleared it.
 	 */
-	SetHPageVmemmapOptimized(page);
+	folio_set_hugetlb_vmemmap_optimized(folio);
 	/*
-	 * This page is about to be managed by the hugetlb allocator and
+	 * This folio is about to be managed by the hugetlb allocator and
 	 * should have no users. Drop our reference, and check for others
 	 * just in case.
 	 */
-	zeroed = put_page_testzero(page);
-	if (!zeroed)
+	zeroed = folio_put_testzero(folio);
+	if (unlikely(!zeroed))
 		/*
 		 * It is VERY unlikely soneone else has taken a ref on
 		 * the page. In this case, we simply return as the
@@ -1539,8 +1539,8 @@ static void add_hugetlb_page(struct hstate *h, struct page *page,
 		 */
 		return;
-	arch_clear_hugepage_flags(page);
-	enqueue_huge_page(h, page);
+	arch_clear_hugepage_flags(&folio->page);
+	enqueue_huge_page(h, &folio->page);
 }
 static void __update_and_free_page(struct hstate *h, struct page *page)
@@ -1566,7 +1566,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
 	 * page and put the page back on the hugetlb free list and treat
 	 * as a surplus page.
 	 */
-	add_hugetlb_page(h, page, true);
+	add_hugetlb_folio(h, page_folio(page), true);
 	spin_unlock_irq(&hugetlb_lock);
 	return;
 }
@@ -1591,7 +1591,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
 	 * need to be given back to CMA in free_gigantic_page.
 	 */
 	if (hstate_is_gigantic(h) ||
-	    hugetlb_cma_page(page, huge_page_order(h))) {
+	    hugetlb_cma_folio(folio, huge_page_order(h))) {
 		destroy_compound_gigantic_folio(folio, huge_page_order(h));
 		free_gigantic_page(page, huge_page_order(h));
 	} else {
@@ -2174,7 +2174,7 @@ int dissolve_free_huge_page(struct page *page)
 		update_and_free_hugetlb_folio(h, folio, false);
 	} else {
 		spin_lock_irq(&hugetlb_lock);
-		add_hugetlb_page(h, &folio->page, false);
+		add_hugetlb_folio(h, folio, false);
 		h->max_huge_pages++;
 		spin_unlock_irq(&hugetlb_lock);
 	}
@@ -3442,7 +3442,7 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)
 		/* Allocation of vmemmmap failed, we can not demote page */
 		spin_lock_irq(&hugetlb_lock);
 		set_page_refcounted(page);
-		add_hugetlb_page(h, page, false);
+		add_hugetlb_folio(h, page_folio(page), false);
 		return rc;
 	}

From patchwork Tue Nov 15 21:22:14 2022
X-Patchwork-Id: 13044272
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com, willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com, hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable 07/10] mm/hugetlb: convert enqueue_huge_page() to folios
Date: Tue, 15 Nov 2022 13:22:14 -0800
Message-Id: <20221115212217.19539-8-sidhartha.kumar@oracle.com>
In-Reply-To: <20221115212217.19539-1-sidhartha.kumar@oracle.com>
References: <20221115212217.19539-1-sidhartha.kumar@oracle.com>

Convert the callers of enqueue_huge_page() to pass in a folio; the function is renamed to enqueue_hugetlb_folio().

Signed-off-by: Sidhartha Kumar
---
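[Reading aid, not part of the patch: after this change the callers fall into two groups. Callers that already track a folio, such as free_huge_page(), pass it straight through; callers that still hold a struct page, such as gather_surplus_pages(), wrap it with page_folio() at the call site. The stand-alone sketch below models both call styles with invented mock_* names; it is not kernel code.]

    /* Stand-alone model, not kernel code. */
    #include <stdio.h>

    struct mock_page { int nid; };
    struct mock_folio { struct mock_page page; };

    static struct mock_folio *mock_page_folio(struct mock_page *page)
    {
        return (struct mock_folio *)page;   /* a folio is a view of the head page */
    }

    /* converted function, analogous to enqueue_hugetlb_folio() */
    static void mock_enqueue_folio(struct mock_folio *folio)
    {
        printf("enqueued folio on node %d\n", folio->page.nid);
    }

    /* caller that already holds a folio (like free_huge_page() after conversion) */
    static void caller_with_folio(struct mock_folio *folio)
    {
        mock_enqueue_folio(folio);
    }

    /* caller that still works with a page (like gather_surplus_pages()) */
    static void caller_with_page(struct mock_page *page)
    {
        mock_enqueue_folio(mock_page_folio(page));
    }

    int main(void)
    {
        struct mock_page p = { .nid = 1 };

        caller_with_folio(mock_page_folio(&p));
        caller_with_page(&p);
        return 0;
    }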
 mm/hugetlb.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 7382c162dbcd..ebb98c1af2fb 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1119,17 +1119,17 @@ static bool vma_has_reserves(struct vm_area_struct *vma, long chg)
 	return false;
 }
-static void enqueue_huge_page(struct hstate *h, struct page *page)
+static void enqueue_hugetlb_folio(struct hstate *h, struct folio *folio)
 {
-	int nid = page_to_nid(page);
+	int nid = folio_nid(folio);
 	lockdep_assert_held(&hugetlb_lock);
-	VM_BUG_ON_PAGE(page_count(page), page);
+	VM_BUG_ON_FOLIO(folio_ref_count(folio), folio);
-	list_move(&page->lru, &h->hugepage_freelists[nid]);
+	list_move(&folio->lru, &h->hugepage_freelists[nid]);
 	h->free_huge_pages++;
 	h->free_huge_pages_node[nid]++;
-	SetHPageFreed(page);
+	folio_set_hugetlb_freed(folio);
 }
 static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
@@ -1540,7 +1540,7 @@ static void add_hugetlb_folio(struct hstate *h, struct folio *folio,
 		return;
 	arch_clear_hugepage_flags(&folio->page);
-	enqueue_huge_page(h, &folio->page);
+	enqueue_hugetlb_folio(h, folio);
 }
 static void __update_and_free_page(struct hstate *h, struct page *page)
@@ -1752,7 +1752,7 @@ void free_huge_page(struct page *page)
 		update_and_free_hugetlb_folio(h, folio, true);
 	} else {
 		arch_clear_hugepage_flags(page);
-		enqueue_huge_page(h, page);
+		enqueue_hugetlb_folio(h, folio);
 		spin_unlock_irqrestore(&hugetlb_lock, flags);
 	}
 }
@@ -2427,7 +2427,7 @@ static int gather_surplus_pages(struct hstate *h, long delta)
 		if ((--needed) < 0)
 			break;
 		/* Add the page to the hugetlb allocator */
-		enqueue_huge_page(h, page);
+		enqueue_hugetlb_folio(h, page_folio(page));
 	}
 free:
 	spin_unlock_irq(&hugetlb_lock);
@@ -2793,8 +2793,8 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
 	 * Ok, old_page is still a genuine free hugepage. Remove it from
 	 * the freelist and decrease the counters. These will be
 	 * incremented again when calling __prep_account_new_huge_page()
-	 * and enqueue_huge_page() for new_page. The counters will remain
-	 * stable since this happens under the lock.
+	 * and enqueue_hugetlb_folio() for new_folio. The counters will
+	 * remain stable since this happens under the lock.
 	 */
 	remove_hugetlb_folio(h, old_folio, false);
@@ -2803,7 +2803,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
 	 * earlier. It can be directly added to the pool free list.
 	 */
 	__prep_account_new_huge_page(h, nid);
-	enqueue_huge_page(h, new_page);
+	enqueue_hugetlb_folio(h, new_folio);
 	/*
 	 * Pages have been replaced, we can safely free the old one.
From patchwork Tue Nov 15 21:22:15 2022
X-Patchwork-Id: 13044194
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com, willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com, hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable 08/10] mm/hugetlb: convert free_gigantic_page() to folios
Date: Tue, 15 Nov 2022 13:22:15 -0800
Message-Id: <20221115212217.19539-9-sidhartha.kumar@oracle.com>
In-Reply-To: <20221115212217.19539-1-sidhartha.kumar@oracle.com>
References: <20221115212217.19539-1-sidhartha.kumar@oracle.com>

Convert the callers of free_gigantic_page() to use folios; the function is then renamed to free_gigantic_folio().

Signed-off-by: Sidhartha Kumar
---
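[Reading aid, not part of the patch: free_gigantic_folio() derives everything it needs from the head page. folio_pfn() is the pfn of the head page (and folio_nid() its node), while &folio->page is still handed to the CMA release path. The stand-alone sketch below models the pfn lookup with invented mock_* names; it is not kernel code.]

    /* Stand-alone model, not kernel code. */
    #include <stdio.h>

    #define NR_MOCK_PAGES 8

    struct mock_page { int nid; };
    struct mock_folio { struct mock_page page; };

    static struct mock_page mock_mem[NR_MOCK_PAGES];   /* fake "physical" memory map */

    /* analogue of page_to_pfn(): index of the page in the fake memory map */
    static unsigned long mock_page_to_pfn(struct mock_page *page)
    {
        return (unsigned long)(page - mock_mem);
    }

    /* analogue of folio_pfn(): pfn of the head page */
    static unsigned long mock_folio_pfn(struct mock_folio *folio)
    {
        return mock_page_to_pfn(&folio->page);
    }

    /* page-range release API, like free_contig_range(pfn, count) */
    static void mock_free_range(unsigned long pfn, unsigned long count)
    {
        printf("freeing %lu pages starting at pfn %lu\n", count, pfn);
    }

    /* converted helper, analogous to free_gigantic_folio() */
    static void mock_free_gigantic_folio(struct mock_folio *folio, unsigned int order)
    {
        mock_free_range(mock_folio_pfn(folio), 1UL << order);
    }

    int main(void)
    {
        struct mock_folio *folio = (struct mock_folio *)&mock_mem[4];

        mock_free_gigantic_folio(folio, 2);   /* frees pfns 4..7 in this model */
        return 0;
    }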
 mm/hugetlb.c | 31 ++++++++++++++++++-------------
 1 file changed, 18 insertions(+), 13 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ebb98c1af2fb..bc039ff28b8f 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1352,18 +1352,20 @@ static void destroy_compound_gigantic_folio(struct folio *folio,
 	__destroy_compound_gigantic_folio(folio, order, false);
 }
-static void free_gigantic_page(struct page *page, unsigned int order)
+static void free_gigantic_folio(struct folio *folio, unsigned int order)
 {
 	/*
 	 * If the page isn't allocated using the cma allocator,
 	 * cma_release() returns false.
 	 */
 #ifdef CONFIG_CMA
-	if (cma_release(hugetlb_cma[page_to_nid(page)], page, 1 << order))
+	int nid = folio_nid(folio);
+
+	if (cma_release(hugetlb_cma[nid], &folio->page, 1 << order))
 		return;
 #endif
-	free_contig_range(page_to_pfn(page), 1 << order);
+	free_contig_range(folio_pfn(folio), 1 << order);
 }
 #ifdef CONFIG_CONTIG_ALLOC
@@ -1417,7 +1419,8 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
 {
 	return NULL;
 }
-static inline void free_gigantic_page(struct page *page, unsigned int order) { }
+static inline void free_gigantic_folio(struct folio *folio,
+						unsigned int order) { }
 static inline void destroy_compound_gigantic_folio(struct folio *folio,
 						unsigned int order) { }
 #endif
@@ -1556,7 +1559,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
 	 * If we don't know which subpages are hwpoisoned, we can't free
 	 * the hugepage, so it's leaked intentionally.
 	 */
-	if (HPageRawHwpUnreliable(page))
+	if (folio_test_hugetlb_raw_hwp_unreliable(folio))
 		return;
 	if (hugetlb_vmemmap_restore(h, page)) {
@@ -1566,7 +1569,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
 	 * page and put the page back on the hugetlb free list and treat
 	 * as a surplus page.
 	 */
-	add_hugetlb_folio(h, page_folio(page), true);
+	add_hugetlb_folio(h, folio, true);
 	spin_unlock_irq(&hugetlb_lock);
 	return;
 }
@@ -1579,7 +1582,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
 	hugetlb_clear_page_hwpoison(&folio->page);
 	for (i = 0; i < pages_per_huge_page(h); i++) {
-		subpage = nth_page(page, i);
+		subpage = folio_page(folio, i);
 		subpage->flags &= ~(1 << PG_locked | 1 << PG_error |
 				1 << PG_referenced | 1 << PG_dirty |
 				1 << PG_active | 1 << PG_private |
@@ -1588,12 +1591,12 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
 	/*
 	 * Non-gigantic pages demoted from CMA allocated gigantic pages
-	 * need to be given back to CMA in free_gigantic_page.
+	 * need to be given back to CMA in free_gigantic_folio.
 	 */
 	if (hstate_is_gigantic(h) ||
 	    hugetlb_cma_folio(folio, huge_page_order(h))) {
 		destroy_compound_gigantic_folio(folio, huge_page_order(h));
-		free_gigantic_page(page, huge_page_order(h));
+		free_gigantic_folio(folio, huge_page_order(h));
 	} else {
 		__free_pages(page, huge_page_order(h));
 	}
@@ -2013,6 +2016,7 @@ static struct page *alloc_fresh_huge_page(struct hstate *h,
 		nodemask_t *node_alloc_noretry)
 {
 	struct page *page;
+	struct folio *folio;
 	bool retry = false;
 retry:
@@ -2023,14 +2027,14 @@ static struct page *alloc_fresh_huge_page(struct hstate *h,
 			nid, nmask, node_alloc_noretry);
 	if (!page)
 		return NULL;
-
+	folio = page_folio(page);
 	if (hstate_is_gigantic(h)) {
 		if (!prep_compound_gigantic_page(page, huge_page_order(h))) {
 			/*
 			 * Rare failure to convert pages to compound page.
 			 * Free pages and try again - ONCE!
 			 */
-			free_gigantic_page(page, huge_page_order(h));
+			free_gigantic_folio(folio, huge_page_order(h));
 			if (!retry) {
 				retry = true;
 				goto retry;
@@ -2038,7 +2042,7 @@ static struct page *alloc_fresh_huge_page(struct hstate *h,
 			return NULL;
 		}
 	}
-	prep_new_huge_page(h, page, page_to_nid(page));
+	prep_new_huge_page(h, page, folio_nid(folio));
 	return page;
 }
@@ -3039,6 +3043,7 @@ static void __init gather_bootmem_prealloc(void)
 	list_for_each_entry(m, &huge_boot_pages, list) {
 		struct page *page = virt_to_page(m);
+		struct folio *folio = page_folio(page);
 		struct hstate *h = m->hstate;
 		VM_BUG_ON(!hstate_is_gigantic(h));
@@ -3049,7 +3054,7 @@ static void __init gather_bootmem_prealloc(void)
 			free_huge_page(page); /* add to the hugepage allocator */
 		} else {
 			/* VERY unlikely inflated ref count on a tail page */
-			free_gigantic_page(page, huge_page_order(h));
+			free_gigantic_folio(folio, huge_page_order(h));
 		}
 		/*

From patchwork Tue Nov 15 21:22:16 2022
X-Patchwork-Id: 13044195
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com, willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com, hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable 09/10] mm/hugetlb: convert hugetlb prep functions to folios
Date: Tue, 15 Nov 2022 13:22:16 -0800
Message-Id: <20221115212217.19539-10-sidhartha.kumar@oracle.com>
In-Reply-To: <20221115212217.19539-1-sidhartha.kumar@oracle.com>
References: <20221115212217.19539-1-sidhartha.kumar@oracle.com>

Convert prep_new_huge_page() and __prep_compound_gigantic_page() to folios.

Signed-off-by: Sidhartha Kumar
---
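[Reading aid, not part of the patch: __prep_compound_gigantic_folio() walks the constituent pages with folio_page(folio, i), which is simply the head page plus an offset, and points every tail page back at the head. The stand-alone sketch below models that prep loop with invented mock_* names; it is not kernel code.]

    /* Stand-alone model, not kernel code. */
    #include <stdio.h>

    #define MOCK_ORDER 2
    #define MOCK_NR    (1U << MOCK_ORDER)

    struct mock_page {
        int is_head;
        struct mock_page *compound_head;  /* tail pages point at the head */
    };

    struct mock_folio { struct mock_page page; };

    /* analogue of folio_page(folio, i): head page plus an offset */
    static struct mock_page *mock_folio_page(struct mock_folio *folio, unsigned int i)
    {
        return &folio->page + i;
    }

    /* simplified analogue of the prep loop in __prep_compound_gigantic_folio() */
    static void mock_prep_compound_folio(struct mock_folio *folio, unsigned int order)
    {
        unsigned int i, nr = 1U << order;

        folio->page.is_head = 1;              /* like __folio_set_head() */
        for (i = 1; i < nr; i++)              /* i == 0 is the head page */
            mock_folio_page(folio, i)->compound_head = &folio->page;
    }

    int main(void)
    {
        struct mock_page pages[MOCK_NR] = { { 0 } };
        struct mock_folio *folio = (struct mock_folio *)&pages[0];

        mock_prep_compound_folio(folio, MOCK_ORDER);
        printf("tail 3 points at head: %d\n",
               mock_folio_page(folio, 3)->compound_head == &folio->page);
        return 0;
    }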
 mm/hugetlb.c | 59 +++++++++++++++++++++++++---------------------------
 1 file changed, 28 insertions(+), 31 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index bc039ff28b8f..c1d68648943a 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1780,28 +1780,26 @@ static void __prep_new_hugetlb_folio(struct hstate *h, struct folio *folio)
 	set_hugetlb_cgroup_rsvd(folio, NULL);
 }
-static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
+static void prep_new_hugetlb_folio(struct hstate *h, struct folio *folio, int nid)
 {
-	struct folio *folio = page_folio(page);
-
 	__prep_new_hugetlb_folio(h, folio);
 	spin_lock_irq(&hugetlb_lock);
 	__prep_account_new_huge_page(h, nid);
 	spin_unlock_irq(&hugetlb_lock);
 }
-static bool __prep_compound_gigantic_page(struct page *page, unsigned int order,
-								bool demote)
+static bool __prep_compound_gigantic_folio(struct folio *folio,
+					unsigned int order, bool demote)
 {
 	int i, j;
 	int nr_pages = 1 << order;
 	struct page *p;
-	/* we rely on prep_new_huge_page to set the destructor */
-	set_compound_order(page, order);
-	__SetPageHead(page);
+	/* we rely on prep_new_hugetlb_folio to set the destructor */
+	folio_set_compound_order(folio, order);
+	__folio_set_head(folio);
 	for (i = 0; i < nr_pages; i++) {
-		p = nth_page(page, i);
+		p = folio_page(folio, i);
 		/*
 		 * For gigantic hugepages allocated through bootmem at
@@ -1842,42 +1840,40 @@ static bool __prep_compound_gigantic_page(struct page *page, unsigned int order,
 			VM_BUG_ON_PAGE(page_count(p), p);
 		}
 		if (i != 0)
-			set_compound_head(p, page);
+			set_compound_head(p, &folio->page);
 	}
-	atomic_set(compound_mapcount_ptr(page), -1);
-	atomic_set(compound_pincount_ptr(page), 0);
+	atomic_set(folio_mapcount_ptr(folio), -1);
+	atomic_set(folio_pincount_ptr(folio), 0);
 	return true;
 out_error:
 	/* undo page modifications made above */
 	for (j = 0; j < i; j++) {
-		p = nth_page(page, j);
+		p = folio_page(folio, j);
 		if (j != 0)
 			clear_compound_head(p);
 		set_page_refcounted(p);
 	}
 	/* need to clear PG_reserved on remaining tail pages */
 	for (; j < nr_pages; j++) {
-		p = nth_page(page, j);
+		p = folio_page(folio, j);
 		__ClearPageReserved(p);
 	}
-	set_compound_order(page, 0);
-#ifdef CONFIG_64BIT
-	page[1].compound_nr = 0;
-#endif
-	__ClearPageHead(page);
+	folio_set_compound_order(folio, 0);
+	__folio_clear_head(folio);
 	return false;
 }
-static bool prep_compound_gigantic_page(struct page *page, unsigned int order)
+static bool prep_compound_gigantic_folio(struct folio *folio,
+							unsigned int order)
 {
-	return __prep_compound_gigantic_page(page, order, false);
+	return __prep_compound_gigantic_folio(folio, order, false);
 }
-static bool prep_compound_gigantic_page_for_demote(struct page *page,
+static bool prep_compound_gigantic_folio_for_demote(struct folio *folio,
 							unsigned int order)
 {
-	return __prep_compound_gigantic_page(page, order, true);
+	return __prep_compound_gigantic_folio(folio, order, true);
 }
 /*
@@ -2029,7 +2025,7 @@ static struct page *alloc_fresh_huge_page(struct hstate *h,
 		return NULL;
 	folio = page_folio(page);
 	if (hstate_is_gigantic(h)) {
-		if (!prep_compound_gigantic_page(page, huge_page_order(h))) {
+		if (!prep_compound_gigantic_folio(folio, huge_page_order(h))) {
 			/*
 			 * Rare failure to convert pages to compound page.
 			 * Free pages and try again - ONCE!
@@ -2042,7 +2038,7 @@ static struct page *alloc_fresh_huge_page(struct hstate *h,
 			return NULL;
 		}
 	}
-	prep_new_huge_page(h, page, folio_nid(folio));
+	prep_new_hugetlb_folio(h, folio, folio_nid(folio));
 	return page;
 }
@@ -3047,10 +3043,10 @@ static void __init gather_bootmem_prealloc(void)
 		struct hstate *h = m->hstate;
 		VM_BUG_ON(!hstate_is_gigantic(h));
-		WARN_ON(page_count(page) != 1);
-		if (prep_compound_gigantic_page(page, huge_page_order(h))) {
-			WARN_ON(PageReserved(page));
-			prep_new_huge_page(h, page, page_to_nid(page));
+		WARN_ON(folio_ref_count(folio) != 1);
+		if (prep_compound_gigantic_folio(folio, huge_page_order(h))) {
+			WARN_ON(folio_test_reserved(folio));
+			prep_new_hugetlb_folio(h, folio, folio_nid(folio));
 			free_huge_page(page); /* add to the hugepage allocator */
 		} else {
 			/* VERY unlikely inflated ref count on a tail page */
@@ -3469,13 +3465,14 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)
 	for (i = 0; i < pages_per_huge_page(h); i += pages_per_huge_page(target_hstate)) {
 		subpage = nth_page(page, i);
+		folio = page_folio(subpage);
 		if (hstate_is_gigantic(target_hstate))
-			prep_compound_gigantic_page_for_demote(subpage,
+			prep_compound_gigantic_folio_for_demote(folio,
 							target_hstate->order);
 		else
 			prep_compound_page(subpage, target_hstate->order);
 		set_page_private(subpage, 0);
-		prep_new_huge_page(target_hstate, subpage, nid);
+		prep_new_hugetlb_folio(target_hstate, folio, nid);
 		free_huge_page(subpage);
 	}
 	mutex_unlock(&target_hstate->resize_lock);

From patchwork Tue Nov 15 21:22:17 2022
X-Patchwork-Id: 13044196
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com, willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com, hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable 10/10] mm/hugetlb: change hugetlb allocation functions to return a folio
Date: Tue, 15 Nov 2022 13:22:17 -0800
Message-Id: <20221115212217.19539-11-sidhartha.kumar@oracle.com>
In-Reply-To: <20221115212217.19539-1-sidhartha.kumar@oracle.com>
References: <20221115212217.19539-1-sidhartha.kumar@oracle.com>

Many hugetlb allocation helper functions have now been converted to folios; update their higher-level callers to be compatible with folios.

Signed-off-by: Sidhartha Kumar
---
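[Reading aid, not part of the patch: once an allocation helper returns a struct folio *, its callers test the folio for NULL and, wherever an interface such as free_huge_page() still expects a struct page *, pass &folio->page. The stand-alone sketch below models that boundary with invented mock_* names; it is not kernel code.]

    /* Stand-alone model, not kernel code. */
    #include <stdio.h>
    #include <stdlib.h>

    struct mock_page { int nid; };
    struct mock_folio { struct mock_page page; };

    /* analogue of alloc_fresh_hugetlb_folio(): returns a folio or NULL */
    static struct mock_folio *mock_alloc_folio(int nid)
    {
        struct mock_folio *folio = malloc(sizeof(*folio));

        if (folio)
            folio->page.nid = nid;
        return folio;
    }

    /* unconverted page-based interface, like free_huge_page(struct page *) */
    static void mock_free_page_api(struct mock_page *page)
    {
        printf("freeing page on node %d\n", page->nid);
        free((struct mock_folio *)page);   /* head page and folio share an address */
    }

    int main(void)
    {
        struct mock_folio *folio = mock_alloc_folio(0);

        if (!folio)                        /* callers now test the folio, not a page */
            return 1;
        mock_free_page_api(&folio->page);  /* &folio->page at the old boundary */
        return 0;
    }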
 mm/hugetlb.c | 98 ++++++++++++++++++++++++----------------------------
 1 file changed, 46 insertions(+), 52 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index c1d68648943a..ab20cfb0ff05 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1369,7 +1369,7 @@ static void free_gigantic_folio(struct folio *folio, unsigned int order)
 }
 #ifdef CONFIG_CONTIG_ALLOC
-static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
+static struct folio *alloc_gigantic_folio(struct hstate *h, gfp_t gfp_mask,
 		int nid, nodemask_t *nodemask)
 {
 	unsigned long nr_pages = pages_per_huge_page(h);
@@ -1385,7 +1385,7 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
 		page = cma_alloc(hugetlb_cma[nid], nr_pages,
 				huge_page_order(h), true);
 		if (page)
-			return page;
+			return page_folio(page);
 	}
 	if (!(gfp_mask & __GFP_THISNODE)) {
@@ -1396,17 +1396,16 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
 			page = cma_alloc(hugetlb_cma[node], nr_pages,
 					huge_page_order(h), true);
 			if (page)
-				return page;
+				return page_folio(page);
 		}
 	}
 	}
 #endif
-
-	return alloc_contig_pages(nr_pages, gfp_mask, nid, nodemask);
+	return page_folio(alloc_contig_pages(nr_pages, gfp_mask, nid, nodemask));
 }
 #else /* !CONFIG_CONTIG_ALLOC */
-static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
+static struct folio *alloc_gigantic_folio(struct hstate *h, gfp_t gfp_mask,
 					int nid, nodemask_t *nodemask)
 {
 	return NULL;
@@ -1414,7 +1413,7 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
 #endif /* CONFIG_CONTIG_ALLOC */
 #else /* !CONFIG_ARCH_HAS_GIGANTIC_PAGE */
-static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
+static struct folio *alloc_gigantic_folio(struct hstate *h, gfp_t gfp_mask,
 					int nid, nodemask_t *nodemask)
 {
 	return NULL;
@@ -1938,7 +1937,7 @@ pgoff_t hugetlb_basepage_index(struct page *page)
 	return (index << compound_order(page_head)) + compound_idx;
 }
-static struct page *alloc_buddy_huge_page(struct hstate *h,
+static struct folio *alloc_buddy_hugetlb_folio(struct hstate *h,
 		gfp_t gfp_mask, int nid, nodemask_t *nmask,
 		nodemask_t *node_alloc_noretry)
 {
@@ -1997,7 +1996,7 @@ static struct page *alloc_buddy_huge_page(struct hstate *h,
 	if (node_alloc_noretry && !page && alloc_try_hard)
 		node_set(nid, *node_alloc_noretry);
-	return page;
+	return page_folio(page);
 }
 /*
@@ -2007,23 +2006,21 @@ static struct page *alloc_buddy_huge_page(struct hstate *h,
 * Note that returned page is 'frozen': ref count of head page and all tail
 * pages is zero.
 */
-static struct page *alloc_fresh_huge_page(struct hstate *h,
+static struct folio *alloc_fresh_hugetlb_folio(struct hstate *h,
 		gfp_t gfp_mask, int nid, nodemask_t *nmask,
 		nodemask_t *node_alloc_noretry)
 {
-	struct page *page;
 	struct folio *folio;
 	bool retry = false;
 retry:
 	if (hstate_is_gigantic(h))
-		page = alloc_gigantic_page(h, gfp_mask, nid, nmask);
+		folio = alloc_gigantic_folio(h, gfp_mask, nid, nmask);
 	else
-		page = alloc_buddy_huge_page(h, gfp_mask,
+		folio = alloc_buddy_hugetlb_folio(h, gfp_mask,
 				nid, nmask, node_alloc_noretry);
-	if (!page)
+	if (!folio)
 		return NULL;
-	folio = page_folio(page);
 	if (hstate_is_gigantic(h)) {
 		if (!prep_compound_gigantic_folio(folio, huge_page_order(h))) {
 			/*
@@ -2040,7 +2037,7 @@ static struct page *alloc_fresh_huge_page(struct hstate *h,
 	}
 	prep_new_hugetlb_folio(h, folio, folio_nid(folio));
-	return page;
+	return folio;
 }
 /*
@@ -2050,21 +2047,21 @@ static struct page *alloc_fresh_huge_page(struct hstate *h,
 static int alloc_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
 					nodemask_t *node_alloc_noretry)
 {
-	struct page *page;
+	struct folio *folio;
 	int nr_nodes, node;
 	gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE;
 	for_each_node_mask_to_alloc(h, nr_nodes, node, nodes_allowed) {
-		page = alloc_fresh_huge_page(h, gfp_mask, node, nodes_allowed,
-						node_alloc_noretry);
-		if (page)
+		folio = alloc_fresh_hugetlb_folio(h, gfp_mask, node,
+					nodes_allowed, node_alloc_noretry);
+		if (folio)
 			break;
 	}
-	if (!page)
+	if (!folio)
 		return 0;
-	free_huge_page(page); /* free it into the hugepage allocator */
+	free_huge_page(&folio->page); /* free it into the hugepage allocator */
 	return 1;
 }
@@ -2225,7 +2222,7 @@ int dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
 static struct page *alloc_surplus_huge_page(struct hstate *h, gfp_t gfp_mask,
 						int nid, nodemask_t *nmask)
 {
-	struct page *page = NULL;
+	struct folio *folio = NULL;
 	if (hstate_is_gigantic(h))
 		return NULL;
@@ -2235,8 +2232,8 @@ static struct page *alloc_surplus_huge_page(struct hstate *h, gfp_t gfp_mask,
 		goto out_unlock;
 	spin_unlock_irq(&hugetlb_lock);
-	page = alloc_fresh_huge_page(h, gfp_mask, nid, nmask, NULL);
-	if (!page)
+	folio = alloc_fresh_hugetlb_folio(h, gfp_mask, nid, nmask, NULL);
+	if (!folio)
 		return NULL;
 	spin_lock_irq(&hugetlb_lock);
@@ -2248,43 +2245,42 @@ static struct page *alloc_surplus_huge_page(struct hstate *h, gfp_t gfp_mask,
 	 * codeflow
 	 */
 	if (h->surplus_huge_pages >= h->nr_overcommit_huge_pages) {
-		SetHPageTemporary(page);
+		folio_set_hugetlb_temporary(folio);
 		spin_unlock_irq(&hugetlb_lock);
-		free_huge_page(page);
+		free_huge_page(&folio->page);
 		return NULL;
 	}
 	h->surplus_huge_pages++;
-	h->surplus_huge_pages_node[page_to_nid(page)]++;
+	h->surplus_huge_pages_node[folio_nid(folio)]++;
 out_unlock:
 	spin_unlock_irq(&hugetlb_lock);
-	return page;
+	return &folio->page;
 }
 static struct page *alloc_migrate_huge_page(struct hstate *h, gfp_t gfp_mask,
 				     int nid, nodemask_t *nmask)
 {
-	struct page *page;
+	struct folio *folio;
 	if (hstate_is_gigantic(h))
 		return NULL;
-	page = alloc_fresh_huge_page(h, gfp_mask, nid, nmask, NULL);
-	if (!page)
+	folio = alloc_fresh_hugetlb_folio(h, gfp_mask, nid, nmask, NULL);
+	if (!folio)
 		return NULL;
 	/* fresh huge pages are frozen */
-	set_page_refcounted(page);
-
+	folio_ref_unfreeze(folio, 1);
 	/*
 	 * We do not account these pages as surplus because they are only
 	 * temporary and will be released properly on the last reference
 	 */
-	SetHPageTemporary(page);
+	folio_set_hugetlb_temporary(folio);
-	return page;
+	return &folio->page;
 }
 /*
@@ -2734,19 +2730,18 @@ void restore_reserve_on_error(struct hstate *h, struct vm_area_struct *vma,
 }
 /*
- * alloc_and_dissolve_huge_page - Allocate a new page and dissolve the old one
+ * alloc_and_dissolve_hugetlb_folio - Allocate a new folio and dissolve
+ * the old one
 * @h: struct hstate old page belongs to
 * @old_page: Old page to dissolve
 * @list: List to isolate the page in case we need to
 * Returns 0 on success, otherwise negated error.
 */
-static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
-					struct list_head *list)
+static int alloc_and_dissolve_hugetlb_folio(struct hstate *h,
+		struct folio *old_folio, struct list_head *list)
 {
 	gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE;
-	struct folio *old_folio = page_folio(old_page);
 	int nid = folio_nid(old_folio);
-	struct page *new_page;
 	struct folio *new_folio;
 	int ret = 0;
@@ -2757,26 +2752,25 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
 	 * the pool. This simplifies and let us do most of the processing
 	 * under the lock.
 	 */
-	new_page = alloc_buddy_huge_page(h, gfp_mask, nid, NULL, NULL);
-	if (!new_page)
+	new_folio = alloc_buddy_hugetlb_folio(h, gfp_mask, nid, NULL, NULL);
+	if (!new_folio)
 		return -ENOMEM;
-	new_folio = page_folio(new_page);
 	__prep_new_hugetlb_folio(h, new_folio);
 retry:
 	spin_lock_irq(&hugetlb_lock);
 	if (!folio_test_hugetlb(old_folio)) {
 		/*
-		 * Freed from under us. Drop new_page too.
+		 * Freed from under us. Drop new_folio too.
 		 */
 		goto free_new;
 	} else if (folio_ref_count(old_folio)) {
 		/*
-		 * Someone has grabbed the page, try to isolate it here.
+		 * Someone has grabbed the folio, try to isolate it here.
 		 * Fail with -EBUSY if not possible.
 		 */
 		spin_unlock_irq(&hugetlb_lock);
-		ret = isolate_hugetlb(old_page, list);
+		ret = isolate_hugetlb(&old_folio->page, list);
 		spin_lock_irq(&hugetlb_lock);
 		goto free_new;
 	} else if (!folio_test_hugetlb_freed(old_folio)) {
@@ -2854,7 +2848,7 @@ int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list)
 	if (folio_ref_count(folio) && !isolate_hugetlb(&folio->page, list))
 		ret = 0;
 	else if (!folio_ref_count(folio))
-		ret = alloc_and_dissolve_huge_page(h, &folio->page, list);
+		ret = alloc_and_dissolve_hugetlb_folio(h, folio, list);
 	return ret;
 }
@@ -3072,14 +3066,14 @@ static void __init hugetlb_hstate_alloc_pages_onenode(struct hstate *h, int nid)
 			if (!alloc_bootmem_huge_page(h, nid))
 				break;
 		} else {
-			struct page *page;
+			struct folio *folio;
 			gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE;
-			page = alloc_fresh_huge_page(h, gfp_mask, nid,
+			folio = alloc_fresh_hugetlb_folio(h, gfp_mask, nid,
 					&node_states[N_MEMORY], NULL);
-			if (!page)
+			if (!folio)
 				break;
-			free_huge_page(page); /* free it into the hugepage allocator */
+			free_huge_page(&folio->page); /* free it into the hugepage allocator */
 		}
 		cond_resched();
 	}
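[Reading aid, not part of the series: in alloc_migrate_huge_page() above, set_page_refcounted(page) is replaced by folio_ref_unfreeze(folio, 1), which hands out the first reference on a freshly allocated, still-frozen (zero-refcount) huge page. The stand-alone sketch below models that frozen-to-one transition with invented mock_* names; it is not kernel code.]

    /* Stand-alone model, not kernel code. */
    #include <assert.h>
    #include <stdatomic.h>
    #include <stdio.h>

    struct mock_folio { atomic_int refcount; };

    /* analogue of folio_ref_unfreeze(folio, count): only valid while frozen */
    static void mock_folio_ref_unfreeze(struct mock_folio *folio, int count)
    {
        assert(atomic_load(&folio->refcount) == 0);   /* must still be frozen */
        atomic_store(&folio->refcount, count);
    }

    int main(void)
    {
        struct mock_folio folio = { .refcount = 0 };  /* frozen, as returned by the allocator */

        mock_folio_ref_unfreeze(&folio, 1);           /* first and only reference */
        printf("refcount after unfreeze: %d\n", atomic_load(&folio.refcount));
        return 0;
    }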