From patchwork Thu Nov 21 18:52:15 2024
X-Patchwork-Submitter: Zi Yan <ziy@nvidia.com>
X-Patchwork-Id: 13882300
From: Zi Yan <ziy@nvidia.com>
To: linux-mm@kvack.org, "Kirill A. Shutemov", "Matthew Wilcox (Oracle)"
Cc: Ryan Roberts, Hugh Dickins, David Hildenbrand, Yang Shi, Miaohe Lin,
	Kefeng Wang, Yu Zhao, John Hubbard, linux-kernel@vger.kernel.org,
	Zi Yan
Subject: [PATCH v3 4/9] mm/huge_memory: remove the old, unused __split_huge_page()
Date: Thu, 21 Nov 2024 13:52:15 -0500
Message-ID: <20241121185220.2271520-5-ziy@nvidia.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20241121185220.2271520-1-ziy@nvidia.com>
References: <20241121185220.2271520-1-ziy@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Now that split_huge_page_to_list_to_order() uses the new backend split
code in __folio_split_without_mapping(), the old __split_huge_page() and
__split_huge_page_tail() can be removed.

Signed-off-by: Zi Yan <ziy@nvidia.com>
---
 mm/huge_memory.c | 207 -----------------------------------------------
 1 file changed, 207 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 3704e14b823a..9b3688870a16 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3146,213 +3146,6 @@ static void lru_add_page_tail(struct folio *folio, struct page *tail,
 	}
 }
 
-static void __split_huge_page_tail(struct folio *folio, int tail,
-		struct lruvec *lruvec, struct list_head *list,
-		unsigned int new_order)
-{
-	struct page *head = &folio->page;
-	struct page *page_tail = head + tail;
-	/*
-	 * Careful: new_folio is not a "real" folio before we cleared PageTail.
-	 * Don't pass it around before clear_compound_head().
-	 */
-	struct folio *new_folio = (struct folio *)page_tail;
-
-	VM_BUG_ON_PAGE(atomic_read(&page_tail->_mapcount) != -1, page_tail);
-
-	/*
-	 * Clone page flags before unfreezing refcount.
-	 *
-	 * After successful get_page_unless_zero() might follow flags change,
-	 * for example lock_page() which set PG_waiters.
-	 *
-	 * Note that for mapped sub-pages of an anonymous THP,
-	 * PG_anon_exclusive has been cleared in unmap_folio() and is stored in
-	 * the migration entry instead from where remap_page() will restore it.
-	 * We can still have PG_anon_exclusive set on effectively unmapped and
-	 * unreferenced sub-pages of an anonymous THP: we can simply drop
-	 * PG_anon_exclusive (-> PG_mappedtodisk) for these here.
-	 */
-	page_tail->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
-	page_tail->flags |= (head->flags &
-			((1L << PG_referenced) |
-			 (1L << PG_swapbacked) |
-			 (1L << PG_swapcache) |
-			 (1L << PG_mlocked) |
-			 (1L << PG_uptodate) |
-			 (1L << PG_active) |
-			 (1L << PG_workingset) |
-			 (1L << PG_locked) |
-			 (1L << PG_unevictable) |
-#ifdef CONFIG_ARCH_USES_PG_ARCH_2
-			 (1L << PG_arch_2) |
-#endif
-#ifdef CONFIG_ARCH_USES_PG_ARCH_3
-			 (1L << PG_arch_3) |
-#endif
-			 (1L << PG_dirty) |
-			 LRU_GEN_MASK | LRU_REFS_MASK));
-
-	/* ->mapping in first and second tail page is replaced by other uses */
-	VM_BUG_ON_PAGE(tail > 2 && page_tail->mapping != TAIL_MAPPING,
-			page_tail);
-	new_folio->mapping = folio->mapping;
-	new_folio->index = folio->index + tail;
-
-	/*
-	 * page->private should not be set in tail pages. Fix up and warn once
-	 * if private is unexpectedly set.
-	 */
-	if (unlikely(page_tail->private)) {
-		VM_WARN_ON_ONCE_PAGE(true, page_tail);
-		page_tail->private = 0;
-	}
-	if (folio_test_swapcache(folio))
-		new_folio->swap.val = folio->swap.val + tail;
-
-	/* Page flags must be visible before we make the page non-compound. */
-	smp_wmb();
-
-	/*
-	 * Clear PageTail before unfreezing page refcount.
-	 *
-	 * After successful get_page_unless_zero() might follow put_page()
-	 * which needs correct compound_head().
-	 */
-	clear_compound_head(page_tail);
-	if (new_order) {
-		prep_compound_page(page_tail, new_order);
-		folio_set_large_rmappable(new_folio);
-	}
-
-	/* Finally unfreeze refcount. Additional reference from page cache. */
-	page_ref_unfreeze(page_tail,
-		1 + ((!folio_test_anon(folio) || folio_test_swapcache(folio)) ?
-			     folio_nr_pages(new_folio) : 0));
-
-	if (folio_test_young(folio))
-		folio_set_young(new_folio);
-	if (folio_test_idle(folio))
-		folio_set_idle(new_folio);
-
-	folio_xchg_last_cpupid(new_folio, folio_last_cpupid(folio));
-
-	/*
-	 * always add to the tail because some iterators expect new
-	 * pages to show after the currently processed elements - e.g.
-	 * migrate_pages
-	 */
-	lru_add_page_tail(folio, page_tail, lruvec, list);
-}
-
-static void __split_huge_page(struct page *page, struct list_head *list,
-		pgoff_t end, unsigned int new_order)
-{
-	struct folio *folio = page_folio(page);
-	struct page *head = &folio->page;
-	struct lruvec *lruvec;
-	struct address_space *swap_cache = NULL;
-	unsigned long offset = 0;
-	int i, nr_dropped = 0;
-	unsigned int new_nr = 1 << new_order;
-	int order = folio_order(folio);
-	unsigned int nr = 1 << order;
-
-	/* complete memcg works before add pages to LRU */
-	split_page_memcg(head, order, new_order);
-
-	if (folio_test_anon(folio) && folio_test_swapcache(folio)) {
-		offset = swap_cache_index(folio->swap);
-		swap_cache = swap_address_space(folio->swap);
-		xa_lock(&swap_cache->i_pages);
-	}
-
-	/* lock lru list/PageCompound, ref frozen by page_ref_freeze */
-	lruvec = folio_lruvec_lock(folio);
-
-	ClearPageHasHWPoisoned(head);
-
-	for (i = nr - new_nr; i >= new_nr; i -= new_nr) {
-		struct folio *tail;
-		__split_huge_page_tail(folio, i, lruvec, list, new_order);
-		tail = page_folio(head + i);
-		/* Some pages can be beyond EOF: drop them from page cache */
-		if (tail->index >= end) {
-			if (shmem_mapping(folio->mapping))
-				nr_dropped++;
-			else if (folio_test_clear_dirty(tail))
-				folio_account_cleaned(tail,
-					inode_to_wb(folio->mapping->host));
-			__filemap_remove_folio(tail, NULL);
-			folio_put(tail);
-		} else if (!folio_test_anon(folio)) {
-			__xa_store(&folio->mapping->i_pages, tail->index,
-					tail, 0);
-		} else if (swap_cache) {
-			__xa_store(&swap_cache->i_pages, offset + i,
-					tail, 0);
-		}
-	}
-
-	if (!new_order)
-		ClearPageCompound(head);
-	else {
-		struct folio *new_folio = (struct folio *)head;
-
-		folio_set_order(new_folio, new_order);
-	}
-	unlock_page_lruvec(lruvec);
-	/* Caller disabled irqs, so they are still disabled here */
-
-	split_page_owner(head, order, new_order);
-	pgalloc_tag_split(folio, order, new_order);
-
-	/* See comment in __split_huge_page_tail() */
-	if (folio_test_anon(folio)) {
-		/* Additional pin to swap cache */
-		if (folio_test_swapcache(folio)) {
-			folio_ref_add(folio, 1 + new_nr);
-			xa_unlock(&swap_cache->i_pages);
-		} else {
-			folio_ref_inc(folio);
-		}
-	} else {
-		/* Additional pin to page cache */
-		folio_ref_add(folio, 1 + new_nr);
-		xa_unlock(&folio->mapping->i_pages);
-	}
-	local_irq_enable();
-
-	if (nr_dropped)
-		shmem_uncharge(folio->mapping->host, nr_dropped);
-	remap_page(folio, nr, PageAnon(head) ? RMP_USE_SHARED_ZEROPAGE : 0);
-
-	/*
-	 * set page to its compound_head when split to non order-0 pages, so
-	 * we can skip unlocking it below, since PG_locked is transferred to
-	 * the compound_head of the page and the caller will unlock it.
-	 */
-	if (new_order)
-		page = compound_head(page);
-
-	for (i = 0; i < nr; i += new_nr) {
-		struct page *subpage = head + i;
-		struct folio *new_folio = page_folio(subpage);
-		if (subpage == page)
-			continue;
-		folio_unlock(new_folio);
-
-		/*
-		 * Subpages may be freed if there wasn't any mapping
-		 * like if add_to_swap() is running on a lru page that
-		 * had its mapping zapped. And freeing these pages
-		 * requires taking the lru_lock so we do the put_page
-		 * of the tail pages after the split is complete.
-		 */
-		free_page_and_swap_cache(subpage);
-	}
-}
-
 /* Racy check whether the huge page can be split */
 bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
 {