From patchwork Fri Mar  7 17:39:58 2025
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 14006836
From: Zi Yan <ziy@nvidia.com>
To: linux-mm@kvack.org, Andrew Morton, Hugh Dickins,
	"Matthew Wilcox (Oracle)"
Cc: Ryan Roberts, "Kirill A. Shutemov", David Hildenbrand, Yang Shi,
	Miaohe Lin, Kefeng Wang, Yu Zhao, John Hubbard, Baolin Wang,
	linux-kselftest@vger.kernel.org, linux-kernel@vger.kernel.org,
	Zi Yan, Kairui Song
Subject: [PATCH v10 5/8] mm/huge_memory: remove the old, unused
	__split_huge_page()
Date: Fri, 7 Mar 2025 12:39:58 -0500
Message-ID: <20250307174001.242794-6-ziy@nvidia.com>
In-Reply-To: <20250307174001.242794-1-ziy@nvidia.com>
References: <20250307174001.242794-1-ziy@nvidia.com>
MIME-Version: 1.0
Now that split_huge_page_to_list_to_order() uses the new backend split
code in __split_unmapped_folio(), the old __split_huge_page() and
__split_huge_page_tail() can be removed.

Signed-off-by: Zi Yan <ziy@nvidia.com>
Cc: Baolin Wang
Cc: David Hildenbrand
Cc: Hugh Dickins
Cc: John Hubbard
Cc: Kefeng Wang
Cc: Kirill A. Shutemov
Cc: Matthew Wilcox
Cc: Miaohe Lin
Cc: Ryan Roberts
Cc: Yang Shi
Cc: Yu Zhao
Cc: Kairui Song
---
 mm/huge_memory.c | 215 -----------------------------------------------
 1 file changed, 215 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 3e05e62fdccb..6cc97d592797 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3284,221 +3284,6 @@ static void lru_add_page_tail(struct folio *folio, struct page *tail,
 	}
 }
 
-static void __split_huge_page_tail(struct folio *folio, int tail,
-		struct lruvec *lruvec, struct list_head *list,
-		unsigned int new_order)
-{
-	struct page *head = &folio->page;
-	struct page *page_tail = head + tail;
-	/*
-	 * Careful: new_folio is not a "real" folio before we cleared PageTail.
-	 * Don't pass it around before clear_compound_head().
-	 */
-	struct folio *new_folio = (struct folio *)page_tail;
-
-	VM_BUG_ON_PAGE(atomic_read(&page_tail->_mapcount) != -1, page_tail);
-
-	/*
-	 * Clone page flags before unfreezing refcount.
-	 *
-	 * After successful get_page_unless_zero() might follow flags change,
-	 * for example lock_page() which set PG_waiters.
-	 *
-	 * Note that for mapped sub-pages of an anonymous THP,
-	 * PG_anon_exclusive has been cleared in unmap_folio() and is stored in
-	 * the migration entry instead from where remap_page() will restore it.
-	 * We can still have PG_anon_exclusive set on effectively unmapped and
-	 * unreferenced sub-pages of an anonymous THP: we can simply drop
-	 * PG_anon_exclusive (-> PG_mappedtodisk) for these here.
-	 */
-	page_tail->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
-	page_tail->flags |= (head->flags &
-			((1L << PG_referenced) |
-			 (1L << PG_swapbacked) |
-			 (1L << PG_swapcache) |
-			 (1L << PG_mlocked) |
-			 (1L << PG_uptodate) |
-			 (1L << PG_active) |
-			 (1L << PG_workingset) |
-			 (1L << PG_locked) |
-			 (1L << PG_unevictable) |
-#ifdef CONFIG_ARCH_USES_PG_ARCH_2
-			 (1L << PG_arch_2) |
-#endif
-#ifdef CONFIG_ARCH_USES_PG_ARCH_3
-			 (1L << PG_arch_3) |
-#endif
-			 (1L << PG_dirty) |
-			 LRU_GEN_MASK | LRU_REFS_MASK));
-
-	/* ->mapping in first and second tail page is replaced by other uses */
-	VM_BUG_ON_PAGE(tail > 2 && page_tail->mapping != TAIL_MAPPING,
-			page_tail);
-	new_folio->mapping = folio->mapping;
-	new_folio->index = folio->index + tail;
-
-	/*
-	 * page->private should not be set in tail pages. Fix up and warn once
-	 * if private is unexpectedly set.
-	 */
-	if (unlikely(page_tail->private)) {
-		VM_WARN_ON_ONCE_PAGE(true, page_tail);
-		page_tail->private = 0;
-	}
-	if (folio_test_swapcache(folio))
-		new_folio->swap.val = folio->swap.val + tail;
-
-	/* Page flags must be visible before we make the page non-compound. */
-	smp_wmb();
-
-	/*
-	 * Clear PageTail before unfreezing page refcount.
-	 *
-	 * After successful get_page_unless_zero() might follow put_page()
-	 * which needs correct compound_head().
-	 */
-	clear_compound_head(page_tail);
-	if (new_order) {
-		prep_compound_page(page_tail, new_order);
-		folio_set_large_rmappable(new_folio);
-	}
-
-	/* Finally unfreeze refcount. Additional reference from page cache. */
-	page_ref_unfreeze(page_tail,
-		1 + ((!folio_test_anon(folio) || folio_test_swapcache(folio)) ?
-			     folio_nr_pages(new_folio) : 0));
-
-	if (folio_test_young(folio))
-		folio_set_young(new_folio);
-	if (folio_test_idle(folio))
-		folio_set_idle(new_folio);
-
-	folio_xchg_last_cpupid(new_folio, folio_last_cpupid(folio));
-
-	/*
-	 * always add to the tail because some iterators expect new
-	 * pages to show after the currently processed elements - e.g.
-	 * migrate_pages
-	 */
-	lru_add_page_tail(folio, page_tail, lruvec, list);
-}
-
-static void __split_huge_page(struct page *page, struct list_head *list,
-		pgoff_t end, unsigned int new_order)
-{
-	struct folio *folio = page_folio(page);
-	struct page *head = &folio->page;
-	struct lruvec *lruvec;
-	struct address_space *swap_cache = NULL;
-	unsigned long offset = 0;
-	int i, nr_dropped = 0;
-	unsigned int new_nr = 1 << new_order;
-	int order = folio_order(folio);
-	unsigned int nr = 1 << order;
-
-	/*
-	 * Reset any memcg data overlay in the tail pages. folio_nr_pages()
-	 * is unreliable after this point.
-	 */
-#ifdef NR_PAGES_IN_LARGE_FOLIO
-	folio->_nr_pages = 0;
-#endif
-
-	/* complete memcg works before add pages to LRU */
-	split_page_memcg(head, order, new_order);
-
-	if (folio_test_anon(folio) && folio_test_swapcache(folio)) {
-		offset = swap_cache_index(folio->swap);
-		swap_cache = swap_address_space(folio->swap);
-		xa_lock(&swap_cache->i_pages);
-	}
-
-	/* lock lru list/PageCompound, ref frozen by page_ref_freeze */
-	lruvec = folio_lruvec_lock(folio);
-
-	folio_clear_has_hwpoisoned(folio);
-
-	for (i = nr - new_nr; i >= new_nr; i -= new_nr) {
-		struct folio *tail;
-		__split_huge_page_tail(folio, i, lruvec, list, new_order);
-		tail = page_folio(head + i);
-		/* Some pages can be beyond EOF: drop them from page cache */
-		if (tail->index >= end) {
-			if (shmem_mapping(folio->mapping))
-				nr_dropped += new_nr;
-			else if (folio_test_clear_dirty(tail))
-				folio_account_cleaned(tail,
-					inode_to_wb(folio->mapping->host));
-			__filemap_remove_folio(tail, NULL);
-			folio_put(tail);
-		} else if (!folio_test_anon(folio)) {
-			__xa_store(&folio->mapping->i_pages, tail->index,
-					tail, 0);
-		} else if (swap_cache) {
-			__xa_store(&swap_cache->i_pages, offset + i,
-					tail, 0);
-		}
-	}
-
-	if (!new_order)
-		ClearPageCompound(head);
-	else {
-		struct folio *new_folio = (struct folio *)head;
-
-		folio_set_order(new_folio, new_order);
-	}
-	unlock_page_lruvec(lruvec);
-	/* Caller disabled irqs, so they are still disabled here */
-
-	split_page_owner(head, order, new_order);
-	pgalloc_tag_split(folio, order, new_order);
-
-	/* See comment in __split_huge_page_tail() */
-	if (folio_test_anon(folio)) {
-		/* Additional pin to swap cache */
-		if (folio_test_swapcache(folio)) {
-			folio_ref_add(folio, 1 + new_nr);
-			xa_unlock(&swap_cache->i_pages);
-		} else {
-			folio_ref_inc(folio);
-		}
-	} else {
-		/* Additional pin to page cache */
-		folio_ref_add(folio, 1 + new_nr);
-		xa_unlock(&folio->mapping->i_pages);
-	}
-	local_irq_enable();
-
-	if (nr_dropped)
-		shmem_uncharge(folio->mapping->host, nr_dropped);
-	remap_page(folio, nr, PageAnon(head) ? RMP_USE_SHARED_ZEROPAGE : 0);
-
-	/*
-	 * set page to its compound_head when split to non order-0 pages, so
-	 * we can skip unlocking it below, since PG_locked is transferred to
-	 * the compound_head of the page and the caller will unlock it.
-	 */
-	if (new_order)
-		page = compound_head(page);
-
-	for (i = 0; i < nr; i += new_nr) {
-		struct page *subpage = head + i;
-		struct folio *new_folio = page_folio(subpage);
-		if (subpage == page)
-			continue;
-		folio_unlock(new_folio);
-
-		/*
-		 * Subpages may be freed if there wasn't any mapping
-		 * like if add_to_swap() is running on a lru page that
-		 * had its mapping zapped. And freeing these pages
-		 * requires taking the lru_lock so we do the put_page
-		 * of the tail pages after the split is complete.
-		 */
-		free_page_and_swap_cache(subpage);
-	}
-}
-
 /* Racy check whether the huge page can be split */
 bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
 {