From patchwork Thu Jan 16 21:10:39 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 13942313
From: Zi Yan <ziy@nvidia.com>
To: linux-mm@kvack.org, Andrew Morton, "Kirill A . Shutemov",
	"Matthew Wilcox (Oracle)"
Cc: Ryan Roberts, Hugh Dickins, David Hildenbrand, Yang Shi,
	Miaohe Lin, Kefeng Wang, Yu Zhao, John Hubbard, Baolin Wang,
	linux-kselftest@vger.kernel.org, linux-kernel@vger.kernel.org,
	Zi Yan <ziy@nvidia.com>
Subject: [PATCH v5 07/10] mm/huge_memory: remove the old, unused
	__split_huge_page()
Date: Thu, 16 Jan 2025 16:10:39 -0500
Message-ID: <20250116211042.741543-8-ziy@nvidia.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20250116211042.741543-1-ziy@nvidia.com>
References: <20250116211042.741543-1-ziy@nvidia.com>
MIME-Version: 1.0
Now that split_huge_page_to_list_to_order() uses the new backend split
code in __folio_split_without_mapping(), the old __split_huge_page() and
__split_huge_page_tail() can be removed.

Signed-off-by: Zi Yan <ziy@nvidia.com>
---
 mm/huge_memory.c | 207 -----------------------------------------------
 1 file changed, 207 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index d9f5ca61d78c..2fead9586e34 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3165,213 +3165,6 @@ static void lru_add_page_tail(struct folio *folio, struct page *tail,
 	}
 }
 
-static void __split_huge_page_tail(struct folio *folio, int tail,
-		struct lruvec *lruvec, struct list_head *list,
-		unsigned int new_order)
-{
-	struct page *head = &folio->page;
-	struct page *page_tail = head + tail;
-	/*
-	 * Careful: new_folio is not a "real" folio before we cleared PageTail.
-	 * Don't pass it around before clear_compound_head().
-	 */
-	struct folio *new_folio = (struct folio *)page_tail;
-
-	VM_BUG_ON_PAGE(atomic_read(&page_tail->_mapcount) != -1, page_tail);
-
-	/*
-	 * Clone page flags before unfreezing refcount.
-	 *
-	 * After successful get_page_unless_zero() might follow flags change,
-	 * for example lock_page() which set PG_waiters.
-	 *
-	 * Note that for mapped sub-pages of an anonymous THP,
-	 * PG_anon_exclusive has been cleared in unmap_folio() and is stored in
-	 * the migration entry instead from where remap_page() will restore it.
-	 * We can still have PG_anon_exclusive set on effectively unmapped and
-	 * unreferenced sub-pages of an anonymous THP: we can simply drop
-	 * PG_anon_exclusive (-> PG_mappedtodisk) for these here.
-	 */
-	page_tail->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
-	page_tail->flags |= (head->flags &
-			((1L << PG_referenced) |
-			 (1L << PG_swapbacked) |
-			 (1L << PG_swapcache) |
-			 (1L << PG_mlocked) |
-			 (1L << PG_uptodate) |
-			 (1L << PG_active) |
-			 (1L << PG_workingset) |
-			 (1L << PG_locked) |
-			 (1L << PG_unevictable) |
-#ifdef CONFIG_ARCH_USES_PG_ARCH_2
-			 (1L << PG_arch_2) |
-#endif
-#ifdef CONFIG_ARCH_USES_PG_ARCH_3
-			 (1L << PG_arch_3) |
-#endif
-			 (1L << PG_dirty) |
-			 LRU_GEN_MASK | LRU_REFS_MASK));
-
-	/* ->mapping in first and second tail page is replaced by other uses */
-	VM_BUG_ON_PAGE(tail > 2 && page_tail->mapping != TAIL_MAPPING,
-			page_tail);
-	new_folio->mapping = folio->mapping;
-	new_folio->index = folio->index + tail;
-
-	/*
-	 * page->private should not be set in tail pages. Fix up and warn once
-	 * if private is unexpectedly set.
-	 */
-	if (unlikely(page_tail->private)) {
-		VM_WARN_ON_ONCE_PAGE(true, page_tail);
-		page_tail->private = 0;
-	}
-	if (folio_test_swapcache(folio))
-		new_folio->swap.val = folio->swap.val + tail;
-
-	/* Page flags must be visible before we make the page non-compound. */
-	smp_wmb();
-
-	/*
-	 * Clear PageTail before unfreezing page refcount.
-	 *
-	 * After successful get_page_unless_zero() might follow put_page()
-	 * which needs correct compound_head().
-	 */
-	clear_compound_head(page_tail);
-	if (new_order) {
-		prep_compound_page(page_tail, new_order);
-		folio_set_large_rmappable(new_folio);
-	}
-
-	/* Finally unfreeze refcount. Additional reference from page cache. */
-	page_ref_unfreeze(page_tail,
-		1 + ((!folio_test_anon(folio) || folio_test_swapcache(folio)) ?
-			     folio_nr_pages(new_folio) : 0));
-
-	if (folio_test_young(folio))
-		folio_set_young(new_folio);
-	if (folio_test_idle(folio))
-		folio_set_idle(new_folio);
-
-	folio_xchg_last_cpupid(new_folio, folio_last_cpupid(folio));
-
-	/*
-	 * always add to the tail because some iterators expect new
-	 * pages to show after the currently processed elements - e.g.
-	 * migrate_pages
-	 */
-	lru_add_page_tail(folio, page_tail, lruvec, list);
-}
-
-static void __split_huge_page(struct page *page, struct list_head *list,
-		pgoff_t end, unsigned int new_order)
-{
-	struct folio *folio = page_folio(page);
-	struct page *head = &folio->page;
-	struct lruvec *lruvec;
-	struct address_space *swap_cache = NULL;
-	unsigned long offset = 0;
-	int i, nr_dropped = 0;
-	unsigned int new_nr = 1 << new_order;
-	int order = folio_order(folio);
-	unsigned int nr = 1 << order;
-
-	/* complete memcg works before add pages to LRU */
-	split_page_memcg(head, order, new_order);
-
-	if (folio_test_anon(folio) && folio_test_swapcache(folio)) {
-		offset = swap_cache_index(folio->swap);
-		swap_cache = swap_address_space(folio->swap);
-		xa_lock(&swap_cache->i_pages);
-	}
-
-	/* lock lru list/PageCompound, ref frozen by page_ref_freeze */
-	lruvec = folio_lruvec_lock(folio);
-
-	folio_clear_has_hwpoisoned(folio);
-
-	for (i = nr - new_nr; i >= new_nr; i -= new_nr) {
-		struct folio *tail;
-		__split_huge_page_tail(folio, i, lruvec, list, new_order);
-		tail = page_folio(head + i);
-		/* Some pages can be beyond EOF: drop them from page cache */
-		if (tail->index >= end) {
-			if (shmem_mapping(folio->mapping))
-				nr_dropped += new_nr;
-			else if (folio_test_clear_dirty(tail))
-				folio_account_cleaned(tail,
-					inode_to_wb(folio->mapping->host));
-			__filemap_remove_folio(tail, NULL);
-			folio_put(tail);
-		} else if (!folio_test_anon(folio)) {
-			__xa_store(&folio->mapping->i_pages, tail->index,
-					tail, 0);
-		} else if (swap_cache) {
-			__xa_store(&swap_cache->i_pages, offset + i,
-					tail, 0);
-		}
-	}
-
-	if (!new_order)
-		ClearPageCompound(head);
-	else {
-		struct folio *new_folio = (struct folio *)head;
-
-		folio_set_order(new_folio, new_order);
-	}
-	unlock_page_lruvec(lruvec);
-	/* Caller disabled irqs, so they are still disabled here */
-
-	split_page_owner(head, order, new_order);
-	pgalloc_tag_split(folio, order, new_order);
-
-	/* See comment in __split_huge_page_tail() */
-	if (folio_test_anon(folio)) {
-		/* Additional pin to swap cache */
-		if (folio_test_swapcache(folio)) {
-			folio_ref_add(folio, 1 + new_nr);
-			xa_unlock(&swap_cache->i_pages);
-		} else {
-			folio_ref_inc(folio);
-		}
-	} else {
-		/* Additional pin to page cache */
-		folio_ref_add(folio, 1 + new_nr);
-		xa_unlock(&folio->mapping->i_pages);
-	}
-	local_irq_enable();
-
-	if (nr_dropped)
-		shmem_uncharge(folio->mapping->host, nr_dropped);
-	remap_page(folio, nr, PageAnon(head) ? RMP_USE_SHARED_ZEROPAGE : 0);
-
-	/*
-	 * set page to its compound_head when split to non order-0 pages, so
-	 * we can skip unlocking it below, since PG_locked is transferred to
-	 * the compound_head of the page and the caller will unlock it.
-	 */
-	if (new_order)
-		page = compound_head(page);
-
-	for (i = 0; i < nr; i += new_nr) {
-		struct page *subpage = head + i;
-		struct folio *new_folio = page_folio(subpage);
-		if (subpage == page)
-			continue;
-		folio_unlock(new_folio);
-
-		/*
-		 * Subpages may be freed if there wasn't any mapping
-		 * like if add_to_swap() is running on a lru page that
-		 * had its mapping zapped. And freeing these pages
-		 * requires taking the lru_lock so we do the put_page
-		 * of the tail pages after the split is complete.
-		 */
-		free_page_and_swap_cache(subpage);
-	}
-}
-
 /* Racy check whether the huge page can be split */
 bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
 {