From patchwork Wed Feb 5 03:14:14 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 13960506
From: Zi Yan
To: linux-mm@kvack.org, Andrew Morton, "Kirill A. Shutemov",
 "Matthew Wilcox (Oracle)"
Cc: Ryan Roberts, Hugh Dickins, David Hildenbrand, Yang Shi, Miaohe Lin,
 Kefeng Wang, Yu Zhao, John Hubbard, Baolin Wang,
 linux-kselftest@vger.kernel.org, linux-kernel@vger.kernel.org, Zi Yan
Subject: [PATCH v6 4/7] mm/huge_memory: remove the old, unused
 __split_huge_page()
Date: Tue, 4 Feb 2025 22:14:14 -0500
Message-ID: <20250205031417.1771278-5-ziy@nvidia.com>
X-Mailer: git-send-email 2.47.2
In-Reply-To: <20250205031417.1771278-1-ziy@nvidia.com>
References: <20250205031417.1771278-1-ziy@nvidia.com>

Now that split_huge_page_to_list_to_order() uses the new backend split
code in __folio_split_without_mapping(), the old __split_huge_page() and
__split_huge_page_tail() can be removed.

Signed-off-by: Zi Yan
---
 mm/huge_memory.c | 207 -------------------------------------------------
 1 file changed, 207 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 20d7be07cd7b..36594eef5c24 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3168,213 +3168,6 @@ static void lru_add_page_tail(struct folio *folio, struct page *tail,
 	}
 }
 
-static void __split_huge_page_tail(struct folio *folio, int tail,
-		struct lruvec *lruvec, struct list_head *list,
-		unsigned int new_order)
-{
-	struct page *head = &folio->page;
-	struct page *page_tail = head + tail;
-	/*
-	 * Careful: new_folio is not a "real" folio before we cleared PageTail.
-	 * Don't pass it around before clear_compound_head().
-	 */
-	struct folio *new_folio = (struct folio *)page_tail;
-
-	VM_BUG_ON_PAGE(atomic_read(&page_tail->_mapcount) != -1, page_tail);
-
-	/*
-	 * Clone page flags before unfreezing refcount.
-	 *
-	 * After successful get_page_unless_zero() might follow flags change,
-	 * for example lock_page() which set PG_waiters.
-	 *
-	 * Note that for mapped sub-pages of an anonymous THP,
-	 * PG_anon_exclusive has been cleared in unmap_folio() and is stored in
-	 * the migration entry instead from where remap_page() will restore it.
-	 * We can still have PG_anon_exclusive set on effectively unmapped and
-	 * unreferenced sub-pages of an anonymous THP: we can simply drop
-	 * PG_anon_exclusive (-> PG_mappedtodisk) for these here.
-	 */
-	page_tail->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
-	page_tail->flags |= (head->flags &
-			((1L << PG_referenced) |
-			 (1L << PG_swapbacked) |
-			 (1L << PG_swapcache) |
-			 (1L << PG_mlocked) |
-			 (1L << PG_uptodate) |
-			 (1L << PG_active) |
-			 (1L << PG_workingset) |
-			 (1L << PG_locked) |
-			 (1L << PG_unevictable) |
-#ifdef CONFIG_ARCH_USES_PG_ARCH_2
-			 (1L << PG_arch_2) |
-#endif
-#ifdef CONFIG_ARCH_USES_PG_ARCH_3
-			 (1L << PG_arch_3) |
-#endif
-			 (1L << PG_dirty) |
-			 LRU_GEN_MASK | LRU_REFS_MASK));
-
-	/* ->mapping in first and second tail page is replaced by other uses */
-	VM_BUG_ON_PAGE(tail > 2 && page_tail->mapping != TAIL_MAPPING,
-			page_tail);
-	new_folio->mapping = folio->mapping;
-	new_folio->index = folio->index + tail;
-
-	/*
-	 * page->private should not be set in tail pages. Fix up and warn once
-	 * if private is unexpectedly set.
-	 */
-	if (unlikely(page_tail->private)) {
-		VM_WARN_ON_ONCE_PAGE(true, page_tail);
-		page_tail->private = 0;
-	}
-	if (folio_test_swapcache(folio))
-		new_folio->swap.val = folio->swap.val + tail;
-
-	/* Page flags must be visible before we make the page non-compound. */
-	smp_wmb();
-
-	/*
-	 * Clear PageTail before unfreezing page refcount.
-	 *
-	 * After successful get_page_unless_zero() might follow put_page()
-	 * which needs correct compound_head().
-	 */
-	clear_compound_head(page_tail);
-	if (new_order) {
-		prep_compound_page(page_tail, new_order);
-		folio_set_large_rmappable(new_folio);
-	}
-
-	/* Finally unfreeze refcount. Additional reference from page cache. */
-	page_ref_unfreeze(page_tail,
-		1 + ((!folio_test_anon(folio) || folio_test_swapcache(folio)) ?
-			     folio_nr_pages(new_folio) : 0));
-
-	if (folio_test_young(folio))
-		folio_set_young(new_folio);
-	if (folio_test_idle(folio))
-		folio_set_idle(new_folio);
-
-	folio_xchg_last_cpupid(new_folio, folio_last_cpupid(folio));
-
-	/*
-	 * always add to the tail because some iterators expect new
-	 * pages to show after the currently processed elements - e.g.
-	 * migrate_pages
-	 */
-	lru_add_page_tail(folio, page_tail, lruvec, list);
-}
-
-static void __split_huge_page(struct page *page, struct list_head *list,
-		pgoff_t end, unsigned int new_order)
-{
-	struct folio *folio = page_folio(page);
-	struct page *head = &folio->page;
-	struct lruvec *lruvec;
-	struct address_space *swap_cache = NULL;
-	unsigned long offset = 0;
-	int i, nr_dropped = 0;
-	unsigned int new_nr = 1 << new_order;
-	int order = folio_order(folio);
-	unsigned int nr = 1 << order;
-
-	/* complete memcg works before add pages to LRU */
-	split_page_memcg(head, order, new_order);
-
-	if (folio_test_anon(folio) && folio_test_swapcache(folio)) {
-		offset = swap_cache_index(folio->swap);
-		swap_cache = swap_address_space(folio->swap);
-		xa_lock(&swap_cache->i_pages);
-	}
-
-	/* lock lru list/PageCompound, ref frozen by page_ref_freeze */
-	lruvec = folio_lruvec_lock(folio);
-
-	folio_clear_has_hwpoisoned(folio);
-
-	for (i = nr - new_nr; i >= new_nr; i -= new_nr) {
-		struct folio *tail;
-		__split_huge_page_tail(folio, i, lruvec, list, new_order);
-		tail = page_folio(head + i);
-		/* Some pages can be beyond EOF: drop them from page cache */
-		if (tail->index >= end) {
-			if (shmem_mapping(folio->mapping))
-				nr_dropped += new_nr;
-			else if (folio_test_clear_dirty(tail))
-				folio_account_cleaned(tail,
-					inode_to_wb(folio->mapping->host));
-			__filemap_remove_folio(tail, NULL);
-			folio_put(tail);
-		} else if (!folio_test_anon(folio)) {
-			__xa_store(&folio->mapping->i_pages, tail->index,
-					tail, 0);
-		} else if (swap_cache) {
-			__xa_store(&swap_cache->i_pages, offset + i,
-					tail, 0);
-		}
-	}
-
-	if (!new_order)
-		ClearPageCompound(head);
-	else {
-		struct folio *new_folio = (struct folio *)head;
-
-		folio_set_order(new_folio, new_order);
-	}
-	unlock_page_lruvec(lruvec);
-	/* Caller disabled irqs, so they are still disabled here */
-
-	split_page_owner(head, order, new_order);
-	pgalloc_tag_split(folio, order, new_order);
-
-	/* See comment in __split_huge_page_tail() */
-	if (folio_test_anon(folio)) {
-		/* Additional pin to swap cache */
-		if (folio_test_swapcache(folio)) {
-			folio_ref_add(folio, 1 + new_nr);
-			xa_unlock(&swap_cache->i_pages);
-		} else {
-			folio_ref_inc(folio);
-		}
-	} else {
-		/* Additional pin to page cache */
-		folio_ref_add(folio, 1 + new_nr);
-		xa_unlock(&folio->mapping->i_pages);
-	}
-	local_irq_enable();
-
-	if (nr_dropped)
-		shmem_uncharge(folio->mapping->host, nr_dropped);
-	remap_page(folio, nr, PageAnon(head) ? RMP_USE_SHARED_ZEROPAGE : 0);
-
-	/*
-	 * set page to its compound_head when split to non order-0 pages, so
-	 * we can skip unlocking it below, since PG_locked is transferred to
-	 * the compound_head of the page and the caller will unlock it.
-	 */
-	if (new_order)
-		page = compound_head(page);
-
-	for (i = 0; i < nr; i += new_nr) {
-		struct page *subpage = head + i;
-		struct folio *new_folio = page_folio(subpage);
-		if (subpage == page)
-			continue;
-		folio_unlock(new_folio);
-
-		/*
-		 * Subpages may be freed if there wasn't any mapping
-		 * like if add_to_swap() is running on a lru page that
-		 * had its mapping zapped. And freeing these pages
-		 * requires taking the lru_lock so we do the put_page
-		 * of the tail pages after the split is complete.
-		 */
-		free_page_and_swap_cache(subpage);
-	}
-}
-
 /* Racy check whether the huge page can be split */
 bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
 {
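
For reference, the heart of the removed __split_huge_page() is the
back-to-front tail walk "for (i = nr - new_nr; i >= new_nr; i -= new_nr)".
Below is a minimal userspace model of just that index arithmetic; the
order values are arbitrary examples and this is an illustrative sketch,
not kernel code:

#include <stdio.h>

/*
 * Model of the tail-walk arithmetic in the removed __split_huge_page():
 * split one order-4 folio (16 pages) into order-1 folios (2 pages each).
 */
int main(void)
{
	unsigned int order = 4;			/* source folio order (example) */
	unsigned int new_order = 1;		/* target folio order (example) */
	unsigned int nr = 1u << order;		/* 16 pages in the source folio */
	unsigned int new_nr = 1u << new_order;	/* 2 pages per new folio */

	/*
	 * Same bounds as the removed loop: the highest-offset tail is
	 * produced first, so LRU iterators such as migrate_pages see
	 * new folios after the element currently being processed.
	 */
	for (int i = nr - new_nr; i >= (int)new_nr; i -= new_nr)
		printf("new tail folio at page offset %d\n", i);

	printf("head folio keeps page offsets [0, %u)\n", new_nr);
	return 0;
}

Running it prints offsets 14, 12, ..., 2, with the head keeping offset 0:
eight order-1 folios out of one order-4 folio, matching the bounds of the
removed loop.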