From patchwork Mon Jan  6 16:55:10 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Zi Yan <ziy@nvidia.com>
X-Patchwork-Id: 13927684
From: Zi Yan <ziy@nvidia.com>
To: linux-mm@kvack.org, "Kirill A. Shutemov", "Matthew Wilcox (Oracle)"
Cc: Ryan Roberts, Hugh Dickins, David Hildenbrand, Yang Shi, Miaohe Lin,
	Kefeng Wang, Yu Zhao, John Hubbard, linux-kernel@vger.kernel.org,
	Zi Yan
Subject: [PATCH v4 07/10] mm/huge_memory: remove the old, unused __split_huge_page()
Date: Mon, 6 Jan 2025 11:55:10 -0500
Message-ID: <20250106165513.104899-8-ziy@nvidia.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20250106165513.104899-1-ziy@nvidia.com>
References: <20250106165513.104899-1-ziy@nvidia.com>

Now that split_huge_page_to_list_to_order() uses the new backend split
code in __folio_split_without_mapping(), the old __split_huge_page()
and __split_huge_page_tail() can be removed.

Signed-off-by: Zi Yan <ziy@nvidia.com>
---
 mm/huge_memory.c | 207 -------------------------------------------------
 1 file changed, 207 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e5f70ff88a1a..ec27287b7cbb 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3153,213 +3153,6 @@ static void lru_add_page_tail(struct folio *folio, struct page *tail,
 	}
 }
 
-static void __split_huge_page_tail(struct folio *folio, int tail,
-		struct lruvec *lruvec, struct list_head *list,
-		unsigned int new_order)
-{
-	struct page *head = &folio->page;
-	struct page *page_tail = head + tail;
-	/*
-	 * Careful: new_folio is not a "real" folio before we cleared PageTail.
-	 * Don't pass it around before clear_compound_head().
-	 */
-	struct folio *new_folio = (struct folio *)page_tail;
-
-	VM_BUG_ON_PAGE(atomic_read(&page_tail->_mapcount) != -1, page_tail);
-
-	/*
-	 * Clone page flags before unfreezing refcount.
-	 *
-	 * After successful get_page_unless_zero() might follow flags change,
-	 * for example lock_page() which set PG_waiters.
-	 *
-	 * Note that for mapped sub-pages of an anonymous THP,
-	 * PG_anon_exclusive has been cleared in unmap_folio() and is stored in
-	 * the migration entry instead from where remap_page() will restore it.
-	 * We can still have PG_anon_exclusive set on effectively unmapped and
-	 * unreferenced sub-pages of an anonymous THP: we can simply drop
-	 * PG_anon_exclusive (-> PG_mappedtodisk) for these here.
-	 */
-	page_tail->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
-	page_tail->flags |= (head->flags &
-			((1L << PG_referenced) |
-			 (1L << PG_swapbacked) |
-			 (1L << PG_swapcache) |
-			 (1L << PG_mlocked) |
-			 (1L << PG_uptodate) |
-			 (1L << PG_active) |
-			 (1L << PG_workingset) |
-			 (1L << PG_locked) |
-			 (1L << PG_unevictable) |
-#ifdef CONFIG_ARCH_USES_PG_ARCH_2
-			 (1L << PG_arch_2) |
-#endif
-#ifdef CONFIG_ARCH_USES_PG_ARCH_3
-			 (1L << PG_arch_3) |
-#endif
-			 (1L << PG_dirty) |
-			 LRU_GEN_MASK | LRU_REFS_MASK));
-
-	/* ->mapping in first and second tail page is replaced by other uses */
-	VM_BUG_ON_PAGE(tail > 2 && page_tail->mapping != TAIL_MAPPING,
-			page_tail);
-	new_folio->mapping = folio->mapping;
-	new_folio->index = folio->index + tail;
-
-	/*
-	 * page->private should not be set in tail pages. Fix up and warn once
-	 * if private is unexpectedly set.
-	 */
-	if (unlikely(page_tail->private)) {
-		VM_WARN_ON_ONCE_PAGE(true, page_tail);
-		page_tail->private = 0;
-	}
-	if (folio_test_swapcache(folio))
-		new_folio->swap.val = folio->swap.val + tail;
-
-	/* Page flags must be visible before we make the page non-compound. */
-	smp_wmb();
-
-	/*
-	 * Clear PageTail before unfreezing page refcount.
-	 *
-	 * After successful get_page_unless_zero() might follow put_page()
-	 * which needs correct compound_head().
-	 */
-	clear_compound_head(page_tail);
-	if (new_order) {
-		prep_compound_page(page_tail, new_order);
-		folio_set_large_rmappable(new_folio);
-	}
-
-	/* Finally unfreeze refcount. Additional reference from page cache. */
-	page_ref_unfreeze(page_tail,
-		1 + ((!folio_test_anon(folio) || folio_test_swapcache(folio)) ?
-			     folio_nr_pages(new_folio) : 0));
-
-	if (folio_test_young(folio))
-		folio_set_young(new_folio);
-	if (folio_test_idle(folio))
-		folio_set_idle(new_folio);
-
-	folio_xchg_last_cpupid(new_folio, folio_last_cpupid(folio));
-
-	/*
-	 * always add to the tail because some iterators expect new
-	 * pages to show after the currently processed elements - e.g.
-	 * migrate_pages
-	 */
-	lru_add_page_tail(folio, page_tail, lruvec, list);
-}
-
-static void __split_huge_page(struct page *page, struct list_head *list,
-		pgoff_t end, unsigned int new_order)
-{
-	struct folio *folio = page_folio(page);
-	struct page *head = &folio->page;
-	struct lruvec *lruvec;
-	struct address_space *swap_cache = NULL;
-	unsigned long offset = 0;
-	int i, nr_dropped = 0;
-	unsigned int new_nr = 1 << new_order;
-	int order = folio_order(folio);
-	unsigned int nr = 1 << order;
-
-	/* complete memcg works before add pages to LRU */
-	split_page_memcg(head, order, new_order);
-
-	if (folio_test_anon(folio) && folio_test_swapcache(folio)) {
-		offset = swap_cache_index(folio->swap);
-		swap_cache = swap_address_space(folio->swap);
-		xa_lock(&swap_cache->i_pages);
-	}
-
-	/* lock lru list/PageCompound, ref frozen by page_ref_freeze */
-	lruvec = folio_lruvec_lock(folio);
-
-	ClearPageHasHWPoisoned(head);
-
-	for (i = nr - new_nr; i >= new_nr; i -= new_nr) {
-		struct folio *tail;
-		__split_huge_page_tail(folio, i, lruvec, list, new_order);
-		tail = page_folio(head + i);
-		/* Some pages can be beyond EOF: drop them from page cache */
-		if (tail->index >= end) {
-			if (shmem_mapping(folio->mapping))
-				nr_dropped += new_nr;
-			else if (folio_test_clear_dirty(tail))
-				folio_account_cleaned(tail,
-					inode_to_wb(folio->mapping->host));
-			__filemap_remove_folio(tail, NULL);
-			folio_put(tail);
-		} else if (!folio_test_anon(folio)) {
-			__xa_store(&folio->mapping->i_pages, tail->index,
-					tail, 0);
-		} else if (swap_cache) {
-			__xa_store(&swap_cache->i_pages, offset + i,
-					tail, 0);
-		}
-	}
-
-	if (!new_order)
-		ClearPageCompound(head);
-	else {
-		struct folio *new_folio = (struct folio *)head;
-
-		folio_set_order(new_folio, new_order);
-	}
-	unlock_page_lruvec(lruvec);
-	/* Caller disabled irqs, so they are still disabled here */
-
-	split_page_owner(head, order, new_order);
-	pgalloc_tag_split(folio, order, new_order);
-
-	/* See comment in __split_huge_page_tail() */
-	if (folio_test_anon(folio)) {
-		/* Additional pin to swap cache */
-		if (folio_test_swapcache(folio)) {
-			folio_ref_add(folio, 1 + new_nr);
-			xa_unlock(&swap_cache->i_pages);
-		} else {
-			folio_ref_inc(folio);
-		}
-	} else {
-		/* Additional pin to page cache */
-		folio_ref_add(folio, 1 + new_nr);
-		xa_unlock(&folio->mapping->i_pages);
-	}
-	local_irq_enable();
-
-	if (nr_dropped)
-		shmem_uncharge(folio->mapping->host, nr_dropped);
-	remap_page(folio, nr, PageAnon(head) ? RMP_USE_SHARED_ZEROPAGE : 0);
-
-	/*
-	 * set page to its compound_head when split to non order-0 pages, so
-	 * we can skip unlocking it below, since PG_locked is transferred to
-	 * the compound_head of the page and the caller will unlock it.
-	 */
-	if (new_order)
-		page = compound_head(page);
-
-	for (i = 0; i < nr; i += new_nr) {
-		struct page *subpage = head + i;
-		struct folio *new_folio = page_folio(subpage);
-
-		if (subpage == page)
-			continue;
-		folio_unlock(new_folio);
-
-		/*
-		 * Subpages may be freed if there wasn't any mapping
-		 * like if add_to_swap() is running on a lru page that
-		 * had its mapping zapped. And freeing these pages
-		 * requires taking the lru_lock so we do the put_page
-		 * of the tail pages after the split is complete.
-		 */
-		free_page_and_swap_cache(subpage);
-	}
-}
-
 /* Racy check whether the huge page can be split */
 bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
 {
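
For reference, and not part of the diff above: callers are unaffected by
this removal, since they keep entering through
split_huge_page_to_list_to_order(), which per this series is now backed by
the __folio_split_without_mapping() path. Below is a paraphrased sketch of
the surviving caller-facing wrappers, loosely based on
include/linux/huge_mm.h; the exact definitions vary between trees, so treat
the signatures as illustrative rather than authoritative:

	/*
	 * Sketch only: public split entry points that remain once the old
	 * __split_huge_page() backend is gone. Paraphrased; check
	 * include/linux/huge_mm.h in your tree for the real definitions.
	 */
	static inline int split_huge_page(struct page *page)
	{
		/* order-0 split of the whole folio, no collection list */
		return split_huge_page_to_list_to_order(page, NULL, 0);
	}

	static inline int split_folio_to_list(struct folio *folio,
					      struct list_head *list)
	{
		/* same backend; split folios are gathered on @list */
		return split_huge_page_to_list_to_order(&folio->page, list, 0);
	}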