From patchwork Thu Dec 5 00:18:32 2024
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 13894602
From: Zi Yan <ziy@nvidia.com>
To: linux-mm@kvack.org, "Kirill A. Shutemov", "Matthew Wilcox (Oracle)"
Cc: Ryan Roberts, Hugh Dickins, David Hildenbrand, Yang Shi, Miaohe Lin,
 Kefeng Wang, Yu Zhao, John Hubbard, linux-kernel@vger.kernel.org, Zi Yan
Subject: [PATCH RESEND v3 2/9] mm/huge_memory: move folio split common code
 to __folio_split()
Date: Wed, 4 Dec 2024 19:18:32 -0500
Message-ID: <20241205001839.2582020-3-ziy@nvidia.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20241205001839.2582020-1-ziy@nvidia.com>
References: <20241205001839.2582020-1-ziy@nvidia.com>

This is a preparation patch for folio_split(). In the upcoming patch
folio_split() will share folio unmapping and remapping code with
split_huge_page_to_list_to_order(), so move the code to a common function
__folio_split() first.

Signed-off-by: Zi Yan <ziy@nvidia.com>
---
 mm/huge_memory.c | 107 +++++++++++++++++++++++++----------------------
 1 file changed, 57 insertions(+), 50 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 0cde13286bb0..e928082be3b2 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3730,57 +3730,9 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
 	return ret;
 }
 
-/*
- * This function splits a large folio into smaller folios of order @new_order.
- * @page can point to any page of the large folio to split. The split operation
- * does not change the position of @page.
- *
- * Prerequisites:
- *
- * 1) The caller must hold a reference on the @page's owning folio, also known
- * as the large folio.
- *
- * 2) The large folio must be locked.
- *
- * 3) The folio must not be pinned. Any unexpected folio references, including
- * GUP pins, will result in the folio not getting split; instead, the caller
- * will receive an -EAGAIN.
- *
- * 4) @new_order > 1, usually. Splitting to order-1 anonymous folios is not
- * supported for non-file-backed folios, because folio->_deferred_list, which
- * is used by partially mapped folios, is stored in subpage 2, but an order-1
- * folio only has subpages 0 and 1. File-backed order-1 folios are supported,
- * since they do not use _deferred_list.
- *
- * After splitting, the caller's folio reference will be transferred to @page,
- * resulting in a raised refcount of @page after this call. The other pages may
- * be freed if they are not mapped.
- *
- * If @list is null, tail pages will be added to LRU list, otherwise, to @list.
- *
- * Pages in @new_order will inherit the mapping, flags, and so on from the
- * huge page.
- *
- * Returns 0 if the huge page was split successfully.
- *
- * Returns -EAGAIN if the folio has unexpected reference (e.g., GUP) or if
- * the folio was concurrently removed from the page cache.
- *
- * Returns -EBUSY when trying to split the huge zeropage, if the folio is
- * under writeback, if fs-specific folio metadata cannot currently be
- * released, or if some unexpected race happened (e.g., anon VMA disappeared,
- * truncation).
- *
- * Callers should ensure that the order respects the address space mapping
- * min-order if one is set for non-anonymous folios.
- *
- * Returns -EINVAL when trying to split to an order that is incompatible
- * with the folio. Splitting to order 0 is compatible with all folios.
- */
-int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
-		unsigned int new_order)
+static int __folio_split(struct folio *folio, unsigned int new_order,
+		struct page *page, struct list_head *list)
 {
-	struct folio *folio = page_folio(page);
 	struct deferred_split *ds_queue = get_deferred_split_queue(folio);
 	/* reset xarray order to new order after split */
 	XA_STATE_ORDER(xas, &folio->mapping->i_pages, folio->index, new_order);
@@ -3996,6 +3948,61 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
 	return ret;
 }
 
+/*
+ * This function splits a large folio into smaller folios of order @new_order.
+ * @page can point to any page of the large folio to split. The split operation
+ * does not change the position of @page.
+ *
+ * Prerequisites:
+ *
+ * 1) The caller must hold a reference on the @page's owning folio, also known
+ * as the large folio.
+ *
+ * 2) The large folio must be locked.
+ *
+ * 3) The folio must not be pinned. Any unexpected folio references, including
+ * GUP pins, will result in the folio not getting split; instead, the caller
+ * will receive an -EAGAIN.
+ *
+ * 4) @new_order > 1, usually. Splitting to order-1 anonymous folios is not
+ * supported for non-file-backed folios, because folio->_deferred_list, which
+ * is used by partially mapped folios, is stored in subpage 2, but an order-1
+ * folio only has subpages 0 and 1. File-backed order-1 folios are supported,
+ * since they do not use _deferred_list.
+ *
+ * After splitting, the caller's folio reference will be transferred to @page,
+ * resulting in a raised refcount of @page after this call. The other pages may
+ * be freed if they are not mapped.
+ *
+ * If @list is null, tail pages will be added to LRU list, otherwise, to @list.
+ *
+ * Pages in @new_order will inherit the mapping, flags, and so on from the
+ * huge page.
+ *
+ * Returns 0 if the huge page was split successfully.
+ *
+ * Returns -EAGAIN if the folio has unexpected reference (e.g., GUP) or if
+ * the folio was concurrently removed from the page cache.
+ *
+ * Returns -EBUSY when trying to split the huge zeropage, if the folio is
+ * under writeback, if fs-specific folio metadata cannot currently be
+ * released, or if some unexpected race happened (e.g., anon VMA disappeared,
+ * truncation).
+ *
+ * Callers should ensure that the order respects the address space mapping
+ * min-order if one is set for non-anonymous folios.
+ *
+ * Returns -EINVAL when trying to split to an order that is incompatible
+ * with the folio. Splitting to order 0 is compatible with all folios.
+ */
+int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
+		unsigned int new_order)
+{
+	struct folio *folio = page_folio(page);
+
+	return __folio_split(folio, new_order, page, list);
+}
+
 int min_order_for_split(struct folio *folio)
 {
 	if (folio_test_anon(folio))
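
For readers skimming the diff, the end state can be summarized as below. This
is only a simplified sketch extracted from the hunks above, not additional
code in the patch; the body of __folio_split() is elided because it is the
unmap/split/remap code moved out of the old function unchanged:

/*
 * Common helper (declaration shown here; its body is the code moved
 * verbatim by the hunks above).
 */
static int __folio_split(struct folio *folio, unsigned int new_order,
		struct page *page, struct list_head *list);

/* The existing entry point keeps its signature and simply delegates. */
int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
		unsigned int new_order)
{
	struct folio *folio = page_folio(page);

	return __folio_split(folio, new_order, page, list);
}

Existing callers keep using split_huge_page_to_list_to_order() unchanged,
while the upcoming folio_split() can reuse __folio_split() instead of
duplicating the unmapping and remapping logic.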