From patchwork Wed Feb 5 03:14:11 2025
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 13960492
From: Zi Yan
To: linux-mm@kvack.org, Andrew Morton, "Kirill A. Shutemov",
    "Matthew Wilcox (Oracle)"
Cc: Ryan Roberts, Hugh Dickins, David Hildenbrand, Yang Shi, Miaohe Lin,
    Kefeng Wang, Yu Zhao, John Hubbard, Baolin Wang,
    linux-kselftest@vger.kernel.org, linux-kernel@vger.kernel.org, Zi Yan
Subject: [PATCH v6 1/7] mm/huge_memory: add two new (not yet used) functions
 for folio_split()
Date: Tue, 4 Feb 2025 22:14:11 -0500
Message-ID: <20250205031417.1771278-2-ziy@nvidia.com>
In-Reply-To: <20250205031417.1771278-1-ziy@nvidia.com>
References: <20250205031417.1771278-1-ziy@nvidia.com>

This is a preparation patch; neither of the two added functions is used yet.

The added __split_unmapped_folio() can split a folio whose mapping has been
removed in two manners: 1) uniform split (the existing way), and 2) buddy
allocator like split.

The added __split_folio_to_order() can split a folio into any lower order.
For a uniform split, __split_unmapped_folio() calls it once to split the
given folio to the new order. For a buddy allocator like split,
__split_unmapped_folio() calls it (folio_order - new_order) times, and each
time splits the folio containing the given page to one lower order.

Signed-off-by: Zi Yan
---
 mm/huge_memory.c | 350 ++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 349 insertions(+), 1 deletion(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index de72713b1c45..1948d86ac4ce 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3149,7 +3149,6 @@ static void remap_page(struct folio *folio, unsigned long nr, int flags)
 static void lru_add_page_tail(struct folio *folio, struct page *tail,
 		struct lruvec *lruvec, struct list_head *list)
 {
-	VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
 	VM_BUG_ON_FOLIO(PageLRU(tail), folio);
 	lockdep_assert_held(&lruvec->lru_lock);
 
@@ -3393,6 +3392,355 @@ bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
 			caller_pins;
 }
 
+/*
+ * It splits @folio into @new_order folios and copies the @folio metadata to
+ * all the resulting folios.
+ */
+static int __split_folio_to_order(struct folio *folio, int new_order)
+{
+	int curr_order = folio_order(folio);
+	long nr_pages = folio_nr_pages(folio);
+	long new_nr_pages = 1 << new_order;
+	long index;
+
+	if (curr_order <= new_order)
+		return -EINVAL;
+
+	/*
+	 * Skip the first new_nr_pages, since the new folio from them have all
+	 * the flags from the original folio.
+	 */
+	for (index = new_nr_pages; index < nr_pages; index += new_nr_pages) {
+		struct page *head = &folio->page;
+		struct page *new_head = head + index;
+
+		/*
+		 * Careful: new_folio is not a "real" folio before we cleared PageTail.
+		 * Don't pass it around before clear_compound_head().
+		 */
+		struct folio *new_folio = (struct folio *)new_head;
+
+		VM_BUG_ON_PAGE(atomic_read(&new_head->_mapcount) != -1, new_head);
+
+		/*
+		 * Clone page flags before unfreezing refcount.
+		 *
+		 * After successful get_page_unless_zero() might follow flags change,
+		 * for example lock_page() which set PG_waiters.
+		 *
+		 * Note that for mapped sub-pages of an anonymous THP,
+		 * PG_anon_exclusive has been cleared in unmap_folio() and is stored in
+		 * the migration entry instead from where remap_page() will restore it.
+		 * We can still have PG_anon_exclusive set on effectively unmapped and
+		 * unreferenced sub-pages of an anonymous THP: we can simply drop
+		 * PG_anon_exclusive (-> PG_mappedtodisk) for these here.
+		 */
+		new_head->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
+		new_head->flags |= (head->flags &
+				((1L << PG_referenced) |
+				 (1L << PG_swapbacked) |
+				 (1L << PG_swapcache) |
+				 (1L << PG_mlocked) |
+				 (1L << PG_uptodate) |
+				 (1L << PG_active) |
+				 (1L << PG_workingset) |
+				 (1L << PG_locked) |
+				 (1L << PG_unevictable) |
+#ifdef CONFIG_ARCH_USES_PG_ARCH_2
+				 (1L << PG_arch_2) |
+#endif
+#ifdef CONFIG_ARCH_USES_PG_ARCH_3
+				 (1L << PG_arch_3) |
+#endif
+				 (1L << PG_dirty) |
+				 LRU_GEN_MASK | LRU_REFS_MASK));
+
+		/* ->mapping in first and second tail page is replaced by other uses */
+		VM_BUG_ON_PAGE(new_nr_pages > 2 && new_head->mapping != TAIL_MAPPING,
+			       new_head);
+		new_head->mapping = head->mapping;
+		new_head->index = head->index + index;
+
+		/*
+		 * page->private should not be set in tail pages. Fix up and warn once
+		 * if private is unexpectedly set.
+		 */
+		if (unlikely(new_head->private)) {
+			VM_WARN_ON_ONCE_PAGE(true, new_head);
+			new_head->private = 0;
+		}
+
+		if (folio_test_swapcache(folio))
+			new_folio->swap.val = folio->swap.val + index;
+
+		/* Page flags must be visible before we make the page non-compound. */
+		smp_wmb();
+
+		/*
+		 * Clear PageTail before unfreezing page refcount.
+		 *
+		 * After successful get_page_unless_zero() might follow put_page()
+		 * which needs correct compound_head().
+		 */
+		clear_compound_head(new_head);
+		if (new_order) {
+			prep_compound_page(new_head, new_order);
+			folio_set_large_rmappable(new_folio);
+
+			folio_set_order(folio, new_order);
+		}
+
+		if (folio_test_young(folio))
+			folio_set_young(new_folio);
+		if (folio_test_idle(folio))
+			folio_set_idle(new_folio);
+
+		folio_xchg_last_cpupid(new_folio, folio_last_cpupid(folio));
+	}
+
+	if (!new_order)
+		ClearPageCompound(&folio->page);
+
+	return 0;
+}
+
+/*
+ * It splits an unmapped @folio to lower order smaller folios in two ways.
+ * @folio: the to-be-split folio
+ * @new_order: the smallest order of the after split folios (since buddy
+ *             allocator like split generates folios with orders from @folio's
+ *             order - 1 to new_order).
+ * @page: in buddy allocator like split, the folio containing @page will be
+ *        split until its order becomes @new_order.
+ * @list: the after split folios will be added to @list if it is not NULL,
+ *        otherwise to LRU lists.
+ * @end: the end of the file @folio maps to. -1 if @folio is anonymous memory.
+ * @xas: xa_state pointing to folio->mapping->i_pages and locked by caller
+ * @mapping: @folio->mapping
+ * @uniform_split: if the split is uniform or not (buddy allocator like split)
+ *
+ *
+ * 1. uniform split: the given @folio into multiple @new_order small folios,
+ *    where all small folios have the same order. This is done when
+ *    uniform_split is true.
+ * 2. buddy allocator like split: the given @folio is split into half and one
+ *    of the half (containing the given page) is split into half until the
+ *    given @page's order becomes @new_order. This is done when uniform_split is
+ *    false.
+ *
+ * The high level flow for these two methods are:
+ * 1. uniform split: a single __split_folio_to_order() is called to split the
+ *    @folio into @new_order, then we traverse all the resulting folios one by
+ *    one in PFN ascending order and perform stats, unfreeze, adding to list,
+ *    and file mapping index operations.
+ * 2. buddy allocator like split: in general, folio_order - @new_order calls to
+ *    __split_folio_to_order() are called in the for loop to split the @folio
+ *    to one lower order at a time.
+    The resulting small folios are processed
+    like what is done during the traversal in 1, except the one containing
+    @page, which is split in next for loop.
+ *
+ * After splitting, the caller's folio reference will be transferred to the
+ * folio containing @page. The other folios may be freed if they are not mapped.
+ *
+ * In terms of locking, after splitting,
+ * 1. uniform split leaves @page (or the folio contains it) locked;
+ * 2. buddy allocator like split leaves @folio locked.
+ *
+ *
+ * For !uniform_split, when -ENOMEM is returned, the original folio might be
+ * split. The caller needs to check the input folio.
+ */
+static int __split_unmapped_folio(struct folio *folio, int new_order,
+		struct page *page, struct list_head *list, pgoff_t end,
+		struct xa_state *xas, struct address_space *mapping,
+		bool uniform_split)
+{
+	struct lruvec *lruvec;
+	struct address_space *swap_cache = NULL;
+	struct folio *origin_folio = folio;
+	struct folio *next_folio = folio_next(folio);
+	struct folio *new_folio;
+	struct folio *next;
+	int order = folio_order(folio);
+	int split_order;
+	int start_order = uniform_split ? new_order : order - 1;
+	int nr_dropped = 0;
+	int ret = 0;
+	bool stop_split = false;
+
+	if (folio_test_anon(folio) && folio_test_swapcache(folio)) {
+		/* a swapcache folio can only be uniformly split to order-0 */
+		if (!uniform_split || new_order != 0)
+			return -EINVAL;
+
+		swap_cache = swap_address_space(folio->swap);
+		xa_lock(&swap_cache->i_pages);
+	}
+
+	if (folio_test_anon(folio))
+		mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
+
+	/* lock lru list/PageCompound, ref frozen by page_ref_freeze */
+	lruvec = folio_lruvec_lock(folio);
+
+	folio_clear_has_hwpoisoned(folio);
+
+	/*
+	 * split to new_order one order at a time. For uniform split,
+	 * folio is split to new_order directly.
+	 */
+	for (split_order = start_order;
+	     split_order >= new_order && !stop_split;
+	     split_order--) {
+		int old_order = folio_order(folio);
+		struct folio *release;
+		struct folio *end_folio = folio_next(folio);
+		int status;
+
+		/* order-1 anonymous folio is not supported */
+		if (folio_test_anon(folio) && split_order == 1)
+			continue;
+		if (uniform_split && split_order != new_order)
+			continue;
+
+		if (mapping) {
+			/*
+			 * uniform split has xas_split_alloc() called before
+			 * irq is disabled, since xas_nomem() might not be
+			 * able to allocate enough memory.
+			 */
+			if (uniform_split)
+				xas_split(xas, folio, old_order);
+			else {
+				xas_set_order(xas, folio->index, split_order);
+				xas_split_alloc(xas, folio, folio_order(folio),
+						GFP_NOWAIT);
+				if (xas_error(xas)) {
+					ret = xas_error(xas);
+					stop_split = true;
+					goto after_split;
+				}
+				xas_split(xas, folio, old_order);
+			}
+		}
+
+		/* complete memcg works before add pages to LRU */
+		split_page_memcg(&folio->page, old_order, split_order);
+		split_page_owner(&folio->page, old_order, split_order);
+		pgalloc_tag_split(folio, old_order, split_order);
+
+		status = __split_folio_to_order(folio, split_order);
+
+		if (status < 0) {
+			stop_split = true;
+			ret = -EINVAL;
+		}
+
+after_split:
+		/*
+		 * Iterate through after-split folios and perform related
+		 * operations. But in buddy allocator like split, the folio
+		 * containing the specified page is skipped until its order
+		 * is new_order, since the folio will be worked on in next
+		 * iteration.
+		 */
+		for (release = folio, next = folio_next(folio);
+		     release != end_folio;
+		     release = next, next = folio_next(next)) {
+			/*
+			 * for buddy allocator like split, the folio containing
+			 * page will be split next and should not be released,
+			 * until the folio's order is new_order or stop_split
+			 * is set to true by the above xas_split() failure.
+			 */
+			if (release == page_folio(page)) {
+				folio = release;
+				if (split_order != new_order && !stop_split)
+					continue;
+			}
+			if (folio_test_anon(release)) {
+				mod_mthp_stat(folio_order(release),
+					      MTHP_STAT_NR_ANON, 1);
+			}
+
+			/*
+			 * Unfreeze refcount first. Additional reference from
+			 * page cache.
+			 */
+			folio_ref_unfreeze(release,
+				1 + ((!folio_test_anon(origin_folio) ||
+				     folio_test_swapcache(origin_folio)) ?
+					     folio_nr_pages(release) : 0));
+
+			if (release != origin_folio)
+				lru_add_page_tail(origin_folio, &release->page,
+						lruvec, list);
+
+			/* Some pages can be beyond EOF: drop them from page cache */
+			if (release->index >= end) {
+				if (shmem_mapping(origin_folio->mapping))
+					nr_dropped += folio_nr_pages(release);
+				else if (folio_test_clear_dirty(release))
+					folio_account_cleaned(release,
+						inode_to_wb(origin_folio->mapping->host));
+				__filemap_remove_folio(release, NULL);
+				folio_put(release);
+			} else if (!folio_test_anon(release)) {
+				__xa_store(&origin_folio->mapping->i_pages,
+						release->index, &release->page, 0);
+			} else if (swap_cache) {
+				__xa_store(&swap_cache->i_pages,
+						swap_cache_index(release->swap),
+						&release->page, 0);
+			}
+		}
+	}
+
+	unlock_page_lruvec(lruvec);
+
+	if (folio_test_anon(origin_folio)) {
+		if (folio_test_swapcache(origin_folio))
+			xa_unlock(&swap_cache->i_pages);
+	} else
+		xa_unlock(&mapping->i_pages);
+
+	/* Caller disabled irqs, so they are still disabled here */
+	local_irq_enable();
+
+	if (nr_dropped)
+		shmem_uncharge(mapping->host, nr_dropped);
+
+	remap_page(origin_folio, 1 << order,
+		   folio_test_anon(origin_folio) ?
+			RMP_USE_SHARED_ZEROPAGE : 0);
+
+	/*
+	 * At this point, folio should contain the specified page.
+	 * For uniform split, it is left for caller to unlock.
+	 * For buddy allocator like split, the first after-split folio is left
+	 * for caller to unlock.
+	 */
+	for (new_folio = origin_folio, next = folio_next(origin_folio);
+	     new_folio != next_folio;
+	     new_folio = next, next = folio_next(next)) {
+		if (uniform_split && new_folio == folio)
+			continue;
+		if (!uniform_split && new_folio == origin_folio)
+			continue;
+
+		folio_unlock(new_folio);
+		/*
+		 * Subpages may be freed if there wasn't any mapping
+		 * like if add_to_swap() is running on a lru page that
+		 * had its mapping zapped. And freeing these pages
+		 * requires taking the lru_lock so we do the put_page
+		 * of the tail pages after the split is complete.
+		 */
+		free_page_and_swap_cache(&new_folio->page);
+	}
+	return ret;
+}
+
 /*
  * This function splits a large folio into smaller folios of order @new_order.
  * @page can point to any page of the large folio to split. The split operation
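To make the two split manners concrete, here is a small standalone sketch
(plain userspace C, not kernel code and not part of this patch) that only
counts how many folios each mode would leave behind; the numbers assume the
same rules as above, namely one __split_folio_to_order() call for a uniform
split and one call per order step for the buddy allocator like split, with
order-1 skipped for anonymous folios:

	#include <stdio.h>
	#include <stdbool.h>

	/* one __split_folio_to_order() call: every piece ends up at new_order */
	static int uniform_pieces(int order, int new_order)
	{
		return 1 << (order - new_order);
	}

	/*
	 * buddy allocator like split: the piece containing the target page is
	 * cut to one lower order per iteration; skipping order-1 for anon means
	 * the last order-2 piece is cut straight into four order-0 pieces.
	 */
	static int buddy_like_pieces(int order, int new_order, bool anon)
	{
		int cur = order, pieces = 1;	/* 1 = the piece keeping the target page */
		int split_order;

		for (split_order = order - 1; split_order >= new_order; split_order--) {
			if (anon && split_order == 1)
				continue;
			pieces += (1 << (cur - split_order)) - 1;	/* new siblings */
			cur = split_order;
		}
		return pieces;
	}

	int main(void)
	{
		printf("uniform    order-9 -> order-0: %d folios\n", uniform_pieces(9, 0));
		printf("buddy-like order-9 -> order-0 (anon): %d folios\n",
		       buddy_like_pieces(9, 0, true));
		printf("buddy-like order-9 -> order-0 (file): %d folios\n",
		       buddy_like_pieces(9, 0, false));
		return 0;
	}

With these assumptions, a uniform order-9 to order-0 split always yields 512
folios, while the buddy allocator like split yields 11 folios for anonymous
memory and 10 for pagecache.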
From patchwork Wed Feb 5 03:14:12 2025
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 13960494
From: Zi Yan
To: linux-mm@kvack.org, Andrew Morton, "Kirill A. Shutemov",
    "Matthew Wilcox (Oracle)"
Cc: Ryan Roberts, Hugh Dickins, David Hildenbrand, Yang Shi, Miaohe Lin,
    Kefeng Wang, Yu Zhao, John Hubbard, Baolin Wang,
    linux-kselftest@vger.kernel.org, linux-kernel@vger.kernel.org, Zi Yan
Subject: [PATCH v6 2/7] mm/huge_memory: move folio split common code to
 __folio_split()
Date: Tue, 4 Feb 2025 22:14:12 -0500
Message-ID: <20250205031417.1771278-3-ziy@nvidia.com>
In-Reply-To: <20250205031417.1771278-1-ziy@nvidia.com>
References: <20250205031417.1771278-1-ziy@nvidia.com>

This is a preparation patch for folio_split(). In the upcoming patch,
folio_split() will share the folio unmapping and remapping code with
split_huge_page_to_list_to_order(), so move that code to a common function
__folio_split() first.

Signed-off-by: Zi Yan
---
 mm/huge_memory.c | 107 +++++++++++++++++++++++++----------------------
 1 file changed, 57 insertions(+), 50 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 1948d86ac4ce..848bf297e307 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3741,57 +3741,9 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
 	return ret;
 }
 
-/*
- * This function splits a large folio into smaller folios of order @new_order.
- * @page can point to any page of the large folio to split. The split operation
- * does not change the position of @page.
- *
- * Prerequisites:
- *
- * 1) The caller must hold a reference on the @page's owning folio, also known
- * as the large folio.
- *
- * 2) The large folio must be locked.
- *
- * 3) The folio must not be pinned. Any unexpected folio references, including
- * GUP pins, will result in the folio not getting split; instead, the caller
- * will receive an -EAGAIN.
- *
- * 4) @new_order > 1, usually. Splitting to order-1 anonymous folios is not
- * supported for non-file-backed folios, because folio->_deferred_list, which
- * is used by partially mapped folios, is stored in subpage 2, but an order-1
- * folio only has subpages 0 and 1. File-backed order-1 folios are supported,
- * since they do not use _deferred_list.
- *
- * After splitting, the caller's folio reference will be transferred to @page,
- * resulting in a raised refcount of @page after this call. The other pages may
- * be freed if they are not mapped.
- *
- * If @list is null, tail pages will be added to LRU list, otherwise, to @list.
- *
- * Pages in @new_order will inherit the mapping, flags, and so on from the
- * huge page.
- *
- * Returns 0 if the huge page was split successfully.
- *
- * Returns -EAGAIN if the folio has unexpected reference (e.g., GUP) or if
- * the folio was concurrently removed from the page cache.
- *
- * Returns -EBUSY when trying to split the huge zeropage, if the folio is
- * under writeback, if fs-specific folio metadata cannot currently be
- * released, or if some unexpected race happened (e.g., anon VMA disappeared,
- * truncation).
- *
- * Callers should ensure that the order respects the address space mapping
- * min-order if one is set for non-anonymous folios.
- *
- * Returns -EINVAL when trying to split to an order that is incompatible
- * with the folio. Splitting to order 0 is compatible with all folios.
- */
-int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
-		unsigned int new_order)
+static int __folio_split(struct folio *folio, unsigned int new_order,
+		struct page *page, struct list_head *list)
 {
-	struct folio *folio = page_folio(page);
 	struct deferred_split *ds_queue = get_deferred_split_queue(folio);
 	/* reset xarray order to new order after split */
 	XA_STATE_ORDER(xas, &folio->mapping->i_pages, folio->index, new_order);
@@ -4001,6 +3953,61 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
 	return ret;
 }
 
+/*
+ * This function splits a large folio into smaller folios of order @new_order.
+ * @page can point to any page of the large folio to split. The split operation
+ * does not change the position of @page.
+ *
+ * Prerequisites:
+ *
+ * 1) The caller must hold a reference on the @page's owning folio, also known
+ * as the large folio.
+ *
+ * 2) The large folio must be locked.
+ *
+ * 3) The folio must not be pinned. Any unexpected folio references, including
+ * GUP pins, will result in the folio not getting split; instead, the caller
+ * will receive an -EAGAIN.
+ *
+ * 4) @new_order > 1, usually. Splitting to order-1 anonymous folios is not
+ * supported for non-file-backed folios, because folio->_deferred_list, which
+ * is used by partially mapped folios, is stored in subpage 2, but an order-1
+ * folio only has subpages 0 and 1. File-backed order-1 folios are supported,
+ * since they do not use _deferred_list.
+ *
+ * After splitting, the caller's folio reference will be transferred to @page,
+ * resulting in a raised refcount of @page after this call. The other pages may
+ * be freed if they are not mapped.
+ *
+ * If @list is null, tail pages will be added to LRU list, otherwise, to @list.
+ *
+ * Pages in @new_order will inherit the mapping, flags, and so on from the
+ * huge page.
+ *
+ * Returns 0 if the huge page was split successfully.
+ *
+ * Returns -EAGAIN if the folio has unexpected reference (e.g., GUP) or if
+ * the folio was concurrently removed from the page cache.
+ *
+ * Returns -EBUSY when trying to split the huge zeropage, if the folio is
+ * under writeback, if fs-specific folio metadata cannot currently be
+ * released, or if some unexpected race happened (e.g., anon VMA disappeared,
+ * truncation).
+ *
+ * Callers should ensure that the order respects the address space mapping
+ * min-order if one is set for non-anonymous folios.
+ *
+ * Returns -EINVAL when trying to split to an order that is incompatible
+ * with the folio. Splitting to order 0 is compatible with all folios.
+ */
+int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
+		unsigned int new_order)
+{
+	struct folio *folio = page_folio(page);
+
+	return __folio_split(folio, new_order, page, list);
+}
+
 int min_order_for_split(struct folio *folio)
 {
 	if (folio_test_anon(folio))
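As a usage illustration (a hedged sketch only, not code from this series), a
caller of split_huge_page_to_list_to_order() still follows the prerequisites
documented in the comment block above after this refactor: it already holds a
reference on the folio, takes the folio lock, and treats -EAGAIN as
"unexpected extra references, try again later". On success, the after-split
folio containing @page is the one left locked. Assuming those conditions, a
minimal call site could look like:

	/*
	 * Illustrative only; the caller is assumed to already hold a folio
	 * reference. The helper name try_split_to_order_example() is made up
	 * for this sketch.
	 */
	static int try_split_to_order_example(struct page *page,
					      unsigned int new_order)
	{
		struct folio *folio = page_folio(page);
		int ret;

		if (!folio_trylock(folio))
			return -EAGAIN;

		ret = split_huge_page_to_list_to_order(page, NULL, new_order);

		/*
		 * On failure this is still the original folio; on success it
		 * is the after-split folio containing @page, which the split
		 * left locked for us.
		 */
		folio_unlock(page_folio(page));
		return ret;
	}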
From patchwork Wed Feb 5 03:14:13 2025
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 13960495
From: Zi Yan
To: linux-mm@kvack.org, Andrew Morton, "Kirill A. Shutemov",
    "Matthew Wilcox (Oracle)"
Cc: Ryan Roberts, Hugh Dickins, David Hildenbrand, Yang Shi, Miaohe Lin,
    Kefeng Wang, Yu Zhao, John Hubbard, Baolin Wang,
    linux-kselftest@vger.kernel.org, linux-kernel@vger.kernel.org, Zi Yan
Subject: [PATCH v6 3/7] mm/huge_memory: add buddy allocator like folio_split()
Date: Tue, 4 Feb 2025 22:14:13 -0500
Message-ID: <20250205031417.1771278-4-ziy@nvidia.com>
In-Reply-To: <20250205031417.1771278-1-ziy@nvidia.com>
References: <20250205031417.1771278-1-ziy@nvidia.com>

folio_split() splits a large folio in the same way as the buddy allocator
splits a large free page for allocation. The purpose is to minimize the
number of folios after the split. For example, if a user wants to free the
3rd subpage in an order-9 folio, folio_split() will split the order-9 folio
as:

O-0, O-0, O-0, O-0, O-2, O-3, O-4, O-5, O-6, O-7, O-8 if it is anon,
O-1, O-0, O-0, O-2, O-3, O-4, O-5, O-6, O-7, O-8 if it is pagecache,

since an anon folio does not support order-1 yet. This generates fewer
folios than the existing page split approach, which splits the order-9 folio
into 512 order-0 folios.

folio_split() and the existing split_huge_page_to_list_to_order() share the
folio unmapping and remapping code in __folio_split() and the common backend
split code in __split_unmapped_folio(), using the uniform_split variable to
distinguish their operations.

uniform_split_supported() and non_uniform_split_supported() are added to
factor out the check code and will be used outside __folio_split() in the
following commit.

Signed-off-by: Zi Yan
---
 mm/huge_memory.c | 134 ++++++++++++++++++++++++++++++++++-------------
 1 file changed, 97 insertions(+), 37 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 848bf297e307..20d7be07cd7b 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3741,12 +3741,68 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
 	return ret;
 }
 
+static bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
+		bool warns)
+{
+	/* order-1 is not supported for anonymous THP. */
+	if (folio_test_anon(folio) && new_order == 1) {
+		VM_WARN_ONCE(warns, "Cannot split to order-1 folio");
+		return false;
+	}
+
+	/*
+	 * No split if the file system does not support large folio.
+	 * Note that we might still have THPs in such mappings due to
+	 * CONFIG_READ_ONLY_THP_FOR_FS. But in that case, the mapping
+	 * does not actually support large folios properly.
+	 */
+	if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
+	    !mapping_large_folio_support(folio->mapping)) {
+		VM_WARN_ONCE(warns,
+			"Cannot split file folio to non-0 order");
+		return false;
+	}
+
+	/* Only swapping a whole PMD-mapped folio is supported */
+	if (folio_test_swapcache(folio)) {
+		VM_WARN_ONCE(warns,
+			"Cannot split swapcache folio to non-0 order");
+		return false;
+	}
+
+	return true;
+}
+
+/* See comments in non_uniform_split_supported() */
+static bool uniform_split_supported(struct folio *folio, unsigned int new_order,
+		bool warns)
+{
+	if (folio_test_anon(folio) && new_order == 1) {
+		VM_WARN_ONCE(warns, "Cannot split to order-1 folio");
+		return false;
+	}
+
+	if (new_order) {
+		if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
+		    !mapping_large_folio_support(folio->mapping)) {
+			VM_WARN_ONCE(warns,
+				"Cannot split file folio to non-0 order");
+			return false;
+		}
+		if (folio_test_swapcache(folio)) {
+			VM_WARN_ONCE(warns,
+				"Cannot split swapcache folio to non-0 order");
+			return false;
+		}
+	}
+	return true;
+}
+
 static int __folio_split(struct folio *folio, unsigned int new_order,
-		struct page *page, struct list_head *list)
+		struct page *page, struct list_head *list, bool uniform_split)
 {
 	struct deferred_split *ds_queue = get_deferred_split_queue(folio);
-	/* reset xarray order to new order after split */
-	XA_STATE_ORDER(xas, &folio->mapping->i_pages, folio->index, new_order);
+	XA_STATE(xas, &folio->mapping->i_pages, folio->index);
 	bool is_anon = folio_test_anon(folio);
 	struct address_space *mapping = NULL;
 	struct anon_vma *anon_vma = NULL;
@@ -3761,29 +3817,11 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 	if (new_order >= folio_order(folio))
 		return -EINVAL;
 
-	if (is_anon) {
-		/* order-1 is not supported for anonymous THP. */
-		if (new_order == 1) {
-			VM_WARN_ONCE(1, "Cannot split to order-1 folio");
-			return -EINVAL;
-		}
-	} else if (new_order) {
-		/*
-		 * No split if the file system does not support large folio.
-		 * Note that we might still have THPs in such mappings due to
-		 * CONFIG_READ_ONLY_THP_FOR_FS. But in that case, the mapping
-		 * does not actually support large folios properly.
-		 */
-		if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
-		    !mapping_large_folio_support(folio->mapping)) {
-			VM_WARN_ONCE(1,
-				"Cannot split file folio to non-0 order");
-			return -EINVAL;
-		}
-	}
+	if (uniform_split && !uniform_split_supported(folio, new_order, true))
+		return -EINVAL;
 
-	/* Only swapping a whole PMD-mapped folio is supported */
-	if (folio_test_swapcache(folio) && new_order)
+	if (!uniform_split &&
+	    !non_uniform_split_supported(folio, new_order, true))
 		return -EINVAL;
 
 	is_hzp = is_huge_zero_folio(folio);
@@ -3840,10 +3878,13 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 			goto out;
 		}
 
-		xas_split_alloc(&xas, folio, folio_order(folio), gfp);
-		if (xas_error(&xas)) {
-			ret = xas_error(&xas);
-			goto out;
+		if (uniform_split) {
+			xas_set_order(&xas, folio->index, new_order);
+			xas_split_alloc(&xas, folio, folio_order(folio), gfp);
+			if (xas_error(&xas)) {
+				ret = xas_error(&xas);
+				goto out;
+			}
 		}
 
 		anon_vma = NULL;
@@ -3908,7 +3949,6 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 		if (mapping) {
 			int nr = folio_nr_pages(folio);
 
-			xas_split(&xas, folio, folio_order(folio));
 			if (folio_test_pmd_mappable(folio) &&
 			    new_order < HPAGE_PMD_ORDER) {
 				if (folio_test_swapbacked(folio)) {
@@ -3922,12 +3962,8 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 			}
 		}
 
-		if (is_anon) {
-			mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
-			mod_mthp_stat(new_order, MTHP_STAT_NR_ANON, 1 << (order - new_order));
-		}
-		__split_huge_page(page, list, end, new_order);
-		ret = 0;
+		ret = __split_unmapped_folio(page_folio(page), new_order,
+				page, list, end, &xas, mapping, uniform_split);
 	} else {
 		spin_unlock(&ds_queue->split_queue_lock);
 fail:
@@ -4005,7 +4041,31 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
 {
 	struct folio *folio = page_folio(page);
 
-	return __folio_split(folio, new_order, page, list);
+	return __folio_split(folio, new_order, page, list, true);
+}
+
+/*
+ * folio_split: split a folio at offset_in_new_order to a new_order folio
+ * @folio: folio to split
+ * @new_order: the order of the new folio
+ * @page: a page within the new folio
+ *
+ * return: 0: successful, <0 failed (if -ENOMEM is returned, @folio might be
+ * split but not to @new_order, the caller needs to check)
+ *
+ * Split a folio at offset_in_new_order to a new_order folio, leave the
+ * remaining subpages of the original folio as large as possible. For example,
+ * split an order-9 folio at its third order-3 subpages to an order-3 folio.
+ * There are 2^6=64 order-3 subpages in an order-9 folio and the result will be
+ * a set of folios with different order and the new folio is in bracket:
+ * [order-4, {order-3}, order-3, order-5, order-6, order-7, order-8].
+ *
+ * After split, folio is left locked for caller.
+ */
+int folio_split(struct folio *folio, unsigned int new_order,
+		struct page *page, struct list_head *list)
+{
+	return __folio_split(folio, new_order, page, list, false);
 }
 
 int min_order_for_split(struct folio *folio)
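To check the decompositions quoted in the commit message and in the
folio_split() comment, here is a small userspace model (illustrative C, not
kernel code and not part of this patch) of the buddy allocator like split.
It assumes the folio's pages are numbered from 0 and that the piece holding
the target page is cut to one lower order per step, skipping order-1 in the
anonymous case:

	#include <stdio.h>
	#include <stdlib.h>
	#include <stdbool.h>

	struct piece { long start; int order; };

	static int cmp(const void *a, const void *b)
	{
		const struct piece *pa = a, *pb = b;

		return (pa->start > pb->start) - (pa->start < pb->start);
	}

	/* model of the non-uniform split; prints surviving pieces in PFN order */
	static void model_split(int order, long target, int new_order, bool anon)
	{
		struct piece out[64];
		int n = 0, cur_order = order, split_order, i;
		long cur_start = 0;

		for (split_order = order - 1; split_order >= new_order; split_order--) {
			long keep, s;

			if (anon && split_order == 1)	/* order-1 anon unsupported */
				continue;
			keep = target & ~((1L << split_order) - 1);
			for (s = cur_start; s < cur_start + (1L << cur_order);
			     s += 1L << split_order)
				if (s != keep)	/* siblings left at split_order */
					out[n++] = (struct piece){ s, split_order };
			cur_order = split_order;
			cur_start = keep;
		}
		out[n++] = (struct piece){ cur_start, cur_order };	/* target piece */

		qsort(out, n, sizeof(out[0]), cmp);
		for (i = 0; i < n; i++)
			printf("O-%d ", out[i].order);
		printf("\n");
	}

	int main(void)
	{
		model_split(9, 2, 0, true);	/* O-0 O-0 O-0 O-0 O-2 O-3 ... O-8 */
		model_split(9, 2, 0, false);	/* O-1 O-0 O-0 O-2 O-3 ... O-8 */
		model_split(9, 16, 3, false);	/* O-4 {O-3} O-3 O-5 O-6 O-7 O-8 */
		return 0;
	}

The first two calls reproduce the anon and pagecache lists from the commit
message for freeing the 3rd subpage of an order-9 folio, and the third call
reproduces the [order-4, {order-3}, order-3, order-5, order-6, order-7,
order-8] example from the folio_split() comment.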
From patchwork Wed Feb 5 03:14:14 2025
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 13960496
From: Zi Yan
To: linux-mm@kvack.org, Andrew Morton, "Kirill A . Shutemov", "Matthew Wilcox (Oracle)"
Cc: Ryan Roberts, Hugh Dickins, David Hildenbrand, Yang Shi, Miaohe Lin, Kefeng Wang, Yu Zhao, John Hubbard, Baolin Wang, linux-kselftest@vger.kernel.org, linux-kernel@vger.kernel.org, Zi Yan
Subject: [PATCH v6 4/7] mm/huge_memory: remove the old, unused __split_huge_page()
Date: Tue, 4 Feb 2025 22:14:14 -0500
Message-ID: <20250205031417.1771278-5-ziy@nvidia.com>
In-Reply-To: <20250205031417.1771278-1-ziy@nvidia.com>
References: <20250205031417.1771278-1-ziy@nvidia.com>
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Feb 2025 03:14:40.5993 (UTC) X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-MailboxType: HOSTED X-MS-Exchange-CrossTenant-UserPrincipalName: 4UNldyv358+ZOAd0yjLhMFGzUvSRS2AOyo2S471MNz5vz7rNQbV9vUuAd7XBuGOV X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW6PR12MB8865 Now split_huge_page_to_list_to_order() uses the new backend split code in __folio_split_without_mapping(), the old __split_huge_page() and __split_huge_page_tail() can be removed. Signed-off-by: Zi Yan --- mm/huge_memory.c | 207 ----------------------------------------------- 1 file changed, 207 deletions(-) diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 20d7be07cd7b..36594eef5c24 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -3168,213 +3168,6 @@ static void lru_add_page_tail(struct folio *folio, struct page *tail, } } -static void __split_huge_page_tail(struct folio *folio, int tail, - struct lruvec *lruvec, struct list_head *list, - unsigned int new_order) -{ - struct page *head = &folio->page; - struct page *page_tail = head + tail; - /* - * Careful: new_folio is not a "real" folio before we cleared PageTail. - * Don't pass it around before clear_compound_head(). - */ - struct folio *new_folio = (struct folio *)page_tail; - - VM_BUG_ON_PAGE(atomic_read(&page_tail->_mapcount) != -1, page_tail); - - /* - * Clone page flags before unfreezing refcount. - * - * After successful get_page_unless_zero() might follow flags change, - * for example lock_page() which set PG_waiters. - * - * Note that for mapped sub-pages of an anonymous THP, - * PG_anon_exclusive has been cleared in unmap_folio() and is stored in - * the migration entry instead from where remap_page() will restore it. - * We can still have PG_anon_exclusive set on effectively unmapped and - * unreferenced sub-pages of an anonymous THP: we can simply drop - * PG_anon_exclusive (-> PG_mappedtodisk) for these here. - */ - page_tail->flags &= ~PAGE_FLAGS_CHECK_AT_PREP; - page_tail->flags |= (head->flags & - ((1L << PG_referenced) | - (1L << PG_swapbacked) | - (1L << PG_swapcache) | - (1L << PG_mlocked) | - (1L << PG_uptodate) | - (1L << PG_active) | - (1L << PG_workingset) | - (1L << PG_locked) | - (1L << PG_unevictable) | -#ifdef CONFIG_ARCH_USES_PG_ARCH_2 - (1L << PG_arch_2) | -#endif -#ifdef CONFIG_ARCH_USES_PG_ARCH_3 - (1L << PG_arch_3) | -#endif - (1L << PG_dirty) | - LRU_GEN_MASK | LRU_REFS_MASK)); - - /* ->mapping in first and second tail page is replaced by other uses */ - VM_BUG_ON_PAGE(tail > 2 && page_tail->mapping != TAIL_MAPPING, - page_tail); - new_folio->mapping = folio->mapping; - new_folio->index = folio->index + tail; - - /* - * page->private should not be set in tail pages. Fix up and warn once - * if private is unexpectedly set. - */ - if (unlikely(page_tail->private)) { - VM_WARN_ON_ONCE_PAGE(true, page_tail); - page_tail->private = 0; - } - if (folio_test_swapcache(folio)) - new_folio->swap.val = folio->swap.val + tail; - - /* Page flags must be visible before we make the page non-compound. */ - smp_wmb(); - - /* - * Clear PageTail before unfreezing page refcount. - * - * After successful get_page_unless_zero() might follow put_page() - * which needs correct compound_head(). - */ - clear_compound_head(page_tail); - if (new_order) { - prep_compound_page(page_tail, new_order); - folio_set_large_rmappable(new_folio); - } - - /* Finally unfreeze refcount. 
Additional reference from page cache. */ - page_ref_unfreeze(page_tail, - 1 + ((!folio_test_anon(folio) || folio_test_swapcache(folio)) ? - folio_nr_pages(new_folio) : 0)); - - if (folio_test_young(folio)) - folio_set_young(new_folio); - if (folio_test_idle(folio)) - folio_set_idle(new_folio); - - folio_xchg_last_cpupid(new_folio, folio_last_cpupid(folio)); - - /* - * always add to the tail because some iterators expect new - * pages to show after the currently processed elements - e.g. - * migrate_pages - */ - lru_add_page_tail(folio, page_tail, lruvec, list); -} - -static void __split_huge_page(struct page *page, struct list_head *list, - pgoff_t end, unsigned int new_order) -{ - struct folio *folio = page_folio(page); - struct page *head = &folio->page; - struct lruvec *lruvec; - struct address_space *swap_cache = NULL; - unsigned long offset = 0; - int i, nr_dropped = 0; - unsigned int new_nr = 1 << new_order; - int order = folio_order(folio); - unsigned int nr = 1 << order; - - /* complete memcg works before add pages to LRU */ - split_page_memcg(head, order, new_order); - - if (folio_test_anon(folio) && folio_test_swapcache(folio)) { - offset = swap_cache_index(folio->swap); - swap_cache = swap_address_space(folio->swap); - xa_lock(&swap_cache->i_pages); - } - - /* lock lru list/PageCompound, ref frozen by page_ref_freeze */ - lruvec = folio_lruvec_lock(folio); - - folio_clear_has_hwpoisoned(folio); - - for (i = nr - new_nr; i >= new_nr; i -= new_nr) { - struct folio *tail; - __split_huge_page_tail(folio, i, lruvec, list, new_order); - tail = page_folio(head + i); - /* Some pages can be beyond EOF: drop them from page cache */ - if (tail->index >= end) { - if (shmem_mapping(folio->mapping)) - nr_dropped += new_nr; - else if (folio_test_clear_dirty(tail)) - folio_account_cleaned(tail, - inode_to_wb(folio->mapping->host)); - __filemap_remove_folio(tail, NULL); - folio_put(tail); - } else if (!folio_test_anon(folio)) { - __xa_store(&folio->mapping->i_pages, tail->index, - tail, 0); - } else if (swap_cache) { - __xa_store(&swap_cache->i_pages, offset + i, - tail, 0); - } - } - - if (!new_order) - ClearPageCompound(head); - else { - struct folio *new_folio = (struct folio *)head; - - folio_set_order(new_folio, new_order); - } - unlock_page_lruvec(lruvec); - /* Caller disabled irqs, so they are still disabled here */ - - split_page_owner(head, order, new_order); - pgalloc_tag_split(folio, order, new_order); - - /* See comment in __split_huge_page_tail() */ - if (folio_test_anon(folio)) { - /* Additional pin to swap cache */ - if (folio_test_swapcache(folio)) { - folio_ref_add(folio, 1 + new_nr); - xa_unlock(&swap_cache->i_pages); - } else { - folio_ref_inc(folio); - } - } else { - /* Additional pin to page cache */ - folio_ref_add(folio, 1 + new_nr); - xa_unlock(&folio->mapping->i_pages); - } - local_irq_enable(); - - if (nr_dropped) - shmem_uncharge(folio->mapping->host, nr_dropped); - remap_page(folio, nr, PageAnon(head) ? RMP_USE_SHARED_ZEROPAGE : 0); - - /* - * set page to its compound_head when split to non order-0 pages, so - * we can skip unlocking it below, since PG_locked is transferred to - * the compound_head of the page and the caller will unlock it. 
- */ - if (new_order) - page = compound_head(page); - - for (i = 0; i < nr; i += new_nr) { - struct page *subpage = head + i; - struct folio *new_folio = page_folio(subpage); - if (subpage == page) - continue; - folio_unlock(new_folio); - - /* - * Subpages may be freed if there wasn't any mapping - * like if add_to_swap() is running on a lru page that - * had its mapping zapped. And freeing these pages - * requires taking the lru_lock so we do the put_page - * of the tail pages after the split is complete. - */ - free_page_and_swap_cache(subpage); - } -} - /* Racy check whether the huge page can be split */ bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins) {
From patchwork Wed Feb 5 03:14:15 2025
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 13960497
From: Zi Yan
To: linux-mm@kvack.org, Andrew Morton, "Kirill A . Shutemov", "Matthew Wilcox (Oracle)"
Cc: Ryan Roberts, Hugh Dickins, David Hildenbrand, Yang Shi, Miaohe Lin, Kefeng Wang, Yu Zhao, John Hubbard, Baolin Wang, linux-kselftest@vger.kernel.org, linux-kernel@vger.kernel.org, Zi Yan
Subject: [PATCH v6 5/7] mm/huge_memory: add folio_split() to debugfs testing interface.
Date: Tue, 4 Feb 2025 22:14:15 -0500
Message-ID: <20250205031417.1771278-6-ziy@nvidia.com>
In-Reply-To: <20250205031417.1771278-1-ziy@nvidia.com>
References: <20250205031417.1771278-1-ziy@nvidia.com>
This allows testing folio_split() by specifying an additional in-folio page offset parameter to the split_huge_page debugfs interface.
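With the extended format, a non-uniform split can be requested from user space roughly as below. This is a hedged sketch: the debugfs path is the one the split_huge_page selftest writes to, the field order follows the sscanf() strings in the diff that follows, and the address range, order, and in-folio offset values are made up for illustration:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Same debugfs file the split_huge_page selftest uses. */
	FILE *f = fopen("/sys/kernel/debug/split_huge_pages", "w");

	if (!f) {
		perror("fopen");
		return 1;
	}
	/*
	 * <pid>,<vaddr_start>,<vaddr_end>,<new_order>,<in_folio_offset>
	 * The optional fifth field asks for folio_split() at that in-folio
	 * page offset; leaving it out keeps the old uniform split behavior.
	 */
	fprintf(f, "%d,0x%lx,0x%lx,%d,%d", getpid(),
		0x700000000000UL, 0x700000200000UL, 3, 16);
	fclose(f);
	return 0;
}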
Signed-off-by: Zi Yan --- mm/huge_memory.c | 47 ++++++++++++++++++++++++++++++++++------------- 1 file changed, 34 insertions(+), 13 deletions(-) diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 36594eef5c24..dad6819901a8 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -4180,7 +4180,8 @@ static inline bool vma_not_suitable_for_thp_split(struct vm_area_struct *vma) } static int split_huge_pages_pid(int pid, unsigned long vaddr_start, - unsigned long vaddr_end, unsigned int new_order) + unsigned long vaddr_end, unsigned int new_order, + long in_folio_offset) { int ret = 0; struct task_struct *task; @@ -4264,8 +4265,16 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start, if (!folio_test_anon(folio) && folio->mapping != mapping) goto unlock; - if (!split_folio_to_order(folio, target_order)) - split++; + if (in_folio_offset < 0 || + in_folio_offset >= folio_nr_pages(folio)) { + if (!split_folio_to_order(folio, target_order)) + split++; + } else { + struct page *split_at = folio_page(folio, + in_folio_offset); + if (!folio_split(folio, target_order, split_at, NULL)) + split++; + } unlock: @@ -4288,7 +4297,8 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start, } static int split_huge_pages_in_file(const char *file_path, pgoff_t off_start, - pgoff_t off_end, unsigned int new_order) + pgoff_t off_end, unsigned int new_order, + long in_folio_offset) { struct filename *file; struct file *candidate; @@ -4337,8 +4347,15 @@ static int split_huge_pages_in_file(const char *file_path, pgoff_t off_start, if (folio->mapping != mapping) goto unlock; - if (!split_folio_to_order(folio, target_order)) - split++; + if (in_folio_offset < 0 || in_folio_offset >= nr_pages) { + if (!split_folio_to_order(folio, target_order)) + split++; + } else { + struct page *split_at = folio_page(folio, + in_folio_offset); + if (!folio_split(folio, target_order, split_at, NULL)) + split++; + } unlock: folio_unlock(folio); @@ -4371,6 +4388,7 @@ static ssize_t split_huge_pages_write(struct file *file, const char __user *buf, int pid; unsigned long vaddr_start, vaddr_end; unsigned int new_order = 0; + long in_folio_offset = -1; ret = mutex_lock_interruptible(&split_debug_mutex); if (ret) @@ -4399,30 +4417,33 @@ static ssize_t split_huge_pages_write(struct file *file, const char __user *buf, goto out; } - ret = sscanf(tok_buf, "0x%lx,0x%lx,%d", &off_start, - &off_end, &new_order); - if (ret != 2 && ret != 3) { + ret = sscanf(tok_buf, "0x%lx,0x%lx,%d,%ld", &off_start, &off_end, + &new_order, &in_folio_offset); + if (ret != 2 && ret != 3 && ret != 4) { ret = -EINVAL; goto out; } - ret = split_huge_pages_in_file(file_path, off_start, off_end, new_order); + ret = split_huge_pages_in_file(file_path, off_start, off_end, + new_order, in_folio_offset); if (!ret) ret = input_len; goto out; } - ret = sscanf(input_buf, "%d,0x%lx,0x%lx,%d", &pid, &vaddr_start, &vaddr_end, &new_order); + ret = sscanf(input_buf, "%d,0x%lx,0x%lx,%d,%ld", &pid, &vaddr_start, + &vaddr_end, &new_order, &in_folio_offset); if (ret == 1 && pid == 1) { split_huge_pages_all(); ret = strlen(input_buf); goto out; - } else if (ret != 3 && ret != 4) { + } else if (ret != 3 && ret != 4 && ret != 5) { ret = -EINVAL; goto out; } - ret = split_huge_pages_pid(pid, vaddr_start, vaddr_end, new_order); + ret = split_huge_pages_pid(pid, vaddr_start, vaddr_end, new_order, + in_folio_offset); if (!ret) ret = strlen(input_buf); out: From patchwork Wed Feb 5 03:14:16 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 13960498
From: Zi Yan
To: linux-mm@kvack.org, Andrew Morton, "Kirill A . Shutemov", "Matthew Wilcox (Oracle)"
Cc: Ryan Roberts, Hugh Dickins, David Hildenbrand, Yang Shi, Miaohe Lin, Kefeng Wang, Yu Zhao, John Hubbard, Baolin Wang, linux-kselftest@vger.kernel.org, linux-kernel@vger.kernel.org, Zi Yan
Subject: [PATCH v6 6/7] mm/truncate: use buddy allocator like folio split for truncate operation.
Date: Tue, 4 Feb 2025 22:14:16 -0500
Message-ID: <20250205031417.1771278-7-ziy@nvidia.com>
In-Reply-To: <20250205031417.1771278-1-ziy@nvidia.com>
References: <20250205031417.1771278-1-ziy@nvidia.com>
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Feb 2025 03:14:43.3875 (UTC) X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-MailboxType: HOSTED X-MS-Exchange-CrossTenant-UserPrincipalName: m0ogFDV977PKnR+2R0zea/X7f49H+enPGijuLjjgpUGKos2QTu5fqSDrV8dTBMvr X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW6PR12MB8865 Instead of splitting the large folio uniformly during truncation, try to use buddy allocator like split at the start of truncation range to minimize the number of resulting folios if it is supported. try_folio_split() is introduced to use folio_split() if supported and fall back to uniform split otherwise. For example, to truncate a order-4 folio [0, 1, 2, 3, 4, 5, ..., 15] between [3, 10] (inclusive), folio_split() splits the folio to [0,1], [2], [3], [4..7], [8..15] and [3], [4..7] can be dropped and [8..15] is kept with zeros in [8..10], then another folio_split() is done at 10, so [8..10] can be dropped. One possible optimization is to make folio_split() to split a folio based on a given range, like [3..10] above. But that complicates folio_split(), so it will be investigated when necessary. Signed-off-by: Zi Yan --- include/linux/huge_mm.h | 36 ++++++++++++++++++++++++++++++++++++ mm/huge_memory.c | 4 ++-- mm/truncate.c | 31 ++++++++++++++++++++++++++++++- 3 files changed, 68 insertions(+), 3 deletions(-) diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h index e1bea54820ff..2bd181142b96 100644 --- a/include/linux/huge_mm.h +++ b/include/linux/huge_mm.h @@ -341,6 +341,36 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list, unsigned int new_order); int min_order_for_split(struct folio *folio); int split_folio_to_list(struct folio *folio, struct list_head *list); +bool uniform_split_supported(struct folio *folio, unsigned int new_order, + bool warns); +bool non_uniform_split_supported(struct folio *folio, unsigned int new_order, + bool warns); +int folio_split(struct folio *folio, unsigned int new_order, struct page *page, + struct list_head *list); +/* + * try_folio_split - try to split a @folio at @page using non uniform split. + * @folio: folio to be split + * @page: split to order-0 at the given page + * @list: store the after-split folios + * + * Try to split a @folio at @page using non uniform split to order-0, if + * non uniform split is not supported, fall back to uniform split. + * + * Return: 0: split is successful, otherwise split failed. 
+ */ +static inline int try_folio_split(struct folio *folio, struct page *page, + struct list_head *list) +{ + int ret = min_order_for_split(folio); + + if (ret < 0) + return ret; + + if (!non_uniform_split_supported(folio, 0, false)) + return split_huge_page_to_list_to_order(&folio->page, list, + ret); + return folio_split(folio, ret, page, list); +} static inline int split_huge_page(struct page *page) { struct folio *folio = page_folio(page); @@ -533,6 +563,12 @@ static inline int split_folio_to_list(struct folio *folio, struct list_head *lis return 0; } +static inline int try_folio_split(struct folio *folio, struct page *page, + struct list_head *list) +{ + return 0; +} + static inline void deferred_split_folio(struct folio *folio, bool partially_mapped) {} #define split_huge_pmd(__vma, __pmd, __address) \ do { } while (0) diff --git a/mm/huge_memory.c b/mm/huge_memory.c index dad6819901a8..06087c8d0931 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -3534,7 +3534,7 @@ static int __split_unmapped_folio(struct folio *folio, int new_order, return ret; } -static bool non_uniform_split_supported(struct folio *folio, unsigned int new_order, +bool non_uniform_split_supported(struct folio *folio, unsigned int new_order, bool warns) { /* order-1 is not supported for anonymous THP. */ @@ -3567,7 +3567,7 @@ static bool non_uniform_split_supported(struct folio *folio, unsigned int new_or } /* See comments in non_uniform_split_supported() */ -static bool uniform_split_supported(struct folio *folio, unsigned int new_order, +bool uniform_split_supported(struct folio *folio, unsigned int new_order, bool warns) { if (folio_test_anon(folio) && new_order == 1) { diff --git a/mm/truncate.c b/mm/truncate.c index e922ceb66c44..6a1f5d21679c 100644 --- a/mm/truncate.c +++ b/mm/truncate.c @@ -178,6 +178,7 @@ bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end) { loff_t pos = folio_pos(folio); unsigned int offset, length; + struct page *split_at, *split_at2; if (pos < start) offset = start - pos; @@ -207,8 +208,36 @@ bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end) folio_invalidate(folio, offset, length); if (!folio_test_large(folio)) return true; - if (split_folio(folio) == 0) + + split_at = folio_page(folio, PAGE_ALIGN_DOWN(offset) / PAGE_SIZE); + split_at2 = folio_page(folio, + PAGE_ALIGN_DOWN(offset + length) / PAGE_SIZE); + + if (!try_folio_split(folio, split_at, NULL)) { + /* + * try to split at offset + length to make sure folios within + * the range can be dropped, especially to avoid memory waste + * for shmem truncate + */ + struct folio *folio2 = page_folio(split_at2); + + if (!folio_try_get(folio2)) + goto no_split; + + if (!folio_test_large(folio2)) + goto out; + + if (!folio_trylock(folio2)) + goto out; + + /* split result does not matter here */ + try_folio_split(folio2, split_at2, NULL); + folio_unlock(folio2); +out: + folio_put(folio2); +no_split: return true; + } if (folio_test_dirty(folio)) return false; truncate_inode_folio(folio->mapping, folio); From patchwork Wed Feb 5 03:14:17 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zi Yan X-Patchwork-Id: 13960499 Received: from NAM02-BN1-obe.outbound.protection.outlook.com (mail-bn1nam02on2077.outbound.protection.outlook.com [40.107.212.77]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 
From: Zi Yan
To: linux-mm@kvack.org, Andrew Morton, "Kirill A . Shutemov", "Matthew Wilcox (Oracle)"
Cc: Ryan Roberts, Hugh Dickins, David Hildenbrand, Yang Shi, Miaohe Lin, Kefeng Wang, Yu Zhao, John Hubbard, Baolin Wang, linux-kselftest@vger.kernel.org, linux-kernel@vger.kernel.org, Zi Yan
Subject: [PATCH v6 7/7] selftests/mm: add tests for folio_split(), buddy allocator like split.
Date: Tue, 4 Feb 2025 22:14:17 -0500
Message-ID: <20250205031417.1771278-8-ziy@nvidia.com>
In-Reply-To: <20250205031417.1771278-1-ziy@nvidia.com>
References: <20250205031417.1771278-1-ziy@nvidia.com>
It splits page cache folios to orders from 0 to 8 at different in-folio offsets.
Signed-off-by: Zi Yan --- .../selftests/mm/split_huge_page_test.c | 34 +++++++++++++++---- 1 file changed, 27 insertions(+), 7 deletions(-) diff --git a/tools/testing/selftests/mm/split_huge_page_test.c b/tools/testing/selftests/mm/split_huge_page_test.c index e0304046b1a0..719c5e2a6624 100644 --- a/tools/testing/selftests/mm/split_huge_page_test.c +++ b/tools/testing/selftests/mm/split_huge_page_test.c @@ -14,6 +14,7 @@ #include #include #include +#include #include #include #include @@ -456,7 +457,8 @@ int create_pagecache_thp_and_fd(const char *testfile, size_t fd_size, int *fd, return -1; } -void split_thp_in_pagecache_to_order(size_t fd_size, int order, const char *fs_loc) +void split_thp_in_pagecache_to_order_at(size_t fd_size, const char *fs_loc, + int order, int offset) { int fd; char *addr; @@ -474,7 +476,12 @@ void split_thp_in_pagecache_to_order(size_t fd_size, int order, const char *fs_l return; err = 0; - write_debugfs(PID_FMT, getpid(), (uint64_t)addr, (uint64_t)addr + fd_size, order); + if (offset == -1) + write_debugfs(PID_FMT, getpid(), (uint64_t)addr, + (uint64_t)addr + fd_size, order); + else + write_debugfs(PID_FMT, getpid(), (uint64_t)addr, + (uint64_t)addr + fd_size, order, offset); for (i = 0; i < fd_size; i++) if (*(addr + i) != (char)i) { @@ -493,9 +500,15 @@ void split_thp_in_pagecache_to_order(size_t fd_size, int order, const char *fs_l munmap(addr, fd_size); close(fd); unlink(testfile); - if (err) - ksft_exit_fail_msg("Split PMD-mapped pagecache folio to order %d failed\n", order); - ksft_test_result_pass("Split PMD-mapped pagecache folio to order %d passed\n", order); + if (offset == -1) { + if (err) + ksft_exit_fail_msg("Split PMD-mapped pagecache folio to order %d failed\n", order); + ksft_test_result_pass("Split PMD-mapped pagecache folio to order %d passed\n", order); + } else { + if (err) + ksft_exit_fail_msg("Split PMD-mapped pagecache folio to order %d at in-folio offset %d failed\n", order, offset); + ksft_test_result_pass("Split PMD-mapped pagecache folio to order %d at in-folio offset %d passed\n", order, offset); + } } int main(int argc, char **argv) @@ -506,6 +519,7 @@ int main(int argc, char **argv) char fs_loc_template[] = "/tmp/thp_fs_XXXXXX"; const char *fs_loc; bool created_tmp; + int offset; ksft_print_header(); @@ -517,7 +531,7 @@ int main(int argc, char **argv) if (argc > 1) optional_xfs_path = argv[1]; - ksft_set_plan(1+8+1+9+9); + ksft_set_plan(1+8+1+9+9+8*4+2); pagesize = getpagesize(); pageshift = ffs(pagesize) - 1; @@ -540,7 +554,13 @@ int main(int argc, char **argv) created_tmp = prepare_thp_fs(optional_xfs_path, fs_loc_template, &fs_loc); for (i = 8; i >= 0; i--) - split_thp_in_pagecache_to_order(fd_size, i, fs_loc); + split_thp_in_pagecache_to_order_at(fd_size, fs_loc, i, -1); + + for (i = 0; i < 9; i++) + for (offset = 0; + offset < pmd_pagesize / pagesize; + offset += MAX(pmd_pagesize / pagesize / 4, 1 << i)) + split_thp_in_pagecache_to_order_at(fd_size, fs_loc, i, offset); cleanup_thp_fs(fs_loc, created_tmp); ksft_finished();
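For reference, the new nested loop above walks a fixed set of (order, in-folio offset) pairs, and counting them is where the extra "8*4+2" term in ksft_set_plan() comes from. The stand-alone sketch below assumes a typical x86_64 configuration with 4kB base pages and 2MB PMD THPs (512 pages per PMD folio); other page sizes give different counts:

#include <stdio.h>

#define MAX(a, b) ((a) > (b) ? (a) : (b))

int main(void)
{
	int nr_pages = (2 * 1024 * 1024) / 4096;	/* pages per PMD folio */
	int total = 0;

	/* Mirrors the selftest loop: orders 0..8, offsets stepped by
	 * MAX(nr_pages / 4, 1 << order). */
	for (int order = 0; order < 9; order++)
		for (int offset = 0; offset < nr_pages;
		     offset += MAX(nr_pages / 4, 1 << order)) {
			printf("split to order %d at in-folio offset %d\n",
			       order, offset);
			total++;
		}
	printf("%d extra test cases\n", total);	/* prints 34 == 8*4 + 2 */
	return 0;
}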