From patchwork Wed Jan 5 21:47:52 2022
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 12704785
From: Zi Yan
To: David Hildenbrand, linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Michael Ellerman, Christoph Hellwig,
 Marek Szyprowski, Robin Murphy, linuxppc-dev@lists.ozlabs.org,
 virtualization@lists.linux-foundation.org, iommu@lists.linux-foundation.org,
 Vlastimil Babka, Mel Gorman, Eric Ren, Zi Yan
Subject: [RFC PATCH v3 4/8] mm: make alloc_contig_range work at pageblock
 granularity
Date: Wed, 5 Jan 2022 16:47:52 -0500
Message-Id: <20220105214756.91065-5-zi.yan@sent.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220105214756.91065-1-zi.yan@sent.com>
References: <20220105214756.91065-1-zi.yan@sent.com>
Reply-To: Zi Yan
MIME-Version: 1.0

From: Zi Yan

alloc_contig_range() worked at MAX_ORDER-1 granularity to avoid merging
pageblocks with different migratetypes, which could unnecessarily convert
extra pageblocks at the beginning and at the end of the range. Change
alloc_contig_range() to work at pageblock granularity.

This is done by restoring pageblock migratetypes and splitting
>pageblock_order free pages after isolating at MAX_ORDER-1 granularity
and migrating pages away at pageblock granularity. This order of
operations is needed because, during isolation, some pages, either free
or in-use, might span more than one pageblock, and isolating only part
of such a page can cause free page accounting issues. Restoring the
migratetypes of the pageblocks outside the range of interest afterwards
is much easier.
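To make the two granularities concrete, here is a small userspace sketch
(not part of the diff below) mirroring the alignment arithmetic the patch
introduces; the PFN values and the two constants are illustrative and
assume a typical x86_64 build where MAX_ORDER_NR_PAGES (1024 pages, 4MB)
is larger than pageblock_nr_pages (512 pages, 2MB):

#include <stdio.h>

#define MAX_ORDER_NR_PAGES	1024UL	/* 1UL << (MAX_ORDER - 1), assumed */
#define pageblock_nr_pages	512UL	/* 1UL << pageblock_order, assumed */

#define ALIGN_DOWN(x, a)	((x) & ~((a) - 1))
#define ALIGN(x, a)		(((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
	/* illustrative, pageblock-unaligned request */
	unsigned long start = 1600, end = 2500;

	/* isolation still happens at MAX_ORDER-1 granularity... */
	unsigned long isolate_start = ALIGN_DOWN(start, MAX_ORDER_NR_PAGES);
	unsigned long isolate_end = ALIGN(end, MAX_ORDER_NR_PAGES);
	/* ...but migratetype conversion now stops at pageblock boundaries */
	unsigned long alloc_start = ALIGN_DOWN(start, pageblock_nr_pages);
	unsigned long alloc_end = ALIGN(end, pageblock_nr_pages);

	/* prints "isolate [1024, 3072), alloc [1536, 2560)" */
	printf("isolate [%lu, %lu), alloc [%lu, %lu)\n",
	       isolate_start, isolate_end, alloc_start, alloc_end);

	/*
	 * Only the pageblocks in [isolate_start, alloc_start) and
	 * [alloc_end, isolate_end), one on each side here, need their
	 * migratetype saved and later restored.
	 */
	return 0;
}

In this example, the old behavior would convert all four pageblocks in
[1024, 3072) to the new migratetype; with this patch only the two
pageblocks in [1536, 2560) are converted.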
Signed-off-by: Zi Yan
---
 mm/page_alloc.c | 174 ++++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 154 insertions(+), 20 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5193c953dbf8..e1c09ae54e31 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8986,8 +8986,8 @@ struct page *has_unmovable_pages(struct zone *zone, struct page *page,
 #ifdef CONFIG_CONTIG_ALLOC
 static unsigned long pfn_max_align_down(unsigned long pfn)
 {
-	return pfn & ~(max_t(unsigned long, MAX_ORDER_NR_PAGES,
-			     pageblock_nr_pages) - 1);
+	return ALIGN_DOWN(pfn, max_t(unsigned long, MAX_ORDER_NR_PAGES,
+				     pageblock_nr_pages));
 }
 
 static unsigned long pfn_max_align_up(unsigned long pfn)
@@ -9076,6 +9076,52 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
 	return 0;
 }
 
+static inline int save_migratetypes(unsigned char *migratetypes,
+		unsigned long start_pfn, unsigned long end_pfn)
+{
+	unsigned long pfn = start_pfn;
+	int num = 0;
+
+	while (pfn < end_pfn) {
+		migratetypes[num] = get_pageblock_migratetype(pfn_to_page(pfn));
+		num++;
+		pfn += pageblock_nr_pages;
+	}
+	return num;
+}
+
+static inline int restore_migratetypes(unsigned char *migratetypes,
+		unsigned long start_pfn, unsigned long end_pfn)
+{
+	unsigned long pfn = start_pfn;
+	int num = 0;
+
+	while (pfn < end_pfn) {
+		set_pageblock_migratetype(pfn_to_page(pfn), migratetypes[num]);
+		num++;
+		pfn += pageblock_nr_pages;
+	}
+	return num;
+}
+
+static inline void split_free_page_into_pageblocks(struct page *free_page,
+		int order, struct zone *zone)
+{
+	unsigned long pfn;
+
+	spin_lock(&zone->lock);
+	del_page_from_free_list(free_page, zone, order);
+	for (pfn = page_to_pfn(free_page);
+	     pfn < page_to_pfn(free_page) + (1UL << order);
+	     pfn += pageblock_nr_pages) {
+		int mt = get_pfnblock_migratetype(pfn_to_page(pfn), pfn);
+
+		__free_one_page(pfn_to_page(pfn), pfn, zone, pageblock_order,
+				mt, FPI_NONE);
+	}
+	spin_unlock(&zone->lock);
+}
+
 /**
  * alloc_contig_range() -- tries to allocate given range of pages
  * @start:	start PFN to allocate
@@ -9101,8 +9147,15 @@ int alloc_contig_range(unsigned long start, unsigned long end,
 		       unsigned migratetype, gfp_t gfp_mask)
 {
 	unsigned long outer_start, outer_end;
+	unsigned long isolate_start = pfn_max_align_down(start);
+	unsigned long isolate_end = pfn_max_align_up(end);
+	unsigned long alloc_start = ALIGN_DOWN(start, pageblock_nr_pages);
+	unsigned long alloc_end = ALIGN(end, pageblock_nr_pages);
+	unsigned long num_pageblock_to_save;
 	unsigned int order;
 	int ret = 0;
+	unsigned char *saved_mt;
+	int num;
 
 	struct compact_control cc = {
 		.nr_migratepages = 0,
@@ -9116,11 +9169,30 @@ int alloc_contig_range(unsigned long start, unsigned long end,
 	};
 	INIT_LIST_HEAD(&cc.migratepages);
 
+	/*
+	 * TODO: make MIGRATE_ISOLATE a standalone bit to avoid overwriting
+	 * the existing migratetype. Then, we will not need the save and
+	 * restore process here.
+	 */
+
+	/* Save the migratetypes of the pageblocks before start and after end */
+	num_pageblock_to_save = (alloc_start - isolate_start) / pageblock_nr_pages
+				+ (isolate_end - alloc_end) / pageblock_nr_pages;
+	saved_mt =
+		kmalloc_array(num_pageblock_to_save,
+			      sizeof(unsigned char), GFP_KERNEL);
+	if (!saved_mt)
+		return -ENOMEM;
+
+	num = save_migratetypes(saved_mt, isolate_start, alloc_start);
+
+	num = save_migratetypes(&saved_mt[num], alloc_end, isolate_end);
+
 	/*
 	 * What we do here is we mark all pageblocks in range as
 	 * MIGRATE_ISOLATE. Because pageblock and max order pages may
 	 * have different sizes, and due to the way page allocator
-	 * work, we align the range to biggest of the two pages so
+	 * work, we align the isolation range to the biggest of the two so
 	 * that page allocator won't try to merge buddies from
 	 * different pageblocks and change MIGRATE_ISOLATE to some
 	 * other migration type.
@@ -9130,6 +9202,20 @@ int alloc_contig_range(unsigned long start, unsigned long end,
 	 * we are interested in). This will put all the pages in
 	 * range back to page allocator as MIGRATE_ISOLATE.
 	 *
+	 * Afterwards, we restore the migratetypes of the pageblocks not
+	 * in range, split free pages spanning outside the range, and put
+	 * the split free pages (at pageblock_order) on the right
+	 * migratetype lists.
+	 *
+	 * NOTE: this approach is used because isolating only part of a
+	 * page that spans multiple pageblocks, either free or in-use,
+	 * causes free page accounting issues during isolation. For
+	 * example, if only the second pageblock of a page consisting of
+	 * 2 pageblocks is isolated, the whole page ends up on the
+	 * migratetype list of the first pageblock once it is freed,
+	 * instead of the two pageblocks sitting on two separate
+	 * migratetype lists.
+	 *
 	 * When this is done, we take the pages in range from page
 	 * allocator removing them from the buddy system. This way
 	 * page allocator will never consider using them.
@@ -9140,10 +9226,9 @@ int alloc_contig_range(unsigned long start, unsigned long end,
 	 * put back to page allocator so that buddy can use them.
 	 */
 
-	ret = start_isolate_page_range(pfn_max_align_down(start),
-				       pfn_max_align_up(end), migratetype, 0);
+	ret = start_isolate_page_range(isolate_start, isolate_end, migratetype, 0);
 	if (ret)
-		return ret;
+		goto done;
 
 	drain_all_pages(cc.zone);
 
@@ -9179,6 +9264,19 @@ int alloc_contig_range(unsigned long start, unsigned long end,
 	 * isolated thus they won't get removed from buddy.
 	 */
 
+	/*
+	 * Restore migratetypes of pageblocks outside [start, end).
+	 * TODO: remove this when MIGRATE_ISOLATE becomes a standalone bit.
+	 */
+
+	num = restore_migratetypes(saved_mt, isolate_start, alloc_start);
+
+	num = restore_migratetypes(&saved_mt[num], alloc_end, isolate_end);
+
+	/*
+	 * Split the free page spanning [isolate_start, alloc_start) and put
+	 * the pageblocks on the right migratetype lists.
+	 */
 	order = 0;
 	outer_start = start;
 	while (!PageBuddy(pfn_to_page(outer_start))) {
@@ -9193,37 +9291,73 @@ int alloc_contig_range(unsigned long start, unsigned long end,
 		order = buddy_order(pfn_to_page(outer_start));
 
 		/*
-		 * outer_start page could be small order buddy page and
-		 * it doesn't include start page. Adjust outer_start
-		 * in this case to report failed page properly
-		 * on tracepoint in test_pages_isolated()
+		 * Split the free page containing start and put the
+		 * pageblocks on the right migratetype lists.
 		 */
-		if (outer_start + (1UL << order) <= start)
-			outer_start = start;
+		if (outer_start + (1UL << order) > start) {
+			struct page *free_page = pfn_to_page(outer_start);
+
+			split_free_page_into_pageblocks(free_page, order, cc.zone);
+		}
+	}
+
+	/*
+	 * Split the free pages spanning [alloc_end, isolate_end) and put
+	 * the pageblocks on the right migratetype lists.
+	 */
+	for (outer_end = alloc_end; outer_end < isolate_end;) {
+		unsigned long begin_pfn = outer_end;
+
+		order = 0;
+		while (!PageBuddy(pfn_to_page(outer_end))) {
+			if (++order >= MAX_ORDER) {
+				outer_end = begin_pfn;
+				break;
+			}
+			outer_end &= ~0UL << order;
+		}
+
+		if (outer_end != begin_pfn) {
+			order = buddy_order(pfn_to_page(outer_end));
+
+			/*
+			 * Split the free page containing begin_pfn and put
+			 * the pageblocks on the right migratetype lists.
+			 */
+			VM_BUG_ON(outer_end + (1UL << order) <= begin_pfn);
+			{
+				struct page *free_page = pfn_to_page(outer_end);
+
+				split_free_page_into_pageblocks(free_page, order, cc.zone);
+			}
+			outer_end += 1UL << order;
+		} else
+			outer_end = begin_pfn + 1;
 	}
 
 	/* Make sure the range is really isolated. */
-	if (test_pages_isolated(outer_start, end, 0)) {
+	if (test_pages_isolated(alloc_start, alloc_end, 0)) {
 		ret = -EBUSY;
 		goto done;
 	}
 
 	/* Grab isolated pages from freelists. */
-	outer_end = isolate_freepages_range(&cc, outer_start, end);
+	outer_end = isolate_freepages_range(&cc, alloc_start, alloc_end);
 	if (!outer_end) {
 		ret = -EBUSY;
 		goto done;
 	}
 
 	/* Free head and tail (if any) */
-	if (start != outer_start)
-		free_contig_range(outer_start, start - outer_start);
-	if (end != outer_end)
-		free_contig_range(end, outer_end - end);
+	if (start != alloc_start)
+		free_contig_range(alloc_start, start - alloc_start);
+	if (end != alloc_end)
+		free_contig_range(end, alloc_end - end);
 
 done:
-	undo_isolate_page_range(pfn_max_align_down(start),
-				pfn_max_align_up(end), migratetype);
+	kfree(saved_mt);
+	undo_isolate_page_range(alloc_start,
+				alloc_end, migratetype);
 	return ret;
 }
 EXPORT_SYMBOL(alloc_contig_range);
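
For context, a minimal caller sketch follows. It is hypothetical and not
part of the patch: grab_pageblocks() is an invented helper name, and real
users such as CMA or virtio-mem wrap alloc_contig_range() with retry and
error handling. It only shows the interface whose internals change here.

/* Hypothetical in-kernel caller sketch -- not part of this patch. */
#include <linux/gfp.h>

static int grab_pageblocks(unsigned long start_pfn, unsigned long nr_pages)
{
	int ret;

	/*
	 * With this patch, only the pageblocks overlapping
	 * [start_pfn, start_pfn + nr_pages) change migratetype;
	 * neighboring pageblocks within the same MAX_ORDER-1 page are
	 * isolated temporarily but get their migratetype restored.
	 */
	ret = alloc_contig_range(start_pfn, start_pfn + nr_pages,
				 MIGRATE_MOVABLE, GFP_KERNEL);
	if (ret)
		return ret;	/* e.g. -EBUSY, -ENOMEM, or -EINTR */

	/* ... use the physically contiguous pages ... */

	free_contig_range(start_pfn, nr_pages);
	return 0;
}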