From patchwork Wed Oct 13 16:00:30 2021
From: Kent Overstreet <kent.overstreet@gmail.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    akpm@linux-foundation.org, linux-raid@vger.kernel.org,
    linux-block@vger.kernel.org, axboe@kernel.dk
Cc: Kent Overstreet <kent.overstreet@gmail.com>, alexander.h.duyck@linux.intel.com
Subject: [PATCH 1/5] mm: Make free_area->nr_free per migratetype
Date: Wed, 13 Oct 2021 12:00:30 -0400
Message-Id: <20211013160034.3472923-2-kent.overstreet@gmail.com>
In-Reply-To: <20211013160034.3472923-1-kent.overstreet@gmail.com>
References: <20211013160034.3472923-1-kent.overstreet@gmail.com>

This is prep work for introducing a struct page_free_list, which will
have a list head and nr_free - it turns out a fair amount of the code
looking at free_area->nr_free actually wants the number of elements on
a particular freelist.
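To make the accounting change concrete, here is a minimal userspace sketch of the helpers this patch introduces: a per-migratetype counter array replaces the single nr_free field, so emptiness becomes an O(1) counter check and the whole-area total is a small sum. The mock struct name and the MIGRATE_TYPES value are illustrative, not the kernel's definitions.

```c
#include <stddef.h>

#define MIGRATE_TYPES 6	/* illustrative; the real value depends on config */

/* stand-in for struct free_area with per-migratetype counts */
struct free_area_mock {
	size_t nr_free[MIGRATE_TYPES];
};

/* emptiness is now a counter check, not list_empty() on the freelist */
static int free_area_empty(struct free_area_mock *area, int migratetype)
{
	return area->nr_free[migratetype] == 0;
}

/* whole-area total: sum the per-migratetype counters */
static size_t free_area_nr_free(struct free_area_mock *area)
{
	size_t nr_free = 0;
	int migratetype;

	for (migratetype = 0; migratetype < MIGRATE_TYPES; migratetype++)
		nr_free += area->nr_free[migratetype];
	return nr_free;
}
```

Callers that previously read area->nr_free directly now either index one migratetype or sum all of them, which is why the patch threads a migratetype argument through del_page_from_free_list().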
Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
Reported-by: kernel test robot
---
 include/linux/mmzone.h | 14 ++++++++++++--
 mm/page_alloc.c        | 30 +++++++++++++++++-------------
 mm/page_reporting.c    |  2 +-
 mm/vmstat.c            | 28 +++-------------------------
 4 files changed, 33 insertions(+), 41 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 6a1d79d846..089587b918 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -96,7 +96,7 @@ extern int page_group_by_mobility_disabled;

 struct free_area {
 	struct list_head	free_list[MIGRATE_TYPES];
-	unsigned long		nr_free;
+	unsigned long		nr_free[MIGRATE_TYPES];
 };

 static inline struct page *get_page_from_free_area(struct free_area *area,
@@ -108,7 +108,17 @@ static inline struct page *get_page_from_free_area(struct free_area *area,

 static inline bool free_area_empty(struct free_area *area, int migratetype)
 {
-	return list_empty(&area->free_list[migratetype]);
+	return area->nr_free[migratetype] == 0;
+}
+
+static inline size_t free_area_nr_free(struct free_area *area)
+{
+	unsigned migratetype;
+	size_t nr_free = 0;
+
+	for (migratetype = 0; migratetype < MIGRATE_TYPES; migratetype++)
+		nr_free += area->nr_free[migratetype];
+	return nr_free;
 }

 struct pglist_data;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b37435c274..8918c00a91 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -966,7 +966,7 @@ static inline void add_to_free_list(struct page *page, struct zone *zone,
 	struct free_area *area = &zone->free_area[order];

 	list_add(&page->lru, &area->free_list[migratetype]);
-	area->nr_free++;
+	area->nr_free[migratetype]++;
 }

 /* Used for pages not on another list */
@@ -976,7 +976,7 @@ static inline void add_to_free_list_tail(struct page *page, struct zone *zone,
 	struct free_area *area = &zone->free_area[order];

 	list_add_tail(&page->lru, &area->free_list[migratetype]);
-	area->nr_free++;
+	area->nr_free[migratetype]++;
 }

 /*
@@ -993,7 +993,7 @@ static inline void move_to_free_list(struct page *page, struct zone *zone,
 }

 static inline void del_page_from_free_list(struct page *page, struct zone *zone,
-					   unsigned int order)
+					   unsigned int order, int migratetype)
 {
 	/* clear reported state and update reported page count */
 	if (page_reported(page))
@@ -1002,7 +1002,7 @@ static inline void del_page_from_free_list(struct page *page, struct zone *zone,
 	list_del(&page->lru);
 	__ClearPageBuddy(page);
 	set_page_private(page, 0);
-	zone->free_area[order].nr_free--;
+	zone->free_area[order].nr_free[migratetype]--;
 }

 /*
@@ -1098,7 +1098,7 @@ static inline void __free_one_page(struct page *page,
 		if (page_is_guard(buddy))
 			clear_page_guard(zone, buddy, order, migratetype);
 		else
-			del_page_from_free_list(buddy, zone, order);
+			del_page_from_free_list(buddy, zone, order, migratetype);
 		combined_pfn = buddy_pfn & pfn;
 		page = page + (combined_pfn - pfn);
 		pfn = combined_pfn;
@@ -2456,7 +2456,7 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
 		page = get_page_from_free_area(area, migratetype);
 		if (!page)
 			continue;
-		del_page_from_free_list(page, zone, current_order);
+		del_page_from_free_list(page, zone, current_order, migratetype);
 		expand(zone, page, order, current_order, migratetype);
 		set_pcppage_migratetype(page, migratetype);
 		return page;
@@ -3525,7 +3525,7 @@ int __isolate_free_page(struct page *page, unsigned int order)

 	/* Remove page from free list */
-	del_page_from_free_list(page, zone, order);
+	del_page_from_free_list(page, zone, order, mt);

 	/*
 	 * Set the pageblock if the isolated page is at least half of a
@@ -6038,14 +6038,16 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
 			struct free_area *area = &zone->free_area[order];
 			int type;

-			nr[order] = area->nr_free;
-			total += nr[order] << order;
+			nr[order] = 0;
+			types[order] = 0;

-			types[order] = 0;
 			for (type = 0; type < MIGRATE_TYPES; type++) {
 				if (!free_area_empty(area, type))
 					types[order] |= 1 << type;
+				nr[order] += area->nr_free[type];
 			}
+
+			total += nr[order] << order;
 		}
 		spin_unlock_irqrestore(&zone->lock, flags);
 		for (order = 0; order < MAX_ORDER; order++) {
@@ -6623,7 +6625,7 @@ static void __meminit zone_init_free_lists(struct zone *zone)
 	unsigned int order, t;
 	for_each_migratetype_order(order, t) {
 		INIT_LIST_HEAD(&zone->free_area[order].free_list[t]);
-		zone->free_area[order].nr_free = 0;
+		zone->free_area[order].nr_free[t] = 0;
 	}
 }

@@ -9317,6 +9319,7 @@ void __offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn)
 	struct page *page;
 	struct zone *zone;
 	unsigned int order;
+	unsigned int migratetype;
 	unsigned long flags;

 	offline_mem_sections(pfn, end_pfn);
@@ -9346,7 +9349,8 @@ void __offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn)
 		BUG_ON(page_count(page));
 		BUG_ON(!PageBuddy(page));
 		order = buddy_order(page);
-		del_page_from_free_list(page, zone, order);
+		migratetype = get_pfnblock_migratetype(page, pfn);
+		del_page_from_free_list(page, zone, order, migratetype);
 		pfn += (1 << order);
 	}
 	spin_unlock_irqrestore(&zone->lock, flags);
@@ -9428,7 +9432,7 @@ bool take_page_off_buddy(struct page *page)
 			int migratetype = get_pfnblock_migratetype(page_head,
 								   pfn_head);

-			del_page_from_free_list(page_head, zone, page_order);
+			del_page_from_free_list(page_head, zone, page_order, migratetype);
 			break_down_buddy_pages(zone, page_head, page, 0,
 					       page_order, migratetype);
 			if (!is_migrate_isolate(migratetype))
diff --git a/mm/page_reporting.c b/mm/page_reporting.c
index 382958eef8..4e45ae95db 100644
--- a/mm/page_reporting.c
+++ b/mm/page_reporting.c
@@ -145,7 +145,7 @@ page_reporting_cycle(struct page_reporting_dev_info *prdev, struct zone *zone,
 	 * The division here should be cheap since PAGE_REPORTING_CAPACITY
 	 * should always be a power of 2.
 	 */
-	budget = DIV_ROUND_UP(area->nr_free, PAGE_REPORTING_CAPACITY * 16);
+	budget = DIV_ROUND_UP(area->nr_free[mt], PAGE_REPORTING_CAPACITY * 16);

 	/* loop through free list adding unreported pages to sg list */
 	list_for_each_entry_safe(page, next, list, lru) {
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 8ce2620344..eb46f99c72 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1071,7 +1071,7 @@ static void fill_contig_page_info(struct zone *zone,
 		unsigned long blocks;

 		/* Count number of free blocks */
-		blocks = zone->free_area[order].nr_free;
+		blocks = free_area_nr_free(&zone->free_area[order]);
 		info->free_blocks_total += blocks;

 		/* Count free base pages */
@@ -1445,7 +1445,7 @@ static void frag_show_print(struct seq_file *m, pg_data_t *pgdat,
 	seq_printf(m, "Node %d, zone %8s ", pgdat->node_id, zone->name);
 	for (order = 0; order < MAX_ORDER; ++order)
-		seq_printf(m, "%6lu ", zone->free_area[order].nr_free);
+		seq_printf(m, "%6zu ", free_area_nr_free(&zone->free_area[order]));
 	seq_putc(m, '\n');
 }

@@ -1470,29 +1470,7 @@ static void pagetypeinfo_showfree_print(struct seq_file *m,
 					zone->name, migratetype_names[mtype]);
 		for (order = 0; order < MAX_ORDER; ++order) {
-			unsigned long freecount = 0;
-			struct free_area *area;
-			struct list_head *curr;
-			bool overflow = false;
-
-			area = &(zone->free_area[order]);
-
-			list_for_each(curr, &area->free_list[mtype]) {
-				/*
-				 * Cap the free_list iteration because it might
-				 * be really large and we are under a spinlock
-				 * so a long time spent here could trigger a
-				 * hard lockup detector. Anyway this is a
-				 * debugging tool so knowing there is a handful
-				 * of pages of this order should be more than
-				 * sufficient.
-				 */
-				if (++freecount >= 100000) {
-					overflow = true;
-					break;
-				}
-			}
-			seq_printf(m, "%s%6lu ", overflow ? ">" : "", freecount);
+			seq_printf(m, "%6zu ", zone->free_area[order].nr_free[mtype]);
 			spin_unlock_irq(&zone->lock);
 			cond_resched();
 			spin_lock_irq(&zone->lock);

From patchwork Wed Oct 13 16:00:31 2021
From: Kent Overstreet <kent.overstreet@gmail.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    akpm@linux-foundation.org, linux-raid@vger.kernel.org,
    linux-block@vger.kernel.org, axboe@kernel.dk
Cc: Kent Overstreet <kent.overstreet@gmail.com>, alexander.h.duyck@linux.intel.com
Subject: [PATCH 2/5] mm: Introduce struct page_free_list
Date: Wed, 13 Oct 2021 12:00:31 -0400
Message-Id: <20211013160034.3472923-3-kent.overstreet@gmail.com>
In-Reply-To: <20211013160034.3472923-1-kent.overstreet@gmail.com>
References: <20211013160034.3472923-1-kent.overstreet@gmail.com>

Small type system cleanup, enabling further cleanups and possibly
switching the freelists from linked lists to radix trees.
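The idea of the cleanup is that a freelist's head and its element count always travel together, so they belong in one type. A minimal userspace sketch of that pairing, with a plain doubly-linked node standing in for struct page and hand-rolled list ops standing in for the kernel's list_head helpers:

```c
#include <stddef.h>

/* stand-in for struct page's lru links */
struct node {
	struct node *next, *prev;
};

/* the patch's shape: list head and count kept in sync as one unit */
struct page_free_list {
	struct node list;	/* circular list head */
	size_t nr;
};

static void pfl_init(struct page_free_list *l)
{
	l->list.next = l->list.prev = &l->list;
	l->nr = 0;
}

/* add at head, bumping the count (mirrors add_to_free_list) */
static void pfl_add(struct page_free_list *l, struct node *n)
{
	n->next = l->list.next;
	n->prev = &l->list;
	l->list.next->prev = n;
	l->list.next = n;
	l->nr++;
}

/* unlink, dropping the count (mirrors del_page_from_free_list) */
static void pfl_del(struct page_free_list *l, struct node *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
	l->nr--;
}
```

Because every mutation goes through these two helpers, `nr` can never drift from the list's real length, which is what makes the O(1) free_area_empty() from the previous patch safe.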
Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
---
 include/linux/mmzone.h | 14 +++++++++-----
 kernel/crash_core.c    |  4 ++--
 mm/compaction.c        | 20 +++++++++++---------
 mm/page_alloc.c        | 30 +++++++++++++++---------------
 mm/page_reporting.c    | 20 ++++++++++----------
 mm/vmstat.c            |  2 +-
 6 files changed, 48 insertions(+), 42 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 089587b918..1fe820ead2 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -94,21 +94,25 @@ extern int page_group_by_mobility_disabled;
 #define get_pageblock_migratetype(page)					\
 	get_pfnblock_flags_mask(page, page_to_pfn(page), MIGRATETYPE_MASK)

+struct page_free_list {
+	struct list_head	list;
+	size_t			nr;
+};
+
 struct free_area {
-	struct list_head	free_list[MIGRATE_TYPES];
-	unsigned long		nr_free[MIGRATE_TYPES];
+	struct page_free_list	free[MIGRATE_TYPES];
 };

 static inline struct page *get_page_from_free_area(struct free_area *area,
 						   int migratetype)
 {
-	return list_first_entry_or_null(&area->free_list[migratetype],
+	return list_first_entry_or_null(&area->free[migratetype].list,
 					struct page, lru);
 }

 static inline bool free_area_empty(struct free_area *area, int migratetype)
 {
-	return area->nr_free[migratetype] == 0;
+	return area->free[migratetype].nr == 0;
 }

 static inline size_t free_area_nr_free(struct free_area *area)
@@ -117,7 +121,7 @@ static inline size_t free_area_nr_free(struct free_area *area)
 	size_t nr_free = 0;

 	for (migratetype = 0; migratetype < MIGRATE_TYPES; migratetype++)
-		nr_free += area->nr_free[migratetype];
+		nr_free += area->free[migratetype].nr;
 	return nr_free;
 }
diff --git a/kernel/crash_core.c b/kernel/crash_core.c
index eb53f5ec62..f9cc4c3cd1 100644
--- a/kernel/crash_core.c
+++ b/kernel/crash_core.c
@@ -447,14 +447,14 @@ static int __init crash_save_vmcoreinfo_init(void)
 	VMCOREINFO_OFFSET(zone, free_area);
 	VMCOREINFO_OFFSET(zone, vm_stat);
 	VMCOREINFO_OFFSET(zone, spanned_pages);
-	VMCOREINFO_OFFSET(free_area, free_list);
+	VMCOREINFO_OFFSET(free_area, free);
 	VMCOREINFO_OFFSET(list_head, next);
 	VMCOREINFO_OFFSET(list_head, prev);
 	VMCOREINFO_OFFSET(vmap_area, va_start);
 	VMCOREINFO_OFFSET(vmap_area, list);
 	VMCOREINFO_LENGTH(zone.free_area, MAX_ORDER);
 	log_buf_vmcoreinfo_setup();
-	VMCOREINFO_LENGTH(free_area.free_list, MIGRATE_TYPES);
+	VMCOREINFO_LENGTH(free_area.free, MIGRATE_TYPES);
 	VMCOREINFO_NUMBER(NR_FREE_PAGES);
 	VMCOREINFO_NUMBER(PG_lru);
 	VMCOREINFO_NUMBER(PG_private);
diff --git a/mm/compaction.c b/mm/compaction.c
index bfc93da1c2..7a15f350e4 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1414,19 +1414,21 @@ fast_isolate_freepages(struct compact_control *cc)
 	for (order = cc->search_order;
 	     !page && order >= 0;
 	     order = next_search_order(cc, order)) {
-		struct free_area *area = &cc->zone->free_area[order];
-		struct list_head *freelist;
+		struct page_free_list *free =
+			&cc->zone->free_area[order].free[MIGRATE_MOVABLE];
 		struct page *freepage;
 		unsigned long flags;
 		unsigned int order_scanned = 0;
 		unsigned long high_pfn = 0;

-		if (!area->nr_free)
+		spin_lock_irqsave(&cc->zone->lock, flags);
+
+		if (!free->nr) {
+			spin_unlock_irqrestore(&cc->zone->lock, flags);
 			continue;
+		}

-		spin_lock_irqsave(&cc->zone->lock, flags);
-		freelist = &area->free_list[MIGRATE_MOVABLE];
-		list_for_each_entry_reverse(freepage, freelist, lru) {
+		list_for_each_entry_reverse(freepage, &free->list, lru) {
 			unsigned long pfn;

 			order_scanned++;
@@ -1464,7 +1466,7 @@ fast_isolate_freepages(struct compact_control *cc)
 		}

 		/* Reorder to so a future search skips recent pages */
-		move_freelist_head(freelist, freepage);
+		move_freelist_head(&free->list, freepage);

 		/* Isolate the page if available */
 		if (page) {
@@ -1786,11 +1788,11 @@ static unsigned long fast_find_migrateblock(struct compact_control *cc)
 		unsigned long flags;
 		struct page *freepage;

-		if (!area->nr_free)
+		if (!free_area_nr_free(area))
 			continue;

 		spin_lock_irqsave(&cc->zone->lock, flags);
-		freelist = &area->free_list[MIGRATE_MOVABLE];
+		freelist = &area->free[MIGRATE_MOVABLE].list;
 		list_for_each_entry(freepage, freelist, lru) {
 			unsigned long free_pfn;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8918c00a91..70e4bcd2f3 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -963,20 +963,20 @@ compaction_capture(struct capture_control *capc, struct page *page,
 static inline void add_to_free_list(struct page *page, struct zone *zone,
 				    unsigned int order, int migratetype)
 {
-	struct free_area *area = &zone->free_area[order];
+	struct page_free_list *list = &zone->free_area[order].free[migratetype];

-	list_add(&page->lru, &area->free_list[migratetype]);
-	area->nr_free[migratetype]++;
+	list_add(&page->lru, &list->list);
+	list->nr++;
 }

 /* Used for pages not on another list */
 static inline void add_to_free_list_tail(struct page *page, struct zone *zone,
 					 unsigned int order, int migratetype)
 {
-	struct free_area *area = &zone->free_area[order];
+	struct page_free_list *list = &zone->free_area[order].free[migratetype];

-	list_add_tail(&page->lru, &area->free_list[migratetype]);
-	area->nr_free[migratetype]++;
+	list_add_tail(&page->lru, &list->list);
+	list->nr++;
 }

 /*
@@ -987,9 +987,9 @@ static inline void add_to_free_list_tail(struct page *page, struct zone *zone,
 static inline void move_to_free_list(struct page *page, struct zone *zone,
 				     unsigned int order, int migratetype)
 {
-	struct free_area *area = &zone->free_area[order];
+	struct page_free_list *list = &zone->free_area[order].free[migratetype];

-	list_move_tail(&page->lru, &area->free_list[migratetype]);
+	list_move_tail(&page->lru, &list->list);
 }

 static inline void del_page_from_free_list(struct page *page, struct zone *zone,
@@ -1002,7 +1002,7 @@ static inline void del_page_from_free_list(struct page *page, struct zone *zone,
 	list_del(&page->lru);
 	__ClearPageBuddy(page);
 	set_page_private(page, 0);
-	zone->free_area[order].nr_free[migratetype]--;
+	zone->free_area[order].free[migratetype].nr--;
 }

 /*
@@ -2734,7 +2734,7 @@ int find_suitable_fallback(struct free_area *area, unsigned int order,
 	int i;
 	int fallback_mt;

-	if (area->nr_free == 0)
+	if (free_area_nr_free(area) == 0)
 		return -1;

 	*can_steal = false;
@@ -3290,7 +3290,7 @@ void mark_free_pages(struct zone *zone)

 	for_each_migratetype_order(order, t) {
 		list_for_each_entry(page,
-				&zone->free_area[order].free_list[t], lru) {
+				&zone->free_area[order].free[t].list, lru) {
 			unsigned long i;

 			pfn = page_to_pfn(page);
@@ -3886,7 +3886,7 @@ bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
 		struct free_area *area = &z->free_area[o];
 		int mt;

-		if (!area->nr_free)
+		if (!free_area_nr_free(area))
 			continue;

 		for (mt = 0; mt < MIGRATE_PCPTYPES; mt++) {
@@ -6044,7 +6044,7 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
 			for (type = 0; type < MIGRATE_TYPES; type++) {
 				if (!free_area_empty(area, type))
 					types[order] |= 1 << type;
-				nr[order] += area->nr_free[type];
+				nr[order] += area->free[type].nr;
 			}

 			total += nr[order] << order;
@@ -6624,8 +6624,8 @@ static void __meminit zone_init_free_lists(struct zone *zone)
 {
 	unsigned int order, t;
 	for_each_migratetype_order(order, t) {
-		INIT_LIST_HEAD(&zone->free_area[order].free_list[t]);
-		zone->free_area[order].nr_free[t] = 0;
+		INIT_LIST_HEAD(&zone->free_area[order].free[t].list);
+		zone->free_area[order].free[t].nr = 0;
 	}
 }
diff --git a/mm/page_reporting.c b/mm/page_reporting.c
index 4e45ae95db..c4362b4b0c 100644
--- a/mm/page_reporting.c
+++ b/mm/page_reporting.c
@@ -115,8 +115,7 @@ page_reporting_cycle(struct page_reporting_dev_info *prdev, struct zone *zone,
 		     unsigned int order, unsigned int mt,
 		     struct scatterlist *sgl, unsigned int *offset)
 {
-	struct free_area *area = &zone->free_area[order];
-	struct list_head *list = &area->free_list[mt];
+	struct page_free_list *list = &zone->free_area[order].free[mt];
 	unsigned int page_len = PAGE_SIZE << order;
 	struct page *page, *next;
 	long budget;
@@ -126,7 +125,7 @@ page_reporting_cycle(struct page_reporting_dev_info *prdev, struct zone *zone,
 	 * Perform early check, if free area is empty there is
 	 * nothing to process so we can skip this free_list.
 	 */
-	if (list_empty(list))
+	if (list_empty(&list->list))
 		return err;

 	spin_lock_irq(&zone->lock);
@@ -145,10 +144,10 @@ page_reporting_cycle(struct page_reporting_dev_info *prdev, struct zone *zone,
 	 * The division here should be cheap since PAGE_REPORTING_CAPACITY
 	 * should always be a power of 2.
 	 */
-	budget = DIV_ROUND_UP(area->nr_free[mt], PAGE_REPORTING_CAPACITY * 16);
+	budget = DIV_ROUND_UP(list->nr, PAGE_REPORTING_CAPACITY * 16);

 	/* loop through free list adding unreported pages to sg list */
-	list_for_each_entry_safe(page, next, list, lru) {
+	list_for_each_entry_safe(page, next, &list->list, lru) {
 		/* We are going to skip over the reported pages. */
 		if (PageReported(page))
 			continue;
@@ -183,8 +182,8 @@ page_reporting_cycle(struct page_reporting_dev_info *prdev, struct zone *zone,
 	 * the new head of the free list before we release the
 	 * zone lock.
 	 */
-	if (!list_is_first(&page->lru, list))
-		list_rotate_to_front(&page->lru, list);
+	if (!list_is_first(&page->lru, &list->list))
+		list_rotate_to_front(&page->lru, &list->list);

 	/* release lock before waiting on report processing */
 	spin_unlock_irq(&zone->lock);
@@ -208,7 +207,7 @@ page_reporting_cycle(struct page_reporting_dev_info *prdev, struct zone *zone,
 	 * Reset next to first entry, the old next isn't valid
 	 * since we dropped the lock to report the pages
 	 */
-	next = list_first_entry(list, struct page, lru);
+	next = list_first_entry(&list->list, struct page, lru);

 	/* exit on error */
 	if (err)
@@ -216,8 +215,9 @@ page_reporting_cycle(struct page_reporting_dev_info *prdev, struct zone *zone,
 	}

 	/* Rotate any leftover pages to the head of the freelist */
-	if (!list_entry_is_head(next, list, lru) && !list_is_first(&next->lru, list))
-		list_rotate_to_front(&next->lru, list);
+	if (!list_entry_is_head(next, &list->list, lru) &&
+	    !list_is_first(&next->lru, &list->list))
+		list_rotate_to_front(&next->lru, &list->list);

 	spin_unlock_irq(&zone->lock);
diff --git a/mm/vmstat.c b/mm/vmstat.c
index eb46f99c72..1620dc120f 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1470,7 +1470,7 @@ static void pagetypeinfo_showfree_print(struct seq_file *m,
 					zone->name, migratetype_names[mtype]);
 		for (order = 0; order < MAX_ORDER; ++order) {
-			seq_printf(m, "%6zu ", zone->free_area[order].nr_free[mtype]);
+			seq_printf(m, "%6zu ", zone->free_area[order].free[mtype].nr);
 			spin_unlock_irq(&zone->lock);
 			cond_resched();
 			spin_lock_irq(&zone->lock);

From patchwork Wed Oct 13 16:00:32 2021
From: Kent Overstreet <kent.overstreet@gmail.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    akpm@linux-foundation.org, linux-raid@vger.kernel.org,
    linux-block@vger.kernel.org, axboe@kernel.dk
Cc: Kent Overstreet <kent.overstreet@gmail.com>, alexander.h.duyck@linux.intel.com
Subject: [PATCH 3/5] mm/page_reporting: Improve control flow
Date: Wed, 13 Oct 2021 12:00:32 -0400
Message-Id: <20211013160034.3472923-4-kent.overstreet@gmail.com>
In-Reply-To: <20211013160034.3472923-1-kent.overstreet@gmail.com>
References: <20211013160034.3472923-1-kent.overstreet@gmail.com>

This splits out page_reporting_get_pages() from page_reporting_cycle(),
which is a considerable simplification and lets us delete some
duplicated code.

We're cleaning up code that touches page freelists as prep work for
possibly converting them to radix trees, but this is a worthy cleanup
on its own.
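One piece of the cycle that survives the refactor is the reporting budget, which caps device calls at roughly one per 16 * PAGE_REPORTING_CAPACITY free pages on the list. A userspace sketch of that arithmetic, using the kernel's DIV_ROUND_UP idiom; the capacity value here is illustrative, not necessarily the kernel's:

```c
#include <stddef.h>

#define PAGE_REPORTING_CAPACITY 32	/* illustrative power-of-2 value */

/* round-up integer division, as in include/linux/math.h */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* budget = how many reporting passes this freelist gets this cycle */
static long reporting_budget(size_t nr_free)
{
	return DIV_ROUND_UP(nr_free, PAGE_REPORTING_CAPACITY * 16);
}
```

Keeping the count in struct page_free_list is what lets the budget be computed without walking the list.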
Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
---
 mm/page_reporting.c | 154 ++++++++++++++++----------------------------
 1 file changed, 54 insertions(+), 100 deletions(-)

diff --git a/mm/page_reporting.c b/mm/page_reporting.c
index c4362b4b0c..ab2be13d8e 100644
--- a/mm/page_reporting.c
+++ b/mm/page_reporting.c
@@ -71,10 +71,8 @@ void __page_reporting_notify(void)
 
 static void
 page_reporting_drain(struct page_reporting_dev_info *prdev,
-		     struct scatterlist *sgl, unsigned int nents, bool reported)
+		     struct scatterlist *sg, bool reported)
 {
-	struct scatterlist *sg = sgl;
-
 	/*
 	 * Drain the now reported pages back into their respective
 	 * free lists/areas. We assume at least one page is populated.
@@ -100,9 +98,45 @@ page_reporting_drain(struct page_reporting_dev_info *prdev,
 		if (PageBuddy(page) && buddy_order(page) == order)
 			__SetPageReported(page);
 	} while ((sg = sg_next(sg)));
+}
+
+static int
+page_reporting_get_pages(struct page_reporting_dev_info *prdev, struct zone *zone,
+			 unsigned int order, unsigned int mt,
+			 struct scatterlist *sgl)
+{
+	struct page_free_list *list = &zone->free_area[order].free[mt];
+	unsigned int page_len = PAGE_SIZE << order;
+	struct page *page, *next;
+	unsigned nr_got = 0;
+
+	spin_lock_irq(&zone->lock);
+
+	/* loop through free list adding unreported pages to sg list */
+	list_for_each_entry_safe(page, next, &list->list, lru) {
+		/* We are going to skip over the reported pages. */
+		if (PageReported(page))
+			continue;
+
+		/* Attempt to pull page from list and place in scatterlist */
+		if (!__isolate_free_page(page, order)) {
+			next = page;
+			break;
+		}
+
+		sg_set_page(&sgl[nr_got++], page, page_len, 0);
+		if (nr_got == PAGE_REPORTING_CAPACITY)
+			break;
+	}
+
+	/* Rotate any leftover pages to the head of the freelist */
+	if (!list_entry_is_head(next, &list->list, lru) &&
+	    !list_is_first(&next->lru, &list->list))
+		list_rotate_to_front(&next->lru, &list->list);
+
+	spin_unlock_irq(&zone->lock);
 
-	/* reinitialize scatterlist now that it is empty */
-	sg_init_table(sgl, nents);
+	return nr_got;
 }
 
 /*
@@ -113,23 +147,13 @@ page_reporting_drain(struct page_reporting_dev_info *prdev,
 static int
 page_reporting_cycle(struct page_reporting_dev_info *prdev, struct zone *zone,
 		     unsigned int order, unsigned int mt,
-		     struct scatterlist *sgl, unsigned int *offset)
+		     struct scatterlist *sgl)
 {
 	struct page_free_list *list = &zone->free_area[order].free[mt];
-	unsigned int page_len = PAGE_SIZE << order;
-	struct page *page, *next;
+	unsigned nr_pages;
 	long budget;
 	int err = 0;
 
-	/*
-	 * Perform early check, if free area is empty there is
-	 * nothing to process so we can skip this free_list.
-	 */
-	if (list_empty(&list->list))
-		return err;
-
-	spin_lock_irq(&zone->lock);
-
 	/*
 	 * Limit how many calls we will be making to the page reporting
 	 * device for this list. By doing this we avoid processing any
@@ -146,80 +170,25 @@ page_reporting_cycle(struct page_reporting_dev_info *prdev, struct zone *zone,
 	 */
 	budget = DIV_ROUND_UP(list->nr, PAGE_REPORTING_CAPACITY * 16);
 
-	/* loop through free list adding unreported pages to sg list */
-	list_for_each_entry_safe(page, next, &list->list, lru) {
-		/* We are going to skip over the reported pages. */
-		if (PageReported(page))
-			continue;
+	while (budget > 0 && !err) {
+		sg_init_table(sgl, PAGE_REPORTING_CAPACITY);
 
-		/*
-		 * If we fully consumed our budget then update our
-		 * state to indicate that we are requesting additional
-		 * processing and exit this list.
-		 */
-		if (budget < 0) {
-			atomic_set(&prdev->state, PAGE_REPORTING_REQUESTED);
-			next = page;
+		nr_pages = page_reporting_get_pages(prdev, zone, order, mt, sgl);
+		if (!nr_pages)
 			break;
-		}
-
-		/* Attempt to pull page from list and place in scatterlist */
-		if (*offset) {
-			if (!__isolate_free_page(page, order)) {
-				next = page;
-				break;
-			}
-
-			/* Add page to scatter list */
-			--(*offset);
-			sg_set_page(&sgl[*offset], page, page_len, 0);
-
-			continue;
-		}
-
-		/*
-		 * Make the first non-reported page in the free list
-		 * the new head of the free list before we release the
-		 * zone lock.
-		 */
-		if (!list_is_first(&page->lru, &list->list))
-			list_rotate_to_front(&page->lru, &list->list);
-
-		/* release lock before waiting on report processing */
-		spin_unlock_irq(&zone->lock);
-
-		/* begin processing pages in local list */
-		err = prdev->report(prdev, sgl, PAGE_REPORTING_CAPACITY);
 
-		/* reset offset since the full list was reported */
-		*offset = PAGE_REPORTING_CAPACITY;
+		sg_mark_end(sgl + nr_pages);
 
-		/* update budget to reflect call to report function */
-		budget--;
+		budget -= nr_pages;
+		err = prdev->report(prdev, sgl, nr_pages);
 
-		/* reacquire zone lock and resume processing */
 		spin_lock_irq(&zone->lock);
-
-		/* flush reported pages from the sg list */
-		page_reporting_drain(prdev, sgl, PAGE_REPORTING_CAPACITY, !err);
-
-		/*
-		 * Reset next to first entry, the old next isn't valid
-		 * since we dropped the lock to report the pages
-		 */
-		next = list_first_entry(&list->list, struct page, lru);
-
-		/* exit on error */
-		if (err)
-			break;
+		page_reporting_drain(prdev, sgl, !err);
+		spin_unlock_irq(&zone->lock);
 	}
 
-	/* Rotate any leftover pages to the head of the freelist */
-	if (!list_entry_is_head(next, &list->list, lru) &&
-	    !list_is_first(&next->lru, &list->list))
-		list_rotate_to_front(&next->lru, &list->list);
-
-	spin_unlock_irq(&zone->lock);
+	if (budget <= 0 && list->nr)
+		atomic_set(&prdev->state, PAGE_REPORTING_REQUESTED);
 
 	return err;
 }
@@ -228,7 +197,7 @@ static int
 page_reporting_process_zone(struct page_reporting_dev_info *prdev,
 			    struct scatterlist *sgl, struct zone *zone)
 {
-	unsigned int order, mt, leftover, offset = PAGE_REPORTING_CAPACITY;
+	unsigned int order, mt;
 	unsigned long watermark;
 	int err = 0;
 
@@ -250,25 +219,12 @@ page_reporting_process_zone(struct page_reporting_dev_info *prdev,
 			if (is_migrate_isolate(mt))
 				continue;
 
-			err = page_reporting_cycle(prdev, zone, order, mt,
-						   sgl, &offset);
+			err = page_reporting_cycle(prdev, zone, order, mt, sgl);
 			if (err)
 				return err;
 		}
 	}
 
-	/* report the leftover pages before going idle */
-	leftover = PAGE_REPORTING_CAPACITY - offset;
-	if (leftover) {
-		sgl = &sgl[offset];
-		err = prdev->report(prdev, sgl, leftover);
-
-		/* flush any remaining pages out from the last report */
-		spin_lock_irq(&zone->lock);
-		page_reporting_drain(prdev, sgl, leftover, !err);
-		spin_unlock_irq(&zone->lock);
-	}
-
 	return err;
 }
 
@@ -294,8 +250,6 @@ static void page_reporting_process(struct work_struct *work)
 	if (!sgl)
 		goto err_out;
 
-	sg_init_table(sgl, PAGE_REPORTING_CAPACITY);
-
 	for_each_zone(zone) {
 		err = page_reporting_process_zone(prdev, sgl, zone);
 		if (err)

From patchwork Wed Oct 13 16:00:33 2021
From: Kent Overstreet <kent.overstreet@gmail.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, akpm@linux-foundation.org,
    linux-raid@vger.kernel.org, linux-block@vger.kernel.org, axboe@kernel.dk
Cc: Kent Overstreet, alexander.h.duyck@linux.intel.com
Subject: [PATCH 4/5] md: Kill usage of page->index
Date: Wed, 13 Oct 2021 12:00:33 -0400
Message-Id: <20211013160034.3472923-5-kent.overstreet@gmail.com>
In-Reply-To: <20211013160034.3472923-1-kent.overstreet@gmail.com>

As part of the struct page cleanups underway, we want to remove as much usage of page->mapping and page->index as possible, since they are frequently known from context, as they are here in the md bitmap code.
Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
---
 drivers/md/md-bitmap.c | 44 ++++++++++++++++++++----------------------
 1 file changed, 21 insertions(+), 23 deletions(-)

diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
index e29c6298ef..dcdb4597c5 100644
--- a/drivers/md/md-bitmap.c
+++ b/drivers/md/md-bitmap.c
@@ -165,10 +165,8 @@ static int read_sb_page(struct mddev *mddev, loff_t offset,
 		if (sync_page_io(rdev, target,
 				 roundup(size, bdev_logical_block_size(rdev->bdev)),
-				 page, REQ_OP_READ, 0, true)) {
-			page->index = index;
+				 page, REQ_OP_READ, 0, true))
 			return 0;
-		}
 	}
 	return -EIO;
 }
@@ -209,7 +207,8 @@ static struct md_rdev *next_active_rdev(struct md_rdev *rdev, struct mddev *mdde
 	return NULL;
 }
 
-static int write_sb_page(struct bitmap *bitmap, struct page *page, int wait)
+static int write_sb_page(struct bitmap *bitmap, struct page *page,
+			 unsigned long index, int wait)
 {
 	struct md_rdev *rdev;
 	struct block_device *bdev;
@@ -224,7 +223,7 @@ static int write_sb_page(struct bitmap *bitmap, struct page *page, int wait)
 
 		bdev = (rdev->meta_bdev) ? rdev->meta_bdev : rdev->bdev;
 
-		if (page->index == store->file_pages-1) {
+		if (index == store->file_pages-1) {
 			int last_page_size = store->bytes & (PAGE_SIZE-1);
 			if (last_page_size == 0)
 				last_page_size = PAGE_SIZE;
@@ -236,8 +235,7 @@ static int write_sb_page(struct bitmap *bitmap, struct page *page, int wait)
 		 */
 		if (mddev->external) {
 			/* Bitmap could be anywhere. */
-			if (rdev->sb_start + offset + (page->index
-						       * (PAGE_SIZE/512))
+			if (rdev->sb_start + offset + index * PAGE_SECTORS
 			    > rdev->data_offset
 			    &&
 			    rdev->sb_start + offset
@@ -247,7 +245,7 @@ static int write_sb_page(struct bitmap *bitmap, struct page *page, int wait)
 		} else if (offset < 0) {
 			/* DATA BITMAP METADATA */
 			if (offset
-			    + (long)(page->index * (PAGE_SIZE/512))
+			    + (long)(index * PAGE_SECTORS)
 			    + size/512 > 0)
 				/* bitmap runs in to metadata */
 				goto bad_alignment;
@@ -259,7 +257,7 @@ static int write_sb_page(struct bitmap *bitmap, struct page *page, int wait)
 			/* METADATA BITMAP DATA */
 			if (rdev->sb_start + offset
-			    + page->index*(PAGE_SIZE/512) + size/512
+			    + index * PAGE_SECTORS + size/512
 			    > rdev->data_offset)
 				/* bitmap runs in to data */
 				goto bad_alignment;
@@ -268,7 +266,7 @@ static int write_sb_page(struct bitmap *bitmap, struct page *page, int wait)
 		}
 		md_super_write(mddev, rdev,
 			       rdev->sb_start + offset
-			       + page->index * (PAGE_SIZE/512),
+			       + index * PAGE_SECTORS,
 			       size, page);
 	}
 
@@ -285,12 +283,13 @@ static void md_bitmap_file_kick(struct bitmap *bitmap);
 /*
  * write out a page to a file
  */
-static void write_page(struct bitmap *bitmap, struct page *page, int wait)
+static void write_page(struct bitmap *bitmap, struct page *page,
+		       unsigned long index, int wait)
 {
 	struct buffer_head *bh;
 
 	if (bitmap->storage.file == NULL) {
-		switch (write_sb_page(bitmap, page, wait)) {
+		switch (write_sb_page(bitmap, page, index, wait)) {
 		case -EINVAL:
 			set_bit(BITMAP_WRITE_ERROR, &bitmap->flags);
 		}
@@ -399,7 +398,6 @@ static int read_page(struct file *file, unsigned long index,
 		blk_cur++;
 		bh = bh->b_this_page;
 	}
-	page->index = index;
 
 	wait_event(bitmap->write_wait,
 		   atomic_read(&bitmap->pending_writes)==0);
@@ -472,7 +470,7 @@ void md_bitmap_update_sb(struct bitmap *bitmap)
 	sb->sectors_reserved = cpu_to_le32(bitmap->mddev->
 					   bitmap_info.space);
 	kunmap_atomic(sb);
-	write_page(bitmap, bitmap->storage.sb_page, 1);
+	write_page(bitmap, bitmap->storage.sb_page, 0, 1);
 }
 EXPORT_SYMBOL(md_bitmap_update_sb);
@@ -524,7 +522,6 @@ static int md_bitmap_new_disk_sb(struct bitmap *bitmap)
 	bitmap->storage.sb_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
 	if (bitmap->storage.sb_page == NULL)
 		return -ENOMEM;
-	bitmap->storage.sb_page->index = 0;
 
 	sb = kmap_atomic(bitmap->storage.sb_page);
@@ -802,7 +799,6 @@ static int md_bitmap_storage_alloc(struct bitmap_storage *store,
 	if (store->sb_page) {
 		store->filemap[0] = store->sb_page;
 		pnum = 1;
-		store->sb_page->index = offset;
 	}
 
 	for ( ; pnum < num_pages; pnum++) {
@@ -929,6 +925,7 @@ static void md_bitmap_file_set_bit(struct bitmap *bitmap, sector_t block)
 	unsigned long chunk = block >> bitmap->counts.chunkshift;
 	struct bitmap_storage *store = &bitmap->storage;
 	unsigned long node_offset = 0;
+	unsigned long index = file_page_index(store, chunk);
 
 	if (mddev_is_clustered(bitmap->mddev))
 		node_offset = bitmap->cluster_slot * store->file_pages;
@@ -945,9 +942,9 @@ static void md_bitmap_file_set_bit(struct bitmap *bitmap, sector_t block)
 	else
 		set_bit_le(bit, kaddr);
 	kunmap_atomic(kaddr);
-	pr_debug("set file bit %lu page %lu\n", bit, page->index);
+	pr_debug("set file bit %lu page %lu\n", bit, index);
 	/* record page number so it gets flushed to disk when unplug occurs */
-	set_page_attr(bitmap, page->index - node_offset, BITMAP_PAGE_DIRTY);
+	set_page_attr(bitmap, index - node_offset, BITMAP_PAGE_DIRTY);
 }
 
 static void md_bitmap_file_clear_bit(struct bitmap *bitmap, sector_t block)
@@ -958,6 +955,7 @@ static void md_bitmap_file_clear_bit(struct bitmap *bitmap, sector_t block)
 	unsigned long chunk = block >> bitmap->counts.chunkshift;
 	struct bitmap_storage *store = &bitmap->storage;
 	unsigned long node_offset = 0;
+	unsigned long index = file_page_index(store, chunk);
 
 	if (mddev_is_clustered(bitmap->mddev))
 		node_offset = bitmap->cluster_slot * store->file_pages;
@@ -972,8 +970,8 @@ static void md_bitmap_file_clear_bit(struct bitmap *bitmap, sector_t block)
 	else
 		clear_bit_le(bit, paddr);
 	kunmap_atomic(paddr);
-	if (!test_page_attr(bitmap, page->index - node_offset, BITMAP_PAGE_NEEDWRITE)) {
-		set_page_attr(bitmap, page->index - node_offset, BITMAP_PAGE_PENDING);
+	if (!test_page_attr(bitmap, index - node_offset, BITMAP_PAGE_NEEDWRITE)) {
+		set_page_attr(bitmap, index - node_offset, BITMAP_PAGE_PENDING);
 		bitmap->allclean = 0;
 	}
 }
@@ -1027,7 +1025,7 @@ void md_bitmap_unplug(struct bitmap *bitmap)
 					   "md bitmap_unplug");
 		}
 		clear_page_attr(bitmap, i, BITMAP_PAGE_PENDING);
-		write_page(bitmap, bitmap->storage.filemap[i], 0);
+		write_page(bitmap, bitmap->storage.filemap[i], i, 0);
 		writing = 1;
 	}
 }
@@ -1137,7 +1135,7 @@ static int md_bitmap_init_from_disk(struct bitmap *bitmap, sector_t start)
 			memset(paddr + offset, 0xff,
 			       PAGE_SIZE - offset);
 			kunmap_atomic(paddr);
-			write_page(bitmap, page, 1);
+			write_page(bitmap, page, index, 1);
 
 			ret = -EIO;
 			if (test_bit(BITMAP_WRITE_ERROR,
@@ -1336,7 +1334,7 @@ void md_bitmap_daemon_work(struct mddev *mddev)
 			if (bitmap->storage.filemap &&
 			    test_and_clear_page_attr(bitmap, j,
 						     BITMAP_PAGE_NEEDWRITE)) {
-				write_page(bitmap, bitmap->storage.filemap[j], 0);
+				write_page(bitmap, bitmap->storage.filemap[j], j, 0);
 			}
 		}

From patchwork Wed Oct 13 16:00:34 2021
From: Kent Overstreet <kent.overstreet@gmail.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, akpm@linux-foundation.org,
    linux-raid@vger.kernel.org, linux-block@vger.kernel.org, axboe@kernel.dk
Cc: Kent Overstreet, alexander.h.duyck@linux.intel.com
Subject: [PATCH 5/5] brd: Kill usage of page->index
Date: Wed, 13 Oct 2021 12:00:34 -0400
Message-Id: <20211013160034.3472923-6-kent.overstreet@gmail.com>
In-Reply-To: <20211013160034.3472923-1-kent.overstreet@gmail.com>
As part of the struct page cleanups underway, we want to remove as much usage of page->mapping and page->index as possible, as frequently they are known from context. In the brd code, we're never actually reading from page->index except in assertions, so references to it can be safely deleted.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
Reviewed-by: Matthew Wilcox (Oracle)
Reviewed-by: David Hildenbrand
---
 drivers/block/brd.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/drivers/block/brd.c b/drivers/block/brd.c
index 58ec167aa0..0a55aed832 100644
--- a/drivers/block/brd.c
+++ b/drivers/block/brd.c
@@ -72,8 +72,6 @@ static struct page *brd_lookup_page(struct brd_device *brd, sector_t sector)
 	page = radix_tree_lookup(&brd->brd_pages, idx);
 	rcu_read_unlock();
 
-	BUG_ON(page && page->index != idx);
-
 	return page;
 }
 
@@ -108,12 +106,10 @@ static struct page *brd_insert_page(struct brd_device *brd, sector_t sector)
 	spin_lock(&brd->brd_lock);
 	idx = sector >> PAGE_SECTORS_SHIFT;
-	page->index = idx;
 	if (radix_tree_insert(&brd->brd_pages, idx, page)) {
 		__free_page(page);
 		page = radix_tree_lookup(&brd->brd_pages, idx);
 		BUG_ON(!page);
-		BUG_ON(page->index != idx);
 	} else {
 		brd->brd_nr_pages++;
 	}