From patchwork Fri Jun 7 09:09:36 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: David Hildenbrand <david@redhat.com>
X-Patchwork-Id: 13689520
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-hyperv@vger.kernel.org,
    virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
    kasan-dev@googlegroups.com, David Hildenbrand, Andrew Morton,
    Mike Rapoport, Oscar Salvador, "K. Y. Srinivasan", Haiyang Zhang,
    Wei Liu, Dexuan Cui, "Michael S. Tsirkin", Jason Wang, Xuan Zhuo,
    Eugenio Pérez, Juergen Gross, Stefano Stabellini,
    Oleksandr Tyshchenko, Alexander Potapenko, Marco Elver, Dmitry Vyukov
Subject: [PATCH v1 1/3] mm: pass meminit_context to __free_pages_core()
Date: Fri, 7 Jun 2024 11:09:36 +0200
Message-ID: <20240607090939.89524-2-david@redhat.com>
In-Reply-To: <20240607090939.89524-1-david@redhat.com>
References: <20240607090939.89524-1-david@redhat.com>
MIME-Version: 1.0

In preparation for further changes, let's teach __free_pages_core() about
the differences of memory hotplug handling.

Move the memory hotplug specific handling from generic_online_page() to
__free_pages_core(), use adjust_managed_page_count() on the memory hotplug
path, and spell out why memory freed via memblock cannot currently use
adjust_managed_page_count().
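To recap the accounting the last paragraph refers to: the onlining path
wants adjust_managed_page_count() because it must bump both the zone's
managed_pages counter and the global totalram_pages, whereas the memblock
path only touches managed_pages, since memblock already accounted those
pages in totalram ahead of time. Roughly, the existing helper in
mm/page_alloc.c boils down to this simplified sketch (the upstream version
additionally updates totalhigh_pages for highmem pages under
CONFIG_HIGHMEM):

void adjust_managed_page_count(struct page *page, long count)
{
	/* per-zone accounting */
	atomic_long_add(count, &page_zone(page)->managed_pages);
	/* global accounting; the memblock path must skip this */
	totalram_pages_add(count);
}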
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/internal.h       |  3 ++-
 mm/kmsan/init.c     |  2 +-
 mm/memory_hotplug.c |  9 +--------
 mm/mm_init.c        |  4 ++--
 mm/page_alloc.c     | 17 +++++++++++++++--
 5 files changed, 21 insertions(+), 14 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 12e95fdf61e90..3fdee779205ab 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -604,7 +604,8 @@ extern void __putback_isolated_page(struct page *page, unsigned int order,
 					  int mt);
 extern void memblock_free_pages(struct page *page, unsigned long pfn,
 					unsigned int order);
-extern void __free_pages_core(struct page *page, unsigned int order);
+extern void __free_pages_core(struct page *page, unsigned int order,
+			      enum meminit_context);
 
 /*
  * This will have no effect, other than possibly generating a warning, if the
diff --git a/mm/kmsan/init.c b/mm/kmsan/init.c
index 3ac3b8921d36f..ca79636f858e5 100644
--- a/mm/kmsan/init.c
+++ b/mm/kmsan/init.c
@@ -172,7 +172,7 @@ static void do_collection(void)
 		shadow = smallstack_pop(&collect);
 		origin = smallstack_pop(&collect);
 		kmsan_setup_meta(page, shadow, origin, collect.order);
-		__free_pages_core(page, collect.order);
+		__free_pages_core(page, collect.order, MEMINIT_EARLY);
 	}
 }
 
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 171ad975c7cfd..27e3be75edcf7 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -630,14 +630,7 @@ EXPORT_SYMBOL_GPL(restore_online_page_callback);
 
 void generic_online_page(struct page *page, unsigned int order)
 {
-	/*
-	 * Freeing the page with debug_pagealloc enabled will try to unmap it,
-	 * so we should map it first. This is better than introducing a special
-	 * case in page freeing fast path.
-	 */
-	debug_pagealloc_map_pages(page, 1 << order);
-	__free_pages_core(page, order);
-	totalram_pages_add(1UL << order);
+	__free_pages_core(page, order, MEMINIT_HOTPLUG);
 }
 EXPORT_SYMBOL_GPL(generic_online_page);
 
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 019193b0d8703..feb5b6e8c8875 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -1938,7 +1938,7 @@ static void __init deferred_free_range(unsigned long pfn,
 	for (i = 0; i < nr_pages; i++, page++, pfn++) {
 		if (pageblock_aligned(pfn))
 			set_pageblock_migratetype(page, MIGRATE_MOVABLE);
-		__free_pages_core(page, 0);
+		__free_pages_core(page, 0, MEMINIT_EARLY);
 	}
 }
 
@@ -2513,7 +2513,7 @@ void __init memblock_free_pages(struct page *page, unsigned long pfn,
 		}
 	}
 
-	__free_pages_core(page, order);
+	__free_pages_core(page, order, MEMINIT_EARLY);
 }
 
 DEFINE_STATIC_KEY_MAYBE(CONFIG_INIT_ON_ALLOC_DEFAULT_ON, init_on_alloc);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2224965ada468..e0c8a8354be36 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1214,7 +1214,8 @@ static void __free_pages_ok(struct page *page, unsigned int order,
 	__count_vm_events(PGFREE, 1 << order);
 }
 
-void __free_pages_core(struct page *page, unsigned int order)
+void __free_pages_core(struct page *page, unsigned int order,
+		       enum meminit_context context)
 {
 	unsigned int nr_pages = 1 << order;
 	struct page *p = page;
@@ -1234,7 +1235,19 @@ void __free_pages_core(struct page *page, unsigned int order)
 	__ClearPageReserved(p);
 	set_page_count(p, 0);
 
-	atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
+	if (IS_ENABLED(CONFIG_MEMORY_HOTPLUG) &&
+	    unlikely(context == MEMINIT_HOTPLUG)) {
+		/*
+		 * Freeing the page with debug_pagealloc enabled will try to
+		 * unmap it; some archs don't like double-unmappings, so
+		 * map it first.
+		 */
+		debug_pagealloc_map_pages(page, nr_pages);
+		adjust_managed_page_count(page, nr_pages);
+	} else {
+		/* memblock adjusts totalram_pages() ahead of time. */
+		atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
+	}
 
 	if (page_contains_unaccepted(page, order)) {
 		if (order == MAX_PAGE_ORDER && __free_unaccepted(page))
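For reference, the context argument threaded through above is the
pre-existing enum from include/linux/mmzone.h: boot-time/memblock callers
pass MEMINIT_EARLY (as in the mm/kmsan/init.c and mm/mm_init.c hunks),
while the onlining path passes MEMINIT_HOTPLUG via generic_online_page():

/* include/linux/mmzone.h */
enum meminit_context {
	MEMINIT_EARLY,
	MEMINIT_HOTPLUG,
};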