From patchwork Thu Mar 6 22:44:52 2025
X-Patchwork-Submitter: Luiz Capitulino <luizcap@redhat.com>
X-Patchwork-Id: 14005456
From: Luiz Capitulino <luizcap@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, david@redhat.com, yuzhao@google.com, pasha.tatashin@soleen.com
Cc: akpm@linux-foundation.org, hannes@cmpxchg.org, muchun.song@linux.dev, luizcap@redhat.com
Subject: [PATCH v3 3/3] mm: page_owner: use new iteration API
Date: Thu, 6 Mar 2025 17:44:52 -0500
Message-ID: <93c80b040960fa2ebab4a9729073f77a30649862.1741301089.git.luizcap@redhat.com>
In-Reply-To:
References:

The page_ext_next() function assumes that page extension objects for a
page order allocation always reside in the same memory section, which
may not be true and could lead to crashes. Use the new page_ext
iteration API instead.
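The conversion is mechanical: each open-coded page_ext_next() walk over the
pages of an allocation becomes a for_each_page_ext() loop under
rcu_read_lock(), and the surrounding page_ext_get()/page_ext_put() reference
pair goes away. A condensed before/after sketch of the pattern applied
throughout this patch (the "..." stands for the per-page page_owner updates):

    /* before: page_ext_next() breaks if the range crosses a memory section */
    page_ext = page_ext_get(page);
    for (i = 0; i < (1 << order); i++) {
            page_owner = get_page_owner(page_ext);
            ...
            page_ext = page_ext_next(page_ext);
    }
    page_ext_put(page_ext);

    /* after: the iterator follows section boundaries; RCU protects the lookup */
    rcu_read_lock();
    for_each_page_ext(page, 1 << order, page_ext, iter) {
            page_owner = get_page_owner(page_ext);
            ...
    }
    rcu_read_unlock();

page_ext_get()/page_ext_put() is kept only where a single page_ext is read
outside a loop, e.g. to fetch the allocation handle in __reset_page_owner().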
Fixes: cf54f310d0d3 ("mm/hugetlb: use __GFP_COMP for gigantic folios")
Signed-off-by: Luiz Capitulino <luizcap@redhat.com>
Acked-by: David Hildenbrand <david@redhat.com>
---
 mm/page_owner.c | 84 +++++++++++++++++++++++--------------------------
 1 file changed, 39 insertions(+), 45 deletions(-)

diff --git a/mm/page_owner.c b/mm/page_owner.c
index 2d6360eaccbb6..65adc66582d82 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -229,17 +229,19 @@ static void dec_stack_record_count(depot_stack_handle_t handle,
                             handle);
 }
 
-static inline void __update_page_owner_handle(struct page_ext *page_ext,
+static inline void __update_page_owner_handle(struct page *page,
                                               depot_stack_handle_t handle,
                                               unsigned short order,
                                               gfp_t gfp_mask,
                                               short last_migrate_reason, u64 ts_nsec,
                                               pid_t pid, pid_t tgid, char *comm)
 {
-        int i;
+        struct page_ext_iter iter;
+        struct page_ext *page_ext;
         struct page_owner *page_owner;
 
-        for (i = 0; i < (1 << order); i++) {
+        rcu_read_lock();
+        for_each_page_ext(page, 1 << order, page_ext, iter) {
                 page_owner = get_page_owner(page_ext);
                 page_owner->handle = handle;
                 page_owner->order = order;
@@ -252,20 +254,22 @@ static inline void __update_page_owner_handle(struct page_ext *page_ext,
                         sizeof(page_owner->comm));
                 __set_bit(PAGE_EXT_OWNER, &page_ext->flags);
                 __set_bit(PAGE_EXT_OWNER_ALLOCATED, &page_ext->flags);
-                page_ext = page_ext_next(page_ext);
         }
+        rcu_read_unlock();
 }
 
-static inline void __update_page_owner_free_handle(struct page_ext *page_ext,
+static inline void __update_page_owner_free_handle(struct page *page,
                                                     depot_stack_handle_t handle,
                                                     unsigned short order,
                                                     pid_t pid, pid_t tgid,
                                                     u64 free_ts_nsec)
 {
-        int i;
+        struct page_ext_iter iter;
+        struct page_ext *page_ext;
         struct page_owner *page_owner;
 
-        for (i = 0; i < (1 << order); i++) {
+        rcu_read_lock();
+        for_each_page_ext(page, 1 << order, page_ext, iter) {
                 page_owner = get_page_owner(page_ext);
                 /* Only __reset_page_owner() wants to clear the bit */
                 if (handle) {
@@ -275,8 +279,8 @@ static inline void __update_page_owner_free_handle(struct page_ext *page_ext,
                 page_owner->free_ts_nsec = free_ts_nsec;
                 page_owner->free_pid = current->pid;
                 page_owner->free_tgid = current->tgid;
-                page_ext = page_ext_next(page_ext);
         }
+        rcu_read_unlock();
 }
 
 void __reset_page_owner(struct page *page, unsigned short order)
@@ -293,11 +297,11 @@ void __reset_page_owner(struct page *page, unsigned short order)
 
         page_owner = get_page_owner(page_ext);
         alloc_handle = page_owner->handle;
+        page_ext_put(page_ext);
 
         handle = save_stack(GFP_NOWAIT | __GFP_NOWARN);
-        __update_page_owner_free_handle(page_ext, handle, order, current->pid,
+        __update_page_owner_free_handle(page, handle, order, current->pid,
                                         current->tgid, free_ts_nsec);
-        page_ext_put(page_ext);
 
         if (alloc_handle != early_handle)
                 /*
@@ -313,19 +317,13 @@ noinline void __set_page_owner(struct page *page, unsigned short order,
                                         gfp_t gfp_mask)
 {
-        struct page_ext *page_ext;
         u64 ts_nsec = local_clock();
         depot_stack_handle_t handle;
 
         handle = save_stack(gfp_mask);
-
-        page_ext = page_ext_get(page);
-        if (unlikely(!page_ext))
-                return;
-        __update_page_owner_handle(page_ext, handle, order, gfp_mask, -1,
+        __update_page_owner_handle(page, handle, order, gfp_mask, -1,
                                    ts_nsec, current->pid, current->tgid,
                                    current->comm);
-        page_ext_put(page_ext);
 
         inc_stack_record_count(handle, gfp_mask, 1 << order);
 }
@@ -344,44 +342,42 @@ void __set_page_owner_migrate_reason(struct page *page, int reason)
 
 void __split_page_owner(struct page *page, int old_order, int new_order)
 {
-        int i;
-        struct page_ext *page_ext = page_ext_get(page);
+        struct page_ext_iter iter;
+        struct page_ext *page_ext;
         struct page_owner *page_owner;
 
-        if (unlikely(!page_ext))
-                return;
-
-        for (i = 0; i < (1 << old_order); i++) {
+        rcu_read_lock();
+        for_each_page_ext(page, 1 << old_order, page_ext, iter) {
                 page_owner = get_page_owner(page_ext);
                 page_owner->order = new_order;
-                page_ext = page_ext_next(page_ext);
         }
-        page_ext_put(page_ext);
+        rcu_read_unlock();
 }
 
 void __folio_copy_owner(struct folio *newfolio, struct folio *old)
 {
-        int i;
-        struct page_ext *old_ext;
-        struct page_ext *new_ext;
+        struct page_ext *page_ext;
+        struct page_ext_iter iter;
         struct page_owner *old_page_owner;
         struct page_owner *new_page_owner;
         depot_stack_handle_t migrate_handle;
 
-        old_ext = page_ext_get(&old->page);
-        if (unlikely(!old_ext))
+        page_ext = page_ext_get(&old->page);
+        if (unlikely(!page_ext))
                 return;
 
-        new_ext = page_ext_get(&newfolio->page);
-        if (unlikely(!new_ext)) {
-                page_ext_put(old_ext);
+        old_page_owner = get_page_owner(page_ext);
+        page_ext_put(page_ext);
+
+        page_ext = page_ext_get(&newfolio->page);
+        if (unlikely(!page_ext))
                 return;
-        }
 
-        old_page_owner = get_page_owner(old_ext);
-        new_page_owner = get_page_owner(new_ext);
+        new_page_owner = get_page_owner(page_ext);
+        page_ext_put(page_ext);
+
         migrate_handle = new_page_owner->handle;
-        __update_page_owner_handle(new_ext, old_page_owner->handle,
+        __update_page_owner_handle(&newfolio->page, old_page_owner->handle,
                                    old_page_owner->order, old_page_owner->gfp_mask,
                                    old_page_owner->last_migrate_reason,
                                    old_page_owner->ts_nsec, old_page_owner->pid,
@@ -391,7 +387,7 @@ void __folio_copy_owner(struct folio *newfolio, struct folio *old)
          * will be freed after migration. Keep them until then as they may be
          * useful.
          */
-        __update_page_owner_free_handle(new_ext, 0, old_page_owner->order,
+        __update_page_owner_free_handle(&newfolio->page, 0, old_page_owner->order,
                                         old_page_owner->free_pid,
                                         old_page_owner->free_tgid,
                                         old_page_owner->free_ts_nsec);
@@ -400,14 +396,12 @@ void __folio_copy_owner(struct folio *newfolio, struct folio *old)
          * for the new one and the old folio otherwise there will be an imbalance
          * when subtracting those pages from the stack.
          */
-        for (i = 0; i < (1 << new_page_owner->order); i++) {
+        rcu_read_lock();
+        for_each_page_ext(&old->page, 1 << new_page_owner->order, page_ext, iter) {
+                old_page_owner = get_page_owner(page_ext);
                 old_page_owner->handle = migrate_handle;
-                old_ext = page_ext_next(old_ext);
-                old_page_owner = get_page_owner(old_ext);
         }
-
-        page_ext_put(new_ext);
-        page_ext_put(old_ext);
+        rcu_read_unlock();
 }
 
 void pagetypeinfo_showmixedcount_print(struct seq_file *m,
@@ -813,7 +807,7 @@ static void init_pages_in_zone(pg_data_t *pgdat, struct zone *zone)
                                 goto ext_put_continue;
 
                         /* Found early allocated page */
-                        __update_page_owner_handle(page_ext, early_handle, 0, 0,
+                        __update_page_owner_handle(page, early_handle, 0, 0,
                                                    -1, local_clock(), current->pid,
                                                    current->tgid, current->comm);
                         count++;