From patchwork Fri Jan 24 03:56:51 2025
X-Patchwork-Submitter: Alexei Starovoitov
X-Patchwork-Id: 13948880
From: Alexei Starovoitov <alexei.starovoitov@gmail.com>
To: bpf@vger.kernel.org
Cc: andrii@kernel.org, memxor@gmail.com, akpm@linux-foundation.org,
	peterz@infradead.org, vbabka@suse.cz, bigeasy@linutronix.de,
	rostedt@goodmis.org, houtao1@huawei.com, hannes@cmpxchg.org,
	shakeel.butt@linux.dev, mhocko@suse.com, willy@infradead.org,
	tglx@linutronix.de, jannh@google.com, tj@kernel.org,
	linux-mm@kvack.org, kernel-team@fb.com
Subject: [PATCH bpf-next v6 2/6] mm, bpf: Introduce free_pages_nolock()
Date: Thu, 23 Jan 2025 19:56:51 -0800
Message-Id: <20250124035655.78899-3-alexei.starovoitov@gmail.com>
In-Reply-To: <20250124035655.78899-1-alexei.starovoitov@gmail.com>
References: <20250124035655.78899-1-alexei.starovoitov@gmail.com>
MIME-Version: 1.0
From: Alexei Starovoitov

Introduce free_pages_nolock(), which can free pages without taking locks.
It relies on trylock only and can therefore be called from any context.
Since spin_trylock() cannot be used from hard IRQ or NMI context on
PREEMPT_RT, in that case the pages are stashed on a lockless linked list
(llist) and freed later by a subsequent free_pages() call from a context
where locking is allowed.

Do not use the llist unconditionally: BPF maps continuously allocate and
free pages, so freeing cannot always be deferred to the llist. When memory
becomes free, make it available to the kernel and BPF users right away
whenever possible, and fall back to the llist only as a last resort.
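[Editor's sketch, not part of the patch: a minimal userspace model of the trylock-or-defer idea described above. All names here (fake_zone, fake_page, free_page_nolock) are invented stand-ins; the real code uses zone->lock, llist_add(), and zone->trylock_free_pages.]

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Stand-in for struct page reusing its free-list space for an llist node. */
struct fake_page {
	struct fake_page *next;	/* models page->pcp_llist */
	unsigned int order;	/* models page->order, remembered for later */
};

/* Stand-in for struct zone. */
struct fake_zone {
	pthread_mutex_t lock;		/* models zone->lock */
	struct fake_page *deferred;	/* models zone->trylock_free_pages */
	unsigned long freed;		/* pages returned to the "buddy" */
};

/* Models add_page_to_zone_llist(): remember the order, push the page. */
static void stash_page(struct fake_zone *z, struct fake_page *p, unsigned int order)
{
	p->order = order;
	p->next = z->deferred;	/* single-threaded here; real code uses llist_add() */
	z->deferred = p;
}

/* Models free_one_page() with FPI_TRYLOCK: never block on the zone lock. */
void free_page_nolock(struct fake_zone *z, struct fake_page *p, unsigned int order)
{
	if (pthread_mutex_trylock(&z->lock) != 0) {
		stash_page(z, p, order);	/* lock busy: defer the free */
		return;
	}
	z->freed += 1UL << order;		/* lock taken: free directly */
	pthread_mutex_unlock(&z->lock);
}
```

When the lock is free the page is returned immediately; only under contention does the page land on the deferred list, matching the "llist as the last resort" policy of the commit message.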
Acked-by: Vlastimil Babka
Acked-by: Sebastian Andrzej Siewior
Signed-off-by: Alexei Starovoitov
---
 include/linux/gfp.h      |  1 +
 include/linux/mm_types.h |  4 ++
 include/linux/mmzone.h   |  3 ++
 lib/stackdepot.c         |  5 ++-
 mm/page_alloc.c          | 90 +++++++++++++++++++++++++++++++++++-----
 mm/page_owner.c          |  8 +++-
 6 files changed, 98 insertions(+), 13 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 82bfb65b8d15..a8233d09acfa 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -391,6 +391,7 @@ __meminit void *alloc_pages_exact_nid_noprof(int nid, size_t size, gfp_t gfp_mas
 	__get_free_pages((gfp_mask) | GFP_DMA, (order))

 extern void __free_pages(struct page *page, unsigned int order);
+extern void free_pages_nolock(struct page *page, unsigned int order);
 extern void free_pages(unsigned long addr, unsigned int order);

 #define __free_page(page) __free_pages((page), 0)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 825c04b56403..583bf59e2627 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -99,6 +99,10 @@ struct page {
 				/* Or, free page */
 				struct list_head buddy_list;
 				struct list_head pcp_list;
+				struct {
+					struct llist_node pcp_llist;
+					unsigned int order;
+				};
 			};
 			/* See page-flags.h for PAGE_MAPPING_FLAGS */
 			struct address_space *mapping;
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index b36124145a16..1a854e0a9e3b 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -953,6 +953,9 @@ struct zone {
 	/* Primarily protects free_area */
 	spinlock_t lock;

+	/* Pages to be freed when next trylock succeeds */
+	struct llist_head trylock_free_pages;
+
 	/* Write-intensive fields used by compaction and vmstats. */
 	CACHELINE_PADDING(_pad2_);
diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 377194969e61..73d7b50924ef 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -672,7 +672,10 @@ depot_stack_handle_t stack_depot_save_flags(unsigned long *entries,
 exit:
 	if (prealloc) {
 		/* Stack depot didn't use this memory, free it. */
-		free_pages((unsigned long)prealloc, DEPOT_POOL_ORDER);
+		if (!allow_spin)
+			free_pages_nolock(virt_to_page(prealloc), DEPOT_POOL_ORDER);
+		else
+			free_pages((unsigned long)prealloc, DEPOT_POOL_ORDER);
 	}
 	if (found)
 		handle = found->handle.handle;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a82bc67abbdb..fa750c46e0fc 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -88,6 +88,9 @@ typedef int __bitwise fpi_t;
  */
 #define FPI_TO_TAIL		((__force fpi_t)BIT(1))

+/* Free the page without taking locks. Rely on trylock only. */
+#define FPI_TRYLOCK		((__force fpi_t)BIT(2))
+
 /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
 static DEFINE_MUTEX(pcp_batch_high_lock);
 #define MIN_PERCPU_PAGELIST_HIGH_FRACTION (8)
@@ -1249,13 +1252,44 @@ static void split_large_buddy(struct zone *zone, struct page *page,
 	} while (1);
 }

+static void add_page_to_zone_llist(struct zone *zone, struct page *page,
+				   unsigned int order)
+{
+	/* Remember the order */
+	page->order = order;
+	/* Add the page to the free list */
+	llist_add(&page->pcp_llist, &zone->trylock_free_pages);
+}
+
 static void free_one_page(struct zone *zone, struct page *page,
 			  unsigned long pfn, unsigned int order,
 			  fpi_t fpi_flags)
 {
+	struct llist_head *llhead;
 	unsigned long flags;

-	spin_lock_irqsave(&zone->lock, flags);
+	if (!spin_trylock_irqsave(&zone->lock, flags)) {
+		if (unlikely(fpi_flags & FPI_TRYLOCK)) {
+			add_page_to_zone_llist(zone, page, order);
+			return;
+		}
+		spin_lock_irqsave(&zone->lock, flags);
+	}
+
+	/* The lock succeeded. Process deferred pages. */
+	llhead = &zone->trylock_free_pages;
+	if (unlikely(!llist_empty(llhead) && !(fpi_flags & FPI_TRYLOCK))) {
+		struct llist_node *llnode;
+		struct page *p, *tmp;
+
+		llnode = llist_del_all(llhead);
+		llist_for_each_entry_safe(p, tmp, llnode, pcp_llist) {
+			unsigned int p_order = p->order;
+
+			split_large_buddy(zone, p, page_to_pfn(p), p_order, fpi_flags);
+			__count_vm_events(PGFREE, 1 << p_order);
+		}
+	}
 	split_large_buddy(zone, page, pfn, order, fpi_flags);
 	spin_unlock_irqrestore(&zone->lock, flags);
@@ -2598,7 +2632,7 @@ static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone,

 static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
 				   struct page *page, int migratetype,
-				   unsigned int order)
+				   unsigned int order, fpi_t fpi_flags)
 {
 	int high, batch;
 	int pindex;
@@ -2633,6 +2667,14 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
 	}
 	if (pcp->free_count < (batch << CONFIG_PCP_BATCH_SCALE_MAX))
 		pcp->free_count += (1 << order);
+
+	if (unlikely(fpi_flags & FPI_TRYLOCK)) {
+		/*
+		 * Do not attempt to take a zone lock. Let pcp->count get
+		 * over high mark temporarily.
+		 */
+		return;
+	}
 	high = nr_pcp_high(pcp, zone, batch, free_high);
 	if (pcp->count >= high) {
 		free_pcppages_bulk(zone, nr_pcp_free(pcp, batch, high, free_high),
@@ -2647,7 +2689,8 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
 /*
  * Free a pcp page
  */
-void free_unref_page(struct page *page, unsigned int order)
+static void __free_unref_page(struct page *page, unsigned int order,
+			      fpi_t fpi_flags)
 {
 	unsigned long __maybe_unused UP_flags;
 	struct per_cpu_pages *pcp;
@@ -2656,7 +2699,7 @@ void free_unref_page(struct page *page, unsigned int order)
 	int migratetype;

 	if (!pcp_allowed_order(order)) {
-		__free_pages_ok(page, order, FPI_NONE);
+		__free_pages_ok(page, order, fpi_flags);
 		return;
 	}
@@ -2673,24 +2716,34 @@ void free_unref_page(struct page *page, unsigned int order)
 	migratetype = get_pfnblock_migratetype(page, pfn);
 	if (unlikely(migratetype >= MIGRATE_PCPTYPES)) {
 		if (unlikely(is_migrate_isolate(migratetype))) {
-			free_one_page(page_zone(page), page, pfn, order, FPI_NONE);
+			free_one_page(page_zone(page), page, pfn, order, fpi_flags);
 			return;
 		}
 		migratetype = MIGRATE_MOVABLE;
 	}

 	zone = page_zone(page);
+	if (unlikely((fpi_flags & FPI_TRYLOCK) && IS_ENABLED(CONFIG_PREEMPT_RT)
+	    && (in_nmi() || in_hardirq()))) {
+		add_page_to_zone_llist(zone, page, order);
+		return;
+	}
 	pcp_trylock_prepare(UP_flags);
 	pcp = pcp_spin_trylock(zone->per_cpu_pageset);
 	if (pcp) {
-		free_unref_page_commit(zone, pcp, page, migratetype, order);
+		free_unref_page_commit(zone, pcp, page, migratetype, order, fpi_flags);
 		pcp_spin_unlock(pcp);
 	} else {
-		free_one_page(zone, page, pfn, order, FPI_NONE);
+		free_one_page(zone, page, pfn, order, fpi_flags);
 	}
 	pcp_trylock_finish(UP_flags);
 }

+void free_unref_page(struct page *page, unsigned int order)
+{
+	__free_unref_page(page, order, FPI_NONE);
+}
+
 /*
  * Free a batch of folios
  */
@@ -2779,7 +2832,7 @@ void free_unref_folios(struct folio_batch *folios)

 		trace_mm_page_free_batched(&folio->page);
 		free_unref_page_commit(zone, pcp, &folio->page, migratetype,
-				       order);
+				       order, FPI_NONE);
 	}

 	if (pcp) {
@@ -4843,22 +4896,37 @@ EXPORT_SYMBOL(get_zeroed_page_noprof);
  * Context: May be called in interrupt context or while holding a normal
  * spinlock, but not in NMI context or while holding a raw spinlock.
  */
-void __free_pages(struct page *page, unsigned int order)
+static void ___free_pages(struct page *page, unsigned int order,
+			  fpi_t fpi_flags)
 {
 	/* get PageHead before we drop reference */
 	int head = PageHead(page);
 	struct alloc_tag *tag = pgalloc_tag_get(page);

 	if (put_page_testzero(page))
-		free_unref_page(page, order);
+		__free_unref_page(page, order, fpi_flags);
 	else if (!head) {
 		pgalloc_tag_sub_pages(tag, (1 << order) - 1);
 		while (order-- > 0)
-			free_unref_page(page + (1 << order), order);
+			__free_unref_page(page + (1 << order), order,
+					  fpi_flags);
 	}
 }
+
+void __free_pages(struct page *page, unsigned int order)
+{
+	___free_pages(page, order, FPI_NONE);
+}
 EXPORT_SYMBOL(__free_pages);

+/*
+ * Can be called while holding raw_spin_lock or from IRQ and NMI for any
+ * page type (not only those that came from try_alloc_pages)
+ */
+void free_pages_nolock(struct page *page, unsigned int order)
+{
+	___free_pages(page, order, FPI_TRYLOCK);
+}
+
 void free_pages(unsigned long addr, unsigned int order)
 {
 	if (addr != 0) {
diff --git a/mm/page_owner.c b/mm/page_owner.c
index 2d6360eaccbb..90e31d0e3ed7 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -294,7 +294,13 @@ void __reset_page_owner(struct page *page, unsigned short order)
 	page_owner = get_page_owner(page_ext);
 	alloc_handle = page_owner->handle;

-	handle = save_stack(GFP_NOWAIT | __GFP_NOWARN);
+	/*
+	 * Do not specify GFP_NOWAIT to make gfpflags_allow_spinning() == false
+	 * to prevent issues in stack_depot_save().
+	 * This is similar to try_alloc_pages() gfp flags, but only used
+	 * to signal stack_depot to avoid spin_locks.
+	 */
+	handle = save_stack(__GFP_NOWARN);
 	__update_page_owner_free_handle(page_ext, handle, order, current->pid,
 					current->tgid, free_ts_nsec);
 	page_ext_put(page_ext);
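[Editor's sketch, not part of the patch: the other half of the mechanism is the drain in free_one_page() — once a caller does take zone->lock, it first flushes everything previously stashed on zone->trylock_free_pages, then frees its own page. Below is a self-contained userspace model; the names model_zone, deferred_page, and free_with_drain are invented, standing in for the real llist_del_all()/llist_for_each_entry_safe() loop.]

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for a stashed struct page. */
struct deferred_page {
	struct deferred_page *next;	/* models page->pcp_llist */
	unsigned int order;		/* models page->order */
};

/* Stand-in for struct zone; assume the lock is already held here. */
struct model_zone {
	struct deferred_page *trylock_free_pages;	/* models the llist */
	unsigned long freed_pages;			/* "buddy" counter */
};

/* Models llist_del_all(): detach the whole deferred list in one step. */
static struct deferred_page *del_all(struct model_zone *z)
{
	struct deferred_page *head = z->trylock_free_pages;

	z->trylock_free_pages = NULL;
	return head;
}

/* Models the body of free_one_page() after zone->lock is acquired. */
void free_with_drain(struct model_zone *z, unsigned int order)
{
	struct deferred_page *p = del_all(z);

	while (p) {				/* llist_for_each_entry_safe */
		struct deferred_page *tmp = p->next;

		z->freed_pages += 1UL << p->order;	/* drain stashed page */
		p = tmp;
	}
	z->freed_pages += 1UL << order;		/* finally, the caller's page */
}
```

The safe-iteration pattern (grab `next` before freeing the current entry) mirrors why the kernel code uses llist_for_each_entry_safe(): each page is returned to the buddy allocator and must not be touched afterwards.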