From patchwork Wed Jul 31 12:44:51 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13748705
From: Yunsheng Lin
To: , ,
CC: , , Yunsheng Lin, Alexander Duyck, Andrew Morton,
Subject: [PATCH net-next v12 01/14] mm: page_frag: add a test module for
 page_frag
Date: Wed, 31 Jul 2024 20:44:51 +0800
Message-ID: <20240731124505.2903877-2-linyunsheng@huawei.com>
X-Mailer: git-send-email 2.30.0
In-Reply-To: <20240731124505.2903877-1-linyunsheng@huawei.com>
References: <20240731124505.2903877-1-linyunsheng@huawei.com>

Based on lib/objpool.c, change it into something like a ptrpool, so that
we can use it to test the correctness and performance of page_frag.

The test works by having one kthread, bound to a specified CPU, push
fragments allocated from a page_frag_cache instance into a ptrpool
instance, while a second kthread, bound to a specified CPU, pops the
fragments from the ptrpool and frees them.

We may refactor out the common part between objpool and ptrpool if this
ptrpool turns out to be helpful for other places.

CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
---
 mm/Kconfig          |   8 +
 mm/Makefile         |   1 +
 mm/page_frag_test.c | 393 ++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 402 insertions(+)
 create mode 100644 mm/page_frag_test.c

diff --git a/mm/Kconfig b/mm/Kconfig
index b72e7d040f78..305f02df7d67 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1140,6 +1140,14 @@ config DMAPOOL_TEST
 	  provide a consistent way to measure how changes to the
 	  dma_pool_alloc/free routines affect performance.
 
+config PAGE_FRAG_TEST
+	tristate "Test module for page_frag"
+	default n
+	depends on m
+	help
+	  Provides a test module that is used to test the correctness and
+	  performance of page_frag's implementation.
+ config ARCH_HAS_PTE_SPECIAL bool diff --git a/mm/Makefile b/mm/Makefile index d2915f8c9dc0..59c587341e54 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -141,3 +141,4 @@ obj-$(CONFIG_HAVE_BOOTMEM_INFO_NODE) += bootmem_info.o obj-$(CONFIG_GENERIC_IOREMAP) += ioremap.o obj-$(CONFIG_SHRINKER_DEBUG) += shrinker_debug.o obj-$(CONFIG_EXECMEM) += execmem.o +obj-$(CONFIG_PAGE_FRAG_TEST) += page_frag_test.o diff --git a/mm/page_frag_test.c b/mm/page_frag_test.c new file mode 100644 index 000000000000..cf2691f60b67 --- /dev/null +++ b/mm/page_frag_test.c @@ -0,0 +1,393 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* + * Test module for page_frag cache + * + * Copyright: linyunsheng@huawei.com + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#define OBJPOOL_NR_OBJECT_MAX BIT(24) + +struct objpool_slot { + u32 head; + u32 tail; + u32 last; + u32 mask; + void *entries[]; +} __packed; + +struct objpool_head { + int nr_cpus; + int capacity; + struct objpool_slot **cpu_slots; +}; + +/* initialize percpu objpool_slot */ +static void objpool_init_percpu_slot(struct objpool_head *pool, + struct objpool_slot *slot) +{ + /* initialize elements of percpu objpool_slot */ + slot->mask = pool->capacity - 1; +} + +/* allocate and initialize percpu slots */ +static int objpool_init_percpu_slots(struct objpool_head *pool, + int nr_objs, gfp_t gfp) +{ + int i; + + for (i = 0; i < pool->nr_cpus; i++) { + struct objpool_slot *slot; + int size; + + /* skip the cpu node which could never be present */ + if (!cpu_possible(i)) + continue; + + size = struct_size(slot, entries, pool->capacity); + + /* + * here we allocate percpu-slot & objs together in a single + * allocation to make it more compact, taking advantage of + * warm caches and TLB hits. in default vmalloc is used to + * reduce the pressure of kernel slab system. 
as we know, + * minimal size of vmalloc is one page since vmalloc would + * always align the requested size to page size + */ + if (gfp & GFP_ATOMIC) + slot = kmalloc_node(size, gfp, cpu_to_node(i)); + else + slot = __vmalloc_node(size, sizeof(void *), gfp, + cpu_to_node(i), + __builtin_return_address(0)); + if (!slot) + return -ENOMEM; + + memset(slot, 0, size); + pool->cpu_slots[i] = slot; + + objpool_init_percpu_slot(pool, slot); + } + + return 0; +} + +/* cleanup all percpu slots of the object pool */ +static void objpool_fini_percpu_slots(struct objpool_head *pool) +{ + int i; + + if (!pool->cpu_slots) + return; + + for (i = 0; i < pool->nr_cpus; i++) + kvfree(pool->cpu_slots[i]); + kfree(pool->cpu_slots); +} + +/* initialize object pool and pre-allocate objects */ +static int objpool_init(struct objpool_head *pool, int nr_objs, gfp_t gfp) +{ + int rc, capacity, slot_size; + + /* check input parameters */ + if (nr_objs <= 0 || nr_objs > OBJPOOL_NR_OBJECT_MAX) + return -EINVAL; + + /* calculate capacity of percpu objpool_slot */ + capacity = roundup_pow_of_two(nr_objs); + if (!capacity) + return -EINVAL; + + gfp = gfp & ~__GFP_ZERO; + + /* initialize objpool pool */ + memset(pool, 0, sizeof(struct objpool_head)); + pool->nr_cpus = nr_cpu_ids; + pool->capacity = capacity; + slot_size = pool->nr_cpus * sizeof(struct objpool_slot *); + pool->cpu_slots = kzalloc(slot_size, gfp); + if (!pool->cpu_slots) + return -ENOMEM; + + /* initialize per-cpu slots */ + rc = objpool_init_percpu_slots(pool, nr_objs, gfp); + if (rc) + objpool_fini_percpu_slots(pool); + + return rc; +} + +/* adding object to slot, abort if the slot was already full */ +static int objpool_try_add_slot(void *obj, struct objpool_head *pool, int cpu) +{ + struct objpool_slot *slot = pool->cpu_slots[cpu]; + u32 head, tail; + + /* loading tail and head as a local snapshot, tail first */ + tail = READ_ONCE(slot->tail); + + do { + head = READ_ONCE(slot->head); + /* fault caught: something must be wrong */ + if (unlikely(tail - head >= pool->capacity)) + return -ENOSPC; + } while (!try_cmpxchg_acquire(&slot->tail, &tail, tail + 1)); + + /* now the tail position is reserved for the given obj */ + WRITE_ONCE(slot->entries[tail & slot->mask], obj); + /* update sequence to make this obj available for pop() */ + smp_store_release(&slot->last, tail + 1); + + return 0; +} + +/* reclaim an object to object pool */ +static int objpool_push(void *obj, struct objpool_head *pool) +{ + unsigned long flags; + int rc; + + /* disable local irq to avoid preemption & interruption */ + raw_local_irq_save(flags); + rc = objpool_try_add_slot(obj, pool, raw_smp_processor_id()); + raw_local_irq_restore(flags); + + return rc; +} + +/* try to retrieve object from slot */ +static void *objpool_try_get_slot(struct objpool_head *pool, int cpu) +{ + struct objpool_slot *slot = pool->cpu_slots[cpu]; + /* load head snapshot, other cpus may change it */ + u32 head = smp_load_acquire(&slot->head); + + while (head != READ_ONCE(slot->last)) { + void *obj; + + /* + * data visibility of 'last' and 'head' could be out of + * order since memory updating of 'last' and 'head' are + * performed in push() and pop() independently + * + * before any retrieving attempts, pop() must guarantee + * 'last' is behind 'head', that is to say, there must + * be available objects in slot, which could be ensured + * by condition 'last != head && last - head <= nr_objs' + * that is equivalent to 'last - head - 1 < nr_objs' as + * 'last' and 'head' are both unsigned int32 + */ + if 
(READ_ONCE(slot->last) - head - 1 >= pool->capacity) { + head = READ_ONCE(slot->head); + continue; + } + + /* obj must be retrieved before moving forward head */ + obj = READ_ONCE(slot->entries[head & slot->mask]); + + /* move head forward to mark it's consumption */ + if (try_cmpxchg_release(&slot->head, &head, head + 1)) + return obj; + } + + return NULL; +} + +/* allocate an object from object pool */ +static void *objpool_pop(struct objpool_head *pool) +{ + void *obj = NULL; + unsigned long flags; + int i, cpu; + + /* disable local irq to avoid preemption & interruption */ + raw_local_irq_save(flags); + + cpu = raw_smp_processor_id(); + for (i = 0; i < num_possible_cpus(); i++) { + obj = objpool_try_get_slot(pool, cpu); + if (obj) + break; + cpu = cpumask_next_wrap(cpu, cpu_possible_mask, -1, 1); + } + raw_local_irq_restore(flags); + + return obj; +} + +/* release whole objpool forcely */ +static void objpool_free(struct objpool_head *pool) +{ + if (!pool->cpu_slots) + return; + + /* release percpu slots */ + objpool_fini_percpu_slots(pool); +} + +static struct objpool_head ptr_pool; +static int nr_objs = 512; +static atomic_t nthreads; +static struct completion wait; +static struct page_frag_cache test_frag; + +static int nr_test = 5120000; +module_param(nr_test, int, 0); +MODULE_PARM_DESC(nr_test, "number of iterations to test"); + +static bool test_align; +module_param(test_align, bool, 0); +MODULE_PARM_DESC(test_align, "use align API for testing"); + +static int test_alloc_len = 2048; +module_param(test_alloc_len, int, 0); +MODULE_PARM_DESC(test_alloc_len, "alloc len for testing"); + +static int test_push_cpu; +module_param(test_push_cpu, int, 0); +MODULE_PARM_DESC(test_push_cpu, "test cpu for pushing fragment"); + +static int test_pop_cpu; +module_param(test_pop_cpu, int, 0); +MODULE_PARM_DESC(test_pop_cpu, "test cpu for popping fragment"); + +static int page_frag_pop_thread(void *arg) +{ + struct objpool_head *pool = arg; + int nr = nr_test; + + pr_info("page_frag pop test thread begins on cpu %d\n", + smp_processor_id()); + + while (nr > 0) { + void *obj = objpool_pop(pool); + + if (obj) { + nr--; + page_frag_free(obj); + } else { + cond_resched(); + } + } + + if (atomic_dec_and_test(&nthreads)) + complete(&wait); + + pr_info("page_frag pop test thread exits on cpu %d\n", + smp_processor_id()); + + return 0; +} + +static int page_frag_push_thread(void *arg) +{ + struct objpool_head *pool = arg; + int nr = nr_test; + + pr_info("page_frag push test thread begins on cpu %d\n", + smp_processor_id()); + + while (nr > 0) { + void *va; + int ret; + + if (test_align) { + va = page_frag_alloc_align(&test_frag, test_alloc_len, + GFP_KERNEL, SMP_CACHE_BYTES); + + WARN_ONCE((unsigned long)va & (SMP_CACHE_BYTES - 1), + "unaligned va returned\n"); + } else { + va = page_frag_alloc(&test_frag, test_alloc_len, GFP_KERNEL); + } + + if (!va) + continue; + + ret = objpool_push(va, pool); + if (ret) { + page_frag_free(va); + cond_resched(); + } else { + nr--; + } + } + + pr_info("page_frag push test thread exits on cpu %d\n", + smp_processor_id()); + + if (atomic_dec_and_test(&nthreads)) + complete(&wait); + + return 0; +} + +static int __init page_frag_test_init(void) +{ + struct task_struct *tsk_push, *tsk_pop; + ktime_t start; + u64 duration; + int ret; + + test_frag.va = NULL; + atomic_set(&nthreads, 2); + init_completion(&wait); + + if (test_alloc_len > PAGE_SIZE || test_alloc_len <= 0) + return -EINVAL; + + ret = objpool_init(&ptr_pool, nr_objs, GFP_KERNEL); + if (ret) + return ret; + + 
tsk_push = kthread_create_on_cpu(page_frag_push_thread, &ptr_pool,
+					 test_push_cpu, "page_frag_push");
+	if (IS_ERR(tsk_push))
+		return PTR_ERR(tsk_push);
+
+	tsk_pop = kthread_create_on_cpu(page_frag_pop_thread, &ptr_pool,
+					test_pop_cpu, "page_frag_pop");
+	if (IS_ERR(tsk_pop)) {
+		kthread_stop(tsk_push);
+		return PTR_ERR(tsk_pop);
+	}
+
+	start = ktime_get();
+	wake_up_process(tsk_push);
+	wake_up_process(tsk_pop);
+
+	pr_info("waiting for test to complete\n");
+	wait_for_completion(&wait);
+
+	duration = (u64)ktime_us_delta(ktime_get(), start);
+	pr_info("%d of iterations for %s testing took: %lluus\n", nr_test,
+		test_align ? "aligned" : "non-aligned", duration);
+
+	objpool_free(&ptr_pool);
+	page_frag_cache_drain(&test_frag);
+
+	return -EAGAIN;
+}
+
+static void __exit page_frag_test_exit(void)
+{
+}
+
+module_init(page_frag_test_init);
+module_exit(page_frag_test_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Yunsheng Lin ");
+MODULE_DESCRIPTION("Test module for page_frag");

From patchwork Wed Jul 31 12:44:52 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13748706
From: Yunsheng Lin
To: , ,
CC: , , Yunsheng Lin, David Howells, Alexander Duyck, Andrew Morton,
Subject: [PATCH net-next v12 02/14] mm: move the page fragment allocator from
 page_alloc into its own file
Date: Wed, 31 Jul 2024 20:44:52 +0800
Message-ID: <20240731124505.2903877-3-linyunsheng@huawei.com>
X-Mailer: git-send-email 2.30.0
In-Reply-To: <20240731124505.2903877-1-linyunsheng@huawei.com>
References: <20240731124505.2903877-1-linyunsheng@huawei.com>

Inspired by [1], move the page fragment allocator from page_alloc into its
own C file and header file, as we are about to make more changes to it so
that it can replace another page_frag implementation in sock.c.

As this patchset is going to replace 'struct page_frag' with
'struct page_frag_cache' in sched.h, including page_frag_cache.h in sched.h has a
compiler error caused by interdependence between mm_types.h and mm.h for asm-offsets.c, see [2]. So avoid the compiler error by moving 'struct page_frag_cache' to mm_types_task.h as suggested by Alexander, see [3]. 1. https://lore.kernel.org/all/20230411160902.4134381-3-dhowells@redhat.com/ 2. https://lore.kernel.org/all/15623dac-9358-4597-b3ee-3694a5956920@gmail.com/ 3. https://lore.kernel.org/all/CAKgT0UdH1yD=LSCXFJ=YM_aiA4OomD-2wXykO42bizaWMt_HOA@mail.gmail.com/ CC: David Howells CC: Alexander Duyck Signed-off-by: Yunsheng Lin --- include/linux/gfp.h | 22 ----- include/linux/mm_types.h | 18 ---- include/linux/mm_types_task.h | 18 ++++ include/linux/page_frag_cache.h | 31 +++++++ include/linux/skbuff.h | 1 + mm/Makefile | 1 + mm/page_alloc.c | 136 ------------------------------ mm/page_frag_cache.c | 145 ++++++++++++++++++++++++++++++++ mm/page_frag_test.c | 2 +- 9 files changed, 197 insertions(+), 177 deletions(-) create mode 100644 include/linux/page_frag_cache.h create mode 100644 mm/page_frag_cache.c diff --git a/include/linux/gfp.h b/include/linux/gfp.h index f53f76e0b17e..01a49be7c98d 100644 --- a/include/linux/gfp.h +++ b/include/linux/gfp.h @@ -371,28 +371,6 @@ __meminit void *alloc_pages_exact_nid_noprof(int nid, size_t size, gfp_t gfp_mas extern void __free_pages(struct page *page, unsigned int order); extern void free_pages(unsigned long addr, unsigned int order); -struct page_frag_cache; -void page_frag_cache_drain(struct page_frag_cache *nc); -extern void __page_frag_cache_drain(struct page *page, unsigned int count); -void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz, - gfp_t gfp_mask, unsigned int align_mask); - -static inline void *page_frag_alloc_align(struct page_frag_cache *nc, - unsigned int fragsz, gfp_t gfp_mask, - unsigned int align) -{ - WARN_ON_ONCE(!is_power_of_2(align)); - return __page_frag_alloc_align(nc, fragsz, gfp_mask, -align); -} - -static inline void *page_frag_alloc(struct page_frag_cache *nc, - unsigned int fragsz, gfp_t gfp_mask) -{ - return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u); -} - -extern void page_frag_free(void *addr); - #define __free_page(page) __free_pages((page), 0) #define free_page(addr) free_pages((addr), 0) diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index 485424979254..843d75412105 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -521,9 +521,6 @@ static_assert(sizeof(struct ptdesc) <= sizeof(struct page)); */ #define STRUCT_PAGE_MAX_SHIFT (order_base_2(sizeof(struct page))) -#define PAGE_FRAG_CACHE_MAX_SIZE __ALIGN_MASK(32768, ~PAGE_MASK) -#define PAGE_FRAG_CACHE_MAX_ORDER get_order(PAGE_FRAG_CACHE_MAX_SIZE) - /* * page_private can be used on tail pages. However, PagePrivate is only * checked by the VM on the head page. So page_private on the tail pages @@ -542,21 +539,6 @@ static inline void *folio_get_private(struct folio *folio) return folio->private; } -struct page_frag_cache { - void * va; -#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) - __u16 offset; - __u16 size; -#else - __u32 offset; -#endif - /* we maintain a pagecount bias, so that we dont dirty cache line - * containing page->_refcount every time we allocate a fragment. 
- */ - unsigned int pagecnt_bias; - bool pfmemalloc; -}; - typedef unsigned long vm_flags_t; /* diff --git a/include/linux/mm_types_task.h b/include/linux/mm_types_task.h index a2f6179b672b..cdc1e3696439 100644 --- a/include/linux/mm_types_task.h +++ b/include/linux/mm_types_task.h @@ -8,6 +8,7 @@ * (These are defined separately to decouple sched.h from mm_types.h as much as possible.) */ +#include #include #include @@ -46,6 +47,23 @@ struct page_frag { #endif }; +#define PAGE_FRAG_CACHE_MAX_SIZE __ALIGN_MASK(32768, ~PAGE_MASK) +#define PAGE_FRAG_CACHE_MAX_ORDER get_order(PAGE_FRAG_CACHE_MAX_SIZE) +struct page_frag_cache { + void *va; +#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) + __u16 offset; + __u16 size; +#else + __u32 offset; +#endif + /* we maintain a pagecount bias, so that we dont dirty cache line + * containing page->_refcount every time we allocate a fragment. + */ + unsigned int pagecnt_bias; + bool pfmemalloc; +}; + /* Track pages that require TLB flushes */ struct tlbflush_unmap_batch { #ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h new file mode 100644 index 000000000000..a758cb65a9b3 --- /dev/null +++ b/include/linux/page_frag_cache.h @@ -0,0 +1,31 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#ifndef _LINUX_PAGE_FRAG_CACHE_H +#define _LINUX_PAGE_FRAG_CACHE_H + +#include +#include +#include + +void page_frag_cache_drain(struct page_frag_cache *nc); +void __page_frag_cache_drain(struct page *page, unsigned int count); +void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz, + gfp_t gfp_mask, unsigned int align_mask); + +static inline void *page_frag_alloc_align(struct page_frag_cache *nc, + unsigned int fragsz, gfp_t gfp_mask, + unsigned int align) +{ + WARN_ON_ONCE(!is_power_of_2(align)); + return __page_frag_alloc_align(nc, fragsz, gfp_mask, -align); +} + +static inline void *page_frag_alloc(struct page_frag_cache *nc, + unsigned int fragsz, gfp_t gfp_mask) +{ + return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u); +} + +void page_frag_free(void *addr); + +#endif diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index 29c3ea5b6e93..e057db1c63e9 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -31,6 +31,7 @@ #include #include #include +#include #include #if IS_ENABLED(CONFIG_NF_CONNTRACK) #include diff --git a/mm/Makefile b/mm/Makefile index 59c587341e54..0e1490d7f88e 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -65,6 +65,7 @@ page-alloc-$(CONFIG_SHUFFLE_PAGE_ALLOCATOR) += shuffle.o memory-hotplug-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o obj-y += page-alloc.o +obj-y += page_frag_cache.o obj-y += init-mm.o obj-y += memblock.o obj-y += $(memory-hotplug-y) diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 8337926b89d4..e4c2f35c0363 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -4799,142 +4799,6 @@ void free_pages(unsigned long addr, unsigned int order) EXPORT_SYMBOL(free_pages); -/* - * Page Fragment: - * An arbitrary-length arbitrary-offset area of memory which resides - * within a 0 or higher order page. Multiple fragments within that page - * are individually refcounted, in the page's reference counter. - * - * The page_frag functions below provide a simple allocation framework for - * page fragments. This is used by the network stack and network device - * drivers to provide a backing region of memory for use as either an - * sk_buff->head, or to be used in the "frags" portion of skb_shared_info. 
- */ -static struct page *__page_frag_cache_refill(struct page_frag_cache *nc, - gfp_t gfp_mask) -{ - struct page *page = NULL; - gfp_t gfp = gfp_mask; - -#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) - gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP | - __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC; - page = alloc_pages_node(NUMA_NO_NODE, gfp_mask, - PAGE_FRAG_CACHE_MAX_ORDER); - nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE; -#endif - if (unlikely(!page)) - page = alloc_pages_node(NUMA_NO_NODE, gfp, 0); - - nc->va = page ? page_address(page) : NULL; - - return page; -} - -void page_frag_cache_drain(struct page_frag_cache *nc) -{ - if (!nc->va) - return; - - __page_frag_cache_drain(virt_to_head_page(nc->va), nc->pagecnt_bias); - nc->va = NULL; -} -EXPORT_SYMBOL(page_frag_cache_drain); - -void __page_frag_cache_drain(struct page *page, unsigned int count) -{ - VM_BUG_ON_PAGE(page_ref_count(page) == 0, page); - - if (page_ref_sub_and_test(page, count)) - free_unref_page(page, compound_order(page)); -} -EXPORT_SYMBOL(__page_frag_cache_drain); - -void *__page_frag_alloc_align(struct page_frag_cache *nc, - unsigned int fragsz, gfp_t gfp_mask, - unsigned int align_mask) -{ - unsigned int size = PAGE_SIZE; - struct page *page; - int offset; - - if (unlikely(!nc->va)) { -refill: - page = __page_frag_cache_refill(nc, gfp_mask); - if (!page) - return NULL; - -#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) - /* if size can vary use size else just use PAGE_SIZE */ - size = nc->size; -#endif - /* Even if we own the page, we do not use atomic_set(). - * This would break get_page_unless_zero() users. - */ - page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE); - - /* reset page count bias and offset to start of new frag */ - nc->pfmemalloc = page_is_pfmemalloc(page); - nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; - nc->offset = size; - } - - offset = nc->offset - fragsz; - if (unlikely(offset < 0)) { - page = virt_to_page(nc->va); - - if (!page_ref_sub_and_test(page, nc->pagecnt_bias)) - goto refill; - - if (unlikely(nc->pfmemalloc)) { - free_unref_page(page, compound_order(page)); - goto refill; - } - -#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) - /* if size can vary use size else just use PAGE_SIZE */ - size = nc->size; -#endif - /* OK, page count is 0, we can safely set it */ - set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1); - - /* reset page count bias and offset to start of new frag */ - nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; - offset = size - fragsz; - if (unlikely(offset < 0)) { - /* - * The caller is trying to allocate a fragment - * with fragsz > PAGE_SIZE but the cache isn't big - * enough to satisfy the request, this may - * happen in low memory conditions. - * We don't release the cache page because - * it could make memory pressure worse - * so we simply return NULL here. - */ - return NULL; - } - } - - nc->pagecnt_bias--; - offset &= align_mask; - nc->offset = offset; - - return nc->va + offset; -} -EXPORT_SYMBOL(__page_frag_alloc_align); - -/* - * Frees a page fragment allocated out of either a compound or order 0 page. 
- */ -void page_frag_free(void *addr) -{ - struct page *page = virt_to_head_page(addr); - - if (unlikely(put_page_testzero(page))) - free_unref_page(page, compound_order(page)); -} -EXPORT_SYMBOL(page_frag_free); - static void *make_alloc_exact(unsigned long addr, unsigned int order, size_t size) { diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c new file mode 100644 index 000000000000..609a485cd02a --- /dev/null +++ b/mm/page_frag_cache.c @@ -0,0 +1,145 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Page fragment allocator + * + * Page Fragment: + * An arbitrary-length arbitrary-offset area of memory which resides within a + * 0 or higher order page. Multiple fragments within that page are + * individually refcounted, in the page's reference counter. + * + * The page_frag functions provide a simple allocation framework for page + * fragments. This is used by the network stack and network device drivers to + * provide a backing region of memory for use as either an sk_buff->head, or to + * be used in the "frags" portion of skb_shared_info. + */ + +#include +#include +#include +#include +#include +#include "internal.h" + +static struct page *__page_frag_cache_refill(struct page_frag_cache *nc, + gfp_t gfp_mask) +{ + struct page *page = NULL; + gfp_t gfp = gfp_mask; + +#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) + gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP | + __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC; + page = alloc_pages_node(NUMA_NO_NODE, gfp_mask, + PAGE_FRAG_CACHE_MAX_ORDER); + nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE; +#endif + if (unlikely(!page)) + page = alloc_pages_node(NUMA_NO_NODE, gfp, 0); + + nc->va = page ? page_address(page) : NULL; + + return page; +} + +void page_frag_cache_drain(struct page_frag_cache *nc) +{ + if (!nc->va) + return; + + __page_frag_cache_drain(virt_to_head_page(nc->va), nc->pagecnt_bias); + nc->va = NULL; +} +EXPORT_SYMBOL(page_frag_cache_drain); + +void __page_frag_cache_drain(struct page *page, unsigned int count) +{ + VM_BUG_ON_PAGE(page_ref_count(page) == 0, page); + + if (page_ref_sub_and_test(page, count)) + free_unref_page(page, compound_order(page)); +} +EXPORT_SYMBOL(__page_frag_cache_drain); + +void *__page_frag_alloc_align(struct page_frag_cache *nc, + unsigned int fragsz, gfp_t gfp_mask, + unsigned int align_mask) +{ + unsigned int size = PAGE_SIZE; + struct page *page; + int offset; + + if (unlikely(!nc->va)) { +refill: + page = __page_frag_cache_refill(nc, gfp_mask); + if (!page) + return NULL; + +#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) + /* if size can vary use size else just use PAGE_SIZE */ + size = nc->size; +#endif + /* Even if we own the page, we do not use atomic_set(). + * This would break get_page_unless_zero() users. 
+ */ + page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE); + + /* reset page count bias and offset to start of new frag */ + nc->pfmemalloc = page_is_pfmemalloc(page); + nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; + nc->offset = size; + } + + offset = nc->offset - fragsz; + if (unlikely(offset < 0)) { + page = virt_to_page(nc->va); + + if (!page_ref_sub_and_test(page, nc->pagecnt_bias)) + goto refill; + + if (unlikely(nc->pfmemalloc)) { + free_unref_page(page, compound_order(page)); + goto refill; + } + +#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) + /* if size can vary use size else just use PAGE_SIZE */ + size = nc->size; +#endif + /* OK, page count is 0, we can safely set it */ + set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1); + + /* reset page count bias and offset to start of new frag */ + nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; + offset = size - fragsz; + if (unlikely(offset < 0)) { + /* + * The caller is trying to allocate a fragment + * with fragsz > PAGE_SIZE but the cache isn't big + * enough to satisfy the request, this may + * happen in low memory conditions. + * We don't release the cache page because + * it could make memory pressure worse + * so we simply return NULL here. + */ + return NULL; + } + } + + nc->pagecnt_bias--; + offset &= align_mask; + nc->offset = offset; + + return nc->va + offset; +} +EXPORT_SYMBOL(__page_frag_alloc_align); + +/* + * Frees a page fragment allocated out of either a compound or order 0 page. + */ +void page_frag_free(void *addr) +{ + struct page *page = virt_to_head_page(addr); + + if (unlikely(put_page_testzero(page))) + free_unref_page(page, compound_order(page)); +} +EXPORT_SYMBOL(page_frag_free); diff --git a/mm/page_frag_test.c b/mm/page_frag_test.c index cf2691f60b67..b7a5affb92f2 100644 --- a/mm/page_frag_test.c +++ b/mm/page_frag_test.c @@ -6,7 +6,6 @@ * Copyright: linyunsheng@huawei.com */ -#include #include #include #include @@ -16,6 +15,7 @@ #include #include #include +#include #define OBJPOOL_NR_OBJECT_MAX BIT(24) From patchwork Wed Jul 31 12:44:53 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yunsheng Lin X-Patchwork-Id: 13748707 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 73E01C49EA1 for ; Wed, 31 Jul 2024 12:50:52 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 528346B0085; Wed, 31 Jul 2024 08:50:50 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 4B2146B0088; Wed, 31 Jul 2024 08:50:50 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 28C906B0089; Wed, 31 Jul 2024 08:50:50 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id 0D0426B0085 for ; Wed, 31 Jul 2024 08:50:50 -0400 (EDT) Received: from smtpin15.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay10.hostedemail.com (Postfix) with ESMTP id 90433C013D for ; Wed, 31 Jul 2024 12:50:49 +0000 (UTC) X-FDA: 82400032218.15.90F75B5 Received: from szxga03-in.huawei.com (szxga03-in.huawei.com [45.249.212.189]) by imf20.hostedemail.com (Postfix) with ESMTP id 12B541C0021 for ; Wed, 31 Jul 2024 12:50:46 +0000 (UTC) Authentication-Results: imf20.hostedemail.com; dkim=none; spf=pass 
From: Yunsheng Lin
To: , ,
CC: , , Yunsheng Lin, Alexander Duyck, Andrew Morton,
Subject: [PATCH net-next v12 03/14] mm: page_frag: use initial zero offset for
 page_frag_alloc_align()
Date: Wed, 31 Jul 2024 20:44:53 +0800
Message-ID: <20240731124505.2903877-4-linyunsheng@huawei.com>
X-Mailer: git-send-email 2.30.0
In-Reply-To: <20240731124505.2903877-1-linyunsheng@huawei.com>
References: <20240731124505.2903877-1-linyunsheng@huawei.com>

We are about to use the page_frag_alloc_*() API not only to allocate memory
for skb->data, but also to do the memory allocation for skb frags.

Currently the page_frag implementation in the mm subsystem runs the offset
as a countdown rather than a count-up value. There may be several advantages
to that, as mentioned in [1], but it also has some disadvantages: for
example, it may prevent skb frag coalescing and more effective cache
prefetching.

We have a trade-off to make in order to have a unified implementation and
API for page_frag, so use an initial zero offset in this patch, and the
following patch will try to make some optimizations to avoid the
disadvantages as much as possible.

Rename 'offset' to 'remaining' to retain the countdown behavior as a
'remaining countdown' instead of an 'offset countdown'. The renaming also
enables us to do a single 'fragsz > remaining' check for the case of the
cache not being big enough, which should be the fast path if we ensure
'remaining' is zero when 'va' == NULL by memset'ing 'struct page_frag_cache'
in page_frag_cache_init() and page_frag_cache_drain().

1. https://lore.kernel.org/all/f4abe71b3439b39d17a6fb2d410180f367cadf5c.camel@gmail.com/

CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
---
 include/linux/mm_types_task.h |  4 +--
 mm/page_frag_cache.c          | 52 +++++++++++++++++------------------
 2 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/include/linux/mm_types_task.h b/include/linux/mm_types_task.h
index cdc1e3696439..b1c54b2b9308 100644
--- a/include/linux/mm_types_task.h
+++ b/include/linux/mm_types_task.h
@@ -52,10 +52,10 @@ struct page_frag {
 struct page_frag_cache {
 	void *va;
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-	__u16 offset;
+	__u16 remaining;
 	__u16 size;
 #else
-	__u32 offset;
+	__u32 remaining;
 #endif
 	/* we maintain a pagecount bias, so that we dont dirty cache line
 	 * containing page->_refcount every time we allocate a fragment.
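To make the offset-countdown versus 'remaining' trade-off concrete, here is a
minimal userspace sketch (illustrative only, not the kernel code; CACHE_SIZE,
the helper names and the simplified alignment handling are assumptions)
contrasting where consecutive fragments land under the two schemes:

/*
 * Illustrative sketch: old countdown-offset bookkeeping vs. the
 * "remaining" bookkeeping described above. Not the kernel implementation;
 * the fragsz > remaining check and refill path are omitted.
 */
#include <stdio.h>

#define CACHE_SIZE 4096u

/* old scheme: 'offset' counts down from the end of the page */
static unsigned int old_offset = CACHE_SIZE;

static unsigned int old_alloc(unsigned int fragsz, unsigned int align_mask)
{
	old_offset = (old_offset - fragsz) & align_mask;
	return old_offset;		/* fragment starts here */
}

/* new scheme: 'remaining' still counts down, but fragments are handed out
 * from the start of the page, i.e. at offset (size - remaining)
 */
static unsigned int new_remaining = CACHE_SIZE;

static unsigned int new_alloc(unsigned int fragsz, unsigned int align_mask)
{
	unsigned int remaining = new_remaining & align_mask;
	unsigned int offset = CACHE_SIZE - remaining;

	new_remaining = remaining - fragsz;
	return offset;			/* fragment starts here */
}

int main(void)
{
	/* two back-to-back 100-byte fragments, 64-byte aligned */
	printf("old: %u, %u\n", old_alloc(100, ~63u), old_alloc(100, ~63u));
	printf("new: %u, %u\n", new_alloc(100, ~63u), new_alloc(100, ~63u));
	return 0;
}

Under the old scheme consecutive fragments walk downwards from the end of the
page, while the remaining-based scheme hands them out upwards from offset
zero, which is what leaves room for coalescing adjacent skb frags.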
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c index 609a485cd02a..c5bc72cf018a 100644 --- a/mm/page_frag_cache.c +++ b/mm/page_frag_cache.c @@ -63,9 +63,13 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz, gfp_t gfp_mask, unsigned int align_mask) { +#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) + unsigned int size = nc->size; +#else unsigned int size = PAGE_SIZE; +#endif + unsigned int remaining; struct page *page; - int offset; if (unlikely(!nc->va)) { refill: @@ -82,14 +86,27 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc, */ page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE); - /* reset page count bias and offset to start of new frag */ + /* reset page count bias and remaining to start of new frag */ nc->pfmemalloc = page_is_pfmemalloc(page); nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; - nc->offset = size; + nc->remaining = size; } - offset = nc->offset - fragsz; - if (unlikely(offset < 0)) { + remaining = nc->remaining & align_mask; + if (unlikely(remaining < fragsz)) { + if (unlikely(fragsz > PAGE_SIZE)) { + /* + * The caller is trying to allocate a fragment + * with fragsz > PAGE_SIZE but the cache isn't big + * enough to satisfy the request, this may + * happen in low memory conditions. + * We don't release the cache page because + * it could make memory pressure worse + * so we simply return NULL here. + */ + return NULL; + } + page = virt_to_page(nc->va); if (!page_ref_sub_and_test(page, nc->pagecnt_bias)) @@ -100,35 +117,18 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc, goto refill; } -#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) - /* if size can vary use size else just use PAGE_SIZE */ - size = nc->size; -#endif /* OK, page count is 0, we can safely set it */ set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1); - /* reset page count bias and offset to start of new frag */ + /* reset page count bias and remaining to start of new frag */ nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; - offset = size - fragsz; - if (unlikely(offset < 0)) { - /* - * The caller is trying to allocate a fragment - * with fragsz > PAGE_SIZE but the cache isn't big - * enough to satisfy the request, this may - * happen in low memory conditions. - * We don't release the cache page because - * it could make memory pressure worse - * so we simply return NULL here. 
-		 */
-			return NULL;
-		}
+		remaining = size;
 	}
 
 	nc->pagecnt_bias--;
-	offset &= align_mask;
-	nc->offset = offset;
+	nc->remaining = remaining - fragsz;
 
-	return nc->va + offset;
+	return nc->va + (size - remaining);
 }
 EXPORT_SYMBOL(__page_frag_alloc_align);

From patchwork Wed Jul 31 12:44:54 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13748708
From: Yunsheng Lin
To: , ,
CC: , , Yunsheng Lin, Alexander Duyck, Subbaraya Sundeep, Jeroen de Borst,
 Praveen Kaligineedi, Shailend Chand, Eric Dumazet, Tony Nguyen,
 Przemek Kitszel, Sunil Goutham, Geetha sowjanya, hariprasad, Felix Fietkau,
 Sean Wang, Mark Lee, Lorenzo Bianconi, Matthias Brugger,
 AngeloGioacchino Del Regno, Keith Busch, Jens Axboe, Christoph Hellwig,
 Sagi Grimberg, Chaitanya Kulkarni, "Michael S. Tsirkin", Jason Wang,
 Eugenio Pérez, Andrew Morton, Alexei Starovoitov, Daniel Borkmann,
 Jesper Dangaard Brouer, John Fastabend, Andrii Nakryiko, Martin KaFai Lau,
 Eduard Zingerman, Song Liu, Yonghong Song, KP Singh, Stanislav Fomichev,
 Hao Luo, Jiri Olsa, David Howells, Marc Dionne, Chuck Lever, Jeff Layton,
 Neil Brown, Olga Kornievskaia, Dai Ngo, Tom Talpey, Trond Myklebust,
 Anna Schumaker, , , , , , , , , ,
Subject: [PATCH net-next v12 04/14] mm: page_frag: add '_va' suffix to
 page_frag API
Date: Wed, 31 Jul 2024 20:44:54 +0800
Message-ID: <20240731124505.2903877-5-linyunsheng@huawei.com>
X-Mailer: git-send-email 2.30.0
In-Reply-To: <20240731124505.2903877-1-linyunsheng@huawei.com>
References: <20240731124505.2903877-1-linyunsheng@huawei.com>

Currently the page_frag API returns a 'virtual address' or 'va' when
allocating and expects a 'virtual address' or 'va' as input when freeing.

We are about to support new use cases in which the caller needs to deal
with 'struct page', or with both 'va' and 'struct page'.
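To make that concrete, here is a rough kernel-style fragment of how a caller
that wants the page rather than the address has to work around the current
va-only API (alloc_frag_page() is a hypothetical helper used for
illustration, not part of this series):

/*
 * Illustrative fragment: the existing API is va-only, so a caller that also
 * needs the struct page (e.g. to attach it as an skb frag) has to derive it
 * by hand via virt_to_page().
 */
static struct page *alloc_frag_page(struct page_frag_cache *cache,
				    unsigned int fragsz, unsigned int *offset)
{
	void *va;

	va = page_frag_alloc_align(cache, fragsz, GFP_KERNEL, SMP_CACHE_BYTES);
	if (!va)
		return NULL;

	*offset = offset_in_page(va);
	return virt_to_page(va);	/* caller wants the page, not the va */
}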
In order to differentiate the API handling between 'va' and 'struct page', add '_va' suffix to the corresponding API mirroring the page_pool_alloc_va() API of the page_pool. So that callers expecting to deal with va, page or both va and page may call page_frag_alloc_va*, page_frag_alloc_pg*, or page_frag_alloc* API accordingly. CC: Alexander Duyck Signed-off-by: Yunsheng Lin Reviewed-by: Subbaraya Sundeep Acked-by: Chuck Lever Acked-by: Sagi Grimberg --- drivers/net/ethernet/google/gve/gve_rx.c | 4 ++-- drivers/net/ethernet/intel/ice/ice_txrx.c | 2 +- drivers/net/ethernet/intel/ice/ice_txrx.h | 2 +- drivers/net/ethernet/intel/ice/ice_txrx_lib.c | 2 +- .../net/ethernet/intel/ixgbevf/ixgbevf_main.c | 4 ++-- .../marvell/octeontx2/nic/otx2_common.c | 2 +- drivers/net/ethernet/mediatek/mtk_wed_wo.c | 4 ++-- drivers/nvme/host/tcp.c | 8 +++---- drivers/nvme/target/tcp.c | 22 +++++++++---------- drivers/vhost/net.c | 6 ++--- include/linux/page_frag_cache.h | 21 +++++++++--------- include/linux/skbuff.h | 2 +- kernel/bpf/cpumap.c | 2 +- mm/page_frag_cache.c | 12 +++++----- mm/page_frag_test.c | 13 ++++++----- net/core/skbuff.c | 14 ++++++------ net/core/xdp.c | 2 +- net/rxrpc/txbuf.c | 15 +++++++------ net/sunrpc/svcsock.c | 6 ++--- 19 files changed, 74 insertions(+), 69 deletions(-) diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c index acb73d4d0de6..b6c10100e462 100644 --- a/drivers/net/ethernet/google/gve/gve_rx.c +++ b/drivers/net/ethernet/google/gve/gve_rx.c @@ -729,7 +729,7 @@ static int gve_xdp_redirect(struct net_device *dev, struct gve_rx_ring *rx, total_len = headroom + SKB_DATA_ALIGN(len) + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); - frame = page_frag_alloc(&rx->page_cache, total_len, GFP_ATOMIC); + frame = page_frag_alloc_va(&rx->page_cache, total_len, GFP_ATOMIC); if (!frame) { u64_stats_update_begin(&rx->statss); rx->xdp_alloc_fails++; @@ -742,7 +742,7 @@ static int gve_xdp_redirect(struct net_device *dev, struct gve_rx_ring *rx, err = xdp_do_redirect(dev, &new, xdp_prog); if (err) - page_frag_free(frame); + page_frag_free_va(frame); return err; } diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c index 8bb743f78fcb..399b317c509d 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx.c +++ b/drivers/net/ethernet/intel/ice/ice_txrx.c @@ -126,7 +126,7 @@ ice_unmap_and_free_tx_buf(struct ice_tx_ring *ring, struct ice_tx_buf *tx_buf) dev_kfree_skb_any(tx_buf->skb); break; case ICE_TX_BUF_XDP_TX: - page_frag_free(tx_buf->raw_buf); + page_frag_free_va(tx_buf->raw_buf); break; case ICE_TX_BUF_XDP_XMIT: xdp_return_frame(tx_buf->xdpf); diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h index feba314a3fe4..6379f57d8228 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx.h +++ b/drivers/net/ethernet/intel/ice/ice_txrx.h @@ -148,7 +148,7 @@ static inline int ice_skb_pad(void) * @ICE_TX_BUF_DUMMY: dummy Flow Director packet, unmap and kfree() * @ICE_TX_BUF_FRAG: mapped skb OR &xdp_buff frag, only unmap DMA * @ICE_TX_BUF_SKB: &sk_buff, unmap and consume_skb(), update stats - * @ICE_TX_BUF_XDP_TX: &xdp_buff, unmap and page_frag_free(), stats + * @ICE_TX_BUF_XDP_TX: &xdp_buff, unmap and page_frag_free_va(), stats * @ICE_TX_BUF_XDP_XMIT: &xdp_frame, unmap and xdp_return_frame(), stats * @ICE_TX_BUF_XSK_TX: &xdp_buff on XSk queue, xsk_buff_free(), stats */ diff --git a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c 
b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c index 2719f0e20933..a1a41a14df0d 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c @@ -250,7 +250,7 @@ ice_clean_xdp_tx_buf(struct device *dev, struct ice_tx_buf *tx_buf, switch (tx_buf->type) { case ICE_TX_BUF_XDP_TX: - page_frag_free(tx_buf->raw_buf); + page_frag_free_va(tx_buf->raw_buf); break; case ICE_TX_BUF_XDP_XMIT: xdp_return_frame_bulk(tx_buf->xdpf, bq); diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c index 149911e3002a..eef16a909f85 100644 --- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c +++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c @@ -302,7 +302,7 @@ static bool ixgbevf_clean_tx_irq(struct ixgbevf_q_vector *q_vector, /* free the skb */ if (ring_is_xdp(tx_ring)) - page_frag_free(tx_buffer->data); + page_frag_free_va(tx_buffer->data); else napi_consume_skb(tx_buffer->skb, napi_budget); @@ -2412,7 +2412,7 @@ static void ixgbevf_clean_tx_ring(struct ixgbevf_ring *tx_ring) /* Free all the Tx ring sk_buffs */ if (ring_is_xdp(tx_ring)) - page_frag_free(tx_buffer->data); + page_frag_free_va(tx_buffer->data); else dev_kfree_skb_any(tx_buffer->skb); diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c index 87d5776e3b88..a485e988fa1d 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c @@ -553,7 +553,7 @@ static int __otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool, *dma = dma_map_single_attrs(pfvf->dev, buf, pool->rbsize, DMA_FROM_DEVICE, DMA_ATTR_SKIP_CPU_SYNC); if (unlikely(dma_mapping_error(pfvf->dev, *dma))) { - page_frag_free(buf); + page_frag_free_va(buf); return -ENOMEM; } diff --git a/drivers/net/ethernet/mediatek/mtk_wed_wo.c b/drivers/net/ethernet/mediatek/mtk_wed_wo.c index 7063c78bd35f..c4228719f8a4 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed_wo.c +++ b/drivers/net/ethernet/mediatek/mtk_wed_wo.c @@ -142,8 +142,8 @@ mtk_wed_wo_queue_refill(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q, dma_addr_t addr; void *buf; - buf = page_frag_alloc(&q->cache, q->buf_size, - GFP_ATOMIC | GFP_DMA32); + buf = page_frag_alloc_va(&q->cache, q->buf_size, + GFP_ATOMIC | GFP_DMA32); if (!buf) break; diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c index a2a47d3ab99f..86906bc505de 100644 --- a/drivers/nvme/host/tcp.c +++ b/drivers/nvme/host/tcp.c @@ -506,7 +506,7 @@ static void nvme_tcp_exit_request(struct blk_mq_tag_set *set, { struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq); - page_frag_free(req->pdu); + page_frag_free_va(req->pdu); } static int nvme_tcp_init_request(struct blk_mq_tag_set *set, @@ -520,7 +520,7 @@ static int nvme_tcp_init_request(struct blk_mq_tag_set *set, struct nvme_tcp_queue *queue = &ctrl->queues[queue_idx]; u8 hdgst = nvme_tcp_hdgst_len(queue); - req->pdu = page_frag_alloc(&queue->pf_cache, + req->pdu = page_frag_alloc_va(&queue->pf_cache, sizeof(struct nvme_tcp_cmd_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO); if (!req->pdu) @@ -1337,7 +1337,7 @@ static void nvme_tcp_free_async_req(struct nvme_tcp_ctrl *ctrl) { struct nvme_tcp_request *async = &ctrl->async_req; - page_frag_free(async->pdu); + page_frag_free_va(async->pdu); } static int nvme_tcp_alloc_async_req(struct nvme_tcp_ctrl *ctrl) @@ -1346,7 +1346,7 @@ static int nvme_tcp_alloc_async_req(struct nvme_tcp_ctrl *ctrl) struct 
nvme_tcp_request *async = &ctrl->async_req; u8 hdgst = nvme_tcp_hdgst_len(queue); - async->pdu = page_frag_alloc(&queue->pf_cache, + async->pdu = page_frag_alloc_va(&queue->pf_cache, sizeof(struct nvme_tcp_cmd_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO); if (!async->pdu) diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c index 5bff0d5464d1..560df3db2f82 100644 --- a/drivers/nvme/target/tcp.c +++ b/drivers/nvme/target/tcp.c @@ -1463,24 +1463,24 @@ static int nvmet_tcp_alloc_cmd(struct nvmet_tcp_queue *queue, c->queue = queue; c->req.port = queue->port->nport; - c->cmd_pdu = page_frag_alloc(&queue->pf_cache, + c->cmd_pdu = page_frag_alloc_va(&queue->pf_cache, sizeof(*c->cmd_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO); if (!c->cmd_pdu) return -ENOMEM; c->req.cmd = &c->cmd_pdu->cmd; - c->rsp_pdu = page_frag_alloc(&queue->pf_cache, + c->rsp_pdu = page_frag_alloc_va(&queue->pf_cache, sizeof(*c->rsp_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO); if (!c->rsp_pdu) goto out_free_cmd; c->req.cqe = &c->rsp_pdu->cqe; - c->data_pdu = page_frag_alloc(&queue->pf_cache, + c->data_pdu = page_frag_alloc_va(&queue->pf_cache, sizeof(*c->data_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO); if (!c->data_pdu) goto out_free_rsp; - c->r2t_pdu = page_frag_alloc(&queue->pf_cache, + c->r2t_pdu = page_frag_alloc_va(&queue->pf_cache, sizeof(*c->r2t_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO); if (!c->r2t_pdu) goto out_free_data; @@ -1495,20 +1495,20 @@ static int nvmet_tcp_alloc_cmd(struct nvmet_tcp_queue *queue, return 0; out_free_data: - page_frag_free(c->data_pdu); + page_frag_free_va(c->data_pdu); out_free_rsp: - page_frag_free(c->rsp_pdu); + page_frag_free_va(c->rsp_pdu); out_free_cmd: - page_frag_free(c->cmd_pdu); + page_frag_free_va(c->cmd_pdu); return -ENOMEM; } static void nvmet_tcp_free_cmd(struct nvmet_tcp_cmd *c) { - page_frag_free(c->r2t_pdu); - page_frag_free(c->data_pdu); - page_frag_free(c->rsp_pdu); - page_frag_free(c->cmd_pdu); + page_frag_free_va(c->r2t_pdu); + page_frag_free_va(c->data_pdu); + page_frag_free_va(c->rsp_pdu); + page_frag_free_va(c->cmd_pdu); } static int nvmet_tcp_alloc_cmds(struct nvmet_tcp_queue *queue) diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index f16279351db5..6691fac01e0d 100644 --- a/drivers/vhost/net.c +++ b/drivers/vhost/net.c @@ -686,8 +686,8 @@ static int vhost_net_build_xdp(struct vhost_net_virtqueue *nvq, return -ENOSPC; buflen += SKB_DATA_ALIGN(len + pad); - buf = page_frag_alloc_align(&net->pf_cache, buflen, GFP_KERNEL, - SMP_CACHE_BYTES); + buf = page_frag_alloc_va_align(&net->pf_cache, buflen, GFP_KERNEL, + SMP_CACHE_BYTES); if (unlikely(!buf)) return -ENOMEM; @@ -734,7 +734,7 @@ static int vhost_net_build_xdp(struct vhost_net_virtqueue *nvq, return 0; err: - page_frag_free(buf); + page_frag_free_va(buf); return ret; } diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h index a758cb65a9b3..ef038a07925c 100644 --- a/include/linux/page_frag_cache.h +++ b/include/linux/page_frag_cache.h @@ -9,23 +9,24 @@ void page_frag_cache_drain(struct page_frag_cache *nc); void __page_frag_cache_drain(struct page *page, unsigned int count); -void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz, - gfp_t gfp_mask, unsigned int align_mask); +void *__page_frag_alloc_va_align(struct page_frag_cache *nc, + unsigned int fragsz, gfp_t gfp_mask, + unsigned int align_mask); -static inline void *page_frag_alloc_align(struct page_frag_cache *nc, - unsigned int fragsz, gfp_t gfp_mask, - unsigned int align) +static inline void 
*page_frag_alloc_va_align(struct page_frag_cache *nc, + unsigned int fragsz, + gfp_t gfp_mask, unsigned int align) { WARN_ON_ONCE(!is_power_of_2(align)); - return __page_frag_alloc_align(nc, fragsz, gfp_mask, -align); + return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, -align); } -static inline void *page_frag_alloc(struct page_frag_cache *nc, - unsigned int fragsz, gfp_t gfp_mask) +static inline void *page_frag_alloc_va(struct page_frag_cache *nc, + unsigned int fragsz, gfp_t gfp_mask) { - return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u); + return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, ~0u); } -void page_frag_free(void *addr); +void page_frag_free_va(void *addr); #endif diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index e057db1c63e9..8d50cb3b161e 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -3381,7 +3381,7 @@ static inline struct sk_buff *netdev_alloc_skb_ip_align(struct net_device *dev, static inline void skb_free_frag(void *addr) { - page_frag_free(addr); + page_frag_free_va(addr); } void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align_mask); diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c index fbdf5a1aabfe..3b70b6b071b9 100644 --- a/kernel/bpf/cpumap.c +++ b/kernel/bpf/cpumap.c @@ -323,7 +323,7 @@ static int cpu_map_kthread_run(void *data) /* Bring struct page memory area to curr CPU. Read by * build_skb_around via page_is_pfmemalloc(), and when - * freed written by page_frag_free call. + * freed written by page_frag_free_va call. */ prefetchw(page); } diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c index c5bc72cf018a..70fb6dead624 100644 --- a/mm/page_frag_cache.c +++ b/mm/page_frag_cache.c @@ -59,9 +59,9 @@ void __page_frag_cache_drain(struct page *page, unsigned int count) } EXPORT_SYMBOL(__page_frag_cache_drain); -void *__page_frag_alloc_align(struct page_frag_cache *nc, - unsigned int fragsz, gfp_t gfp_mask, - unsigned int align_mask) +void *__page_frag_alloc_va_align(struct page_frag_cache *nc, + unsigned int fragsz, gfp_t gfp_mask, + unsigned int align_mask) { #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) unsigned int size = nc->size; @@ -130,16 +130,16 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc, return nc->va + (size - remaining); } -EXPORT_SYMBOL(__page_frag_alloc_align); +EXPORT_SYMBOL(__page_frag_alloc_va_align); /* * Frees a page fragment allocated out of either a compound or order 0 page. 
*/ -void page_frag_free(void *addr) +void page_frag_free_va(void *addr) { struct page *page = virt_to_head_page(addr); if (unlikely(put_page_testzero(page))) free_unref_page(page, compound_order(page)); } -EXPORT_SYMBOL(page_frag_free); +EXPORT_SYMBOL(page_frag_free_va); diff --git a/mm/page_frag_test.c b/mm/page_frag_test.c index b7a5affb92f2..9eaa3ab74b29 100644 --- a/mm/page_frag_test.c +++ b/mm/page_frag_test.c @@ -276,7 +276,7 @@ static int page_frag_pop_thread(void *arg) if (obj) { nr--; - page_frag_free(obj); + page_frag_free_va(obj); } else { cond_resched(); } @@ -304,13 +304,16 @@ static int page_frag_push_thread(void *arg) int ret; if (test_align) { - va = page_frag_alloc_align(&test_frag, test_alloc_len, - GFP_KERNEL, SMP_CACHE_BYTES); + va = page_frag_alloc_va_align(&test_frag, + test_alloc_len, + GFP_KERNEL, + SMP_CACHE_BYTES); WARN_ONCE((unsigned long)va & (SMP_CACHE_BYTES - 1), "unaligned va returned\n"); } else { - va = page_frag_alloc(&test_frag, test_alloc_len, GFP_KERNEL); + va = page_frag_alloc_va(&test_frag, test_alloc_len, + GFP_KERNEL); } if (!va) @@ -318,7 +321,7 @@ static int page_frag_push_thread(void *arg) ret = objpool_push(va, pool); if (ret) { - page_frag_free(va); + page_frag_free_va(va); cond_resched(); } else { nr--; diff --git a/net/core/skbuff.c b/net/core/skbuff.c index 83f8cd8aa2d1..4b8acd967793 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -314,8 +314,8 @@ void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align_mask) fragsz = SKB_DATA_ALIGN(fragsz); local_lock_nested_bh(&napi_alloc_cache.bh_lock); - data = __page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC, - align_mask); + data = __page_frag_alloc_va_align(&nc->page, fragsz, GFP_ATOMIC, + align_mask); local_unlock_nested_bh(&napi_alloc_cache.bh_lock); return data; @@ -330,8 +330,8 @@ void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask) struct page_frag_cache *nc = this_cpu_ptr(&netdev_alloc_cache); fragsz = SKB_DATA_ALIGN(fragsz); - data = __page_frag_alloc_align(nc, fragsz, GFP_ATOMIC, - align_mask); + data = __page_frag_alloc_va_align(nc, fragsz, GFP_ATOMIC, + align_mask); } else { local_bh_disable(); data = __napi_alloc_frag_align(fragsz, align_mask); @@ -748,14 +748,14 @@ struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int len, if (in_hardirq() || irqs_disabled()) { nc = this_cpu_ptr(&netdev_alloc_cache); - data = page_frag_alloc(nc, len, gfp_mask); + data = page_frag_alloc_va(nc, len, gfp_mask); pfmemalloc = nc->pfmemalloc; } else { local_bh_disable(); local_lock_nested_bh(&napi_alloc_cache.bh_lock); nc = this_cpu_ptr(&napi_alloc_cache.page); - data = page_frag_alloc(nc, len, gfp_mask); + data = page_frag_alloc_va(nc, len, gfp_mask); pfmemalloc = nc->pfmemalloc; local_unlock_nested_bh(&napi_alloc_cache.bh_lock); @@ -845,7 +845,7 @@ struct sk_buff *napi_alloc_skb(struct napi_struct *napi, unsigned int len) } else { len = SKB_HEAD_ALIGN(len); - data = page_frag_alloc(&nc->page, len, gfp_mask); + data = page_frag_alloc_va(&nc->page, len, gfp_mask); pfmemalloc = nc->page.pfmemalloc; } local_unlock_nested_bh(&napi_alloc_cache.bh_lock); diff --git a/net/core/xdp.c b/net/core/xdp.c index bcc5551c6424..7d4e09fb478f 100644 --- a/net/core/xdp.c +++ b/net/core/xdp.c @@ -387,7 +387,7 @@ void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct, page_pool_put_full_page(page->pp, page, napi_direct); break; case MEM_TYPE_PAGE_SHARED: - page_frag_free(data); + page_frag_free_va(data); break; case MEM_TYPE_PAGE_ORDER0: page = 
virt_to_page(data); /* Assumes order0 page*/ diff --git a/net/rxrpc/txbuf.c b/net/rxrpc/txbuf.c index c3913d8a50d3..dccb0353ee84 100644 --- a/net/rxrpc/txbuf.c +++ b/net/rxrpc/txbuf.c @@ -33,8 +33,8 @@ struct rxrpc_txbuf *rxrpc_alloc_data_txbuf(struct rxrpc_call *call, size_t data_ data_align = umax(data_align, L1_CACHE_BYTES); mutex_lock(&call->conn->tx_data_alloc_lock); - buf = page_frag_alloc_align(&call->conn->tx_data_alloc, total, gfp, - data_align); + buf = page_frag_alloc_va_align(&call->conn->tx_data_alloc, total, gfp, + data_align); mutex_unlock(&call->conn->tx_data_alloc_lock); if (!buf) { kfree(txb); @@ -96,17 +96,18 @@ struct rxrpc_txbuf *rxrpc_alloc_ack_txbuf(struct rxrpc_call *call, size_t sack_s if (!txb) return NULL; - buf = page_frag_alloc(&call->local->tx_alloc, - sizeof(*whdr) + sizeof(*ack) + 1 + 3 + sizeof(*trailer), gfp); + buf = page_frag_alloc_va(&call->local->tx_alloc, + sizeof(*whdr) + sizeof(*ack) + 1 + 3 + sizeof(*trailer), gfp); if (!buf) { kfree(txb); return NULL; } if (sack_size) { - buf2 = page_frag_alloc(&call->local->tx_alloc, sack_size, gfp); + buf2 = page_frag_alloc_va(&call->local->tx_alloc, sack_size, + gfp); if (!buf2) { - page_frag_free(buf); + page_frag_free_va(buf); kfree(txb); return NULL; } @@ -180,7 +181,7 @@ static void rxrpc_free_txbuf(struct rxrpc_txbuf *txb) rxrpc_txbuf_free); for (i = 0; i < txb->nr_kvec; i++) if (txb->kvec[i].iov_base) - page_frag_free(txb->kvec[i].iov_base); + page_frag_free_va(txb->kvec[i].iov_base); kfree(txb); atomic_dec(&rxrpc_nr_txbuf); } diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c index 6b3f01beb294..42d20412c1c3 100644 --- a/net/sunrpc/svcsock.c +++ b/net/sunrpc/svcsock.c @@ -1222,8 +1222,8 @@ static int svc_tcp_sendmsg(struct svc_sock *svsk, struct svc_rqst *rqstp, /* The stream record marker is copied into a temporary page * fragment buffer so that it can be included in rq_bvec. 
*/ - buf = page_frag_alloc(&svsk->sk_frag_cache, sizeof(marker), - GFP_KERNEL); + buf = page_frag_alloc_va(&svsk->sk_frag_cache, sizeof(marker), + GFP_KERNEL); if (!buf) return -ENOMEM; memcpy(buf, &marker, sizeof(marker)); @@ -1235,7 +1235,7 @@ static int svc_tcp_sendmsg(struct svc_sock *svsk, struct svc_rqst *rqstp, iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, rqstp->rq_bvec, 1 + count, sizeof(marker) + rqstp->rq_res.len); ret = sock_sendmsg(svsk->sk_sock, &msg); - page_frag_free(buf); + page_frag_free_va(buf); if (ret < 0) return ret; *sentp += ret; From patchwork Wed Jul 31 12:44:55 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yunsheng Lin X-Patchwork-Id: 13748709 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id B81BEC3DA64 for ; Wed, 31 Jul 2024 12:50:59 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 0E18F6B008C; Wed, 31 Jul 2024 08:50:59 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 092B66B0092; Wed, 31 Jul 2024 08:50:59 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id E9D836B0095; Wed, 31 Jul 2024 08:50:58 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id BAA406B008C for ; Wed, 31 Jul 2024 08:50:58 -0400 (EDT) Received: from smtpin03.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay06.hostedemail.com (Postfix) with ESMTP id 759C6A4A7D for ; Wed, 31 Jul 2024 12:50:58 +0000 (UTC) X-FDA: 82400032596.03.94E8987 Received: from szxga01-in.huawei.com (szxga01-in.huawei.com [45.249.212.187]) by imf22.hostedemail.com (Postfix) with ESMTP id 08D9FC002F for ; Wed, 31 Jul 2024 12:50:55 +0000 (UTC) Authentication-Results: imf22.hostedemail.com; dkim=none; spf=pass (imf22.hostedemail.com: domain of linyunsheng@huawei.com designates 45.249.212.187 as permitted sender) smtp.mailfrom=linyunsheng@huawei.com; dmarc=pass (policy=quarantine) header.from=huawei.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1722430200; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=EwZMLHgRLQ31cvJjRMZoZytD3CYHXQqT+91ejIA7Ets=; b=iAh1xqiU0aQ+VA3+AMDHX1/NZIqoNQyLVShHvXIKUykjzb+xa5wAMN+WMFIVVRvqDvefS5 UkySe71HU7+aep++vmR6BpBkg1lwseAeSgUHclO91LyKYF12NySxZT+qaYQnvO/ktdZUoV kWkH4caRSp1zf9tvO9j/4Awcr710zaY= ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1722430200; a=rsa-sha256; cv=none; b=7mKdLqC4fC3wDnrhsYu17rvlhMYW0G4zZ/arbainHH8KEWcw68AGh2TiCrJ/htucCfLCm5 LVg+w4XCBRY2tC0Tzn0B2mOzpbJh6JOJd+QKEfgtWUQmURfZlMGjr9YOBDWeMPc3wP5RCy FYXOGO/B6xZ7QeuYoXoigQcHtxr6iWc= ARC-Authentication-Results: i=1; imf22.hostedemail.com; dkim=none; spf=pass (imf22.hostedemail.com: domain of linyunsheng@huawei.com designates 45.249.212.187 as permitted sender) smtp.mailfrom=linyunsheng@huawei.com; dmarc=pass (policy=quarantine) header.from=huawei.com Received: from mail.maildlp.com (unknown [172.19.163.252]) by szxga01-in.huawei.com (SkyGuard) with ESMTP id 4WYsQT3Z71zxVyf; Wed, 31 Jul 2024 20:50:41 +0800 (CST) 
Received: from dggpemf200006.china.huawei.com (unknown [7.185.36.61]) by mail.maildlp.com (Postfix) with ESMTPS id 33032180AE3; Wed, 31 Jul 2024 20:50:53 +0800 (CST) Received: from localhost.localdomain (10.90.30.45) by dggpemf200006.china.huawei.com (7.185.36.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.11; Wed, 31 Jul 2024 20:50:52 +0800 From: Yunsheng Lin To: , , CC: , , Yunsheng Lin , Alexander Duyck , Alexander Duyck , "Michael S. Tsirkin" , Jason Wang , =?utf-8?q?Eugenio_P=C3=A9rez?= , Andrew Morton , Eric Dumazet , David Howells , Marc Dionne , Trond Myklebust , Anna Schumaker , Chuck Lever , Jeff Layton , Neil Brown , Olga Kornievskaia , Dai Ngo , Tom Talpey , , , , , Subject: [PATCH net-next v12 05/14] mm: page_frag: avoid caller accessing 'page_frag_cache' directly Date: Wed, 31 Jul 2024 20:44:55 +0800 Message-ID: <20240731124505.2903877-6-linyunsheng@huawei.com> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20240731124505.2903877-1-linyunsheng@huawei.com> References: <20240731124505.2903877-1-linyunsheng@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.90.30.45] X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To dggpemf200006.china.huawei.com (7.185.36.61) X-Stat-Signature: 6qpgr3xd5iznw6y3d6mza91kchjfbc1z X-Rspamd-Queue-Id: 08D9FC002F X-Rspam-User: X-Rspamd-Server: rspam08 X-HE-Tag: 1722430255-741869 X-HE-Meta: U2FsdGVkX1/9/Upzd+O0gTSh1SGZvxbcBVxynAxgK+DVZdKQ18LOaRwXZQ92isWEimcgV8s6qXr/sMUZsen+LP39PjH8WGepkJV8N9CZUAQh68qqdRAHhSxIlYytPU3u8B9xhxbka/XRJTW7mjz78MMGl5g8TBX9Ob+PK1kjeMRjQFFo+kSm+FZQtPUFQwdZUZaBv1SfjmcqXSzdI34M02VS0XaAEvh/Ey9fZbzNso4QRNY2G0Ccm3ubBqTXrK8c/Q8f5ZQYr+Sg6Ic8TbbHqWt7LK25XYohkMDsP+EhXMd0rMWKSa4wWa173euDP71NJBOZaIUdGoqpVey0nTzgaqt18N12Gk1d9o6tx/RVKjTl8G0A+KrGiIRJZ29IsjdKnXXp7uHWesC6IN0At0nsFStBW5TQ9QOmAgTcvFSnV2qzMnGPFpy9nKbTl+DEKFZzYM1NJ+xIO2qWiZPLq5F2wMHzPrAJhyejATjapQNT2I1OTI/EnDmTKpGPQsgDULkPfMzzr/QnQNfzwsNqHiWWWaI0Gk3wz86pG/djTou2zYwU9ra2zHI1uKnQIqGKIzNLoY2/TGe/P38a4BHFSQOm9d40xAgKraHrpRTVwb81zk/u+BTAmLmaNf6f6OBReJCCy7Q5xlE7V5gJ9cBcsgI9h2GV4lmoQCFVC/VqGDdf54FnYs8I2lCgz3E2iEDn2VoVnhG9Z1a4fL4NC2rgp5aOfeYmbyZP4lQVqbp6osNoCx/Z5LTcyqDyd/fkUm+E0HNbnQxvO5TQMvL27/kfZIF41qiH4ampG1Y2+IVjVWtHqmmiQeDyQB9rZbihYHAdqR5q2zEvH3JMTH0djCBRIGPygqB3LUSgXK+uHGX8+l2d874yhyOof4iOquLIj99XIwOM8XuM1S9bJ0FZYHSfBgE7adeJhvF3E8wSzXebLfmsnnZ2nULK0GrNfVSTSoN6/JB+hP2lhT1bUUdQhArBaH4 moeEQFlz uqIgADEoiYXq3XG7aUZ0iq1MSrsj6WHaKZEIvKJ/vDqxa4JtHluT/l38TZlbZgXB6KDwfAOUzE4R80TT5OLYScmkmVp0CLmCEeEK4mZiq3bRg3YWczequLWPrBRv8hRF9b3EFRrRvGD/N6sDy1TFzr0wC2SkCLj1yprNRJ0IGQa4EyHloCLsx9uSjJOqyyW0JV2k8C8re/mUlJb2ynsDQpQhl/R/g5czz9/cDuLgp6J3YwiUr9WRNzIoy1Blq+ONvXgya8guCTCn7Xr2uyAbNGmuXaOtD9pck0MpeDSNwKyWQaEYqnJItxBFATfYFQ/sGQ4QpSsUrEOZfJobNqijfnNAkXuFrJNkrajzL X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Use appropriate frag_page API instead of caller accessing 'page_frag_cache' directly. 
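From a caller's perspective the conversion is mechanical; a minimal sketch (the wrapper function names below are made up, the real call-site changes are in the diff that follows):

	#include <linux/page_frag_cache.h>

	static void example_init(struct page_frag_cache *nc)
	{
		page_frag_cache_init(nc);		/* was: nc->va = NULL; */
	}

	static bool example_pfmemalloc(struct page_frag_cache *nc)
	{
		return page_frag_cache_is_pfmemalloc(nc);	/* was: nc->pfmemalloc */
	}

	static void example_drain(struct page_frag_cache *nc)
	{
		/* was: if (nc->va)
		 *	    __page_frag_cache_drain(virt_to_head_page(nc->va),
		 *				    nc->pagecnt_bias);
		 */
		page_frag_cache_drain(nc);
	}
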
CC: Alexander Duyck Signed-off-by: Yunsheng Lin Reviewed-by: Alexander Duyck Acked-by: Chuck Lever --- drivers/vhost/net.c | 2 +- include/linux/page_frag_cache.h | 10 ++++++++++ mm/page_frag_test.c | 2 +- net/core/skbuff.c | 6 +++--- net/rxrpc/conn_object.c | 4 +--- net/rxrpc/local_object.c | 4 +--- net/sunrpc/svcsock.c | 6 ++---- 7 files changed, 19 insertions(+), 15 deletions(-) diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index 6691fac01e0d..b2737dc0dc50 100644 --- a/drivers/vhost/net.c +++ b/drivers/vhost/net.c @@ -1325,7 +1325,7 @@ static int vhost_net_open(struct inode *inode, struct file *f) vqs[VHOST_NET_VQ_RX]); f->private_data = n; - n->pf_cache.va = NULL; + page_frag_cache_init(&n->pf_cache); return 0; } diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h index ef038a07925c..7c9125a9aed3 100644 --- a/include/linux/page_frag_cache.h +++ b/include/linux/page_frag_cache.h @@ -7,6 +7,16 @@ #include #include +static inline void page_frag_cache_init(struct page_frag_cache *nc) +{ + nc->va = NULL; +} + +static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc) +{ + return !!nc->pfmemalloc; +} + void page_frag_cache_drain(struct page_frag_cache *nc); void __page_frag_cache_drain(struct page *page, unsigned int count); void *__page_frag_alloc_va_align(struct page_frag_cache *nc, diff --git a/mm/page_frag_test.c b/mm/page_frag_test.c index 9eaa3ab74b29..6df8d8865afe 100644 --- a/mm/page_frag_test.c +++ b/mm/page_frag_test.c @@ -344,7 +344,7 @@ static int __init page_frag_test_init(void) u64 duration; int ret; - test_frag.va = NULL; + page_frag_cache_init(&test_frag); atomic_set(&nthreads, 2); init_completion(&wait); diff --git a/net/core/skbuff.c b/net/core/skbuff.c index 4b8acd967793..76a473b1072d 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -749,14 +749,14 @@ struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int len, if (in_hardirq() || irqs_disabled()) { nc = this_cpu_ptr(&netdev_alloc_cache); data = page_frag_alloc_va(nc, len, gfp_mask); - pfmemalloc = nc->pfmemalloc; + pfmemalloc = page_frag_cache_is_pfmemalloc(nc); } else { local_bh_disable(); local_lock_nested_bh(&napi_alloc_cache.bh_lock); nc = this_cpu_ptr(&napi_alloc_cache.page); data = page_frag_alloc_va(nc, len, gfp_mask); - pfmemalloc = nc->pfmemalloc; + pfmemalloc = page_frag_cache_is_pfmemalloc(nc); local_unlock_nested_bh(&napi_alloc_cache.bh_lock); local_bh_enable(); @@ -846,7 +846,7 @@ struct sk_buff *napi_alloc_skb(struct napi_struct *napi, unsigned int len) len = SKB_HEAD_ALIGN(len); data = page_frag_alloc_va(&nc->page, len, gfp_mask); - pfmemalloc = nc->page.pfmemalloc; + pfmemalloc = page_frag_cache_is_pfmemalloc(&nc->page); } local_unlock_nested_bh(&napi_alloc_cache.bh_lock); diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c index 1539d315afe7..694c4df7a1a3 100644 --- a/net/rxrpc/conn_object.c +++ b/net/rxrpc/conn_object.c @@ -337,9 +337,7 @@ static void rxrpc_clean_up_connection(struct work_struct *work) */ rxrpc_purge_queue(&conn->rx_queue); - if (conn->tx_data_alloc.va) - __page_frag_cache_drain(virt_to_page(conn->tx_data_alloc.va), - conn->tx_data_alloc.pagecnt_bias); + page_frag_cache_drain(&conn->tx_data_alloc); call_rcu(&conn->rcu, rxrpc_rcu_free_connection); } diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c index 504453c688d7..a8cffe47cf01 100644 --- a/net/rxrpc/local_object.c +++ b/net/rxrpc/local_object.c @@ -452,9 +452,7 @@ void rxrpc_destroy_local(struct rxrpc_local *local) #endif 
rxrpc_purge_queue(&local->rx_queue); rxrpc_purge_client_connections(local); - if (local->tx_alloc.va) - __page_frag_cache_drain(virt_to_page(local->tx_alloc.va), - local->tx_alloc.pagecnt_bias); + page_frag_cache_drain(&local->tx_alloc); } /* diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c index 42d20412c1c3..4b1e87187614 100644 --- a/net/sunrpc/svcsock.c +++ b/net/sunrpc/svcsock.c @@ -1609,7 +1609,6 @@ static void svc_tcp_sock_detach(struct svc_xprt *xprt) static void svc_sock_free(struct svc_xprt *xprt) { struct svc_sock *svsk = container_of(xprt, struct svc_sock, sk_xprt); - struct page_frag_cache *pfc = &svsk->sk_frag_cache; struct socket *sock = svsk->sk_sock; trace_svcsock_free(svsk, sock); @@ -1619,8 +1618,7 @@ static void svc_sock_free(struct svc_xprt *xprt) sockfd_put(sock); else sock_release(sock); - if (pfc->va) - __page_frag_cache_drain(virt_to_head_page(pfc->va), - pfc->pagecnt_bias); + + page_frag_cache_drain(&svsk->sk_frag_cache); kfree(svsk); } From patchwork Wed Jul 31 12:44:57 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yunsheng Lin X-Patchwork-Id: 13748710 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id CC93BC3DA64 for ; Wed, 31 Jul 2024 12:51:02 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 6136D6B0092; Wed, 31 Jul 2024 08:51:02 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 5C32F6B0095; Wed, 31 Jul 2024 08:51:02 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 4B1306B0096; Wed, 31 Jul 2024 08:51:02 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id 265DB6B0092 for ; Wed, 31 Jul 2024 08:51:02 -0400 (EDT) Received: from smtpin16.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay10.hostedemail.com (Postfix) with ESMTP id CD38EC013D for ; Wed, 31 Jul 2024 12:51:01 +0000 (UTC) X-FDA: 82400032722.16.0C6AA6D Received: from szxga01-in.huawei.com (szxga01-in.huawei.com [45.249.212.187]) by imf05.hostedemail.com (Postfix) with ESMTP id 6F1C2100003 for ; Wed, 31 Jul 2024 12:50:59 +0000 (UTC) Authentication-Results: imf05.hostedemail.com; dkim=none; dmarc=pass (policy=quarantine) header.from=huawei.com; spf=pass (imf05.hostedemail.com: domain of linyunsheng@huawei.com designates 45.249.212.187 as permitted sender) smtp.mailfrom=linyunsheng@huawei.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1722430214; a=rsa-sha256; cv=none; b=kgMrHLGlbYrMjcfVZz7J3ZkZHkeutsl9BIeACJRW9cAh5/JKm3yy7k37Ptiekm8cjuJRKS B0PjSMolLl0HXi0mDH8O3yospIcvTX+oZbs26WGoLueBuvo66Ni6PeEkp3jYk7ySFXOU34 HF+xzJXhCuOsabbqDESy55h/pV+d5qY= ARC-Authentication-Results: i=1; imf05.hostedemail.com; dkim=none; dmarc=pass (policy=quarantine) header.from=huawei.com; spf=pass (imf05.hostedemail.com: domain of linyunsheng@huawei.com designates 45.249.212.187 as permitted sender) smtp.mailfrom=linyunsheng@huawei.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1722430214; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type: content-transfer-encoding:content-transfer-encoding: 
in-reply-to:in-reply-to:references:references; bh=wbxN6HFmPApZpCmj35iTaQWSjGHuVH+23z1ZHCdtx+U=; b=knJb4ONh2024J3dkJXnaH1EAbt9auiD9MKYaaTwSvavnWSlnC60H1z3a8W3712lfMGKRGf s/XjWN6wPpCt0mtIzXordinIVoTZvvhYla9jYVOWlQmLA6/dA6rvlphHzni88aPlbcwoVh zuSjqZj5LryyreBryvr1sI/WIrDBWLA= Received: from mail.maildlp.com (unknown [172.19.163.174]) by szxga01-in.huawei.com (SkyGuard) with ESMTP id 4WYsQY0d0rzxW2B; Wed, 31 Jul 2024 20:50:45 +0800 (CST) Received: from dggpemf200006.china.huawei.com (unknown [7.185.36.61]) by mail.maildlp.com (Postfix) with ESMTPS id C7689140360; Wed, 31 Jul 2024 20:50:56 +0800 (CST) Received: from localhost.localdomain (10.90.30.45) by dggpemf200006.china.huawei.com (7.185.36.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.11; Wed, 31 Jul 2024 20:50:56 +0800 From: Yunsheng Lin To: , , CC: , , Yunsheng Lin , Alexander Duyck , Andrew Morton , Subject: [PATCH net-next v12 07/14] mm: page_frag: reuse existing space for 'size' and 'pfmemalloc' Date: Wed, 31 Jul 2024 20:44:57 +0800 Message-ID: <20240731124505.2903877-8-linyunsheng@huawei.com> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20240731124505.2903877-1-linyunsheng@huawei.com> References: <20240731124505.2903877-1-linyunsheng@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.90.30.45] X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To dggpemf200006.china.huawei.com (7.185.36.61) X-Rspamd-Server: rspam12 X-Rspamd-Queue-Id: 6F1C2100003 X-Stat-Signature: mfwmhbamtc9pjzpck1t79dw3uxk8ut9z X-Rspam-User: X-HE-Tag: 1722430259-618635 X-HE-Meta: U2FsdGVkX19BrmB/SSTNVOGmmg9k9yvSFOLXx9R+QQKmxw3bGJpAqlGCZ2pgcK+svE9LltnZdziNOg4Ybi/AViJDWu4400FJfISS3uwUqWBZTmWLveeWtEcPWbHZcBayi5Un0qCUqOg8WkR2TpR8akMcVdyDYSfgs1r4agsugJC1NIBEt5YIOYAipO5VeAQ5/bt4h3ce0HrsdCgOG2nvEzP6YpY032RnsQZMETLQGxQrsXM7oujyN/NMnER0xPyQ1Z8VmjQq8taeuA+dCLDz9IV9gZmeRwW2ww/iQLwIkJ5OolzKpqwmyorAqfwg6tZBTjBRtNG1aC5PzJSCqWWHD8BU+03awP/hMNvpViGfPyJDRDe/Fucn/VLW2IHjODY0aiF5zsyF1l2O+nsbVwLbkAYtXX8/4fx3S2GfmBEFXXa3dncRV2oILFYcjotk4HWtO5+BFPENioBUuEfKwVY1QgxcmnI+Dl37dfmTuLgbe0+4yR5SOXpC5JQl/kEL3OaRUPd2zos1S0kuv+hpusNJb/vFkuGlDz86CQify/zoUu9pVxXIUa0ThMRaajEpqNWom9TbN7+4aM2GWnOftIe1uVsoDZ1L5J9mNmKcdY6FccpGy6V8JwICuH0vyPkcBTDbkdnNzfQ0SuyIm3vZXE7GRy66UL97aC7b1lhYO+OrAsa6H3BvgGOrzp24l/7ohAieP1catNVdviXQzCeHflV9sr/cciJvthUV6b4f2ban+uf4gag6ZMmTdbFae0KBBoQI0yEwxSsunig6lzhlne5aXdermEvTW8tXY3jQnyV9t4fqlQbTKF8n7mR7QLuqFfUe7+q7pm+EdfJYgjJtzwD9a/nrdTpYsrVmxU0TEzij0xgUX03HrWXcWcl2uRVN7x6ZLgXDZUUWYCUx+7oDxyeychuM7uqTZpZGaVzcvkGSReYfqrboPj9tqKbDFIdRqhHw9BCuf6kbUHbSjZEPTKI O36ODv30 5OOqCncubhgBxGJzHDA8w7vrtzjEXHma3oAczW14Lpg3JAijxJhOynX6npRFZiUe2mDUTJfosfAuR6mGsJNKUOeeGRRFb4C4KTHgK4hEv/zn89gLRTfZaa5atNW3XQUMcvOA4sRLtU7xAJ6/pZCzVbUs3w4OmTnHZgfGfqF3+io3Z2XoLHCM2nSOedLGMGm6ESLMAsU4yVv96SquOCp5HiPkEnyBzCcdKmzR7AxYpnKKGOnruk1zT2QWc9b0eTOLGUyznFHmx9bopRSP0khWfv6q/Q3mkMjd5s74A X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Currently there is one 'struct page_frag' for every 'struct sock' and 'struct task_struct', we are about to replace the 'struct page_frag' with 'struct page_frag_cache' for them. Before begin the replacing, we need to ensure the size of 'struct page_frag_cache' is not bigger than the size of 'struct page_frag', as there may be tens of thousands of 'struct sock' and 'struct task_struct' instances in the system. 
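The size constraint is not enforced by the compiler in this patch; if it ever needs to be, a compile-time assertion along the following lines (purely a sketch, not part of this series) would catch a regression:

	#include <linux/build_bug.h>
	#include <linux/mm_types_task.h>

	static inline void page_frag_cache_size_check(void)
	{
		/* 'struct page_frag_cache' must not outgrow 'struct page_frag' */
		BUILD_BUG_ON(sizeof(struct page_frag_cache) > sizeof(struct page_frag));
	}
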
By or'ing the page order & pfmemalloc with lower bits of 'va' instead of using 'u16' or 'u32' for page size and 'u8' for pfmemalloc, we are able to avoid 3 or 5 bytes space waste. And page address & pfmemalloc & order is unchanged for the same page in the same 'page_frag_cache' instance, it makes sense to fit them together. After this patch, the size of 'struct page_frag_cache' should be the same as the size of 'struct page_frag'. CC: Alexander Duyck Signed-off-by: Yunsheng Lin --- include/linux/mm_types_task.h | 16 +++++----- include/linux/page_frag_cache.h | 52 +++++++++++++++++++++++++++++++-- mm/page_frag_cache.c | 49 +++++++++++++++++-------------- 3 files changed, 85 insertions(+), 32 deletions(-) diff --git a/include/linux/mm_types_task.h b/include/linux/mm_types_task.h index b1c54b2b9308..f2610112a642 100644 --- a/include/linux/mm_types_task.h +++ b/include/linux/mm_types_task.h @@ -50,18 +50,18 @@ struct page_frag { #define PAGE_FRAG_CACHE_MAX_SIZE __ALIGN_MASK(32768, ~PAGE_MASK) #define PAGE_FRAG_CACHE_MAX_ORDER get_order(PAGE_FRAG_CACHE_MAX_SIZE) struct page_frag_cache { - void *va; -#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) + /* encoded_va consists of the virtual address, pfmemalloc bit and order + * of a page. + */ + unsigned long encoded_va; + +#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) && (BITS_PER_LONG <= 32) __u16 remaining; - __u16 size; + __u16 pagecnt_bias; #else __u32 remaining; + __u32 pagecnt_bias; #endif - /* we maintain a pagecount bias, so that we dont dirty cache line - * containing page->_refcount every time we allocate a fragment. - */ - unsigned int pagecnt_bias; - bool pfmemalloc; }; /* Track pages that require TLB flushes */ diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h index 7c9125a9aed3..4ce924eaf1b1 100644 --- a/include/linux/page_frag_cache.h +++ b/include/linux/page_frag_cache.h @@ -3,18 +3,66 @@ #ifndef _LINUX_PAGE_FRAG_CACHE_H #define _LINUX_PAGE_FRAG_CACHE_H +#include +#include #include #include #include +#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) +/* Use a full byte here to enable assembler optimization as the shift + * operation is usually expecting a byte. + */ +#define PAGE_FRAG_CACHE_ORDER_MASK GENMASK(7, 0) +#define PAGE_FRAG_CACHE_PFMEMALLOC_BIT BIT(8) +#define PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT 8 +#else +/* Compiler should be able to figure out we don't read things as any value + * ANDed with 0 is 0. 
+ */ +#define PAGE_FRAG_CACHE_ORDER_MASK 0 +#define PAGE_FRAG_CACHE_PFMEMALLOC_BIT BIT(0) +#define PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT 0 +#endif + +static inline unsigned long encode_aligned_va(void *va, unsigned int order, + bool pfmemalloc) +{ + BUILD_BUG_ON(PAGE_FRAG_CACHE_MAX_ORDER > PAGE_FRAG_CACHE_ORDER_MASK); + BUILD_BUG_ON(PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT >= PAGE_SHIFT); + + return (unsigned long)va | (order & PAGE_FRAG_CACHE_ORDER_MASK) | + ((unsigned long)pfmemalloc << PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT); +} + +static inline unsigned long encoded_page_order(unsigned long encoded_va) +{ + return encoded_va & PAGE_FRAG_CACHE_ORDER_MASK; +} + +static inline bool encoded_page_pfmemalloc(unsigned long encoded_va) +{ + return !!(encoded_va & PAGE_FRAG_CACHE_PFMEMALLOC_BIT); +} + +static inline void *encoded_page_address(unsigned long encoded_va) +{ + return (void *)(encoded_va & PAGE_MASK); +} + static inline void page_frag_cache_init(struct page_frag_cache *nc) { - nc->va = NULL; + nc->encoded_va = 0; } static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc) { - return !!nc->pfmemalloc; + return encoded_page_pfmemalloc(nc->encoded_va); +} + +static inline unsigned int page_frag_cache_page_size(unsigned long encoded_va) +{ + return PAGE_SIZE << encoded_page_order(encoded_va); } void page_frag_cache_drain(struct page_frag_cache *nc); diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c index 70fb6dead624..2544b292375a 100644 --- a/mm/page_frag_cache.c +++ b/mm/page_frag_cache.c @@ -22,6 +22,7 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc, gfp_t gfp_mask) { + unsigned long order = PAGE_FRAG_CACHE_MAX_ORDER; struct page *page = NULL; gfp_t gfp = gfp_mask; @@ -30,23 +31,31 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc, __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC; page = alloc_pages_node(NUMA_NO_NODE, gfp_mask, PAGE_FRAG_CACHE_MAX_ORDER); - nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE; #endif - if (unlikely(!page)) + if (unlikely(!page)) { page = alloc_pages_node(NUMA_NO_NODE, gfp, 0); + if (unlikely(!page)) { + nc->encoded_va = 0; + return NULL; + } - nc->va = page ? page_address(page) : NULL; + order = 0; + } + + nc->encoded_va = encode_aligned_va(page_address(page), order, + page_is_pfmemalloc(page)); return page; } void page_frag_cache_drain(struct page_frag_cache *nc) { - if (!nc->va) + if (!nc->encoded_va) return; - __page_frag_cache_drain(virt_to_head_page(nc->va), nc->pagecnt_bias); - nc->va = NULL; + __page_frag_cache_drain(virt_to_head_page((void *)nc->encoded_va), + nc->pagecnt_bias); + nc->encoded_va = 0; } EXPORT_SYMBOL(page_frag_cache_drain); @@ -63,33 +72,29 @@ void *__page_frag_alloc_va_align(struct page_frag_cache *nc, unsigned int fragsz, gfp_t gfp_mask, unsigned int align_mask) { -#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) - unsigned int size = nc->size; -#else - unsigned int size = PAGE_SIZE; -#endif - unsigned int remaining; + unsigned long encoded_va = nc->encoded_va; + unsigned int size, remaining; struct page *page; - if (unlikely(!nc->va)) { + if (unlikely(!encoded_va)) { refill: page = __page_frag_cache_refill(nc, gfp_mask); if (!page) return NULL; -#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) - /* if size can vary use size else just use PAGE_SIZE */ - size = nc->size; -#endif + encoded_va = nc->encoded_va; + size = page_frag_cache_page_size(encoded_va); + /* Even if we own the page, we do not use atomic_set(). * This would break get_page_unless_zero() users. 
*/ page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE); /* reset page count bias and remaining to start of new frag */ - nc->pfmemalloc = page_is_pfmemalloc(page); nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; nc->remaining = size; + } else { + size = page_frag_cache_page_size(encoded_va); } remaining = nc->remaining & align_mask; @@ -107,13 +112,13 @@ void *__page_frag_alloc_va_align(struct page_frag_cache *nc, return NULL; } - page = virt_to_page(nc->va); + page = virt_to_page((void *)encoded_va); if (!page_ref_sub_and_test(page, nc->pagecnt_bias)) goto refill; - if (unlikely(nc->pfmemalloc)) { - free_unref_page(page, compound_order(page)); + if (unlikely(encoded_page_pfmemalloc(encoded_va))) { + free_unref_page(page, encoded_page_order(encoded_va)); goto refill; } @@ -128,7 +133,7 @@ void *__page_frag_alloc_va_align(struct page_frag_cache *nc, nc->pagecnt_bias--; nc->remaining = remaining - fragsz; - return nc->va + (size - remaining); + return encoded_page_address(encoded_va) + (size - remaining); } EXPORT_SYMBOL(__page_frag_alloc_va_align); From patchwork Wed Jul 31 12:44:58 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yunsheng Lin X-Patchwork-Id: 13748711 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7C0ABC49EA1 for ; Wed, 31 Jul 2024 12:51:05 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 920196B0096; Wed, 31 Jul 2024 08:51:04 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 8F6BC6B0098; Wed, 31 Jul 2024 08:51:04 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 795A16B0099; Wed, 31 Jul 2024 08:51:04 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0013.hostedemail.com [216.40.44.13]) by kanga.kvack.org (Postfix) with ESMTP id 5C2F36B0096 for ; Wed, 31 Jul 2024 08:51:04 -0400 (EDT) Received: from smtpin06.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay05.hostedemail.com (Postfix) with ESMTP id CD21C4030A for ; Wed, 31 Jul 2024 12:51:03 +0000 (UTC) X-FDA: 82400032806.06.A7709B2 Received: from szxga03-in.huawei.com (szxga03-in.huawei.com [45.249.212.189]) by imf23.hostedemail.com (Postfix) with ESMTP id 65618140012 for ; Wed, 31 Jul 2024 12:51:01 +0000 (UTC) Authentication-Results: imf23.hostedemail.com; dkim=none; dmarc=pass (policy=quarantine) header.from=huawei.com; spf=pass (imf23.hostedemail.com: domain of linyunsheng@huawei.com designates 45.249.212.189 as permitted sender) smtp.mailfrom=linyunsheng@huawei.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1722430216; a=rsa-sha256; cv=none; b=A0uNhegGNFSSWhtccHhtrHUUdanSaeY1VtHWJQqcwwMFVxhTyc40P6Ljj+bKzdXx1/hztP Cx5agB43YyhVaJGsT5yPfSVbvYNqeRcWxIjygxTW/l6S0CJFtVhfmFTzTINe5t9ps2V1NA 79eYBrIXEXiDgpbXNKLlqIstBiIlgkY= ARC-Authentication-Results: i=1; imf23.hostedemail.com; dkim=none; dmarc=pass (policy=quarantine) header.from=huawei.com; spf=pass (imf23.hostedemail.com: domain of linyunsheng@huawei.com designates 45.249.212.189 as permitted sender) smtp.mailfrom=linyunsheng@huawei.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1722430216; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: 
content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=60pAXD0ZYXW1c2nPlAqP1x/rIedEAPxoWXNVAg0gsp8=; b=dwR2HclIERq0zvNKVV3JicNLmnGa6dYlm5GSGfqbEDFyBM86oS2trCS7lh/eNnGXEgyyCG jj7S7XFPi+1kNMVUwJ4rDPjie70VkFGpWvJwGaztBZCMGoh53CQ47HtPDJ8U/HBWfZYFGw CuqiLMB3ENl50LNRUPNNbUPZwVQ0ea4= Received: from mail.maildlp.com (unknown [172.19.163.174]) by szxga03-in.huawei.com (SkyGuard) with ESMTP id 4WYsKq4M81zQnC9; Wed, 31 Jul 2024 20:46:39 +0800 (CST) Received: from dggpemf200006.china.huawei.com (unknown [7.185.36.61]) by mail.maildlp.com (Postfix) with ESMTPS id AAEEE140360; Wed, 31 Jul 2024 20:50:58 +0800 (CST) Received: from localhost.localdomain (10.90.30.45) by dggpemf200006.china.huawei.com (7.185.36.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.11; Wed, 31 Jul 2024 20:50:58 +0800 From: Yunsheng Lin To: , , CC: , , Yunsheng Lin , Alexander Duyck , Andrew Morton , Subject: [PATCH net-next v12 08/14] mm: page_frag: some minor refactoring before adding new API Date: Wed, 31 Jul 2024 20:44:58 +0800 Message-ID: <20240731124505.2903877-9-linyunsheng@huawei.com> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20240731124505.2903877-1-linyunsheng@huawei.com> References: <20240731124505.2903877-1-linyunsheng@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.90.30.45] X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To dggpemf200006.china.huawei.com (7.185.36.61) X-Rspamd-Server: rspam12 X-Rspamd-Queue-Id: 65618140012 X-Stat-Signature: dscyfmoz7kicb4xe7tm39rewuny6guor X-Rspam-User: X-HE-Tag: 1722430261-830268 X-HE-Meta: U2FsdGVkX188heXBbkv9Aj9UYIAeydWBFhT44hk0wuMmb10cg3iXH3DJJIhTJEMTF40vrKZe1bHcfYAopxDCDd6oyeoKP1ByIsAzADjuw5AUCKeOr0zJ2iKG/q9T6wHJRFM/v0biQHwzcTxsBVDT/T93GTzOLiKFhG5p4NE5oqIyOeTQESxrFVPN79DU1bpnbJAdEU1fCQnH1RaiH2oo3AGcCFUdWhJE3iKkXN/u5itgB4sRG10h7qraXGkBWKwO64U8fEB2wZo09bZlCsEfQhpRf/SA5YZadYGhA3TIsaeTS2Xs8ADftiJi37+ovqDkCF3hYt544exyVoE5h7zIQ3Ng1pyOLQ+day8KPH/utjsDF57UKw5flEglq6SU8pqxLCJg3KJZTjhmMsje62OAOL9fPWdIFQNkCNBBiazcFFCzliuTc+o4/g9qaTMuGLCxjBLJEiTpckvQOFTkAVLEWdT2w191l/TA+fn4epGxqZjZNdBslhhmg49FJxWCVcFYDvRNN7GiFKe5h06cpFuDw6MBzbe2nbjmyshtvemxhto8Vyh5xJjvRpHRU5pCL4HOW8FgsTAQG/a8wn9LtUR9sCbla3dAQzmO4wOr6oz+1x8l3C1Vb856uWDUs7NfA0KlZDJ37m2/g8tEFAkEMJEzCzQfiRagz3wVD4Uo10F2X/V7nYKAMPqpsNbIvaeIF0olGGQn8Lu4Yu5/cMj2WAAiPX+2Srw0NmuPC/PAhviBQgGN/sGeEsYsPZThDUFdhlhgGyUZi+QlkK5D5/e+C3ev//SVnV6wvp5AnbwpSZI+3oWUUQ9AszV3FdgvgLTPwBgg2A3krlAjzk7YkCVKcMiLBkfBrjoSUF9ZcgniIGb5Lhd8W7OvAqmqLpkk20brF4GDDAJqjWU/FyioAwvweHbOigeupVsrTc38KoMk9oZD4YeBo49xm9ySO9HwcT3QzFehk+5Z642soTzqF7aJdNS 0JwS3pQk M2YW3J8n3TQWYN/uPrERNuwU8oUpEs1prQVxOG0uUz8ODkqR6PR2jEWduodvXs9l0qK/xsoNFnLlkcrtK87scqL09coQp1QZbTvcd5BibuA+1QjJpamsQx0CuvVM1VPTroxvJJOgqarV7z0wZTq8VqSoXzvcdIimQ7kCIsEZw+0bTHNpP/BHYvsj+Kki/+bW1Rd+Nu+Pvrx/Wn3ijaTX2gFdtjEl1KpXChG8nHzAJfpk2teSJHbetJmKt2YrRDmLARnWwyIPBworrSUDufxe5fmU4I083TXqTeWyW X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Refactor common codes from __page_frag_alloc_va_align() to __page_frag_cache_reload(), so that the new API can make use of them. 
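Condensed from the diff below into a rough sketch (simplified, not the exact code), the resulting shape of the allocator is a fast path that only consumes 'remaining' plus a single slow path that funnels through __page_frag_cache_reload():

	void *sketch_alloc(struct page_frag_cache *nc, unsigned int fragsz,
			   gfp_t gfp_mask, unsigned int align_mask)
	{
		unsigned long encoded_va = nc->encoded_va;
		unsigned int remaining = nc->remaining & align_mask;

		if (likely(remaining >= fragsz)) {
			nc->pagecnt_bias--;
			nc->remaining = remaining - fragsz;
			return encoded_page_address(encoded_va) +
			       (page_frag_cache_page_size(encoded_va) - remaining);
		}

		if (unlikely(fragsz > PAGE_SIZE))
			return NULL;

		if (unlikely(!__page_frag_cache_reload(nc, gfp_mask)))
			return NULL;

		/* a freshly reloaded cache starts the new fragment at offset 0 */
		nc->pagecnt_bias--;
		nc->remaining -= fragsz;
		return encoded_page_address(nc->encoded_va);
	}
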
CC: Alexander Duyck Signed-off-by: Yunsheng Lin --- include/linux/page_frag_cache.h | 2 +- mm/page_frag_cache.c | 138 ++++++++++++++++++-------------- 2 files changed, 81 insertions(+), 59 deletions(-) diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h index 4ce924eaf1b1..0abffdd10a1c 100644 --- a/include/linux/page_frag_cache.h +++ b/include/linux/page_frag_cache.h @@ -52,7 +52,7 @@ static inline void *encoded_page_address(unsigned long encoded_va) static inline void page_frag_cache_init(struct page_frag_cache *nc) { - nc->encoded_va = 0; + memset(nc, 0, sizeof(*nc)); } static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc) diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c index 2544b292375a..aa6eef55bb9c 100644 --- a/mm/page_frag_cache.c +++ b/mm/page_frag_cache.c @@ -19,8 +19,27 @@ #include #include "internal.h" -static struct page *__page_frag_cache_refill(struct page_frag_cache *nc, - gfp_t gfp_mask) +static bool __page_frag_cache_reuse(unsigned long encoded_va, + unsigned int pagecnt_bias) +{ + struct page *page; + + page = virt_to_page((void *)encoded_va); + if (!page_ref_sub_and_test(page, pagecnt_bias)) + return false; + + if (unlikely(encoded_page_pfmemalloc(encoded_va))) { + free_unref_page(page, encoded_page_order(encoded_va)); + return false; + } + + /* OK, page count is 0, we can safely set it */ + set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1); + return true; +} + +static bool __page_frag_cache_refill(struct page_frag_cache *nc, + gfp_t gfp_mask) { unsigned long order = PAGE_FRAG_CACHE_MAX_ORDER; struct page *page = NULL; @@ -35,8 +54,8 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc, if (unlikely(!page)) { page = alloc_pages_node(NUMA_NO_NODE, gfp, 0); if (unlikely(!page)) { - nc->encoded_va = 0; - return NULL; + memset(nc, 0, sizeof(*nc)); + return false; } order = 0; @@ -45,7 +64,33 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc, nc->encoded_va = encode_aligned_va(page_address(page), order, page_is_pfmemalloc(page)); - return page; + /* Even if we own the page, we do not use atomic_set(). + * This would break get_page_unless_zero() users. + */ + page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE); + + return true; +} + +/* Reload cache by reusing the old cache if it is possible, or + * refilling from the page allocator. 
+ */ +static bool __page_frag_cache_reload(struct page_frag_cache *nc, + gfp_t gfp_mask) +{ + if (likely(nc->encoded_va)) { + if (__page_frag_cache_reuse(nc->encoded_va, nc->pagecnt_bias)) + goto out; + } + + if (unlikely(!__page_frag_cache_refill(nc, gfp_mask))) + return false; + +out: + /* reset page count bias and remaining to start of new frag */ + nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; + nc->remaining = page_frag_cache_page_size(nc->encoded_va); + return true; } void page_frag_cache_drain(struct page_frag_cache *nc) @@ -55,7 +100,7 @@ void page_frag_cache_drain(struct page_frag_cache *nc) __page_frag_cache_drain(virt_to_head_page((void *)nc->encoded_va), nc->pagecnt_bias); - nc->encoded_va = 0; + memset(nc, 0, sizeof(*nc)); } EXPORT_SYMBOL(page_frag_cache_drain); @@ -73,67 +118,44 @@ void *__page_frag_alloc_va_align(struct page_frag_cache *nc, unsigned int align_mask) { unsigned long encoded_va = nc->encoded_va; - unsigned int size, remaining; - struct page *page; - - if (unlikely(!encoded_va)) { -refill: - page = __page_frag_cache_refill(nc, gfp_mask); - if (!page) - return NULL; - - encoded_va = nc->encoded_va; - size = page_frag_cache_page_size(encoded_va); - - /* Even if we own the page, we do not use atomic_set(). - * This would break get_page_unless_zero() users. - */ - page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE); - - /* reset page count bias and remaining to start of new frag */ - nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; - nc->remaining = size; - } else { - size = page_frag_cache_page_size(encoded_va); - } + unsigned int remaining; remaining = nc->remaining & align_mask; - if (unlikely(remaining < fragsz)) { - if (unlikely(fragsz > PAGE_SIZE)) { - /* - * The caller is trying to allocate a fragment - * with fragsz > PAGE_SIZE but the cache isn't big - * enough to satisfy the request, this may - * happen in low memory conditions. - * We don't release the cache page because - * it could make memory pressure worse - * so we simply return NULL here. - */ - return NULL; - } - - page = virt_to_page((void *)encoded_va); - if (!page_ref_sub_and_test(page, nc->pagecnt_bias)) - goto refill; - - if (unlikely(encoded_page_pfmemalloc(encoded_va))) { - free_unref_page(page, encoded_page_order(encoded_va)); - goto refill; - } + /* As we have ensured remaining is zero when initiating and draining old + * cache, 'remaining >= fragsz' checking is enough to indicate there is + * enough available space for the new fragment allocation. + */ + if (likely(remaining >= fragsz)) { + nc->pagecnt_bias--; + nc->remaining = remaining - fragsz; - /* OK, page count is 0, we can safely set it */ - set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1); + return encoded_page_address(encoded_va) + + (page_frag_cache_page_size(encoded_va) - remaining); + } - /* reset page count bias and remaining to start of new frag */ - nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; - remaining = size; + if (unlikely(fragsz > PAGE_SIZE)) { + /* + * The caller is trying to allocate a fragment with + * fragsz > PAGE_SIZE but the cache isn't big enough to satisfy + * the request, this may happen in low memory conditions. We don't + * release the cache page because it could make memory pressure + * worse so we simply return NULL here. 
+ */ + return NULL; } + if (unlikely(!__page_frag_cache_reload(nc, gfp_mask))) + return NULL; + + /* As the we are allocating fragment from cache by count-up way, the offset + * of allocated fragment from the just reloaded cache is zero, so remaining + * aligning and offset calculation are not needed. + */ nc->pagecnt_bias--; - nc->remaining = remaining - fragsz; + nc->remaining -= fragsz; - return encoded_page_address(encoded_va) + (size - remaining); + return encoded_page_address(nc->encoded_va); } EXPORT_SYMBOL(__page_frag_alloc_va_align); From patchwork Wed Jul 31 12:44:59 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yunsheng Lin X-Patchwork-Id: 13748712 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 599FDC3DA64 for ; Wed, 31 Jul 2024 12:51:08 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 240286B0099; Wed, 31 Jul 2024 08:51:06 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 1CBBC6B009A; Wed, 31 Jul 2024 08:51:06 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id EC2586B009B; Wed, 31 Jul 2024 08:51:05 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0010.hostedemail.com [216.40.44.10]) by kanga.kvack.org (Postfix) with ESMTP id CBA956B0099 for ; Wed, 31 Jul 2024 08:51:05 -0400 (EDT) Received: from smtpin27.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay03.hostedemail.com (Postfix) with ESMTP id 7BBE1A02F3 for ; Wed, 31 Jul 2024 12:51:05 +0000 (UTC) X-FDA: 82400032890.27.EBF3EDE Received: from szxga01-in.huawei.com (szxga01-in.huawei.com [45.249.212.187]) by imf12.hostedemail.com (Postfix) with ESMTP id 5111340021 for ; Wed, 31 Jul 2024 12:51:02 +0000 (UTC) Authentication-Results: imf12.hostedemail.com; dkim=none; spf=pass (imf12.hostedemail.com: domain of linyunsheng@huawei.com designates 45.249.212.187 as permitted sender) smtp.mailfrom=linyunsheng@huawei.com; dmarc=pass (policy=quarantine) header.from=huawei.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1722430208; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=Mq4/yA+8opzcyJt5hfobQHkz+fx2h8N/VaShcHkF2v0=; b=whCt+gtJzdpXnysC5Y/bHuibr7aIX9LMU573MAJwkVVew4LZvj3ZeI1qvtDZXkrYZNNKhq 8Uu7vob4JLphAA38t9uj1fOSWJpBdG+uSDwPDptskOHbTClMgj4UvieLRuP63FPw9CSP0W +9MT1ByAr3cdn14zNZzFf4LsFvk1c+I= ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1722430208; a=rsa-sha256; cv=none; b=UoS731ddeYa812rPUhmPog5BamML+5o+cIrh+KA9+oWulNlXWFz2o/MfeehPr1PxGLLlYB E7Y2UnMHev26pfzuXwPqwA+/DuYSZFjpotmTryEEUE+m/FSVYh0Wkq+QnMWpPov3IAxEF8 0WMBRDR5YKQ24hYXVLUNgTKHaNqWev0= ARC-Authentication-Results: i=1; imf12.hostedemail.com; dkim=none; spf=pass (imf12.hostedemail.com: domain of linyunsheng@huawei.com designates 45.249.212.187 as permitted sender) smtp.mailfrom=linyunsheng@huawei.com; dmarc=pass (policy=quarantine) header.from=huawei.com Received: from mail.maildlp.com (unknown [172.19.163.252]) by szxga01-in.huawei.com (SkyGuard) with ESMTP id 4WYsQc3847zxW1g; Wed, 31 Jul 2024 20:50:48 +0800 (CST) Received: 
from dggpemf200006.china.huawei.com (unknown [7.185.36.61]) by mail.maildlp.com (Postfix) with ESMTPS id 2933C180AE3; Wed, 31 Jul 2024 20:51:00 +0800 (CST) Received: from localhost.localdomain (10.90.30.45) by dggpemf200006.china.huawei.com (7.185.36.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.11; Wed, 31 Jul 2024 20:50:59 +0800 From: Yunsheng Lin To: , , CC: , , Yunsheng Lin , Alexander Duyck , Andrew Morton , Subject: [PATCH net-next v12 09/14] mm: page_frag: use __alloc_pages() to replace alloc_pages_node() Date: Wed, 31 Jul 2024 20:44:59 +0800 Message-ID: <20240731124505.2903877-10-linyunsheng@huawei.com> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20240731124505.2903877-1-linyunsheng@huawei.com> References: <20240731124505.2903877-1-linyunsheng@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.90.30.45] X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To dggpemf200006.china.huawei.com (7.185.36.61) X-Rspam-User: X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: 5111340021 X-Stat-Signature: s7ouyji7i8nppbj5wqksjymftnx5i74u X-HE-Tag: 1722430262-199461 X-HE-Meta: U2FsdGVkX1/wWvpEYkr3tY1yto9WzeNp1c+3DwVghY/VugxAcB8Elfl5N1h1R7l6zKDHrJH4Rl1OLY/Gl2u/Kzu3chm3RXv60MW0aeWb1TkYBQ0dkkUy8vXCX/UQX8JnMsOLU5AMXsdq2yWtiYyt0+fZLgxZP1mcynkaUHnEW/ORf82nuQYD5PYKrx8iHb09q5OukheQCy+yBh3pffjUm/kauKrYaatuX12Tuejox393uxsmw2QyOZDjCW8+69sKo8peoCfDFBycVqYpfsLFr6voJcQOIhwsrojSE01slTWyf5SynOzBQSv+VuK1IsGusMpmPvO34pRNwv+ctbaotaw2SSHj+KeYqy/7pPhqfzTulhTzv2niItiSMbfG5al919yHoEpJKdiD80NKx47d4wqhFiBdVm1f5xVnXvlq7nqqUWsdF85akglda8y/v+LSsnGrc/C1yWMgSmCdvEP8cUiI1DPoNU+Qg+hQowYfDUu5+S6dFAHBGGzpOtgaExc29tfP1n+5rKzScAJyVDvxOQOTMTQBffMDNbLZQaR8XhTrrwCRPz3bjINyrbhXNZ6Lf6DV8KUv9D9qjcw5m0OB9X3noZWERmfGTkF21y5Wtg/JQwLDOtLp576kOMEPOzsfao+SaWfnJxPJi/pWrxCrXei9fp9qtMxFNkBLAEvAKskyTg51yggnnAvq5TaVxlV4eVdXpZejwAaggxSiSxtDqRWnE/6a2+0SxG7pUo+KNyq/4NlgR4QraVjJeR0kg+JEe8IJ27GN7ZoDHDgaGukwAJ0RC/lo5esO/sbDODR8fxkzBA9XhkF49ZH3aInMfYI0jpRthJRIlbzw78a7pElUsx0ZIeJuPdrux7QAnL4xVzxQkTETrTqLbL6W200SjEfnD3eqrrDDUm/XD4ujXsJG/F/tOiqm9/C4dERyikA+wZrOkVcMMZU+qfssso/6X9uvMAqKzcDykI0zQuuh93C K5J5gH2d 579lw+GHZH7UvDLSgJ8q+sbs+zcldJC3Cjbuwal9P4OcE54eFdYCtvOC9zmEvdX7BR+sJHuqNtJdMZxdB+rsP7hi+dMepjkBwrNKJHLcY+Q50nk6rEWdhp+i1wB1/663xzNg1bLljpQaQ20aLQZ+FbmGKxapzjn3XL5OyCo64GiJ6Oy1XHoaQxEr39M5cWaY0Jfwfq+Ax0OHjg4BxGpcTHLCDtjSxFVcoRErbtfrUWSFopHLKhtZQMyiaagj1T8AfYm9YW7cyXzhu+IwMFik2iakvEFhxDTBIhmN6 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: There are more new APIs calling __page_frag_cache_refill() in this patchset, which may cause compiler not being able to inline __page_frag_cache_refill() into __page_frag_alloc_va_align(). Not being able to do the inlining seems to cause some notiable performance degradation in arm64 system with 64K PAGE_SIZE after adding new API calling __page_frag_cache_refill(). It seems there is about 24Bytes binary size increase for __page_frag_cache_refill() and __page_frag_cache_refill() in arm64 system with 64K PAGE_SIZE. By doing the gdb disassembling, It seems we can have more than 100Bytes decrease for the binary size by using __alloc_pages() to replace alloc_pages_node(), as there seems to be some unnecessary checking for nid being NUMA_NO_NODE, especially when page_frag is still part of the mm system. 
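For context, alloc_pages_node() is roughly the following wrapper (paraphrased from include/linux/gfp.h as of this series, not the exact kernel definition), so passing numa_mem_id() straight to __alloc_pages() simply drops the NUMA_NO_NODE branch, which is where the text-size saving comes from:

	/* illustrative paraphrase of the wrapper being bypassed */
	static inline struct page *alloc_pages_node_sketch(int nid, gfp_t gfp_mask,
							   unsigned int order)
	{
		if (nid == NUMA_NO_NODE)
			nid = numa_mem_id();

		return __alloc_pages(gfp_mask, order, nid, NULL);
	}
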
CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
---
 mm/page_frag_cache.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index aa6eef55bb9c..a24d6d5278d1 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -48,11 +48,11 @@ static bool __page_frag_cache_refill(struct page_frag_cache *nc,
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
 	gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP |
 		   __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
-	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
-				PAGE_FRAG_CACHE_MAX_ORDER);
+	page = __alloc_pages(gfp_mask, PAGE_FRAG_CACHE_MAX_ORDER,
+			     numa_mem_id(), NULL);
 #endif
 	if (unlikely(!page)) {
-		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
+		page = __alloc_pages(gfp, 0, numa_mem_id(), NULL);
 		if (unlikely(!page)) {
 			memset(nc, 0, sizeof(*nc));
 			return false;

From patchwork Wed Jul 31 12:45:01 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13748713
From: Yunsheng Lin
Subject: [PATCH net-next v12 11/14] mm: page_frag: introduce prepare/probe/commit API
Date: Wed, 31 Jul 2024 20:45:01 +0800
Message-ID: <20240731124505.2903877-12-linyunsheng@huawei.com>
In-Reply-To: <20240731124505.2903877-1-linyunsheng@huawei.com>
References: <20240731124505.2903877-1-linyunsheng@huawei.com>

There are many use cases that need a minimum amount of memory in order
to make forward progress, but that can perform better if more memory is
available, or that need to probe the cache info so that any
already-available memory can be used for fragment coalescing.

Currently the skb_page_frag_refill() API is used to handle the above
use cases, but the caller needs to know about the internal details and
access the data fields of 'struct page_frag' directly to meet those
requirements, and its implementation is largely a duplicate of the one
in the mm subsystem.
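As a rough sketch of that existing pattern (modelled on the
tcp_sendmsg()-style usage; the surrounding variables and jump labels are
illustrative only), the caller ends up driving 'struct page_frag' by hand:

	struct page_frag *pfrag = sk_page_frag(sk);

	/* Ask for the minimum amount needed to make forward progress. */
	if (!skb_page_frag_refill(32U, pfrag, sk->sk_allocation))
		goto wait_for_memory;

	/* Consume the memory by poking at pfrag->page/offset/size directly. */
	copy = min_t(int, copy, pfrag->size - pfrag->offset);
	err = skb_copy_to_page_nocache(sk, &msg->msg_iter, skb,
				       pfrag->page, pfrag->offset, copy);
	if (err)
		goto do_error;

	pfrag->offset += copy;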
To unify those two page_frag implementations, introduce a prepare API to ensure minimum memory is satisfied and return how much the actual memory is available to the caller and a probe API to report the current available memory to caller without doing cache refilling. The caller needs to either call the commit API to report how much memory it actually uses, or not do so if deciding to not use any memory. CC: Alexander Duyck Signed-off-by: Yunsheng Lin --- include/linux/page_frag_cache.h | 75 ++++++++++++++++ mm/page_frag_cache.c | 152 ++++++++++++++++++++++++++++---- 2 files changed, 212 insertions(+), 15 deletions(-) diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h index 0abffdd10a1c..ba5d7f8a03cd 100644 --- a/include/linux/page_frag_cache.h +++ b/include/linux/page_frag_cache.h @@ -7,6 +7,8 @@ #include #include #include +#include +#include #include #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) @@ -67,6 +69,9 @@ static inline unsigned int page_frag_cache_page_size(unsigned long encoded_va) void page_frag_cache_drain(struct page_frag_cache *nc); void __page_frag_cache_drain(struct page *page, unsigned int count); +struct page *page_frag_alloc_pg(struct page_frag_cache *nc, + unsigned int *offset, unsigned int fragsz, + gfp_t gfp); void *__page_frag_alloc_va_align(struct page_frag_cache *nc, unsigned int fragsz, gfp_t gfp_mask, unsigned int align_mask); @@ -79,12 +84,82 @@ static inline void *page_frag_alloc_va_align(struct page_frag_cache *nc, return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, -align); } +static inline unsigned int page_frag_cache_page_offset(const struct page_frag_cache *nc) +{ + return page_frag_cache_page_size(nc->encoded_va) - nc->remaining; +} + static inline void *page_frag_alloc_va(struct page_frag_cache *nc, unsigned int fragsz, gfp_t gfp_mask) { return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, ~0u); } +void *page_frag_alloc_va_prepare(struct page_frag_cache *nc, unsigned int *fragsz, + gfp_t gfp); + +static inline void *page_frag_alloc_va_prepare_align(struct page_frag_cache *nc, + unsigned int *fragsz, + gfp_t gfp, + unsigned int align) +{ + WARN_ON_ONCE(!is_power_of_2(align) || align > PAGE_SIZE); + nc->remaining = nc->remaining & -align; + return page_frag_alloc_va_prepare(nc, fragsz, gfp); +} + +struct page *page_frag_alloc_pg_prepare(struct page_frag_cache *nc, + unsigned int *offset, + unsigned int *fragsz, gfp_t gfp); + +struct page *page_frag_alloc_prepare(struct page_frag_cache *nc, + unsigned int *offset, + unsigned int *fragsz, + void **va, gfp_t gfp); + +static inline struct page *page_frag_alloc_probe(struct page_frag_cache *nc, + unsigned int *offset, + unsigned int *fragsz, + void **va) +{ + unsigned long encoded_va = nc->encoded_va; + struct page *page; + + VM_BUG_ON(!*fragsz); + if (unlikely(nc->remaining < *fragsz)) + return NULL; + + *va = encoded_page_address(encoded_va); + page = virt_to_page(*va); + *fragsz = nc->remaining; + *offset = page_frag_cache_page_size(encoded_va) - *fragsz; + *va += *offset; + + return page; +} + +static inline void page_frag_alloc_commit(struct page_frag_cache *nc, + unsigned int fragsz) +{ + VM_BUG_ON(fragsz > nc->remaining || !nc->pagecnt_bias); + nc->pagecnt_bias--; + nc->remaining -= fragsz; +} + +static inline void page_frag_alloc_commit_noref(struct page_frag_cache *nc, + unsigned int fragsz) +{ + VM_BUG_ON(fragsz > nc->remaining); + nc->remaining -= fragsz; +} + +static inline void page_frag_alloc_abort(struct page_frag_cache *nc, + unsigned int fragsz) +{ + 
nc->pagecnt_bias++; + nc->remaining += fragsz; +} + void page_frag_free_va(void *addr); #endif diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c index a24d6d5278d1..6a21d710c0e2 100644 --- a/mm/page_frag_cache.c +++ b/mm/page_frag_cache.c @@ -19,27 +19,27 @@ #include #include "internal.h" -static bool __page_frag_cache_reuse(unsigned long encoded_va, - unsigned int pagecnt_bias) +static struct page *__page_frag_cache_reuse(unsigned long encoded_va, + unsigned int pagecnt_bias) { struct page *page; page = virt_to_page((void *)encoded_va); if (!page_ref_sub_and_test(page, pagecnt_bias)) - return false; + return NULL; if (unlikely(encoded_page_pfmemalloc(encoded_va))) { free_unref_page(page, encoded_page_order(encoded_va)); - return false; + return NULL; } /* OK, page count is 0, we can safely set it */ set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1); - return true; + return page; } -static bool __page_frag_cache_refill(struct page_frag_cache *nc, - gfp_t gfp_mask) +static struct page *__page_frag_cache_refill(struct page_frag_cache *nc, + gfp_t gfp_mask) { unsigned long order = PAGE_FRAG_CACHE_MAX_ORDER; struct page *page = NULL; @@ -55,7 +55,7 @@ static bool __page_frag_cache_refill(struct page_frag_cache *nc, page = __alloc_pages(gfp, 0, numa_mem_id(), NULL); if (unlikely(!page)) { memset(nc, 0, sizeof(*nc)); - return false; + return NULL; } order = 0; @@ -69,29 +69,151 @@ static bool __page_frag_cache_refill(struct page_frag_cache *nc, */ page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE); - return true; + return page; } /* Reload cache by reusing the old cache if it is possible, or * refilling from the page allocator. */ -static bool __page_frag_cache_reload(struct page_frag_cache *nc, - gfp_t gfp_mask) +static struct page *__page_frag_cache_reload(struct page_frag_cache *nc, + gfp_t gfp_mask) { + struct page *page; + if (likely(nc->encoded_va)) { - if (__page_frag_cache_reuse(nc->encoded_va, nc->pagecnt_bias)) + page = __page_frag_cache_reuse(nc->encoded_va, nc->pagecnt_bias); + if (page) goto out; } - if (unlikely(!__page_frag_cache_refill(nc, gfp_mask))) - return false; + page = __page_frag_cache_refill(nc, gfp_mask); + if (unlikely(!page)) + return NULL; out: /* reset page count bias and remaining to start of new frag */ nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; nc->remaining = page_frag_cache_page_size(nc->encoded_va); - return true; + return page; +} + +void *page_frag_alloc_va_prepare(struct page_frag_cache *nc, + unsigned int *fragsz, gfp_t gfp) +{ + unsigned int remaining = nc->remaining; + + VM_BUG_ON(!*fragsz); + if (likely(remaining >= *fragsz)) { + unsigned long encoded_va = nc->encoded_va; + + *fragsz = remaining; + + return encoded_page_address(encoded_va) + + (page_frag_cache_page_size(encoded_va) - remaining); + } + + if (unlikely(*fragsz > PAGE_SIZE)) + return NULL; + + /* When reload fails, nc->encoded_va and nc->remaining are both reset + * to zero, so there is no need to check the return value here. 
+ */ + __page_frag_cache_reload(nc, gfp); + + *fragsz = nc->remaining; + return encoded_page_address(nc->encoded_va); +} +EXPORT_SYMBOL(page_frag_alloc_va_prepare); + +struct page *page_frag_alloc_pg_prepare(struct page_frag_cache *nc, + unsigned int *offset, + unsigned int *fragsz, gfp_t gfp) +{ + unsigned int remaining = nc->remaining; + struct page *page; + + VM_BUG_ON(!*fragsz); + if (likely(remaining >= *fragsz)) { + unsigned long encoded_va = nc->encoded_va; + + *offset = page_frag_cache_page_size(encoded_va) - remaining; + *fragsz = remaining; + + return virt_to_page((void *)encoded_va); + } + + if (unlikely(*fragsz > PAGE_SIZE)) + return NULL; + + page = __page_frag_cache_reload(nc, gfp); + *offset = 0; + *fragsz = nc->remaining; + return page; +} +EXPORT_SYMBOL(page_frag_alloc_pg_prepare); + +struct page *page_frag_alloc_prepare(struct page_frag_cache *nc, + unsigned int *offset, + unsigned int *fragsz, + void **va, gfp_t gfp) +{ + unsigned int remaining = nc->remaining; + struct page *page; + + VM_BUG_ON(!*fragsz); + if (likely(remaining >= *fragsz)) { + unsigned long encoded_va = nc->encoded_va; + + *offset = page_frag_cache_page_size(encoded_va) - remaining; + *va = encoded_page_address(encoded_va) + *offset; + *fragsz = remaining; + + return virt_to_page((void *)encoded_va); + } + + if (unlikely(*fragsz > PAGE_SIZE)) + return NULL; + + page = __page_frag_cache_reload(nc, gfp); + *offset = 0; + *fragsz = nc->remaining; + *va = encoded_page_address(nc->encoded_va); + + return page; +} +EXPORT_SYMBOL(page_frag_alloc_prepare); + +struct page *page_frag_alloc_pg(struct page_frag_cache *nc, + unsigned int *offset, unsigned int fragsz, + gfp_t gfp) +{ + unsigned int remaining = nc->remaining; + struct page *page; + + VM_BUG_ON(!fragsz); + if (likely(remaining >= fragsz)) { + unsigned long encoded_va = nc->encoded_va; + + *offset = page_frag_cache_page_size(encoded_va) - + remaining; + + return virt_to_page((void *)encoded_va); + } + + if (unlikely(fragsz > PAGE_SIZE)) + return NULL; + + page = __page_frag_cache_reload(nc, gfp); + if (unlikely(!page)) + return NULL; + + *offset = 0; + nc->remaining = remaining - fragsz; + nc->pagecnt_bias--; + + return page; } +EXPORT_SYMBOL(page_frag_alloc_pg); void page_frag_cache_drain(struct page_frag_cache *nc) { From patchwork Wed Jul 31 12:45:03 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yunsheng Lin X-Patchwork-Id: 13748760 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id B5CCBC3DA7F for ; Wed, 31 Jul 2024 12:51:19 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 526BB6B00A0; Wed, 31 Jul 2024 08:51:19 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 4D8296B00A1; Wed, 31 Jul 2024 08:51:19 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 351A16B00A2; Wed, 31 Jul 2024 08:51:19 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0014.hostedemail.com [216.40.44.14]) by kanga.kvack.org (Postfix) with ESMTP id 042616B00A0 for ; Wed, 31 Jul 2024 08:51:18 -0400 (EDT) Received: from smtpin13.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay09.hostedemail.com (Postfix) with ESMTP id 8583980304 for ; Wed, 31 Jul 2024 12:51:18 +0000 (UTC) X-FDA: 
From: Yunsheng Lin
Subject: [PATCH net-next v12 13/14] mm: page_frag: update documentation for page_frag
Date: Wed, 31 Jul 2024 20:45:03 +0800
Message-ID: <20240731124505.2903877-14-linyunsheng@huawei.com>
In-Reply-To: <20240731124505.2903877-1-linyunsheng@huawei.com>
References: <20240731124505.2903877-1-linyunsheng@huawei.com>
Update the documentation about the design, implementation and API usage
of page_frag.

CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
---
 Documentation/mm/page_frags.rst | 169 +++++++++++++++++++++++++++++++-
 include/linux/page_frag_cache.h | 107 ++++++++++++++++++++
 mm/page_frag_cache.c            |  77 ++++++++++++++-
 3 files changed, 350 insertions(+), 3 deletions(-)

diff --git a/Documentation/mm/page_frags.rst b/Documentation/mm/page_frags.rst
index 503ca6cdb804..abdab415a8e2 100644
--- a/Documentation/mm/page_frags.rst
+++ b/Documentation/mm/page_frags.rst
@@ -1,3 +1,5 @@
+.. SPDX-License-Identifier: GPL-2.0
+
 ==============
 Page fragments
 ==============
@@ -40,4 +42,169 @@ page via a single call. The advantage to doing this is that it allows for
 cleaning up the multiple references that were added to a page in order to
 avoid calling get_page per allocation.
 
-Alexander Duyck, Nov 29, 2016.
+
+Architecture overview
+=====================
+
+..
code-block:: none + + +----------------------+ + | page_frag API caller | + +----------------------+ + | + | + v + +------------------------------------------------------------------+ + | request page fragment | + +------------------------------------------------------------------+ + | | | + | | | + | Cache not enough | + | | | + | +-----------------+ | + | | reuse old cache |--Usable-->| + | +-----------------+ | + | | | + | Not usable | + | | | + | v | + Cache empty +-----------------+ | + | | drain old cache | | + | +-----------------+ | + | | | + v_________________________________v | + | | + | | + _________________v_______________ | + | | Cache is enough + | | | + PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE | | + | | | + | PAGE_SIZE >= PAGE_FRAG_CACHE_MAX_SIZE | + v | | + +----------------------------------+ | | + | refill cache with order > 0 page | | | + +----------------------------------+ | | + | | | | + | | | | + | Refill failed | | + | | | | + | v v | + | +------------------------------------+ | + | | refill cache with order 0 page | | + | +----------------------------------=-+ | + | | | + Refill succeed | | + | Refill succeed | + | | | + v v v + +------------------------------------------------------------------+ + | allocate fragment from cache | + +------------------------------------------------------------------+ + +API interface +============= +As the design and implementation of page_frag API implies, the allocation side +does not allow concurrent calling. Instead it is assumed that the caller must +ensure there is not concurrent alloc calling to the same page_frag_cache +instance by using its own lock or rely on some lockless guarantee like NAPI +softirq. + +Depending on different aligning requirement, the page_frag API caller may call +page_frag_alloc*_align*() to ensure the returned virtual address or offset of +the page is aligned according to the 'align/alignment' parameter. Note the size +of the allocated fragment is not aligned, the caller needs to provide an aligned +fragsz if there is an alignment requirement for the size of the fragment. + +Depending on different use cases, callers expecting to deal with va, page or +both va and page for them may call page_frag_alloc_va*, page_frag_alloc_pg*, +or page_frag_alloc* API accordingly. + +There is also a use case that needs minimum memory in order for forward progress, +but more performant if more memory is available. Using page_frag_alloc_prepare() +and page_frag_alloc_commit() related API, the caller requests the minimum memory +it needs and the prepare API will return the maximum size of the fragment +returned. The caller needs to either call the commit API to report how much +memory it actually uses, or not do so if deciding to not use any memory. + +.. kernel-doc:: include/linux/page_frag_cache.h + :identifiers: page_frag_cache_init page_frag_cache_is_pfmemalloc + page_frag_cache_page_offset page_frag_alloc_va + page_frag_alloc_va_align page_frag_alloc_va_prepare_align + page_frag_alloc_probe page_frag_alloc_commit + page_frag_alloc_commit_noref page_frag_alloc_abort + +.. kernel-doc:: mm/page_frag_cache.c + :identifiers: __page_frag_alloc_va_align page_frag_alloc_pg + page_frag_alloc_va_prepare page_frag_alloc_pg_prepare + page_frag_alloc_prepare page_frag_cache_drain + page_frag_free_va + +Coding examples +=============== + +Init & Drain API +---------------- + +.. code-block:: c + + page_frag_cache_init(pfrag); + ... + page_frag_cache_drain(pfrag); + + +Alloc & Free API +---------------- + +.. 
code-block:: c
+
+    void *va;
+
+    va = page_frag_alloc_va_align(pfrag, size, gfp, align);
+    if (!va)
+        goto do_error;
+
+    err = do_something(va, size);
+    if (err) {
+        page_frag_free_va(va);
+        goto do_error;
+    }
+
+Prepare & Commit API
+--------------------
+
+.. code-block:: c
+
+    unsigned int offset, size;
+    bool merge = true;
+    struct page *page;
+    void *va;
+
+    size = 32U;
+    page = page_frag_alloc_prepare(pfrag, &offset, &size, &va, gfp);
+    if (!page)
+        goto wait_for_space;
+
+    copy = min_t(unsigned int, copy, size);
+    if (!skb_can_coalesce(skb, i, page, offset)) {
+        if (i >= max_skb_frags)
+            goto new_segment;
+
+        merge = false;
+    }
+
+    copy = mem_schedule(copy);
+    if (!copy)
+        goto wait_for_space;
+
+    err = copy_from_iter_full_nocache(va, copy, iter);
+    if (err)
+        goto do_error;
+
+    if (merge) {
+        skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy);
+        page_frag_alloc_commit_noref(pfrag, copy);
+    } else {
+        skb_fill_page_desc(skb, i, page, offset, copy);
+        page_frag_alloc_commit(pfrag, copy);
+    }
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index ba5d7f8a03cd..9a2c9abd23d0 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -52,11 +52,28 @@ static inline void *encoded_page_address(unsigned long encoded_va)
 	return (void *)(encoded_va & PAGE_MASK);
 }
 
+/**
+ * page_frag_cache_init() - Init page_frag cache.
+ * @nc: page_frag cache to be initialized
+ *
+ * Inline helper to initialize the page_frag cache.
+ */
 static inline void page_frag_cache_init(struct page_frag_cache *nc)
 {
 	memset(nc, 0, sizeof(*nc));
 }
 
+/**
+ * page_frag_cache_is_pfmemalloc() - Check for pfmemalloc.
+ * @nc: page_frag cache to be checked
+ *
+ * Check whether the current page in the page_frag cache is pfmemalloc'ed.
+ * It has the same calling context expectation as the alloc API.
+ *
+ * Return:
+ * true if the current page in the page_frag cache is pfmemalloc'ed, otherwise
+ * return false.
+ */
 static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)
 {
 	return encoded_page_pfmemalloc(nc->encoded_va);
@@ -76,6 +93,19 @@ void *__page_frag_alloc_va_align(struct page_frag_cache *nc,
 				 unsigned int fragsz, gfp_t gfp_mask,
 				 unsigned int align_mask);
 
+/**
+ * page_frag_alloc_va_align() - Alloc a page fragment with aligning requirement.
+ * @nc: page_frag cache from which to allocate
+ * @fragsz: the requested fragment size
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align: the requested aligning requirement for the virtual address of the fragment
+ *
+ * WARN_ON_ONCE() checking for @align before allocating a page fragment from
+ * the page_frag cache with an aligning requirement.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
 static inline void *page_frag_alloc_va_align(struct page_frag_cache *nc,
 					     unsigned int fragsz,
 					     gfp_t gfp_mask, unsigned int align)
@@ -84,11 +114,32 @@ static inline void *page_frag_alloc_va_align(struct page_frag_cache *nc,
 	return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, -align);
 }
 
+/**
+ * page_frag_cache_page_offset() - Return the current page fragment's offset.
+ * @nc: page_frag cache to be checked
+ *
+ * This API is only used in net/sched/em_meta.c for historical reasons; do not
+ * use it in new callers unless there is a strong reason to.
+ *
+ * Return:
+ * the offset of the current page fragment in the page_frag cache.
+ */ static inline unsigned int page_frag_cache_page_offset(const struct page_frag_cache *nc) { return page_frag_cache_page_size(nc->encoded_va) - nc->remaining; } +/** + * page_frag_alloc_va() - Alloc a page fragment. + * @nc: page_frag cache from which to allocate + * @fragsz: the requested fragment size + * @gfp_mask: the allocation gfp to use when cache need to be refilled + * + * Get a page fragment from page_frag cache. + * + * Return: + * virtual address of the page fragment, otherwise return NULL. + */ static inline void *page_frag_alloc_va(struct page_frag_cache *nc, unsigned int fragsz, gfp_t gfp_mask) { @@ -98,6 +149,21 @@ static inline void *page_frag_alloc_va(struct page_frag_cache *nc, void *page_frag_alloc_va_prepare(struct page_frag_cache *nc, unsigned int *fragsz, gfp_t gfp); +/** + * page_frag_alloc_va_prepare_align() - Prepare allocing a page fragment with + * aligning requirement. + * @nc: page_frag cache from which to prepare + * @fragsz: in as the requested size, out as the available size + * @gfp: the allocation gfp to use when cache need to be refilled + * @align: the requested aligning requirement + * + * WARN_ON_ONCE() checking for @align before preparing an aligned page fragment + * with minimum size of @fragsz, @fragsz is also used to report the maximum size + * of the page fragment the caller can use. + * + * Return: + * virtual address of the page fragment, otherwise return NULL. + */ static inline void *page_frag_alloc_va_prepare_align(struct page_frag_cache *nc, unsigned int *fragsz, gfp_t gfp, @@ -117,6 +183,21 @@ struct page *page_frag_alloc_prepare(struct page_frag_cache *nc, unsigned int *fragsz, void **va, gfp_t gfp); +/** + * page_frag_alloc_probe - Probe the available page fragment. + * @nc: page_frag cache from which to probe + * @offset: out as the offset of the page fragment + * @fragsz: in as the requested size, out as the available size + * @va: out as the virtual address of the returned page fragment + * + * Probe the current available memory to caller without doing cache refilling. + * If no space is available in the page_frag cache, return NULL. + * If the requested space is available, up to @fragsz bytes may be added to the + * fragment using commit API. + * + * Return: + * the page fragment, otherwise return NULL. + */ static inline struct page *page_frag_alloc_probe(struct page_frag_cache *nc, unsigned int *offset, unsigned int *fragsz, @@ -138,6 +219,14 @@ static inline struct page *page_frag_alloc_probe(struct page_frag_cache *nc, return page; } +/** + * page_frag_alloc_commit - Commit allocing a page fragment. + * @nc: page_frag cache from which to commit + * @fragsz: size of the page fragment has been used + * + * Commit the actual used size for the allocation that was either prepared or + * probed. + */ static inline void page_frag_alloc_commit(struct page_frag_cache *nc, unsigned int fragsz) { @@ -146,6 +235,16 @@ static inline void page_frag_alloc_commit(struct page_frag_cache *nc, nc->remaining -= fragsz; } +/** + * page_frag_alloc_commit_noref - Commit allocing a page fragment without taking + * page refcount. + * @nc: page_frag cache from which to commit + * @fragsz: size of the page fragment has been used + * + * Commit the alloc preparing or probing by passing the actual used size, but + * not taking refcount. Mostly used for fragmemt coalescing case when the + * current fragment can share the same refcount with previous fragment. 
+ */ static inline void page_frag_alloc_commit_noref(struct page_frag_cache *nc, unsigned int fragsz) { @@ -153,6 +252,14 @@ static inline void page_frag_alloc_commit_noref(struct page_frag_cache *nc, nc->remaining -= fragsz; } +/** + * page_frag_alloc_abort - Abort the page fragment allocation. + * @nc: page_frag cache to which the page fragment is aborted back + * @fragsz: size of the page fragment to be aborted + * + * It is expected to be called from the same context as the alloc API. + * Mostly used for error handling cases where the fragment is no longer needed. + */ static inline void page_frag_alloc_abort(struct page_frag_cache *nc, unsigned int fragsz) { diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c index 6a21d710c0e2..f0028d2b673c 100644 --- a/mm/page_frag_cache.c +++ b/mm/page_frag_cache.c @@ -97,6 +97,18 @@ static struct page *__page_frag_cache_reload(struct page_frag_cache *nc, return page; } +/** + * page_frag_alloc_va_prepare() - Prepare allocing a page fragment. + * @nc: page_frag cache from which to prepare + * @fragsz: in as the requested size, out as the available size + * @gfp: the allocation gfp to use when cache needs to be refilled + * + * Prepare a page fragment with minimum size of @fragsz, @fragsz is also used + * to report the maximum size of the page fragment the caller can use. + * + * Return: + * virtual address of the page fragment, otherwise return NULL. + */ void *page_frag_alloc_va_prepare(struct page_frag_cache *nc, unsigned int *fragsz, gfp_t gfp) { @@ -125,6 +137,19 @@ void *page_frag_alloc_va_prepare(struct page_frag_cache *nc, } EXPORT_SYMBOL(page_frag_alloc_va_prepare); +/** + * page_frag_alloc_pg_prepare - Prepare allocing a page fragment. + * @nc: page_frag cache from which to prepare + * @offset: out as the offset of the page fragment + * @fragsz: in as the requested size, out as the available size + * @gfp: the allocation gfp to use when cache needs to be refilled + * + * Prepare a page fragment with minimum size of @fragsz, @fragsz is also used + * to report the maximum size of the page fragment the caller can use. + * + * Return: + * the page fragment, otherwise return NULL. + */ struct page *page_frag_alloc_pg_prepare(struct page_frag_cache *nc, unsigned int *offset, unsigned int *fragsz, gfp_t gfp) @@ -152,6 +177,21 @@ struct page *page_frag_alloc_pg_prepare(struct page_frag_cache *nc, } EXPORT_SYMBOL(page_frag_alloc_pg_prepare); +/** + * page_frag_alloc_prepare - Prepare allocing a page fragment. + * @nc: page_frag cache from which to prepare + * @offset: out as the offset of the page fragment + * @fragsz: in as the requested size, out as the available size + * @va: out as the virtual address of the returned page fragment + * @gfp: the allocation gfp to use when cache needs to be refilled + * + * Prepare a page fragment with minimum size of @fragsz, @fragsz is also used + * to report the maximum size of the page fragment. Return both 'struct page' + * and virtual address of the fragment to the caller. + * + * Return: + * the page fragment, otherwise return NULL. + */ struct page *page_frag_alloc_prepare(struct page_frag_cache *nc, unsigned int *offset, unsigned int *fragsz, @@ -183,6 +223,18 @@ struct page *page_frag_alloc_prepare(struct page_frag_cache *nc, } EXPORT_SYMBOL(page_frag_alloc_prepare); +/** + * page_frag_alloc_pg - Alloce a page fragment. 
+ * @nc: page_frag cache from which to alloce + * @offset: out as the offset of the page fragment + * @fragsz: the requested fragment size + * @gfp: the allocation gfp to use when cache needs to be refilled + * + * Get a page fragment from page_frag cache. + * + * Return: + * the page fragment, otherwise return NULL. + */ struct page *page_frag_alloc_pg(struct page_frag_cache *nc, unsigned int *offset, unsigned int fragsz, gfp_t gfp) @@ -215,6 +267,10 @@ struct page *page_frag_alloc_pg(struct page_frag_cache *nc, } EXPORT_SYMBOL(page_frag_alloc_pg); +/** + * page_frag_cache_drain - Drain the current page from page_frag cache. + * @nc: page_frag cache from which to drain + */ void page_frag_cache_drain(struct page_frag_cache *nc) { if (!nc->encoded_va) @@ -235,6 +291,19 @@ void __page_frag_cache_drain(struct page *page, unsigned int count) } EXPORT_SYMBOL(__page_frag_cache_drain); +/** + * __page_frag_alloc_va_align() - Alloc a page fragment with aligning + * requirement. + * @nc: page_frag cache from which to allocate + * @fragsz: the requested fragment size + * @gfp_mask: the allocation gfp to use when cache need to be refilled + * @align_mask: the requested aligning requirement for the 'va' + * + * Get a page fragment from page_frag cache with aligning requirement. + * + * Return: + * Return va of the page fragment, otherwise return NULL. + */ void *__page_frag_alloc_va_align(struct page_frag_cache *nc, unsigned int fragsz, gfp_t gfp_mask, unsigned int align_mask) @@ -281,8 +350,12 @@ void *__page_frag_alloc_va_align(struct page_frag_cache *nc, } EXPORT_SYMBOL(__page_frag_alloc_va_align); -/* - * Frees a page fragment allocated out of either a compound or order 0 page. +/** + * page_frag_free_va - Free a page fragment. + * @addr: va of page fragment to be freed + * + * Free a page fragment allocated out of either a compound or order 0 page by + * virtual address. */ void page_frag_free_va(void *addr) {