From patchwork Tue Oct 8 11:20:35 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13826282
From: Yunsheng Lin
Subject: [PATCH net-next v20 01/14] mm: page_frag: add a test module for page_frag
Date: Tue, 8 Oct 2024 19:20:35 +0800
Message-ID: <20241008112049.2279307-2-linyunsheng@huawei.com>
In-Reply-To: <20241008112049.2279307-1-linyunsheng@huawei.com>
References: <20241008112049.2279307-1-linyunsheng@huawei.com>

The testing is done by ensuring that a fragment allocated from a
page_frag_cache instance is pushed into a ptr_ring instance by a
kthread bound to one specified cpu, while a kthread bound to another
specified cpu pops the fragment from the ptr_ring and frees it.
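For reference, once the module below is built, it can also be loaded by
hand with parameter values of your choosing (the values here are purely
illustrative; the parameters themselves are the ones defined by the
module added in this patch):

  insmod ./page_frag/page_frag_test.ko test_push_cpu=0 test_pop_cpu=1 \
         nr_test=1000000 test_align=1

The test_page_frag.sh wrapper added below drives the same module with
canned "smoke", "aligned" and "nonaligned" parameter sets.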
CC: Alexander Duyck Signed-off-by: Yunsheng Lin Reviewed-by: Alexander Duyck --- tools/testing/selftests/mm/Makefile | 3 + tools/testing/selftests/mm/page_frag/Makefile | 18 ++ .../selftests/mm/page_frag/page_frag_test.c | 173 ++++++++++++++++++ tools/testing/selftests/mm/run_vmtests.sh | 8 + tools/testing/selftests/mm/test_page_frag.sh | 171 +++++++++++++++++ 5 files changed, 373 insertions(+) create mode 100644 tools/testing/selftests/mm/page_frag/Makefile create mode 100644 tools/testing/selftests/mm/page_frag/page_frag_test.c create mode 100755 tools/testing/selftests/mm/test_page_frag.sh diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile index 02e1204971b0..acec529baaca 100644 --- a/tools/testing/selftests/mm/Makefile +++ b/tools/testing/selftests/mm/Makefile @@ -36,6 +36,8 @@ MAKEFLAGS += --no-builtin-rules CFLAGS = -Wall -I $(top_srcdir) $(EXTRA_CFLAGS) $(KHDR_INCLUDES) $(TOOLS_INCLUDES) LDLIBS = -lrt -lpthread -lm +TEST_GEN_MODS_DIR := page_frag + TEST_GEN_FILES = cow TEST_GEN_FILES += compaction_test TEST_GEN_FILES += gup_longterm @@ -126,6 +128,7 @@ TEST_FILES += test_hmm.sh TEST_FILES += va_high_addr_switch.sh TEST_FILES += charge_reserved_hugetlb.sh TEST_FILES += hugetlb_reparenting_test.sh +TEST_FILES += test_page_frag.sh # required by charge_reserved_hugetlb.sh TEST_FILES += write_hugetlb_memory.sh diff --git a/tools/testing/selftests/mm/page_frag/Makefile b/tools/testing/selftests/mm/page_frag/Makefile new file mode 100644 index 000000000000..58dda74d50a3 --- /dev/null +++ b/tools/testing/selftests/mm/page_frag/Makefile @@ -0,0 +1,18 @@ +PAGE_FRAG_TEST_DIR := $(realpath $(dir $(abspath $(lastword $(MAKEFILE_LIST))))) +KDIR ?= $(abspath $(PAGE_FRAG_TEST_DIR)/../../../../..) + +ifeq ($(V),1) +Q = +else +Q = @ +endif + +MODULES = page_frag_test.ko + +obj-m += page_frag_test.o + +all: + +$(Q)make -C $(KDIR) M=$(PAGE_FRAG_TEST_DIR) modules + +clean: + +$(Q)make -C $(KDIR) M=$(PAGE_FRAG_TEST_DIR) clean diff --git a/tools/testing/selftests/mm/page_frag/page_frag_test.c b/tools/testing/selftests/mm/page_frag/page_frag_test.c new file mode 100644 index 000000000000..eeb2b6bc681a --- /dev/null +++ b/tools/testing/selftests/mm/page_frag/page_frag_test.c @@ -0,0 +1,173 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* + * Test module for page_frag cache + * + * Copyright (C) 2024 Yunsheng Lin + */ + +#include +#include +#include +#include +#include +#include + +static struct ptr_ring ptr_ring; +static int nr_objs = 512; +static atomic_t nthreads; +static struct completion wait; +static struct page_frag_cache test_nc; +static int test_popped; +static int test_pushed; + +static int nr_test = 2000000; +module_param(nr_test, int, 0); +MODULE_PARM_DESC(nr_test, "number of iterations to test"); + +static bool test_align; +module_param(test_align, bool, 0); +MODULE_PARM_DESC(test_align, "use align API for testing"); + +static int test_alloc_len = 2048; +module_param(test_alloc_len, int, 0); +MODULE_PARM_DESC(test_alloc_len, "alloc len for testing"); + +static int test_push_cpu; +module_param(test_push_cpu, int, 0); +MODULE_PARM_DESC(test_push_cpu, "test cpu for pushing fragment"); + +static int test_pop_cpu; +module_param(test_pop_cpu, int, 0); +MODULE_PARM_DESC(test_pop_cpu, "test cpu for popping fragment"); + +static int page_frag_pop_thread(void *arg) +{ + struct ptr_ring *ring = arg; + + pr_info("page_frag pop test thread begins on cpu %d\n", + smp_processor_id()); + + while (test_popped < nr_test) { + void *obj = __ptr_ring_consume(ring); + + if (obj) { + 
test_popped++; + page_frag_free(obj); + } else { + cond_resched(); + } + } + + if (atomic_dec_and_test(&nthreads)) + complete(&wait); + + pr_info("page_frag pop test thread exits on cpu %d\n", + smp_processor_id()); + + return 0; +} + +static int page_frag_push_thread(void *arg) +{ + struct ptr_ring *ring = arg; + + pr_info("page_frag push test thread begins on cpu %d\n", + smp_processor_id()); + + while (test_pushed < nr_test) { + void *va; + int ret; + + if (test_align) { + va = page_frag_alloc_align(&test_nc, test_alloc_len, + GFP_KERNEL, SMP_CACHE_BYTES); + + WARN_ONCE((unsigned long)va & (SMP_CACHE_BYTES - 1), + "unaligned va returned\n"); + } else { + va = page_frag_alloc(&test_nc, test_alloc_len, GFP_KERNEL); + } + + if (!va) + continue; + + ret = __ptr_ring_produce(ring, va); + if (ret) { + page_frag_free(va); + cond_resched(); + } else { + test_pushed++; + } + } + + pr_info("page_frag push test thread exits on cpu %d\n", + smp_processor_id()); + + if (atomic_dec_and_test(&nthreads)) + complete(&wait); + + return 0; +} + +static int __init page_frag_test_init(void) +{ + struct task_struct *tsk_push, *tsk_pop; + ktime_t start; + u64 duration; + int ret; + + test_nc.va = NULL; + atomic_set(&nthreads, 2); + init_completion(&wait); + + if (test_alloc_len > PAGE_SIZE || test_alloc_len <= 0 || + !cpu_active(test_push_cpu) || !cpu_active(test_pop_cpu)) + return -EINVAL; + + ret = ptr_ring_init(&ptr_ring, nr_objs, GFP_KERNEL); + if (ret) + return ret; + + tsk_push = kthread_create_on_cpu(page_frag_push_thread, &ptr_ring, + test_push_cpu, "page_frag_push"); + if (IS_ERR(tsk_push)) + return PTR_ERR(tsk_push); + + tsk_pop = kthread_create_on_cpu(page_frag_pop_thread, &ptr_ring, + test_pop_cpu, "page_frag_pop"); + if (IS_ERR(tsk_pop)) { + kthread_stop(tsk_push); + return PTR_ERR(tsk_pop); + } + + start = ktime_get(); + wake_up_process(tsk_push); + wake_up_process(tsk_pop); + + pr_info("waiting for test to complete\n"); + + while (!wait_for_completion_timeout(&wait, msecs_to_jiffies(10000))) + pr_info("page_frag_test progress: pushed = %d, popped = %d\n", + test_pushed, test_popped); + + duration = (u64)ktime_us_delta(ktime_get(), start); + pr_info("%d of iterations for %s testing took: %lluus\n", nr_test, + test_align ? "aligned" : "non-aligned", duration); + + ptr_ring_cleanup(&ptr_ring, NULL); + page_frag_cache_drain(&test_nc); + + return -EAGAIN; +} + +static void __exit page_frag_test_exit(void) +{ +} + +module_init(page_frag_test_init); +module_exit(page_frag_test_exit); + +MODULE_LICENSE("GPL"); +MODULE_AUTHOR("Yunsheng Lin "); +MODULE_DESCRIPTION("Test module for page_frag"); diff --git a/tools/testing/selftests/mm/run_vmtests.sh b/tools/testing/selftests/mm/run_vmtests.sh index c5797ad1d37b..2c5394584af4 100755 --- a/tools/testing/selftests/mm/run_vmtests.sh +++ b/tools/testing/selftests/mm/run_vmtests.sh @@ -75,6 +75,8 @@ separated by spaces: read-only VMAs - mdwe test prctl(PR_SET_MDWE, ...) 
+- page_frag + test handling of page fragment allocation and freeing example: ./run_vmtests.sh -t "hmm mmap ksm" EOF @@ -456,6 +458,12 @@ CATEGORY="mkdirty" run_test ./mkdirty CATEGORY="mdwe" run_test ./mdwe_test +CATEGORY="page_frag" run_test ./test_page_frag.sh smoke + +CATEGORY="page_frag" run_test ./test_page_frag.sh aligned + +CATEGORY="page_frag" run_test ./test_page_frag.sh nonaligned + echo "SUMMARY: PASS=${count_pass} SKIP=${count_skip} FAIL=${count_fail}" | tap_prefix echo "1..${count_total}" | tap_output diff --git a/tools/testing/selftests/mm/test_page_frag.sh b/tools/testing/selftests/mm/test_page_frag.sh new file mode 100755 index 000000000000..d750d910c899 --- /dev/null +++ b/tools/testing/selftests/mm/test_page_frag.sh @@ -0,0 +1,171 @@ +#!/bin/bash +# SPDX-License-Identifier: GPL-2.0 +# +# Copyright (C) 2024 Yunsheng Lin +# Copyright (C) 2018 Uladzislau Rezki (Sony) +# +# This is a test script for the kernel test driver to test the +# correctness and performance of page_frag's implementation. +# Therefore it is just a kernel module loader. You can specify +# and pass different parameters in order to: +# a) analyse performance of page fragment allocations; +# b) stressing and stability check of page_frag subsystem. + +DRIVER="./page_frag/page_frag_test.ko" +CPU_LIST=$(grep -m 2 processor /proc/cpuinfo | cut -d ' ' -f 2) +TEST_CPU_0=$(echo $CPU_LIST | awk '{print $1}') + +if [ $(echo $CPU_LIST | wc -w) -gt 1 ]; then + TEST_CPU_1=$(echo $CPU_LIST | awk '{print $2}') + NR_TEST=100000000 +else + TEST_CPU_1=$TEST_CPU_0 + NR_TEST=1000000 +fi + +# 1 if fails +exitcode=1 + +# Kselftest framework requirement - SKIP code is 4. +ksft_skip=4 + +# +# Static templates for testing of page_frag APIs. +# Also it is possible to pass any supported parameters manually. +# +SMOKE_PARAM="test_push_cpu=$TEST_CPU_0 test_pop_cpu=$TEST_CPU_1" +NONALIGNED_PARAM="$SMOKE_PARAM test_alloc_len=75 nr_test=$NR_TEST" +ALIGNED_PARAM="$NONALIGNED_PARAM test_align=1" + +check_test_requirements() +{ + uid=$(id -u) + if [ $uid -ne 0 ]; then + echo "$0: Must be run as root" + exit $ksft_skip + fi + + if ! which insmod > /dev/null 2>&1; then + echo "$0: You need insmod installed" + exit $ksft_skip + fi + + if [ ! -f $DRIVER ]; then + echo "$0: You need to compile page_frag_test module" + exit $ksft_skip + fi +} + +run_nonaligned_check() +{ + echo "Run performance tests to evaluate how fast nonaligned alloc API is." + + insmod $DRIVER $NONALIGNED_PARAM > /dev/null 2>&1 + echo "Done." + echo "Check the kernel ring buffer to see the summary." +} + +run_aligned_check() +{ + echo "Run performance tests to evaluate how fast aligned alloc API is." + + insmod $DRIVER $ALIGNED_PARAM > /dev/null 2>&1 + echo "Done." + echo "Check the kernel ring buffer to see the summary." +} + +run_smoke_check() +{ + echo "Run smoke test." + + insmod $DRIVER $SMOKE_PARAM > /dev/null 2>&1 + echo "Done." + echo "Check the kernel ring buffer to see the summary." 
+}
+
+usage()
+{
+	echo -n "Usage: $0 [ aligned ] | [ nonaligned ] | [ smoke ] | "
+	echo "manual parameters"
+	echo
+	echo "Valid tests and parameters:"
+	echo
+	modinfo $DRIVER
+	echo
+	echo "Example usage:"
+	echo
+	echo "# Shows help message"
+	echo "$0"
+	echo
+	echo "# Smoke testing"
+	echo "$0 smoke"
+	echo
+	echo "# Performance testing for nonaligned alloc API"
+	echo "$0 nonaligned"
+	echo
+	echo "# Performance testing for aligned alloc API"
+	echo "$0 aligned"
+	echo
+	exit 0
+}
+
+function validate_passed_args()
+{
+	VALID_ARGS=`modinfo $DRIVER | awk '/parm:/ {print $2}' | sed 's/:.*//'`
+
+	#
+	# Something has been passed, check it.
+	#
+	for passed_arg in $@; do
+		key=${passed_arg//=*/}
+		valid=0
+
+		for valid_arg in $VALID_ARGS; do
+			if [[ $key = $valid_arg ]]; then
+				valid=1
+				break
+			fi
+		done
+
+		if [[ $valid -ne 1 ]]; then
+			echo "Error: key is not correct: ${key}"
+			exit $exitcode
+		fi
+	done
+}
+
+function run_manual_check()
+{
+	#
+	# Validate passed parameters. If there is a wrong one,
+	# the script exits and does not execute further.
+	#
+	validate_passed_args $@
+
+	echo "Run the test with following parameters: $@"
+	insmod $DRIVER $@ > /dev/null 2>&1
+	echo "Done."
+	echo "Check the kernel ring buffer to see the summary."
+}
+
+function run_test()
+{
+	if [ $# -eq 0 ]; then
+		usage
+	else
+		if [[ "$1" = "smoke" ]]; then
+			run_smoke_check
+		elif [[ "$1" = "nonaligned" ]]; then
+			run_nonaligned_check
+		elif [[ "$1" = "aligned" ]]; then
+			run_aligned_check
+		else
+			run_manual_check $@
+		fi
+	fi
+}
+
+check_test_requirements
+run_test $@
+
+exit 0

From patchwork Tue Oct 8 11:20:36 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13826283
From: Yunsheng Lin
Subject: [PATCH net-next v20 02/14] mm: move the page fragment allocator from page_alloc into its own file
Date: Tue, 8 Oct 2024 19:20:36 +0800
Message-ID: <20241008112049.2279307-3-linyunsheng@huawei.com>
In-Reply-To: <20241008112049.2279307-1-linyunsheng@huawei.com>
References: <20241008112049.2279307-1-linyunsheng@huawei.com>
Inspired by [1], move the page fragment allocator from page_alloc into
its own c file and header file, as we are about to make more changes to
it in order to replace another page_frag implementation in sock.c.

As this patchset is going to replace 'struct page_frag' with
'struct page_frag_cache' in sched.h, including page_frag_cache.h in
sched.h causes a compiler error due to the interdependence between
mm_types.h and mm.h for asm-offsets.c, see [2]. So avoid the compiler
error by moving 'struct page_frag_cache' to mm_types_task.h as
suggested by Alexander, see [3].

1. https://lore.kernel.org/all/20230411160902.4134381-3-dhowells@redhat.com/
2. https://lore.kernel.org/all/15623dac-9358-4597-b3ee-3694a5956920@gmail.com/
3. https://lore.kernel.org/all/CAKgT0UdH1yD=LSCXFJ=YM_aiA4OomD-2wXykO42bizaWMt_HOA@mail.gmail.com/

CC: David Howells
CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
Acked-by: Andrew Morton
Reviewed-by: Alexander Duyck
---
 include/linux/gfp.h                           |  22 ---
 include/linux/mm_types.h                      |  18 ---
 include/linux/mm_types_task.h                 |  18 +++
 include/linux/page_frag_cache.h               |  31 ++++
 include/linux/skbuff.h                        |   1 +
 mm/Makefile                                   |   1 +
 mm/page_alloc.c                               | 136 ----------------
 mm/page_frag_cache.c                          | 145 ++++++++++++++++++
 .../selftests/mm/page_frag/page_frag_test.c  |   2 +-
 9 files changed, 197 insertions(+), 177 deletions(-)
 create mode 100644 include/linux/page_frag_cache.h
 create mode 100644 mm/page_frag_cache.c

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index a951de920e20..a0a6d25f883f 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -371,28 +371,6 @@ __meminit void *alloc_pages_exact_nid_noprof(int nid, size_t size, gfp_t gfp_mas
 extern void __free_pages(struct page *page, unsigned int order);
 extern void free_pages(unsigned long addr, unsigned int order);
 
-struct page_frag_cache;
-void page_frag_cache_drain(struct page_frag_cache *nc);
-extern void __page_frag_cache_drain(struct page *page, unsigned int count);
-void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz,
-			      gfp_t gfp_mask, unsigned int align_mask);
-
-static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
-					  unsigned int fragsz, gfp_t gfp_mask,
-					  unsigned int align)
-{
-	WARN_ON_ONCE(!is_power_of_2(align));
-	return __page_frag_alloc_align(nc, fragsz, gfp_mask, -align);
-}
-
-static inline void *page_frag_alloc(struct page_frag_cache *nc,
-				    unsigned int fragsz, gfp_t gfp_mask)
-{
-	return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
-}
-
-extern void page_frag_free(void *addr);
-
 #define __free_page(page) __free_pages((page), 0)
 #define free_page(addr) free_pages((addr), 0)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 6e3bdf8e38bc..92314ef2d978 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -521,9 +521,6 @@ static_assert(sizeof(struct ptdesc) <= sizeof(struct page));
  */
 #define STRUCT_PAGE_MAX_SHIFT	(order_base_2(sizeof(struct
page))) -#define PAGE_FRAG_CACHE_MAX_SIZE __ALIGN_MASK(32768, ~PAGE_MASK) -#define PAGE_FRAG_CACHE_MAX_ORDER get_order(PAGE_FRAG_CACHE_MAX_SIZE) - /* * page_private can be used on tail pages. However, PagePrivate is only * checked by the VM on the head page. So page_private on the tail pages @@ -542,21 +539,6 @@ static inline void *folio_get_private(struct folio *folio) return folio->private; } -struct page_frag_cache { - void * va; -#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) - __u16 offset; - __u16 size; -#else - __u32 offset; -#endif - /* we maintain a pagecount bias, so that we dont dirty cache line - * containing page->_refcount every time we allocate a fragment. - */ - unsigned int pagecnt_bias; - bool pfmemalloc; -}; - typedef unsigned long vm_flags_t; /* diff --git a/include/linux/mm_types_task.h b/include/linux/mm_types_task.h index bff5706b76e1..0ac6daebdd5c 100644 --- a/include/linux/mm_types_task.h +++ b/include/linux/mm_types_task.h @@ -8,6 +8,7 @@ * (These are defined separately to decouple sched.h from mm_types.h as much as possible.) */ +#include #include #include @@ -43,6 +44,23 @@ struct page_frag { #endif }; +#define PAGE_FRAG_CACHE_MAX_SIZE __ALIGN_MASK(32768, ~PAGE_MASK) +#define PAGE_FRAG_CACHE_MAX_ORDER get_order(PAGE_FRAG_CACHE_MAX_SIZE) +struct page_frag_cache { + void *va; +#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) + __u16 offset; + __u16 size; +#else + __u32 offset; +#endif + /* we maintain a pagecount bias, so that we dont dirty cache line + * containing page->_refcount every time we allocate a fragment. + */ + unsigned int pagecnt_bias; + bool pfmemalloc; +}; + /* Track pages that require TLB flushes */ struct tlbflush_unmap_batch { #ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h new file mode 100644 index 000000000000..67ac8626ed9b --- /dev/null +++ b/include/linux/page_frag_cache.h @@ -0,0 +1,31 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#ifndef _LINUX_PAGE_FRAG_CACHE_H +#define _LINUX_PAGE_FRAG_CACHE_H + +#include +#include +#include + +void page_frag_cache_drain(struct page_frag_cache *nc); +void __page_frag_cache_drain(struct page *page, unsigned int count); +void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz, + gfp_t gfp_mask, unsigned int align_mask); + +static inline void *page_frag_alloc_align(struct page_frag_cache *nc, + unsigned int fragsz, gfp_t gfp_mask, + unsigned int align) +{ + WARN_ON_ONCE(!is_power_of_2(align)); + return __page_frag_alloc_align(nc, fragsz, gfp_mask, -align); +} + +static inline void *page_frag_alloc(struct page_frag_cache *nc, + unsigned int fragsz, gfp_t gfp_mask) +{ + return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u); +} + +void page_frag_free(void *addr); + +#endif diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index 39f1d16f3628..560e2b49f98b 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -31,6 +31,7 @@ #include #include #include +#include #include #if IS_ENABLED(CONFIG_NF_CONNTRACK) #include diff --git a/mm/Makefile b/mm/Makefile index d5639b036166..dba52bb0da8a 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -65,6 +65,7 @@ page-alloc-$(CONFIG_SHUFFLE_PAGE_ALLOCATOR) += shuffle.o memory-hotplug-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o obj-y += page-alloc.o +obj-y += page_frag_cache.o obj-y += init-mm.o obj-y += memblock.o obj-y += $(memory-hotplug-y) diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 8afab64814dc..6ca2abce857b 100644 --- a/mm/page_alloc.c +++ 
b/mm/page_alloc.c @@ -4836,142 +4836,6 @@ void free_pages(unsigned long addr, unsigned int order) EXPORT_SYMBOL(free_pages); -/* - * Page Fragment: - * An arbitrary-length arbitrary-offset area of memory which resides - * within a 0 or higher order page. Multiple fragments within that page - * are individually refcounted, in the page's reference counter. - * - * The page_frag functions below provide a simple allocation framework for - * page fragments. This is used by the network stack and network device - * drivers to provide a backing region of memory for use as either an - * sk_buff->head, or to be used in the "frags" portion of skb_shared_info. - */ -static struct page *__page_frag_cache_refill(struct page_frag_cache *nc, - gfp_t gfp_mask) -{ - struct page *page = NULL; - gfp_t gfp = gfp_mask; - -#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) - gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP | - __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC; - page = alloc_pages_node(NUMA_NO_NODE, gfp_mask, - PAGE_FRAG_CACHE_MAX_ORDER); - nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE; -#endif - if (unlikely(!page)) - page = alloc_pages_node(NUMA_NO_NODE, gfp, 0); - - nc->va = page ? page_address(page) : NULL; - - return page; -} - -void page_frag_cache_drain(struct page_frag_cache *nc) -{ - if (!nc->va) - return; - - __page_frag_cache_drain(virt_to_head_page(nc->va), nc->pagecnt_bias); - nc->va = NULL; -} -EXPORT_SYMBOL(page_frag_cache_drain); - -void __page_frag_cache_drain(struct page *page, unsigned int count) -{ - VM_BUG_ON_PAGE(page_ref_count(page) == 0, page); - - if (page_ref_sub_and_test(page, count)) - free_unref_page(page, compound_order(page)); -} -EXPORT_SYMBOL(__page_frag_cache_drain); - -void *__page_frag_alloc_align(struct page_frag_cache *nc, - unsigned int fragsz, gfp_t gfp_mask, - unsigned int align_mask) -{ - unsigned int size = PAGE_SIZE; - struct page *page; - int offset; - - if (unlikely(!nc->va)) { -refill: - page = __page_frag_cache_refill(nc, gfp_mask); - if (!page) - return NULL; - -#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) - /* if size can vary use size else just use PAGE_SIZE */ - size = nc->size; -#endif - /* Even if we own the page, we do not use atomic_set(). - * This would break get_page_unless_zero() users. - */ - page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE); - - /* reset page count bias and offset to start of new frag */ - nc->pfmemalloc = page_is_pfmemalloc(page); - nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; - nc->offset = size; - } - - offset = nc->offset - fragsz; - if (unlikely(offset < 0)) { - page = virt_to_page(nc->va); - - if (!page_ref_sub_and_test(page, nc->pagecnt_bias)) - goto refill; - - if (unlikely(nc->pfmemalloc)) { - free_unref_page(page, compound_order(page)); - goto refill; - } - -#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) - /* if size can vary use size else just use PAGE_SIZE */ - size = nc->size; -#endif - /* OK, page count is 0, we can safely set it */ - set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1); - - /* reset page count bias and offset to start of new frag */ - nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; - offset = size - fragsz; - if (unlikely(offset < 0)) { - /* - * The caller is trying to allocate a fragment - * with fragsz > PAGE_SIZE but the cache isn't big - * enough to satisfy the request, this may - * happen in low memory conditions. - * We don't release the cache page because - * it could make memory pressure worse - * so we simply return NULL here. 
- */ - return NULL; - } - } - - nc->pagecnt_bias--; - offset &= align_mask; - nc->offset = offset; - - return nc->va + offset; -} -EXPORT_SYMBOL(__page_frag_alloc_align); - -/* - * Frees a page fragment allocated out of either a compound or order 0 page. - */ -void page_frag_free(void *addr) -{ - struct page *page = virt_to_head_page(addr); - - if (unlikely(put_page_testzero(page))) - free_unref_page(page, compound_order(page)); -} -EXPORT_SYMBOL(page_frag_free); - static void *make_alloc_exact(unsigned long addr, unsigned int order, size_t size) { diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c new file mode 100644 index 000000000000..609a485cd02a --- /dev/null +++ b/mm/page_frag_cache.c @@ -0,0 +1,145 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Page fragment allocator + * + * Page Fragment: + * An arbitrary-length arbitrary-offset area of memory which resides within a + * 0 or higher order page. Multiple fragments within that page are + * individually refcounted, in the page's reference counter. + * + * The page_frag functions provide a simple allocation framework for page + * fragments. This is used by the network stack and network device drivers to + * provide a backing region of memory for use as either an sk_buff->head, or to + * be used in the "frags" portion of skb_shared_info. + */ + +#include +#include +#include +#include +#include +#include "internal.h" + +static struct page *__page_frag_cache_refill(struct page_frag_cache *nc, + gfp_t gfp_mask) +{ + struct page *page = NULL; + gfp_t gfp = gfp_mask; + +#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) + gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP | + __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC; + page = alloc_pages_node(NUMA_NO_NODE, gfp_mask, + PAGE_FRAG_CACHE_MAX_ORDER); + nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE; +#endif + if (unlikely(!page)) + page = alloc_pages_node(NUMA_NO_NODE, gfp, 0); + + nc->va = page ? page_address(page) : NULL; + + return page; +} + +void page_frag_cache_drain(struct page_frag_cache *nc) +{ + if (!nc->va) + return; + + __page_frag_cache_drain(virt_to_head_page(nc->va), nc->pagecnt_bias); + nc->va = NULL; +} +EXPORT_SYMBOL(page_frag_cache_drain); + +void __page_frag_cache_drain(struct page *page, unsigned int count) +{ + VM_BUG_ON_PAGE(page_ref_count(page) == 0, page); + + if (page_ref_sub_and_test(page, count)) + free_unref_page(page, compound_order(page)); +} +EXPORT_SYMBOL(__page_frag_cache_drain); + +void *__page_frag_alloc_align(struct page_frag_cache *nc, + unsigned int fragsz, gfp_t gfp_mask, + unsigned int align_mask) +{ + unsigned int size = PAGE_SIZE; + struct page *page; + int offset; + + if (unlikely(!nc->va)) { +refill: + page = __page_frag_cache_refill(nc, gfp_mask); + if (!page) + return NULL; + +#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) + /* if size can vary use size else just use PAGE_SIZE */ + size = nc->size; +#endif + /* Even if we own the page, we do not use atomic_set(). + * This would break get_page_unless_zero() users. 
+	 */
+	page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
+
+	/* reset page count bias and offset to start of new frag */
+	nc->pfmemalloc = page_is_pfmemalloc(page);
+	nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
+	nc->offset = size;
+	}
+
+	offset = nc->offset - fragsz;
+	if (unlikely(offset < 0)) {
+		page = virt_to_page(nc->va);
+
+		if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
+			goto refill;
+
+		if (unlikely(nc->pfmemalloc)) {
+			free_unref_page(page, compound_order(page));
+			goto refill;
+		}
+
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+		/* if size can vary use size else just use PAGE_SIZE */
+		size = nc->size;
+#endif
+		/* OK, page count is 0, we can safely set it */
+		set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
+
+		/* reset page count bias and offset to start of new frag */
+		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
+		offset = size - fragsz;
+		if (unlikely(offset < 0)) {
+			/*
+			 * The caller is trying to allocate a fragment
+			 * with fragsz > PAGE_SIZE but the cache isn't big
+			 * enough to satisfy the request, this may
+			 * happen in low memory conditions.
+			 * We don't release the cache page because
+			 * it could make memory pressure worse
+			 * so we simply return NULL here.
+			 */
+			return NULL;
+		}
+	}
+
+	nc->pagecnt_bias--;
+	offset &= align_mask;
+	nc->offset = offset;
+
+	return nc->va + offset;
+}
+EXPORT_SYMBOL(__page_frag_alloc_align);
+
+/*
+ * Frees a page fragment allocated out of either a compound or order 0 page.
+ */
+void page_frag_free(void *addr)
+{
+	struct page *page = virt_to_head_page(addr);
+
+	if (unlikely(put_page_testzero(page)))
+		free_unref_page(page, compound_order(page));
+}
+EXPORT_SYMBOL(page_frag_free);
diff --git a/tools/testing/selftests/mm/page_frag/page_frag_test.c b/tools/testing/selftests/mm/page_frag/page_frag_test.c
index eeb2b6bc681a..fdf204550c9a 100644
--- a/tools/testing/selftests/mm/page_frag/page_frag_test.c
+++ b/tools/testing/selftests/mm/page_frag/page_frag_test.c
@@ -6,12 +6,12 @@
  * Copyright (C) 2024 Yunsheng Lin
  */
 
-#include
 #include
 #include
 #include
 #include
 #include
+#include
 
 static struct ptr_ring ptr_ring;
 static int nr_objs = 512;

From patchwork Tue Oct 8 11:20:37 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13826284
From: Yunsheng Lin
Subject: [PATCH net-next v20 03/14] mm: page_frag: use initial zero offset for page_frag_alloc_align()
Date: Tue, 8 Oct 2024 19:20:37 +0800
Message-ID: <20241008112049.2279307-4-linyunsheng@huawei.com>
In-Reply-To: <20241008112049.2279307-1-linyunsheng@huawei.com>
References: <20241008112049.2279307-1-linyunsheng@huawei.com>
We are about to use the page_frag_alloc_*() API not just to allocate
memory for skb->data, but also to do the memory allocation for skb
frags. Currently the page_frag implementation in the mm subsystem runs
the offset as a countdown rather than a count-up value; that approach
may have several advantages, as mentioned in [1], but it also has some
disadvantages: for example, it may prevent skb frag coalescing and more
effective cache prefetching.

We have a trade-off to make in order to have a unified implementation
and API for page_frag, so use an initial zero offset in this patch; the
following patch will try to optimize away the disadvantages as much as
possible.

1.
https://lore.kernel.org/all/f4abe71b3439b39d17a6fb2d410180f367cadf5c.camel@gmail.com/ CC: Alexander Duyck Signed-off-by: Yunsheng Lin Reviewed-by: Alexander Duyck --- mm/page_frag_cache.c | 46 ++++++++++++++++++++++---------------------- 1 file changed, 23 insertions(+), 23 deletions(-) diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c index 609a485cd02a..4c8e04379cb3 100644 --- a/mm/page_frag_cache.c +++ b/mm/page_frag_cache.c @@ -63,9 +63,13 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz, gfp_t gfp_mask, unsigned int align_mask) { +#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) + unsigned int size = nc->size; +#else unsigned int size = PAGE_SIZE; +#endif + unsigned int offset; struct page *page; - int offset; if (unlikely(!nc->va)) { refill: @@ -85,11 +89,24 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc, /* reset page count bias and offset to start of new frag */ nc->pfmemalloc = page_is_pfmemalloc(page); nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; - nc->offset = size; + nc->offset = 0; } - offset = nc->offset - fragsz; - if (unlikely(offset < 0)) { + offset = __ALIGN_KERNEL_MASK(nc->offset, ~align_mask); + if (unlikely(offset + fragsz > size)) { + if (unlikely(fragsz > PAGE_SIZE)) { + /* + * The caller is trying to allocate a fragment + * with fragsz > PAGE_SIZE but the cache isn't big + * enough to satisfy the request, this may + * happen in low memory conditions. + * We don't release the cache page because + * it could make memory pressure worse + * so we simply return NULL here. + */ + return NULL; + } + page = virt_to_page(nc->va); if (!page_ref_sub_and_test(page, nc->pagecnt_bias)) @@ -100,33 +117,16 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc, goto refill; } -#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) - /* if size can vary use size else just use PAGE_SIZE */ - size = nc->size; -#endif /* OK, page count is 0, we can safely set it */ set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1); /* reset page count bias and offset to start of new frag */ nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; - offset = size - fragsz; - if (unlikely(offset < 0)) { - /* - * The caller is trying to allocate a fragment - * with fragsz > PAGE_SIZE but the cache isn't big - * enough to satisfy the request, this may - * happen in low memory conditions. - * We don't release the cache page because - * it could make memory pressure worse - * so we simply return NULL here. 
-			 */
-			return NULL;
-		}
+		offset = 0;
 	}
 
 	nc->pagecnt_bias--;
-	offset &= align_mask;
-	nc->offset = offset;
+	nc->offset = offset + fragsz;
 
 	return nc->va + offset;
 }
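The offset arithmetic above can be pictured with a minimal userspace
sketch (illustrative only: simplified types, no refill or pfmemalloc
handling, and countdown_alloc()/countup_alloc() are invented names):

	#include <stddef.h>

	/* Old scheme: the offset counts down from the end of the cache,
	 * so alignment is a simple mask-down and each new fragment lands
	 * below the previous one.
	 */
	static void *countdown_alloc(char *va, size_t *offset,
				     size_t fragsz, size_t align)
	{
		size_t off = *offset - fragsz;	/* real code refills on underflow */

		off &= ~(align - 1);		/* aligning down is free */
		*offset = off;
		return va + off;
	}

	/* New scheme: the offset counts up from the start of the cache,
	 * so a new fragment starts right after the previous one, which
	 * is what makes coalescing of consecutive fragments possible.
	 */
	static void *countup_alloc(char *va, size_t *offset, size_t fragsz,
				   size_t align, size_t size)
	{
		size_t off = (*offset + align - 1) & ~(align - 1); /* align up */

		if (off + fragsz > size)
			return NULL;		/* real code refills here */
		*offset = off + fragsz;
		return va + off;
	}

With the count-up scheme a subsequent allocation begins exactly where
the previous fragment ended, at the cost of an extra align-up step on
every allocation; the countdown scheme gets alignment for free but can
never produce virtually contiguous fragments.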
From patchwork Tue Oct 8 11:20:38 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13826285
From: Yunsheng Lin
Subject: [PATCH net-next v20 04/14] mm: page_frag: avoid caller accessing 'page_frag_cache' directly
Date: Tue, 8 Oct 2024 19:20:38 +0800
Message-ID: <20241008112049.2279307-5-linyunsheng@huawei.com>
In-Reply-To: <20241008112049.2279307-1-linyunsheng@huawei.com>
References: <20241008112049.2279307-1-linyunsheng@huawei.com>

Use the appropriate page_frag API instead of having callers access
'page_frag_cache' internals directly.
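Condensed from the hunks below, the conversion pattern is (nc is a
struct page_frag_cache *):

	/* before: callers reach into the cache's fields */
	nc->va = NULL;					/* init */
	pfmemalloc = nc->pfmemalloc;			/* query */
	if (nc->va)					/* drain */
		__page_frag_cache_drain(virt_to_head_page(nc->va),
					nc->pagecnt_bias);

	/* after: the same operations via the page_frag API */
	page_frag_cache_init(nc);
	pfmemalloc = page_frag_cache_is_pfmemalloc(nc);
	page_frag_cache_drain(nc);

Besides being tidier, this keeps the layout of 'struct page_frag_cache'
private to the page_frag code, which a later patch in this series relies
on when it changes that layout.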
CC: Alexander Duyck Signed-off-by: Yunsheng Lin Reviewed-by: Alexander Duyck Acked-by: Chuck Lever --- drivers/vhost/net.c | 2 +- include/linux/page_frag_cache.h | 10 ++++++++++ net/core/skbuff.c | 6 +++--- net/rxrpc/conn_object.c | 4 +--- net/rxrpc/local_object.c | 4 +--- net/sunrpc/svcsock.c | 6 ++---- tools/testing/selftests/mm/page_frag/page_frag_test.c | 2 +- 7 files changed, 19 insertions(+), 15 deletions(-) diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index f16279351db5..9ad37c012189 100644 --- a/drivers/vhost/net.c +++ b/drivers/vhost/net.c @@ -1325,7 +1325,7 @@ static int vhost_net_open(struct inode *inode, struct file *f) vqs[VHOST_NET_VQ_RX]); f->private_data = n; - n->pf_cache.va = NULL; + page_frag_cache_init(&n->pf_cache); return 0; } diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h index 67ac8626ed9b..0a52f7a179c8 100644 --- a/include/linux/page_frag_cache.h +++ b/include/linux/page_frag_cache.h @@ -7,6 +7,16 @@ #include #include +static inline void page_frag_cache_init(struct page_frag_cache *nc) +{ + nc->va = NULL; +} + +static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc) +{ + return !!nc->pfmemalloc; +} + void page_frag_cache_drain(struct page_frag_cache *nc); void __page_frag_cache_drain(struct page *page, unsigned int count); void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz, diff --git a/net/core/skbuff.c b/net/core/skbuff.c index 74149dc4ee31..ca01880c7ad0 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -753,14 +753,14 @@ struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int len, if (in_hardirq() || irqs_disabled()) { nc = this_cpu_ptr(&netdev_alloc_cache); data = page_frag_alloc(nc, len, gfp_mask); - pfmemalloc = nc->pfmemalloc; + pfmemalloc = page_frag_cache_is_pfmemalloc(nc); } else { local_bh_disable(); local_lock_nested_bh(&napi_alloc_cache.bh_lock); nc = this_cpu_ptr(&napi_alloc_cache.page); data = page_frag_alloc(nc, len, gfp_mask); - pfmemalloc = nc->pfmemalloc; + pfmemalloc = page_frag_cache_is_pfmemalloc(nc); local_unlock_nested_bh(&napi_alloc_cache.bh_lock); local_bh_enable(); @@ -850,7 +850,7 @@ struct sk_buff *napi_alloc_skb(struct napi_struct *napi, unsigned int len) len = SKB_HEAD_ALIGN(len); data = page_frag_alloc(&nc->page, len, gfp_mask); - pfmemalloc = nc->page.pfmemalloc; + pfmemalloc = page_frag_cache_is_pfmemalloc(&nc->page); } local_unlock_nested_bh(&napi_alloc_cache.bh_lock); diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c index 1539d315afe7..694c4df7a1a3 100644 --- a/net/rxrpc/conn_object.c +++ b/net/rxrpc/conn_object.c @@ -337,9 +337,7 @@ static void rxrpc_clean_up_connection(struct work_struct *work) */ rxrpc_purge_queue(&conn->rx_queue); - if (conn->tx_data_alloc.va) - __page_frag_cache_drain(virt_to_page(conn->tx_data_alloc.va), - conn->tx_data_alloc.pagecnt_bias); + page_frag_cache_drain(&conn->tx_data_alloc); call_rcu(&conn->rcu, rxrpc_rcu_free_connection); } diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c index 504453c688d7..a8cffe47cf01 100644 --- a/net/rxrpc/local_object.c +++ b/net/rxrpc/local_object.c @@ -452,9 +452,7 @@ void rxrpc_destroy_local(struct rxrpc_local *local) #endif rxrpc_purge_queue(&local->rx_queue); rxrpc_purge_client_connections(local); - if (local->tx_alloc.va) - __page_frag_cache_drain(virt_to_page(local->tx_alloc.va), - local->tx_alloc.pagecnt_bias); + page_frag_cache_drain(&local->tx_alloc); } /* diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c index 
825ec5357691..b785425c3315 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -1608,7 +1608,6 @@ static void svc_tcp_sock_detach(struct svc_xprt *xprt)
 static void svc_sock_free(struct svc_xprt *xprt)
 {
 	struct svc_sock *svsk = container_of(xprt, struct svc_sock, sk_xprt);
-	struct page_frag_cache *pfc = &svsk->sk_frag_cache;
 	struct socket *sock = svsk->sk_sock;
 
 	trace_svcsock_free(svsk, sock);
@@ -1618,8 +1617,7 @@ static void svc_sock_free(struct svc_xprt *xprt)
 		sockfd_put(sock);
 	else
 		sock_release(sock);
-	if (pfc->va)
-		__page_frag_cache_drain(virt_to_head_page(pfc->va),
-					pfc->pagecnt_bias);
+
+	page_frag_cache_drain(&svsk->sk_frag_cache);
 	kfree(svsk);
 }
diff --git a/tools/testing/selftests/mm/page_frag/page_frag_test.c b/tools/testing/selftests/mm/page_frag/page_frag_test.c
index fdf204550c9a..36543a129e40 100644
--- a/tools/testing/selftests/mm/page_frag/page_frag_test.c
+++ b/tools/testing/selftests/mm/page_frag/page_frag_test.c
@@ -117,7 +117,7 @@ static int __init page_frag_test_init(void)
 	u64 duration;
 	int ret;
 
-	test_nc.va = NULL;
+	page_frag_cache_init(&test_nc);
 	atomic_set(&nthreads, 2);
 	init_completion(&wait);

From patchwork Tue Oct 8 11:20:40 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13826286
From: Yunsheng Lin
Subject: [PATCH net-next v20 06/14] mm: page_frag: reuse existing space for 'size' and 'pfmemalloc'
Date: Tue, 8 Oct 2024 19:20:40 +0800
Message-ID: <20241008112049.2279307-7-linyunsheng@huawei.com>
In-Reply-To: <20241008112049.2279307-1-linyunsheng@huawei.com>
References: <20241008112049.2279307-1-linyunsheng@huawei.com>

Currently there is one 'struct page_frag' for every 'struct sock' and
'struct task_struct'; we are about to replace the 'struct page_frag'
with 'struct page_frag_cache' for them. Before beginning the
replacement, we need to ensure that the size of 'struct page_frag_cache'
is not bigger than the size of 'struct page_frag', as there may be tens
of thousands of 'struct sock' and 'struct task_struct' instances in the
system.

By or'ing the page order and the pfmemalloc bit into the lower bits of
'va', instead of using a 'u16' or 'u32' for the page size and a 'u8'
for pfmemalloc, we avoid wasting 3 or 5 bytes of space. And since the
page address, pfmemalloc bit and order stay unchanged for the same page
in the same 'page_frag_cache' instance, it makes sense to fit them
together.

After this patch, the size of 'struct page_frag_cache' should be the
same as the size of 'struct page_frag'.
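The following stand-alone user-space sketch mirrors the bit-packing
described above, assuming a 4K page size so the low 12 bits of a
page-aligned address are free; the SKETCH_* names are illustrative and
not part of the patch:

/*
 * Minimal user-space sketch of packing a page address, its order and a
 * pfmemalloc flag into one unsigned long, as the patch below does.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

#define SKETCH_PAGE_SIZE	4096UL
#define SKETCH_ORDER_MASK	0xffUL			/* low byte holds the order */
#define SKETCH_PFMEMALLOC_BIT	(SKETCH_ORDER_MASK + 1)	/* bit 8 holds pfmemalloc */

static unsigned long encode(void *va, unsigned int order, bool pfmemalloc)
{
	/* va is page aligned, so its low 12 bits are guaranteed to be zero */
	return (unsigned long)va | (order & SKETCH_ORDER_MASK) |
	       ((unsigned long)pfmemalloc * SKETCH_PFMEMALLOC_BIT);
}

int main(void)
{
	void *va = (void *)(128 * SKETCH_PAGE_SIZE);	/* fake page address */
	unsigned long encoded = encode(va, 3, true);

	/* all three fields decode back out losslessly */
	assert((void *)(encoded & ~(SKETCH_PAGE_SIZE - 1)) == va);
	assert((encoded & SKETCH_ORDER_MASK) == 3);
	assert(encoded & SKETCH_PFMEMALLOC_BIT);
	printf("encoded=%#lx\n", encoded);
	return 0;
}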
CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
---
 include/linux/mm_types_task.h   | 19 +++++----
 include/linux/page_frag_cache.h | 24 ++++++++++-
 mm/page_frag_cache.c            | 75 +++++++++++++++++++++++----------
 3 files changed, 86 insertions(+), 32 deletions(-)

diff --git a/include/linux/mm_types_task.h b/include/linux/mm_types_task.h
index 0ac6daebdd5c..a82aa80c0ba4 100644
--- a/include/linux/mm_types_task.h
+++ b/include/linux/mm_types_task.h
@@ -47,18 +47,21 @@ struct page_frag {
 #define PAGE_FRAG_CACHE_MAX_SIZE	__ALIGN_MASK(32768, ~PAGE_MASK)
 #define PAGE_FRAG_CACHE_MAX_ORDER	get_order(PAGE_FRAG_CACHE_MAX_SIZE)
 struct page_frag_cache {
-	void *va;
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+	/* encoded_page consists of the virtual address, pfmemalloc bit and
+	 * order of a page.
+	 */
+	unsigned long encoded_page;
+
+	/* we maintain a pagecount bias, so that we dont dirty cache line
+	 * containing page->_refcount every time we allocate a fragment.
+	 */
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) && (BITS_PER_LONG <= 32)
 	__u16 offset;
-	__u16 size;
+	__u16 pagecnt_bias;
 #else
 	__u32 offset;
+	__u32 pagecnt_bias;
 #endif
-	/* we maintain a pagecount bias, so that we dont dirty cache line
-	 * containing page->_refcount every time we allocate a fragment.
-	 */
-	unsigned int pagecnt_bias;
-	bool pfmemalloc;
 };

 /* Track pages that require TLB flushes */
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index 0a52f7a179c8..dba2268e451a 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -3,18 +3,38 @@
 #ifndef _LINUX_PAGE_FRAG_CACHE_H
 #define _LINUX_PAGE_FRAG_CACHE_H

+#include
 #include
 #include
 #include

+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+/* Use a full byte here to enable assembler optimization as the shift
+ * operation is usually expecting a byte.
+ */
+#define PAGE_FRAG_CACHE_ORDER_MASK		GENMASK(7, 0)
+#else
+/* Compiler should be able to figure out we don't read things as any value
+ * ANDed with 0 is 0.
+ */
+#define PAGE_FRAG_CACHE_ORDER_MASK		0
+#endif
+
+#define PAGE_FRAG_CACHE_PFMEMALLOC_BIT		(PAGE_FRAG_CACHE_ORDER_MASK + 1)
+
+static inline bool page_frag_encoded_page_pfmemalloc(unsigned long encoded_page)
+{
+	return !!(encoded_page & PAGE_FRAG_CACHE_PFMEMALLOC_BIT);
+}
+
 static inline void page_frag_cache_init(struct page_frag_cache *nc)
 {
-	nc->va = NULL;
+	nc->encoded_page = 0;
 }

 static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)
 {
-	return !!nc->pfmemalloc;
+	return page_frag_encoded_page_pfmemalloc(nc->encoded_page);
 }

 void page_frag_cache_drain(struct page_frag_cache *nc);
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index 4c8e04379cb3..4bff4de58808 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -12,6 +12,7 @@
  * be used in the "frags" portion of skb_shared_info.
  */

+#include
 #include
 #include
 #include
@@ -19,9 +20,41 @@
 #include
 #include "internal.h"

+static unsigned long page_frag_encode_page(struct page *page, unsigned int order,
+					   bool pfmemalloc)
+{
+	BUILD_BUG_ON(PAGE_FRAG_CACHE_MAX_ORDER > PAGE_FRAG_CACHE_ORDER_MASK);
+	BUILD_BUG_ON(PAGE_FRAG_CACHE_PFMEMALLOC_BIT >= PAGE_SIZE);
+
+	return (unsigned long)page_address(page) |
+		(order & PAGE_FRAG_CACHE_ORDER_MASK) |
+		((unsigned long)pfmemalloc * PAGE_FRAG_CACHE_PFMEMALLOC_BIT);
+}
+
+static unsigned long page_frag_encoded_page_order(unsigned long encoded_page)
+{
+	return encoded_page & PAGE_FRAG_CACHE_ORDER_MASK;
+}
+
+static void *page_frag_encoded_page_address(unsigned long encoded_page)
+{
+	return (void *)(encoded_page & PAGE_MASK);
+}
+
+static struct page *page_frag_encoded_page_ptr(unsigned long encoded_page)
+{
+	return virt_to_page((void *)encoded_page);
+}
+
+static unsigned int page_frag_cache_page_size(unsigned long encoded_page)
+{
+	return PAGE_SIZE << page_frag_encoded_page_order(encoded_page);
+}
+
 static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
 					     gfp_t gfp_mask)
 {
+	unsigned long order = PAGE_FRAG_CACHE_MAX_ORDER;
 	struct page *page = NULL;
 	gfp_t gfp = gfp_mask;

@@ -30,23 +63,26 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
 		   __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
 	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
 				PAGE_FRAG_CACHE_MAX_ORDER);
-	nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
 #endif
-	if (unlikely(!page))
+	if (unlikely(!page)) {
 		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
+		order = 0;
+	}

-	nc->va = page ? page_address(page) : NULL;
+	nc->encoded_page = page ?
+		page_frag_encode_page(page, order, page_is_pfmemalloc(page)) : 0;

 	return page;
 }

 void page_frag_cache_drain(struct page_frag_cache *nc)
 {
-	if (!nc->va)
+	if (!nc->encoded_page)
 		return;

-	__page_frag_cache_drain(virt_to_head_page(nc->va), nc->pagecnt_bias);
-	nc->va = NULL;
+	__page_frag_cache_drain(page_frag_encoded_page_ptr(nc->encoded_page),
+				nc->pagecnt_bias);
+	nc->encoded_page = 0;
 }
 EXPORT_SYMBOL(page_frag_cache_drain);

@@ -63,35 +99,29 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz,
 			      gfp_t gfp_mask, unsigned int align_mask)
 {
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-	unsigned int size = nc->size;
-#else
-	unsigned int size = PAGE_SIZE;
-#endif
-	unsigned int offset;
+	unsigned long encoded_page = nc->encoded_page;
+	unsigned int size, offset;
 	struct page *page;

-	if (unlikely(!nc->va)) {
+	if (unlikely(!encoded_page)) {
refill:
 		page = __page_frag_cache_refill(nc, gfp_mask);
 		if (!page)
 			return NULL;

-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-		/* if size can vary use size else just use PAGE_SIZE */
-		size = nc->size;
-#endif
+		encoded_page = nc->encoded_page;
+
 		/* Even if we own the page, we do not use atomic_set().
 		 * This would break get_page_unless_zero() users.
 		 */
 		page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);

 		/* reset page count bias and offset to start of new frag */
-		nc->pfmemalloc = page_is_pfmemalloc(page);
 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
 		nc->offset = 0;
 	}

+	size = page_frag_cache_page_size(encoded_page);
+
 	offset = __ALIGN_KERNEL_MASK(nc->offset, ~align_mask);
 	if (unlikely(offset + fragsz > size)) {
 		if (unlikely(fragsz > PAGE_SIZE)) {
@@ -107,13 +137,14 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
 			return NULL;
 		}

-		page = virt_to_page(nc->va);
+		page = page_frag_encoded_page_ptr(encoded_page);

 		if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
 			goto refill;

-		if (unlikely(nc->pfmemalloc)) {
-			free_unref_page(page, compound_order(page));
+		if (unlikely(page_frag_encoded_page_pfmemalloc(encoded_page))) {
+			free_unref_page(page,
+					page_frag_encoded_page_order(encoded_page));
 			goto refill;
 		}

@@ -128,7 +159,7 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
 	nc->pagecnt_bias--;
 	nc->offset = offset + fragsz;

-	return nc->va + offset;
+	return page_frag_encoded_page_address(encoded_page) + offset;
 }
 EXPORT_SYMBOL(__page_frag_alloc_align);

From patchwork Tue Oct 8 11:20:41 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13826287
From: Yunsheng Lin
Subject: [PATCH net-next v20 07/14] mm: page_frag: some minor refactoring before adding new API
Date: Tue, 8 Oct 2024 19:20:41 +0800
Message-ID: <20241008112049.2279307-8-linyunsheng@huawei.com>
In-Reply-To: <20241008112049.2279307-1-linyunsheng@huawei.com>
References: <20241008112049.2279307-1-linyunsheng@huawei.com>

Refactor the common code from __page_frag_alloc_align() into
__page_frag_cache_prepare() and __page_frag_cache_commit(), so that the
new API can make use of them.
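As a rough illustration of the contract this refactoring sets up, here
is a toy user-space analogue, with a flat buffer standing in for the
page: prepare exposes all remaining space while commit consumes only
what was actually used. All names here are illustrative:

/* Toy analogue of the prepare/commit split; not kernel code. */
#include <stdio.h>

struct toy_frag {		/* plays the role of struct page_frag */
	char *va;
	size_t size;
};

static char buf[4096];
static size_t offset;

static char *toy_prepare(size_t min_sz, struct toy_frag *frag)
{
	if (sizeof(buf) - offset < min_sz)
		return NULL;			/* the real code would refill here */
	frag->va = buf + offset;
	frag->size = sizeof(buf) - offset;	/* report all available space */
	return frag->va;
}

static void toy_commit(size_t used_sz)
{
	offset += used_sz;			/* consume only what was used */
}

int main(void)
{
	struct toy_frag frag;
	char *va = toy_prepare(32, &frag);

	if (va) {
		int n = snprintf(va, frag.size, "hello");
		toy_commit(n + 1);
	}
	printf("offset now %zu\n", offset);
	return 0;
}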
CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
---
 include/linux/page_frag_cache.h | 36 +++++++++++++++++++++++++++--
 mm/page_frag_cache.c            | 40 ++++++++++++++++++++++++++-------
 2 files changed, 66 insertions(+), 10 deletions(-)

diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index dba2268e451a..a6cb32b1d1ca 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -5,6 +5,7 @@

 #include
 #include
+#include
 #include
 #include

@@ -39,8 +40,39 @@ static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)

 void page_frag_cache_drain(struct page_frag_cache *nc);
 void __page_frag_cache_drain(struct page *page, unsigned int count);
-void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz,
-			      gfp_t gfp_mask, unsigned int align_mask);
+void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz,
+				struct page_frag *pfrag, gfp_t gfp_mask,
+				unsigned int align_mask);
+unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc,
+					    struct page_frag *pfrag,
+					    unsigned int used_sz);
+
+static inline unsigned int __page_frag_cache_commit(struct page_frag_cache *nc,
+						    struct page_frag *pfrag,
+						    unsigned int used_sz)
+{
+	VM_BUG_ON(!nc->pagecnt_bias);
+	nc->pagecnt_bias--;
+
+	return __page_frag_cache_commit_noref(nc, pfrag, used_sz);
+}
+
+static inline void *__page_frag_alloc_align(struct page_frag_cache *nc,
+					    unsigned int fragsz, gfp_t gfp_mask,
+					    unsigned int align_mask)
+{
+	struct page_frag page_frag;
+	void *va;
+
+	va = __page_frag_cache_prepare(nc, fragsz, &page_frag, gfp_mask,
+				       align_mask);
+	if (unlikely(!va))
+		return NULL;
+
+	__page_frag_cache_commit(nc, &page_frag, fragsz);
+
+	return va;
+}

 static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
 					  unsigned int fragsz, gfp_t gfp_mask,
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index 4bff4de58808..e17f4a530af2 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -95,9 +95,31 @@ void __page_frag_cache_drain(struct page *page, unsigned int count)
 }
 EXPORT_SYMBOL(__page_frag_cache_drain);

-void *__page_frag_alloc_align(struct page_frag_cache *nc,
-			      unsigned int fragsz, gfp_t gfp_mask,
-			      unsigned int align_mask)
+unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc,
+					    struct page_frag *pfrag,
+					    unsigned int used_sz)
+{
+	unsigned int orig_offset;
+
+	VM_BUG_ON(used_sz > pfrag->size);
+	VM_BUG_ON(pfrag->page != page_frag_encoded_page_ptr(nc->encoded_page));
+	VM_BUG_ON(pfrag->offset + pfrag->size >
+		  page_frag_cache_page_size(nc->encoded_page));
+
+	/* pfrag->offset might be bigger than the nc->offset due to alignment */
+	VM_BUG_ON(nc->offset > pfrag->offset);
+
+	orig_offset = nc->offset;
+	nc->offset = pfrag->offset + used_sz;
+
+	/* Return true size back to caller considering the offset alignment */
+	return nc->offset - orig_offset;
+}
+EXPORT_SYMBOL(__page_frag_cache_commit_noref);
+
+void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz,
+				struct page_frag *pfrag, gfp_t gfp_mask,
+				unsigned int align_mask)
 {
 	unsigned long encoded_page = nc->encoded_page;
 	unsigned int size, offset;
@@ -119,6 +141,8 @@ void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz,
 		/* reset page count bias and offset to start of new frag */
 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
 		nc->offset = 0;
+	} else {
+		page = page_frag_encoded_page_ptr(encoded_page);
 	}

 	size = page_frag_cache_page_size(encoded_page);
@@ -137,8 +161,6 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
 			return NULL;
 		}

-	page = page_frag_encoded_page_ptr(encoded_page);
-
 	if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
 		goto refill;

@@ -153,15 +175,17 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,

 		/* reset page count bias and offset to start of new frag */
 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
+		nc->offset = 0;
 		offset = 0;
 	}

-	nc->pagecnt_bias--;
-	nc->offset = offset + fragsz;
+	pfrag->page = page;
+	pfrag->offset = offset;
+	pfrag->size = size - offset;

 	return page_frag_encoded_page_address(encoded_page) + offset;
 }
-EXPORT_SYMBOL(__page_frag_alloc_align);
+EXPORT_SYMBOL(__page_frag_cache_prepare);

 /*
  * Frees a page fragment allocated out of either a compound or order 0 page.

From patchwork Tue Oct 8 11:20:42 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13826288
From: Yunsheng Lin
Subject: [PATCH net-next v20 08/14] mm: page_frag: use __alloc_pages() to replace alloc_pages_node()
Date: Tue, 8 Oct 2024 19:20:42 +0800
Message-ID: <20241008112049.2279307-9-linyunsheng@huawei.com>
In-Reply-To: <20241008112049.2279307-1-linyunsheng@huawei.com>
References: <20241008112049.2279307-1-linyunsheng@huawei.com>

There seems to be about a 24-byte binary size increase for
__page_frag_cache_refill() after the refactoring on an arm64 system
with 64K PAGE_SIZE. From gdb disassembly, it seems the binary size can
be decreased by more than 100 bytes by using __alloc_pages() to replace
alloc_pages_node(), as the latter carries some unnecessary checking for
nid being NUMA_NO_NODE, which is avoidable here, especially since
page_frag is part of the mm system.
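For context, this is roughly the shape of alloc_pages_node() at the
time of this series, with debug checks elided; the function is renamed
here to make clear it is a sketch rather than the real definition. The
NUMA_NO_NODE fallback is the per-call branch the commit message refers
to, which the patch below avoids by passing an explicit node id:

/* Sketch of alloc_pages_node(), simplified; not the real definition. */
static inline struct page *alloc_pages_node_sketch(int nid, gfp_t gfp_mask,
						   unsigned int order)
{
	if (nid == NUMA_NO_NODE)	/* resolved at runtime on every call */
		nid = numa_mem_id();

	return __alloc_pages(gfp_mask, order, nid, NULL);
}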
CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
Reviewed-by: Alexander Duyck
---
 mm/page_frag_cache.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index e17f4a530af2..4666dbec38eb 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -61,11 +61,11 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
 	gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) |  __GFP_COMP |
 		   __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
-	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
-				PAGE_FRAG_CACHE_MAX_ORDER);
+	page = __alloc_pages(gfp_mask, PAGE_FRAG_CACHE_MAX_ORDER,
+			     numa_mem_id(), NULL);
 #endif
 	if (unlikely(!page)) {
-		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
+		page = __alloc_pages(gfp, 0, numa_mem_id(), NULL);
 		order = 0;
 	}

From patchwork Tue Oct 8 11:20:44 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13826289
From: Yunsheng Lin
Subject: [PATCH net-next v20 10/14] mm: page_frag: introduce prepare/probe/commit API
Date: Tue, 8 Oct 2024 19:20:44 +0800
Message-ID: <20241008112049.2279307-11-linyunsheng@huawei.com>
In-Reply-To: <20241008112049.2279307-1-linyunsheng@huawei.com>
References: <20241008112049.2279307-1-linyunsheng@huawei.com>

There are many use cases that need a minimum amount of memory in order
to make forward progress, but that can perform better if more memory is
available, or that need to probe the cache info so as to use any memory
available, e.g. for fragment coalescing.

Currently the skb_page_frag_refill() API is used to solve the above use
cases, but the caller needs to know about the internal details and
access the data fields of 'struct page_frag' to meet the requirements
of those use cases, and its implementation is similar to the one in the
mm subsystem.

To unify those two page_frag implementations, introduce a prepare API
to ensure a minimum amount of memory is available and return how much
memory is actually available to the caller, and a probe API to report
the currently available memory to the caller without doing any cache
refilling. The caller needs to either call the commit API to report how
much memory it actually used, or not do so if it decides not to use any
memory.
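A sketch of the intended calling pattern for the prepare/commit pair
introduced below; 'nc' is a page_frag_cache owned by the caller and
consume_buffer() is a hypothetical consumer, so this fragment is
illustrative rather than compilable on its own:

/* Sketch of caller-side usage of the API added in this patch. */
struct page_frag pfrag;
unsigned int used;
void *va;

/* need at least 32 bytes; pfrag.size reports how much is really there */
va = page_frag_alloc_refill_prepare(nc, 32U, &pfrag, GFP_KERNEL);
if (!va)
	return -ENOMEM;

used = consume_buffer(va, pfrag.size);	/* may use up to pfrag.size bytes */

/* report what was actually consumed so the cache can advance its offset */
page_frag_commit(nc, &pfrag, used);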
CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
---
 include/linux/page_frag_cache.h | 135 ++++++++++++++++++++++++++++++++
 mm/page_frag_cache.c            |  21 +++++
 2 files changed, 156 insertions(+)

diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index a6cb32b1d1ca..d91ad53f25d3 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -38,6 +38,11 @@ static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)
 	return page_frag_encoded_page_pfmemalloc(nc->encoded_page);
 }

+static inline unsigned int page_frag_cache_page_offset(const struct page_frag_cache *nc)
+{
+	return nc->offset;
+}
+
 void page_frag_cache_drain(struct page_frag_cache *nc);
 void __page_frag_cache_drain(struct page *page, unsigned int count);
 void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz,
@@ -46,6 +51,10 @@ void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz,
 unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc,
 					    struct page_frag *pfrag,
 					    unsigned int used_sz);
+void *__page_frag_alloc_refill_probe_align(struct page_frag_cache *nc,
+					   unsigned int fragsz,
+					   struct page_frag *pfrag,
+					   unsigned int align_mask);

 static inline unsigned int __page_frag_cache_commit(struct page_frag_cache *nc,
 						    struct page_frag *pfrag,
@@ -88,6 +97,132 @@ static inline void *page_frag_alloc(struct page_frag_cache *nc,
 	return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
 }

+static inline bool __page_frag_refill_align(struct page_frag_cache *nc,
+					    unsigned int fragsz,
+					    struct page_frag *pfrag,
+					    gfp_t gfp_mask,
+					    unsigned int align_mask)
+{
+	if (unlikely(!__page_frag_cache_prepare(nc, fragsz, pfrag, gfp_mask,
+						align_mask)))
+		return false;
+
+	__page_frag_cache_commit(nc, pfrag, fragsz);
+	return true;
+}
+
+static inline bool page_frag_refill_align(struct page_frag_cache *nc,
+					  unsigned int fragsz,
+					  struct page_frag *pfrag,
+					  gfp_t gfp_mask, unsigned int align)
+{
+	WARN_ON_ONCE(!is_power_of_2(align));
+	return __page_frag_refill_align(nc, fragsz, pfrag, gfp_mask, -align);
+}
+
+static inline bool page_frag_refill(struct page_frag_cache *nc,
+				    unsigned int fragsz,
+				    struct page_frag *pfrag, gfp_t gfp_mask)
+{
+	return __page_frag_refill_align(nc, fragsz, pfrag, gfp_mask, ~0u);
+}
+
+static inline bool __page_frag_refill_prepare_align(struct page_frag_cache *nc,
+						    unsigned int fragsz,
+						    struct page_frag *pfrag,
+						    gfp_t gfp_mask,
+						    unsigned int align_mask)
+{
+	return !!__page_frag_cache_prepare(nc, fragsz, pfrag, gfp_mask,
+					   align_mask);
+}
+
+static inline bool page_frag_refill_prepare_align(struct page_frag_cache *nc,
+						  unsigned int fragsz,
+						  struct page_frag *pfrag,
+						  gfp_t gfp_mask,
+						  unsigned int align)
+{
+	WARN_ON_ONCE(!is_power_of_2(align));
+	return __page_frag_refill_prepare_align(nc, fragsz, pfrag, gfp_mask,
+						-align);
+}
+
+static inline bool page_frag_refill_prepare(struct page_frag_cache *nc,
+					    unsigned int fragsz,
+					    struct page_frag *pfrag,
+					    gfp_t gfp_mask)
+{
+	return __page_frag_refill_prepare_align(nc, fragsz, pfrag, gfp_mask,
+						~0u);
+}
+
+static inline void *__page_frag_alloc_refill_prepare_align(struct page_frag_cache *nc,
+							   unsigned int fragsz,
+							   struct page_frag *pfrag,
+							   gfp_t gfp_mask,
+							   unsigned int align_mask)
+{
+	return __page_frag_cache_prepare(nc, fragsz, pfrag, gfp_mask, align_mask);
+}
+
+static inline void *page_frag_alloc_refill_prepare_align(struct page_frag_cache *nc,
+							 unsigned int fragsz,
+							 struct page_frag *pfrag,
+							 gfp_t gfp_mask,
+							 unsigned int align)
+{
+	WARN_ON_ONCE(!is_power_of_2(align));
+	return __page_frag_alloc_refill_prepare_align(nc, fragsz, pfrag,
+						      gfp_mask, -align);
+}
+
+static inline void *page_frag_alloc_refill_prepare(struct page_frag_cache *nc,
+						   unsigned int fragsz,
+						   struct page_frag *pfrag,
+						   gfp_t gfp_mask)
+{
+	return __page_frag_alloc_refill_prepare_align(nc, fragsz, pfrag,
+						      gfp_mask, ~0u);
+}
+
+static inline void *page_frag_alloc_refill_probe(struct page_frag_cache *nc,
+						 unsigned int fragsz,
+						 struct page_frag *pfrag)
+{
+	return __page_frag_alloc_refill_probe_align(nc, fragsz, pfrag, ~0u);
+}
+
+static inline bool page_frag_refill_probe(struct page_frag_cache *nc,
+					  unsigned int fragsz,
+					  struct page_frag *pfrag)
+{
+	return !!page_frag_alloc_refill_probe(nc, fragsz, pfrag);
+}
+
+static inline void page_frag_commit(struct page_frag_cache *nc,
+				    struct page_frag *pfrag,
+				    unsigned int used_sz)
+{
+	__page_frag_cache_commit(nc, pfrag, used_sz);
+}
+
+static inline void page_frag_commit_noref(struct page_frag_cache *nc,
+					  struct page_frag *pfrag,
+					  unsigned int used_sz)
+{
+	__page_frag_cache_commit_noref(nc, pfrag, used_sz);
+}
+
+static inline void page_frag_alloc_abort(struct page_frag_cache *nc,
+					 unsigned int fragsz)
+{
+	VM_BUG_ON(fragsz > nc->offset);
+
+	nc->pagecnt_bias++;
+	nc->offset -= fragsz;
+}
+
 void page_frag_free(void *addr);

 #endif
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index 4666dbec38eb..1e7757a433d0 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -117,6 +117,27 @@ unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc,
 }
 EXPORT_SYMBOL(__page_frag_cache_commit_noref);

+void *__page_frag_alloc_refill_probe_align(struct page_frag_cache *nc,
+					   unsigned int fragsz,
+					   struct page_frag *pfrag,
+					   unsigned int align_mask)
+{
+	unsigned long encoded_page = nc->encoded_page;
+	unsigned int size, offset;
+
+	size = page_frag_cache_page_size(encoded_page);
+	offset = __ALIGN_KERNEL_MASK(nc->offset, ~align_mask);
+	if (unlikely(!encoded_page || offset + fragsz > size))
+		return NULL;
+
+	pfrag->page = page_frag_encoded_page_ptr(encoded_page);
+	pfrag->size = size - offset;
+	pfrag->offset = offset;
+
+	return page_frag_encoded_page_address(encoded_page) + offset;
+}
+EXPORT_SYMBOL(__page_frag_alloc_refill_probe_align);
+
 void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz,
 				struct page_frag *pfrag, gfp_t gfp_mask,
 				unsigned int align_mask)

From patchwork Tue Oct 8 11:20:45 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13826290
From: Yunsheng Lin
Subject: [PATCH net-next v20 11/14] mm: page_frag: add testing for the newly added prepare API
Date: Tue, 8 Oct 2024 19:20:45 +0800
Message-ID: <20241008112049.2279307-12-linyunsheng@huawei.com>
In-Reply-To: <20241008112049.2279307-1-linyunsheng@huawei.com>
References: <20241008112049.2279307-1-linyunsheng@huawei.com>
Add testing for the newly added prepare API, for both the aligned and
non-aligned variants; the probe API is also tested along with the
prepare API.

CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
---
 .../selftests/mm/page_frag/page_frag_test.c  | 66 +++++++++++++++++--
 tools/testing/selftests/mm/run_vmtests.sh    |  4 ++
 tools/testing/selftests/mm/test_page_frag.sh | 31 +++++++++
 3 files changed, 96 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/mm/page_frag/page_frag_test.c b/tools/testing/selftests/mm/page_frag/page_frag_test.c
index 36543a129e40..567bcc6a181e 100644
--- a/tools/testing/selftests/mm/page_frag/page_frag_test.c
+++ b/tools/testing/selftests/mm/page_frag/page_frag_test.c
@@ -29,6 +29,10 @@ static bool test_align;
 module_param(test_align, bool, 0);
 MODULE_PARM_DESC(test_align, "use align API for testing");

+static bool test_prepare;
+module_param(test_prepare, bool, 0);
+MODULE_PARM_DESC(test_prepare, "use prepare API for testing");
+
 static int test_alloc_len = 2048;
 module_param(test_alloc_len, int, 0);
 MODULE_PARM_DESC(test_alloc_len, "alloc len for testing");
@@ -68,6 +72,18 @@ static int page_frag_pop_thread(void *arg)
 	return 0;
 }

+static void frag_frag_test_commit(struct page_frag_cache *nc,
+				  struct page_frag *prepare_pfrag,
+				  struct page_frag *probe_pfrag,
+				  unsigned int used_sz)
+{
+	WARN_ON_ONCE(prepare_pfrag->page != probe_pfrag->page ||
+		     prepare_pfrag->offset != probe_pfrag->offset ||
+		     prepare_pfrag->size != probe_pfrag->size);
+
+	page_frag_commit(nc, prepare_pfrag, used_sz);
+}
+
 static int page_frag_push_thread(void *arg)
 {
 	struct ptr_ring *ring = arg;
@@ -80,13 +96,52 @@ static int page_frag_push_thread(void *arg)
 		int ret;

 		if (test_align) {
-			va = page_frag_alloc_align(&test_nc, test_alloc_len,
-						   GFP_KERNEL, SMP_CACHE_BYTES);
+			if (test_prepare) {
+				struct page_frag prepare_frag, probe_frag;
+				void *probe_va;
+
+				va = page_frag_alloc_refill_prepare_align(&test_nc,
+									  test_alloc_len,
+									  &prepare_frag,
+									  GFP_KERNEL,
+									  SMP_CACHE_BYTES);
+
+				probe_va = __page_frag_alloc_refill_probe_align(&test_nc,
+										test_alloc_len,
+										&probe_frag,
+										-SMP_CACHE_BYTES);
+				WARN_ON_ONCE(va != probe_va);
+
+				if (likely(va))
+					frag_frag_test_commit(&test_nc, &prepare_frag,
+							      &probe_frag, test_alloc_len);
+			} else {
+				va = page_frag_alloc_align(&test_nc,
+							   test_alloc_len,
+							   GFP_KERNEL,
+							   SMP_CACHE_BYTES);
+			}

 			WARN_ONCE((unsigned long)va & (SMP_CACHE_BYTES - 1),
 				  "unaligned va returned\n");
 		} else {
-			va = page_frag_alloc(&test_nc, test_alloc_len, GFP_KERNEL);
+			if (test_prepare) {
+				struct page_frag prepare_frag, probe_frag;
+				void *probe_va;
+
+				va = page_frag_alloc_refill_prepare(&test_nc, test_alloc_len,
+								    &prepare_frag, GFP_KERNEL);
+
+				probe_va = page_frag_alloc_refill_probe(&test_nc, test_alloc_len,
+									&probe_frag);
+
+				WARN_ON_ONCE(va != probe_va);
+				if (likely(va))
+					frag_frag_test_commit(&test_nc, &prepare_frag,
+							      &probe_frag, test_alloc_len);
+			} else {
+				va = page_frag_alloc(&test_nc, test_alloc_len, GFP_KERNEL);
+			}
 		}

 		if (!va)
@@ -152,8 +207,9 @@ static int __init page_frag_test_init(void)
 		  test_pushed, test_popped);

 	duration = (u64)ktime_us_delta(ktime_get(), start);
-	pr_info("%d of iterations for %s testing took: %lluus\n", nr_test,
-		test_align ? "aligned" : "non-aligned", duration);
+	pr_info("%d of iterations for %s %s API testing took: %lluus\n", nr_test,
+		test_align ? "aligned" : "non-aligned",
+		test_prepare ? "prepare" : "alloc", duration);

 	ptr_ring_cleanup(&ptr_ring, NULL);
 	page_frag_cache_drain(&test_nc);
diff --git a/tools/testing/selftests/mm/run_vmtests.sh b/tools/testing/selftests/mm/run_vmtests.sh
index 2c5394584af4..f6ff9080a6f2 100755
--- a/tools/testing/selftests/mm/run_vmtests.sh
+++ b/tools/testing/selftests/mm/run_vmtests.sh
@@ -464,6 +464,10 @@ CATEGORY="page_frag" run_test ./test_page_frag.sh aligned

 CATEGORY="page_frag" run_test ./test_page_frag.sh nonaligned

+CATEGORY="page_frag" run_test ./test_page_frag.sh aligned_prepare
+
+CATEGORY="page_frag" run_test ./test_page_frag.sh nonaligned_prepare
+
 echo "SUMMARY: PASS=${count_pass} SKIP=${count_skip} FAIL=${count_fail}" | tap_prefix
 echo "1..${count_total}" | tap_output

diff --git a/tools/testing/selftests/mm/test_page_frag.sh b/tools/testing/selftests/mm/test_page_frag.sh
index d750d910c899..71c3531fa38e 100755
--- a/tools/testing/selftests/mm/test_page_frag.sh
+++ b/tools/testing/selftests/mm/test_page_frag.sh
@@ -36,6 +36,8 @@ ksft_skip=4
 SMOKE_PARAM="test_push_cpu=$TEST_CPU_0 test_pop_cpu=$TEST_CPU_1"
 NONALIGNED_PARAM="$SMOKE_PARAM test_alloc_len=75 nr_test=$NR_TEST"
 ALIGNED_PARAM="$NONALIGNED_PARAM test_align=1"
+NONALIGNED_PREPARE_PARAM="$NONALIGNED_PARAM test_prepare=1"
+ALIGNED_PREPARE_PARAM="$ALIGNED_PARAM test_prepare=1"

 check_test_requirements()
 {
@@ -74,6 +76,24 @@ run_aligned_check()
 	echo "Check the kernel ring buffer to see the summary."
 }

+run_nonaligned_prepare_check()
+{
+	echo "Run performance tests to evaluate how fast nonaligned prepare API is."
+
+	insmod $DRIVER $NONALIGNED_PREPARE_PARAM > /dev/null 2>&1
+	echo "Done."
+	echo "Check the kernel ring buffer to see the summary."
+}
+
+run_aligned_prepare_check()
+{
+	echo "Run performance tests to evaluate how fast aligned prepare API is."
+
+	insmod $DRIVER $ALIGNED_PREPARE_PARAM > /dev/null 2>&1
+	echo "Done."
+	echo "Check the kernel ring buffer to see the summary."
+}
+
 run_smoke_check()
 {
 	echo "Run smoke test."
@@ -86,6 +106,7 @@ run_smoke_check()
 usage()
 {
 	echo -n "Usage: $0 [ aligned ] | [ nonaligned ] | | [ smoke ] | "
+	echo "[ aligned_prepare ] | [ nonaligned_prepare ] | "
 	echo "manual parameters"
 	echo
 	echo "Valid tests and parameters:"
@@ -106,6 +127,12 @@ usage()
 	echo "# Performance testing for aligned alloc API"
 	echo "$0 aligned"
 	echo
+	echo "# Performance testing for nonaligned prepare API"
+	echo "$0 nonaligned_prepare"
+	echo
+	echo "# Performance testing for aligned prepare API"
+	echo "$0 aligned_prepare"
+	echo
 	exit 0
 }

@@ -159,6 +186,10 @@ function run_test()
 		run_nonaligned_check
 	elif [[ "$1" = "aligned" ]]; then
 		run_aligned_check
+	elif [[ "$1" = "nonaligned_prepare" ]]; then
+		run_nonaligned_prepare_check
+	elif [[ "$1" = "aligned_prepare" ]]; then
+		run_aligned_prepare_check
 	else
 		run_manual_check $@
 	fi

From patchwork Tue Oct 8 11:20:47 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13826291
From: Yunsheng Lin
Subject: [PATCH net-next v20 13/14] mm: page_frag: update documentation for page_frag
Date: Tue, 8 Oct 2024 19:20:47 +0800
Message-ID: <20241008112049.2279307-14-linyunsheng@huawei.com>
In-Reply-To: <20241008112049.2279307-1-linyunsheng@huawei.com>
References: <20241008112049.2279307-1-linyunsheng@huawei.com>

Update the documentation about the design, implementation and API usage
of page_frag.

CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
---
 Documentation/mm/page_frags.rst | 177 +++++++++++++++++++++-
 include/linux/page_frag_cache.h | 259 +++++++++++++++++++++++++++++++-
 mm/page_frag_cache.c            |  26 +++-
 3 files changed, 451 insertions(+), 11 deletions(-)

diff --git a/Documentation/mm/page_frags.rst b/Documentation/mm/page_frags.rst
index 503ca6cdb804..5eec04a3fe90 100644
--- a/Documentation/mm/page_frags.rst
+++ b/Documentation/mm/page_frags.rst
@@ -1,3 +1,5 @@
 ==============
 Page fragments
 ==============
@@ -40,4 +42,177 @@ page via a single call. The advantage to doing this is that it allows for
 cleaning up the multiple references that were added to a page in order to
 avoid calling get_page per allocation.
 
-Alexander Duyck, Nov 29, 2016.
+
+Architecture overview
+=====================
+
+.. code-block:: none
+
+      +----------------------+
+      | page_frag API caller |
+      +----------------------+
+                 |
+                 |
+                 v
+      +------------------------------------------------------------------+
+      |                      request page fragment                       |
+      +------------------------------------------------------------------+
+           |                         |                                |
+           |                         |                                |
+           |                 Cache not enough                         |
+           |                         |                                |
+           |               +-----------------+                        |
+           |               | reuse old cache |--Usable--------------->|
+           |               +-----------------+                        |
+           |                         |                                |
+           |                     Not usable                           |
+           |                         |                                |
+           |                         v                                |
+      Cache empty           +-----------------+                       |
+           |                | drain old cache |                       |
+           |                +-----------------+                       |
+           |                         |                                |
+           v_________________________v                                |
+                      |                                               |
+                      |                                               |
+     _________________v_______________                                |
+    |                                 |                        Cache is enough
+    |                                 |                               |
+ PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE |                               |
+    |                                 |                               |
+    |    PAGE_SIZE >= PAGE_FRAG_CACHE_MAX_SIZE                        |
+    v                                 |                               |
+ +----------------------------------+ |                               |
+ | refill cache with order > 0 page | |                               |
+ +----------------------------------+ |                               |
+    |                |                |                               |
+    |                |                |                               |
+    |          Refill failed          |                               |
+    |                |                |                               |
+    |                v                v                               |
+    |    +--------------------------------+                           |
+    |    | refill cache with order 0 page |                           |
+    |    +--------------------------------+                           |
+    |                  |                                              |
+ Refill succeed        |                                              |
+    |            Refill succeed                                       |
+    |                  |                                              |
+    v                  v                                              v
+      +------------------------------------------------------------------+
+      |                   allocate fragment from cache                   |
+      +------------------------------------------------------------------+
+
+API interface
+=============
+
+As the design and implementation of the page_frag API implies, the allocation
+side does not allow concurrent calling. It is assumed that the caller ensures
+there is no concurrent allocation from the same page_frag_cache instance,
+either by holding its own lock or by relying on a lockless guarantee such as
+the NAPI softirq context.
+
+Depending on the alignment requirement, the page_frag API caller may call
+page_frag_*_align*() to ensure that the returned virtual address or the offset
+of the page is aligned according to the 'align/alignment' parameter. Note that
+the size of the allocated fragment is not aligned; the caller needs to provide
+an aligned fragsz if there is an alignment requirement for the size of the
+fragment.
+
+Depending on the use case, callers expecting to deal with the virtual address,
+the page, or both may call the page_frag_alloc, page_frag_refill, or
+page_frag_alloc_refill APIs accordingly.
+
+There is also a use case that needs a minimum amount of memory in order to
+make forward progress, but can perform better if more memory is available.
+Using the page_frag_*_prepare() and page_frag_commit*() related APIs, the
+caller requests the minimum memory it needs and the prepare API returns the
+maximum size of the fragment available. The caller then either calls a commit
+API to report how much memory it actually used, or skips the commit entirely
+if it decides not to use any memory.
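+
+For instance, here is a minimal sketch of the serialized-allocation rule
+above; the ``tx_cache``/``tx_lock`` pair and the locking scheme are
+illustrative caller-side assumptions, not part of the page_frag API:
+
+.. code-block:: c
+
+    /* Illustrative caller-side state: the cache is assumed to be owned
+     * by a single user, e.g. embedded in a driver's per-queue struct,
+     * and serialized with a caller-provided lock.
+     */
+    static struct page_frag_cache tx_cache;
+    static DEFINE_SPINLOCK(tx_lock);
+
+    static void *tx_alloc_frag(unsigned int fragsz)
+    {
+        void *va;
+
+        spin_lock_bh(&tx_lock);
+        /* request a fragment whose va is aligned to 16 bytes */
+        va = page_frag_alloc_align(&tx_cache, fragsz, GFP_ATOMIC, 16);
+        spin_unlock_bh(&tx_lock);
+
+        return va;
+    }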
+
+.. kernel-doc:: include/linux/page_frag_cache.h
+   :identifiers: page_frag_cache_init page_frag_cache_is_pfmemalloc
+                 page_frag_cache_page_offset __page_frag_alloc_align
+                 page_frag_alloc_align page_frag_alloc
+                 __page_frag_refill_align page_frag_refill_align
+                 page_frag_refill __page_frag_refill_prepare_align
+                 page_frag_refill_prepare_align page_frag_refill_prepare
+                 __page_frag_alloc_refill_prepare_align
+                 page_frag_alloc_refill_prepare_align
+                 page_frag_alloc_refill_prepare page_frag_alloc_refill_probe
+                 page_frag_refill_probe page_frag_commit
+                 page_frag_commit_noref page_frag_alloc_abort
+
+.. kernel-doc:: mm/page_frag_cache.c
+   :identifiers: page_frag_cache_drain page_frag_free
+                 __page_frag_alloc_refill_probe_align
+
+Coding examples
+===============
+
+Init & Drain API
+----------------
+
+.. code-block:: c
+
+    page_frag_cache_init(nc);
+    ...
+    page_frag_cache_drain(nc);
+
+
+Alloc & Free API
+----------------
+
+.. code-block:: c
+
+    void *va;
+
+    va = page_frag_alloc_align(nc, size, gfp, align);
+    if (!va)
+        goto do_error;
+
+    err = do_something(va, size);
+    if (err) {
+        page_frag_alloc_abort(nc, size);
+        goto do_error;
+    }
+
+    ...
+
+    page_frag_free(va);
+
+
+Prepare & Commit API
+--------------------
+
+.. code-block:: c
+
+    struct page_frag page_frag, *pfrag;
+    bool merge = true;
+    void *va;
+
+    pfrag = &page_frag;
+    va = page_frag_alloc_refill_prepare(nc, 32U, pfrag, GFP_KERNEL);
+    if (!va)
+        goto wait_for_space;
+
+    copy = min_t(unsigned int, copy, pfrag->size);
+    if (!skb_can_coalesce(skb, i, pfrag->page, pfrag->offset)) {
+        if (i >= max_skb_frags)
+            goto new_segment;
+
+        merge = false;
+    }
+
+    copy = mem_schedule(copy);
+    if (!copy)
+        goto wait_for_space;
+
+    if (!copy_from_iter_full_nocache(va, copy, iter))
+        goto do_error;
+
+    if (merge) {
+        skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy);
+        page_frag_commit_noref(nc, pfrag, copy);
+    } else {
+        skb_fill_page_desc(skb, i, pfrag->page, pfrag->offset, copy);
+        page_frag_commit(nc, pfrag, copy);
+    }
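+
+Probe & Commit API
+------------------
+
+A minimal sketch of the probe variant; per the commit API documentation, a
+probed fragment is committed the same way as a prepared one. The
+``consume_data()`` helper and the labels here are illustrative assumptions:
+
+.. code-block:: c
+
+    struct page_frag page_frag, *pfrag = &page_frag;
+    void *va;
+
+    /* probing only succeeds when the current cache already has enough
+     * room for the requested fragment; it never triggers a refill
+     */
+    va = page_frag_alloc_refill_probe(nc, 32U, pfrag);
+    if (!va)
+        goto wait_for_space;
+
+    consume_data(va, pfrag->size);
+    page_frag_commit(nc, pfrag, pfrag->size);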

diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index d91ad53f25d3..922d412469c7 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -28,16 +28,43 @@ static inline bool page_frag_encoded_page_pfmemalloc(unsigned long encoded_page)
 	return !!(encoded_page & PAGE_FRAG_CACHE_PFMEMALLOC_BIT);
 }
 
+/**
+ * page_frag_cache_init() - Init the page_frag cache.
+ * @nc: page_frag cache to be initialized
+ *
+ * Inline helper to init the page_frag cache.
+ */
 static inline void page_frag_cache_init(struct page_frag_cache *nc)
 {
 	nc->encoded_page = 0;
 }
 
+/**
+ * page_frag_cache_is_pfmemalloc() - Check for pfmemalloc.
+ * @nc: page_frag cache to be checked
+ *
+ * Used to check whether the current page in the page_frag cache is
+ * pfmemalloc'ed. It has the same calling context expectation as the alloc
+ * API.
+ *
+ * Return:
+ * true if the current page in the page_frag cache is pfmemalloc'ed,
+ * otherwise false.
+ */
 static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)
 {
 	return page_frag_encoded_page_pfmemalloc(nc->encoded_page);
 }
 
+/**
+ * page_frag_cache_page_offset() - Return the current page fragment's offset.
+ * @nc: page_frag cache to be checked
+ *
+ * This API is only used in net/sched/em_meta.c for historical reasons; do not
+ * use it in new callers unless there is a strong reason to.
+ *
+ * Return:
+ * the offset of the current page fragment in the page_frag cache.
+ */
 static inline unsigned int page_frag_cache_page_offset(const struct page_frag_cache *nc)
 {
 	return nc->offset;
@@ -66,6 +93,19 @@ static inline unsigned int __page_frag_cache_commit(struct page_frag_cache *nc,
 	return __page_frag_cache_commit_noref(nc, pfrag, used_sz);
 }
 
+/**
+ * __page_frag_alloc_align() - Allocate a page fragment with an aligning
+ * requirement.
+ * @nc: page_frag cache from which to allocate
+ * @fragsz: the requested fragment size
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align_mask: the requested aligning requirement for the 'va'
+ *
+ * Allocate a page fragment from the page_frag cache with an aligning
+ * requirement.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise NULL.
+ */
 static inline void *__page_frag_alloc_align(struct page_frag_cache *nc,
 					    unsigned int fragsz, gfp_t gfp_mask,
 					    unsigned int align_mask)
@@ -83,6 +123,19 @@ static inline void *__page_frag_alloc_align(struct page_frag_cache *nc,
 	return va;
 }
 
+/**
+ * page_frag_alloc_align() - Allocate a page fragment with an aligning
+ * requirement.
+ * @nc: page_frag cache from which to allocate
+ * @fragsz: the requested fragment size
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align: the requested aligning requirement for the fragment
+ *
+ * WARN_ON_ONCE() checking for @align before allocating a page fragment from
+ * the page_frag cache with an aligning requirement.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise NULL.
+ */
 static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
 					  unsigned int fragsz, gfp_t gfp_mask,
 					  unsigned int align)
@@ -91,12 +144,36 @@ static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
 	return __page_frag_alloc_align(nc, fragsz, gfp_mask, -align);
 }
 
+/**
+ * page_frag_alloc() - Allocate a page fragment.
+ * @nc: page_frag cache from which to allocate
+ * @fragsz: the requested fragment size
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ *
+ * Allocate a page fragment from the page_frag cache.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise NULL.
+ */
 static inline void *page_frag_alloc(struct page_frag_cache *nc,
 				    unsigned int fragsz, gfp_t gfp_mask)
 {
 	return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
 }
 
+/**
+ * __page_frag_refill_align() - Refill a page_frag with an aligning
+ * requirement.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align_mask: the requested aligning requirement for the fragment
+ *
+ * Refill a page_frag from the page_frag cache with an aligning requirement.
+ *
+ * Return:
+ * true if the refill succeeds, otherwise false.
+ */
 static inline bool __page_frag_refill_align(struct page_frag_cache *nc,
 					    unsigned int fragsz,
 					    struct page_frag *pfrag,
@@ -111,6 +188,20 @@ static inline bool __page_frag_refill_align(struct page_frag_cache *nc,
 	return true;
 }
 
+/**
+ * page_frag_refill_align() - Refill a page_frag with an aligning requirement.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align: the requested aligning requirement for the fragment
+ *
+ * WARN_ON_ONCE() checking for @align before refilling a page_frag from the
+ * page_frag cache with an aligning requirement.
+ *
+ * Return:
+ * true if the refill succeeds, otherwise false.
+ */
 static inline bool page_frag_refill_align(struct page_frag_cache *nc,
 					  unsigned int fragsz,
 					  struct page_frag *pfrag,
@@ -120,6 +211,18 @@ static inline bool page_frag_refill_align(struct page_frag_cache *nc,
 	return __page_frag_refill_align(nc, fragsz, pfrag, gfp_mask, -align);
 }
 
+/**
+ * page_frag_refill() - Refill a page_frag.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ *
+ * Refill a page_frag from the page_frag cache.
+ *
+ * Return:
+ * true if the refill succeeds, otherwise false.
+ */
 static inline bool page_frag_refill(struct page_frag_cache *nc,
 				    unsigned int fragsz,
 				    struct page_frag *pfrag, gfp_t gfp_mask)
@@ -127,6 +230,20 @@ static inline bool page_frag_refill(struct page_frag_cache *nc,
 	return __page_frag_refill_align(nc, fragsz, pfrag, gfp_mask, ~0u);
 }
 
+/**
+ * __page_frag_refill_prepare_align() - Prepare to refill a page_frag with an
+ * aligning requirement.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align_mask: the requested aligning requirement for the fragment
+ *
+ * Prepare to refill a page_frag from the page_frag cache with an aligning
+ * requirement.
+ *
+ * Return:
+ * true if the prepared refill succeeds, otherwise false.
+ */
 static inline bool __page_frag_refill_prepare_align(struct page_frag_cache *nc,
 						    unsigned int fragsz,
 						    struct page_frag *pfrag,
@@ -137,6 +254,21 @@ static inline bool __page_frag_refill_prepare_align(struct page_frag_cache *nc,
 						    align_mask);
 }
 
+/**
+ * page_frag_refill_prepare_align() - Prepare to refill a page_frag with an
+ * aligning requirement.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align: the requested aligning requirement for the fragment
+ *
+ * WARN_ON_ONCE() checking for @align before preparing to refill a page_frag
+ * from the page_frag cache with an aligning requirement.
+ *
+ * Return:
+ * true if the prepared refill succeeds, otherwise false.
+ */
 static inline bool page_frag_refill_prepare_align(struct page_frag_cache *nc,
 						  unsigned int fragsz,
 						  struct page_frag *pfrag,
@@ -148,6 +280,18 @@ static inline bool page_frag_refill_prepare_align(struct page_frag_cache *nc,
 						  -align);
 }
 
+/**
+ * page_frag_refill_prepare() - Prepare to refill a page_frag.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ *
+ * Prepare to refill a page_frag from the page_frag cache.
+ *
+ * Return:
+ * true if the prepared refill succeeds, otherwise false.
+ */
 static inline bool page_frag_refill_prepare(struct page_frag_cache *nc,
 					    unsigned int fragsz,
 					    struct page_frag *pfrag,
@@ -157,6 +301,20 @@ static inline bool page_frag_refill_prepare(struct page_frag_cache *nc,
 					    ~0u);
 }
 
+/**
+ * __page_frag_alloc_refill_prepare_align() - Prepare to allocate a fragment
+ * and refill a page_frag with an aligning requirement.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align_mask: the requested aligning requirement for the fragment
+ *
+ * Prepare to allocate a fragment and refill a page_frag from the page_frag
+ * cache.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise NULL.
+ */
 static inline void *__page_frag_alloc_refill_prepare_align(struct page_frag_cache *nc,
 							   unsigned int fragsz,
 							   struct page_frag *pfrag,
@@ -166,6 +324,21 @@ static inline void *__page_frag_alloc_refill_prepare_align(struct page_frag_cach
 	return __page_frag_cache_prepare(nc, fragsz, pfrag, gfp_mask, align_mask);
 }
 
+/**
+ * page_frag_alloc_refill_prepare_align() - Prepare to allocate a fragment and
+ * refill a page_frag with an aligning requirement.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align: the requested aligning requirement for the fragment
+ *
+ * WARN_ON_ONCE() checking for @align before preparing to allocate a fragment
+ * and refill a page_frag from the page_frag cache.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise NULL.
+ */
 static inline void *page_frag_alloc_refill_prepare_align(struct page_frag_cache *nc,
 							 unsigned int fragsz,
 							 struct page_frag *pfrag,
@@ -177,6 +350,19 @@ static inline void *page_frag_alloc_refill_prepare_align(struct page_frag_cache
 							 gfp_mask, -align);
 }
 
+/**
+ * page_frag_alloc_refill_prepare() - Prepare to allocate a fragment and
+ * refill a page_frag.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ *
+ * Prepare to allocate a fragment and refill a page_frag from the page_frag
+ * cache.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise NULL.
+ */
 static inline void *page_frag_alloc_refill_prepare(struct page_frag_cache *nc,
 						   unsigned int fragsz,
 						   struct page_frag *pfrag,
@@ -186,6 +372,18 @@ static inline void *page_frag_alloc_refill_prepare(struct page_frag_cache *nc,
 						   gfp_mask, ~0u);
 }
 
+/**
+ * page_frag_alloc_refill_probe() - Probe allocating a fragment and refilling
+ * a page_frag.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ *
+ * Probe allocating a fragment and refilling a page_frag from the page_frag
+ * cache.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise NULL.
+ */
 static inline void *page_frag_alloc_refill_probe(struct page_frag_cache *nc,
 						 unsigned int fragsz,
 						 struct page_frag *pfrag)
@@ -193,6 +391,17 @@ static inline void *page_frag_alloc_refill_probe(struct page_frag_cache *nc,
 	return __page_frag_alloc_refill_probe_align(nc, fragsz, pfrag, ~0u);
 }
 
+/**
+ * page_frag_refill_probe() - Probe refilling a page_frag.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ *
+ * Probe refilling a page_frag from the page_frag cache.
+ *
+ * Return:
+ * true if the refill succeeds, otherwise false.
+ */
 static inline bool page_frag_refill_probe(struct page_frag_cache *nc,
 					  unsigned int fragsz,
 					  struct page_frag *pfrag)
@@ -200,20 +409,54 @@ static inline bool page_frag_refill_probe(struct page_frag_cache *nc,
 	return !!page_frag_alloc_refill_probe(nc, fragsz, pfrag);
 }
 
-static inline void page_frag_commit(struct page_frag_cache *nc,
-				    struct page_frag *pfrag,
-				    unsigned int used_sz)
+/**
+ * page_frag_commit() - Commit a page fragment allocation.
+ * @nc: page_frag cache from which to commit
+ * @pfrag: the page_frag to be committed
+ * @used_sz: size of the page fragment that has been used
+ *
+ * Commit the actual used size for the allocation that was either prepared or
+ * probed.
+ *
+ * Return:
+ * the true size of the fragment considering the offset alignment.
+ */
+static inline unsigned int page_frag_commit(struct page_frag_cache *nc,
+					    struct page_frag *pfrag,
+					    unsigned int used_sz)
 {
-	__page_frag_cache_commit(nc, pfrag, used_sz);
+	return __page_frag_cache_commit(nc, pfrag, used_sz);
 }
 
-static inline void page_frag_commit_noref(struct page_frag_cache *nc,
-					  struct page_frag *pfrag,
-					  unsigned int used_sz)
+/**
+ * page_frag_commit_noref() - Commit a page fragment allocation without taking
+ * a page refcount.
+ * @nc: page_frag cache from which to commit
+ * @pfrag: the page_frag to be committed
+ * @used_sz: size of the page fragment that has been used
+ *
+ * Commit the prepared or probed allocation by passing the actual used size,
+ * without taking a refcount. Mostly used for the fragment coalescing case,
+ * when the current fragment can share the same refcount as the previous
+ * fragment.
+ *
+ * Return:
+ * the true size of the fragment considering the offset alignment.
+ */
+static inline unsigned int page_frag_commit_noref(struct page_frag_cache *nc,
+						  struct page_frag *pfrag,
+						  unsigned int used_sz)
 {
-	__page_frag_cache_commit_noref(nc, pfrag, used_sz);
+	return __page_frag_cache_commit_noref(nc, pfrag, used_sz);
 }
 
+/**
+ * page_frag_alloc_abort() - Abort a page fragment allocation.
+ * @nc: page_frag cache to which the page fragment is aborted back
+ * @fragsz: size of the page fragment to be aborted
+ *
+ * It is expected to be called from the same context as the alloc API. Mostly
+ * used for error handling cases where the fragment is no longer needed.
+ */
 static inline void page_frag_alloc_abort(struct page_frag_cache *nc,
 					 unsigned int fragsz)
 {
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index 1e7757a433d0..7b801856fd98 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -75,6 +75,10 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
 	return page;
 }
 
+/**
+ * page_frag_cache_drain() - Drain the current page from the page_frag cache.
+ * @nc: page_frag cache from which to drain
+ */
 void page_frag_cache_drain(struct page_frag_cache *nc)
 {
 	if (!nc->encoded_page)
@@ -117,6 +121,20 @@ unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc,
 }
 EXPORT_SYMBOL(__page_frag_cache_commit_noref);
 
+/**
+ * __page_frag_alloc_refill_probe_align() - Probe allocating a fragment and
+ * refilling a page_frag with an aligning requirement.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ * @align_mask: the requested aligning requirement for the fragment
+ *
+ * Probe allocating a fragment and refilling a page_frag from the page_frag
+ * cache with an aligning requirement.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise NULL.
+ */
 void *__page_frag_alloc_refill_probe_align(struct page_frag_cache *nc,
 					   unsigned int fragsz,
 					   struct page_frag *pfrag,
@@ -208,8 +226,12 @@ void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz,
 }
 EXPORT_SYMBOL(__page_frag_cache_prepare);
 
-/*
- * Frees a page fragment allocated out of either a compound or order 0 page.
+/**
+ * page_frag_free() - Free a page fragment.
+ * @addr: virtual address of the page fragment to be freed
+ *
+ * Free a page fragment allocated out of either a compound or order 0 page by
+ * its virtual address.
  */
 void page_frag_free(void *addr)
 {