From patchwork Fri Sep 6 07:36:33 2024
From: Yunsheng Lin <linyunsheng@huawei.com>
X-Patchwork-Id: 13793527
Subject: [PATCH net-next v18 01/14] mm: page_frag: add a test module for page_frag
Date: Fri, 6 Sep 2024 15:36:33 +0800
Message-ID: <20240906073646.2930809-2-linyunsheng@huawei.com>
In-Reply-To: <20240906073646.2930809-1-linyunsheng@huawei.com>
References: <20240906073646.2930809-1-linyunsheng@huawei.com>
Cc: Alexander Duyck, Andrew Morton, Shuah Khan

The testing is done by ensuring that a fragment allocated from a
page_frag_cache instance is pushed into a ptr_ring instance by a kthread
bound to a specified CPU, while another kthread bound to a different
specified CPU pops the fragment from the ptr_ring and frees it.
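The push/pop pair relies on ptr_ring's non-blocking contract. As a minimal
sketch of that contract (not part of the patch; the ring size and the
ptr_ring_contract_example() wrapper are illustrative only):

        #include <linux/gfp.h>
        #include <linux/ptr_ring.h>
        #include <linux/sched.h>

        static int ptr_ring_contract_example(void *obj)
        {
                struct ptr_ring ring;
                int err;

                err = ptr_ring_init(&ring, 512, GFP_KERNEL);
                if (err)
                        return err;

                /* producer side: a non-zero return means the ring is full */
                while (__ptr_ring_produce(&ring, obj))
                        cond_resched();

                /* consumer side: a NULL return means the ring is empty */
                while (!(obj = __ptr_ring_consume(&ring)))
                        cond_resched();

                ptr_ring_cleanup(&ring, NULL);
                return 0;
        }

This is why both kthreads in the module below spin with cond_resched()
rather than block: the pop side retries on an empty ring, and the push side
frees the fragment and retries when the ring is full.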
CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
Reviewed-by: Alexander Duyck
---
 tools/testing/selftests/mm/Makefile           |   3 +
 tools/testing/selftests/mm/page_frag/Makefile |  18 ++
 .../selftests/mm/page_frag/page_frag_test.c   | 170 +++++++++++++++++
 tools/testing/selftests/mm/run_vmtests.sh     |   8 +
 tools/testing/selftests/mm/test_page_frag.sh  | 171 ++++++++++++++++++
 5 files changed, 370 insertions(+)
 create mode 100644 tools/testing/selftests/mm/page_frag/Makefile
 create mode 100644 tools/testing/selftests/mm/page_frag/page_frag_test.c
 create mode 100755 tools/testing/selftests/mm/test_page_frag.sh

diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile
index cfad627e8d94..e98ec779b2aa 100644
--- a/tools/testing/selftests/mm/Makefile
+++ b/tools/testing/selftests/mm/Makefile
@@ -36,6 +36,8 @@ MAKEFLAGS += --no-builtin-rules
 CFLAGS = -Wall -I $(top_srcdir) $(EXTRA_CFLAGS) $(KHDR_INCLUDES) $(TOOLS_INCLUDES)
 LDLIBS = -lrt -lpthread -lm
 
+TEST_GEN_MODS_DIR := page_frag
+
 TEST_GEN_FILES = cow
 TEST_GEN_FILES += compaction_test
 TEST_GEN_FILES += gup_longterm
@@ -125,6 +127,7 @@ TEST_FILES += test_hmm.sh
 TEST_FILES += va_high_addr_switch.sh
 TEST_FILES += charge_reserved_hugetlb.sh
 TEST_FILES += hugetlb_reparenting_test.sh
+TEST_FILES += test_page_frag.sh
 
 # required by charge_reserved_hugetlb.sh
 TEST_FILES += write_hugetlb_memory.sh

diff --git a/tools/testing/selftests/mm/page_frag/Makefile b/tools/testing/selftests/mm/page_frag/Makefile
new file mode 100644
index 000000000000..58dda74d50a3
--- /dev/null
+++ b/tools/testing/selftests/mm/page_frag/Makefile
@@ -0,0 +1,18 @@
+PAGE_FRAG_TEST_DIR := $(realpath $(dir $(abspath $(lastword $(MAKEFILE_LIST)))))
+KDIR ?= $(abspath $(PAGE_FRAG_TEST_DIR)/../../../../..)
+
+ifeq ($(V),1)
+Q =
+else
+Q = @
+endif
+
+MODULES = page_frag_test.ko
+
+obj-m += page_frag_test.o
+
+all:
+	+$(Q)make -C $(KDIR) M=$(PAGE_FRAG_TEST_DIR) modules
+
+clean:
+	+$(Q)make -C $(KDIR) M=$(PAGE_FRAG_TEST_DIR) clean

diff --git a/tools/testing/selftests/mm/page_frag/page_frag_test.c b/tools/testing/selftests/mm/page_frag/page_frag_test.c
new file mode 100644
index 000000000000..6d6f31936b10
--- /dev/null
+++ b/tools/testing/selftests/mm/page_frag/page_frag_test.c
@@ -0,0 +1,170 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Test module for page_frag cache
+ *
+ * Copyright (C) 2024 Yunsheng Lin
+ */
+
+#include <...>
+#include <...>
+#include <...>
+#include <...>
+#include <...>
+#include <...>
+
+static struct ptr_ring ptr_ring;
+static int nr_objs = 512;
+static atomic_t nthreads;
+static struct completion wait;
+static struct page_frag_cache test_nc;
+
+static int nr_test = 2000000;
+module_param(nr_test, int, 0);
+MODULE_PARM_DESC(nr_test, "number of iterations to test");
+
+static bool test_align;
+module_param(test_align, bool, 0);
+MODULE_PARM_DESC(test_align, "use align API for testing");
+
+static int test_alloc_len = 2048;
+module_param(test_alloc_len, int, 0);
+MODULE_PARM_DESC(test_alloc_len, "alloc len for testing");
+
+static int test_push_cpu;
+module_param(test_push_cpu, int, 0);
+MODULE_PARM_DESC(test_push_cpu, "test cpu for pushing fragment");
+
+static int test_pop_cpu;
+module_param(test_pop_cpu, int, 0);
+MODULE_PARM_DESC(test_pop_cpu, "test cpu for popping fragment");
+
+static int page_frag_pop_thread(void *arg)
+{
+	struct ptr_ring *ring = arg;
+	int nr = nr_test;
+
+	pr_info("page_frag pop test thread begins on cpu %d\n",
+		smp_processor_id());
+
+	while (nr > 0) {
+		void *obj = __ptr_ring_consume(ring);
+
+		if (obj) {
+			nr--;
+			page_frag_free(obj);
+		} else {
+			cond_resched();
+		}
+	}
+
+	if (atomic_dec_and_test(&nthreads))
+		complete(&wait);
+
+	pr_info("page_frag pop test thread exits on cpu %d\n",
+		smp_processor_id());
+
+	return 0;
+}
+
+static int page_frag_push_thread(void *arg)
+{
+	struct ptr_ring *ring = arg;
+	int nr = nr_test;
+
+	pr_info("page_frag push test thread begins on cpu %d\n",
+		smp_processor_id());
+
+	while (nr > 0) {
+		void *va;
+		int ret;
+
+		if (test_align) {
+			va = page_frag_alloc_align(&test_nc, test_alloc_len,
+						   GFP_KERNEL, SMP_CACHE_BYTES);
+
+			WARN_ONCE((unsigned long)va & (SMP_CACHE_BYTES - 1),
+				  "unaligned va returned\n");
+		} else {
+			va = page_frag_alloc(&test_nc, test_alloc_len, GFP_KERNEL);
+		}
+
+		if (!va)
+			continue;
+
+		ret = __ptr_ring_produce(ring, va);
+		if (ret) {
+			page_frag_free(va);
+			cond_resched();
+		} else {
+			nr--;
+		}
+	}
+
+	pr_info("page_frag push test thread exits on cpu %d\n",
+		smp_processor_id());
+
+	if (atomic_dec_and_test(&nthreads))
+		complete(&wait);
+
+	return 0;
+}
+
+static int __init page_frag_test_init(void)
+{
+	struct task_struct *tsk_push, *tsk_pop;
+	ktime_t start;
+	u64 duration;
+	int ret;
+
+	test_nc.va = NULL;
+	atomic_set(&nthreads, 2);
+	init_completion(&wait);
+
+	if (test_alloc_len > PAGE_SIZE || test_alloc_len <= 0 ||
+	    !cpu_active(test_push_cpu) || !cpu_active(test_pop_cpu))
+		return -EINVAL;
+
+	ret = ptr_ring_init(&ptr_ring, nr_objs, GFP_KERNEL);
+	if (ret)
+		return ret;
+
+	tsk_push = kthread_create_on_cpu(page_frag_push_thread, &ptr_ring,
+					 test_push_cpu, "page_frag_push");
+	if (IS_ERR(tsk_push))
+		return PTR_ERR(tsk_push);
+
+	tsk_pop = kthread_create_on_cpu(page_frag_pop_thread, &ptr_ring,
+					test_pop_cpu, "page_frag_pop");
+	if (IS_ERR(tsk_pop)) {
+		kthread_stop(tsk_push);
+		return PTR_ERR(tsk_pop);
+	}
+
+	start = ktime_get();
+	wake_up_process(tsk_push);
+	wake_up_process(tsk_pop);
+
+	pr_info("waiting for test to complete\n");
+	wait_for_completion(&wait);
+
+	duration = (u64)ktime_us_delta(ktime_get(), start);
+	pr_info("%d of iterations for %s testing took: %lluus\n", nr_test,
+		test_align ? "aligned" : "non-aligned", duration);
+
+	ptr_ring_cleanup(&ptr_ring, NULL);
+	page_frag_cache_drain(&test_nc);
+
+	return -EAGAIN;
+}
+
+static void __exit page_frag_test_exit(void)
+{
+}
+
+module_init(page_frag_test_init);
+module_exit(page_frag_test_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Yunsheng Lin ");
+MODULE_DESCRIPTION("Test module for page_frag");

diff --git a/tools/testing/selftests/mm/run_vmtests.sh b/tools/testing/selftests/mm/run_vmtests.sh
index 36045edb10de..96fd470b9f51 100755
--- a/tools/testing/selftests/mm/run_vmtests.sh
+++ b/tools/testing/selftests/mm/run_vmtests.sh
@@ -75,6 +75,8 @@ separated by spaces:
 	read-only VMAs
 - mdwe
 	test prctl(PR_SET_MDWE, ...)
+- page_frag
+	test handling of page fragment allocation and freeing
 
 example: ./run_vmtests.sh -t "hmm mmap ksm"
 EOF
@@ -456,6 +458,12 @@ CATEGORY="mkdirty" run_test ./mkdirty
 
 CATEGORY="mdwe" run_test ./mdwe_test
 
+CATEGORY="page_frag" run_test ./test_page_frag.sh smoke
+
+CATEGORY="page_frag" run_test ./test_page_frag.sh aligned
+
+CATEGORY="page_frag" run_test ./test_page_frag.sh nonaligned
+
 echo "SUMMARY: PASS=${count_pass} SKIP=${count_skip} FAIL=${count_fail}" | tap_prefix
 echo "1..${count_total}" | tap_output

diff --git a/tools/testing/selftests/mm/test_page_frag.sh b/tools/testing/selftests/mm/test_page_frag.sh
new file mode 100755
index 000000000000..d750d910c899
--- /dev/null
+++ b/tools/testing/selftests/mm/test_page_frag.sh
@@ -0,0 +1,171 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+#
+# Copyright (C) 2024 Yunsheng Lin
+# Copyright (C) 2018 Uladzislau Rezki (Sony)
+#
+# This is a test script for the kernel test driver to test the
+# correctness and performance of page_frag's implementation.
+# Therefore it is just a kernel module loader. You can specify
+# and pass different parameters in order to:
+#     a) analyse performance of page fragment allocations;
+#     b) stressing and stability check of page_frag subsystem.
+
+DRIVER="./page_frag/page_frag_test.ko"
+CPU_LIST=$(grep -m 2 processor /proc/cpuinfo | cut -d ' ' -f 2)
+TEST_CPU_0=$(echo $CPU_LIST | awk '{print $1}')
+
+if [ $(echo $CPU_LIST | wc -w) -gt 1 ]; then
+	TEST_CPU_1=$(echo $CPU_LIST | awk '{print $2}')
+	NR_TEST=100000000
+else
+	TEST_CPU_1=$TEST_CPU_0
+	NR_TEST=1000000
+fi
+
+# 1 if fails
+exitcode=1
+
+# Kselftest framework requirement - SKIP code is 4.
+ksft_skip=4
+
+#
+# Static templates for testing of page_frag APIs.
+# Also it is possible to pass any supported parameters manually.
+#
+SMOKE_PARAM="test_push_cpu=$TEST_CPU_0 test_pop_cpu=$TEST_CPU_1"
+NONALIGNED_PARAM="$SMOKE_PARAM test_alloc_len=75 nr_test=$NR_TEST"
+ALIGNED_PARAM="$NONALIGNED_PARAM test_align=1"
+
+check_test_requirements()
+{
+	uid=$(id -u)
+	if [ $uid -ne 0 ]; then
+		echo "$0: Must be run as root"
+		exit $ksft_skip
+	fi
+
+	if ! which insmod > /dev/null 2>&1; then
+		echo "$0: You need insmod installed"
+		exit $ksft_skip
+	fi
+
+	if [ ! -f $DRIVER ]; then
+		echo "$0: You need to compile page_frag_test module"
+		exit $ksft_skip
+	fi
+}
+
+run_nonaligned_check()
+{
+	echo "Run performance tests to evaluate how fast nonaligned alloc API is."
+
+	insmod $DRIVER $NONALIGNED_PARAM > /dev/null 2>&1
+	echo "Done."
+	echo "Check the kernel ring buffer to see the summary."
+}
+
+run_aligned_check()
+{
+	echo "Run performance tests to evaluate how fast aligned alloc API is."
+
+	insmod $DRIVER $ALIGNED_PARAM > /dev/null 2>&1
+	echo "Done."
+	echo "Check the kernel ring buffer to see the summary."
+}
+
+run_smoke_check()
+{
+	echo "Run smoke test."
+
+	insmod $DRIVER $SMOKE_PARAM > /dev/null 2>&1
+	echo "Done."
+	echo "Check the kernel ring buffer to see the summary."
+}
+
+usage()
+{
+	echo -n "Usage: $0 [ aligned ] | [ nonaligned ] | | [ smoke ] | "
+	echo "manual parameters"
+	echo
+	echo "Valid tests and parameters:"
+	echo
+	modinfo $DRIVER
+	echo
+	echo "Example usage:"
+	echo
+	echo "# Shows help message"
+	echo "$0"
+	echo
+	echo "# Smoke testing"
+	echo "$0 smoke"
+	echo
+	echo "# Performance testing for nonaligned alloc API"
+	echo "$0 nonaligned"
+	echo
+	echo "# Performance testing for aligned alloc API"
+	echo "$0 aligned"
+	echo
+	exit 0
+}
+
+function validate_passed_args()
+{
+	VALID_ARGS=`modinfo $DRIVER | awk '/parm:/ {print $2}' | sed 's/:.*//'`
+
+	#
+	# Something has been passed, check it.
+	#
+	for passed_arg in $@; do
+		key=${passed_arg//=*/}
+		valid=0
+
+		for valid_arg in $VALID_ARGS; do
+			if [[ $key = $valid_arg ]]; then
+				valid=1
+				break
+			fi
+		done
+
+		if [[ $valid -ne 1 ]]; then
+			echo "Error: key is not correct: ${key}"
+			exit $exitcode
+		fi
+	done
+}
+
+function run_manual_check()
+{
+	#
+	# Validate passed parameters. If there is wrong one,
+	# the script exists and does not execute further.
+	#
+	validate_passed_args $@
+
+	echo "Run the test with following parameters: $@"
+	insmod $DRIVER $@ > /dev/null 2>&1
+	echo "Done."
+	echo "Check the kernel ring buffer to see the summary."
+}
+
+function run_test()
+{
+	if [ $# -eq 0 ]; then
+		usage
+	else
+		if [[ "$1" = "smoke" ]]; then
+			run_smoke_check
+		elif [[ "$1" = "nonaligned" ]]; then
+			run_nonaligned_check
+		elif [[ "$1" = "aligned" ]]; then
+			run_aligned_check
+		else
+			run_manual_check $@
+		fi
+	fi
+}
+
+check_test_requirements
+run_test $@
+
+exit 0
From patchwork Fri Sep 6 07:36:34 2024
From: Yunsheng Lin <linyunsheng@huawei.com>
X-Patchwork-Id: 13793528
Subject: [PATCH net-next v18 02/14] mm: move the page fragment allocator from page_alloc into its own file
Date: Fri, 6 Sep 2024 15:36:34 +0800
Message-ID: <20240906073646.2930809-3-linyunsheng@huawei.com>
In-Reply-To: <20240906073646.2930809-1-linyunsheng@huawei.com>
References: <20240906073646.2930809-1-linyunsheng@huawei.com>
Cc: David Howells, Alexander Duyck, Andrew Morton, Eric Dumazet, Shuah Khan

Inspired by [1], move the page fragment allocator out of page_alloc into
its own C file and header file, as we are about to make more changes to
it so that it can replace another page_frag implementation in sock.c.

As this patchset is going to replace 'struct page_frag' with
'struct page_frag_cache' in sched.h, including page_frag_cache.h in
sched.h causes a compiler error due to the interdependence between
mm_types.h and mm.h for asm-offsets.c; see [2]. So avoid the compiler
error by moving 'struct page_frag_cache' to mm_types_task.h, as
suggested by Alexander; see [3].

1. https://lore.kernel.org/all/20230411160902.4134381-3-dhowells@redhat.com/
2. https://lore.kernel.org/all/15623dac-9358-4597-b3ee-3694a5956920@gmail.com/
3. https://lore.kernel.org/all/CAKgT0UdH1yD=LSCXFJ=YM_aiA4OomD-2wXykO42bizaWMt_HOA@mail.gmail.com/

CC: David Howells
CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
Acked-by: Andrew Morton
Reviewed-by: Alexander Duyck
---
 include/linux/gfp.h                                 |  22 ---
 include/linux/mm_types.h                            |  18 ---
 include/linux/mm_types_task.h                       |  18 +++
 include/linux/page_frag_cache.h                     |  31 ++++
 include/linux/skbuff.h                              |   1 +
 mm/Makefile                                         |   1 +
 mm/page_alloc.c                                     | 136 ----------------
 mm/page_frag_cache.c                                | 145 ++++++++++++++++++
 .../selftests/mm/page_frag/page_frag_test.c         |   2 +-
 9 files changed, 197 insertions(+), 177 deletions(-)
 create mode 100644 include/linux/page_frag_cache.h
 create mode 100644 mm/page_frag_cache.c
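For reference, a hedged usage sketch of the API as exported by the new
header (the example_nc cache and the sizes below are illustrative, not
taken from the patch):

        #include <linux/cache.h>
        #include <linux/gfp.h>
        #include <linux/page_frag_cache.h>

        static struct page_frag_cache example_nc; /* zeroed, so va == NULL */

        static void *example_alloc(void)
        {
                /* carve a 256-byte, cacheline-aligned fragment */
                return page_frag_alloc_align(&example_nc, 256, GFP_KERNEL,
                                             SMP_CACHE_BYTES);
        }

        static void example_free(void *va)
        {
                page_frag_free(va);                     /* drop one fragment */
                page_frag_cache_drain(&example_nc);     /* drop the cache's bias refs */
        }

page_frag_alloc() is the same call without the alignment requirement, and
__page_frag_alloc_align() is the workhorse that both inline wrappers
funnel into.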
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index f53f76e0b17e..01a49be7c98d 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -371,28 +371,6 @@ __meminit void *alloc_pages_exact_nid_noprof(int nid, size_t size, gfp_t gfp_mas
 extern void __free_pages(struct page *page, unsigned int order);
 extern void free_pages(unsigned long addr, unsigned int order);
 
-struct page_frag_cache;
-void page_frag_cache_drain(struct page_frag_cache *nc);
-extern void __page_frag_cache_drain(struct page *page, unsigned int count);
-void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz,
-			      gfp_t gfp_mask, unsigned int align_mask);
-
-static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
-					  unsigned int fragsz, gfp_t gfp_mask,
-					  unsigned int align)
-{
-	WARN_ON_ONCE(!is_power_of_2(align));
-	return __page_frag_alloc_align(nc, fragsz, gfp_mask, -align);
-}
-
-static inline void *page_frag_alloc(struct page_frag_cache *nc,
-				    unsigned int fragsz, gfp_t gfp_mask)
-{
-	return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
-}
-
-extern void page_frag_free(void *addr);
-
 #define __free_page(page) __free_pages((page), 0)
 #define free_page(addr) free_pages((addr), 0)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 485424979254..843d75412105 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -521,9 +521,6 @@ static_assert(sizeof(struct ptdesc) <= sizeof(struct page));
  */
 #define STRUCT_PAGE_MAX_SHIFT	(order_base_2(sizeof(struct page)))
 
-#define PAGE_FRAG_CACHE_MAX_SIZE	__ALIGN_MASK(32768, ~PAGE_MASK)
-#define PAGE_FRAG_CACHE_MAX_ORDER	get_order(PAGE_FRAG_CACHE_MAX_SIZE)
-
 /*
  * page_private can be used on tail pages. However, PagePrivate is only
  * checked by the VM on the head page. So page_private on the tail pages
@@ -542,21 +539,6 @@ static inline void *folio_get_private(struct folio *folio)
 	return folio->private;
 }
 
-struct page_frag_cache {
-	void * va;
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-	__u16 offset;
-	__u16 size;
-#else
-	__u32 offset;
-#endif
-	/* we maintain a pagecount bias, so that we dont dirty cache line
-	 * containing page->_refcount every time we allocate a fragment.
-	 */
-	unsigned int pagecnt_bias;
-	bool pfmemalloc;
-};
-
 typedef unsigned long vm_flags_t;
 
 /*

diff --git a/include/linux/mm_types_task.h b/include/linux/mm_types_task.h
index a2f6179b672b..cdc1e3696439 100644
--- a/include/linux/mm_types_task.h
+++ b/include/linux/mm_types_task.h
@@ -8,6 +8,7 @@
  * (These are defined separately to decouple sched.h from mm_types.h as much as possible.)
  */
 
+#include <linux/align.h>
 #include <...>
 #include <...>
 
@@ -46,6 +47,23 @@ struct page_frag {
 #endif
 };
 
+#define PAGE_FRAG_CACHE_MAX_SIZE	__ALIGN_MASK(32768, ~PAGE_MASK)
+#define PAGE_FRAG_CACHE_MAX_ORDER	get_order(PAGE_FRAG_CACHE_MAX_SIZE)
+struct page_frag_cache {
+	void *va;
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+	__u16 offset;
+	__u16 size;
+#else
+	__u32 offset;
+#endif
+	/* we maintain a pagecount bias, so that we dont dirty cache line
+	 * containing page->_refcount every time we allocate a fragment.
+	 */
+	unsigned int pagecnt_bias;
+	bool pfmemalloc;
+};
+
 /* Track pages that require TLB flushes */
 struct tlbflush_unmap_batch {
 #ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH

diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
new file mode 100644
index 000000000000..67ac8626ed9b
--- /dev/null
+++ b/include/linux/page_frag_cache.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _LINUX_PAGE_FRAG_CACHE_H
+#define _LINUX_PAGE_FRAG_CACHE_H
+
+#include <...>
+#include <...>
+#include <...>
+
+void page_frag_cache_drain(struct page_frag_cache *nc);
+void __page_frag_cache_drain(struct page *page, unsigned int count);
+void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz,
+			      gfp_t gfp_mask, unsigned int align_mask);
+
+static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
+					  unsigned int fragsz, gfp_t gfp_mask,
+					  unsigned int align)
+{
+	WARN_ON_ONCE(!is_power_of_2(align));
+	return __page_frag_alloc_align(nc, fragsz, gfp_mask, -align);
+}
+
+static inline void *page_frag_alloc(struct page_frag_cache *nc,
+				    unsigned int fragsz, gfp_t gfp_mask)
+{
+	return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
+}
+
+void page_frag_free(void *addr);
+
+#endif

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index cf8f6ce06742..7482997c719f 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -31,6 +31,7 @@
 #include <...>
 #include <...>
 #include <...>
+#include <linux/page_frag_cache.h>
 #include <...>
 #if IS_ENABLED(CONFIG_NF_CONNTRACK)
 #include <...>

diff --git a/mm/Makefile b/mm/Makefile
index d2915f8c9dc0..e9d342fa8058 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -65,6 +65,7 @@ page-alloc-$(CONFIG_SHUFFLE_PAGE_ALLOCATOR) += shuffle.o
 memory-hotplug-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
 
 obj-y += page-alloc.o
+obj-y += page_frag_cache.o
 obj-y += init-mm.o
 obj-y += memblock.o
 obj-y += $(memory-hotplug-y)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 91ace8ca97e2..baa19130f6d9 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4805,142 +4805,6 @@ void free_pages(unsigned long addr, unsigned int order)
 
 EXPORT_SYMBOL(free_pages);
 
-/*
- * Page Fragment:
- *  An arbitrary-length arbitrary-offset area of memory which resides
- *  within a 0 or higher order page.  Multiple fragments within that page
- *  are individually refcounted, in the page's reference counter.
- *
- * The page_frag functions below provide a simple allocation framework for
- * page fragments.  This is used by the network stack and network device
- * drivers to provide a backing region of memory for use as either an
- * sk_buff->head, or to be used in the "frags" portion of skb_shared_info.
- */
-static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
-					     gfp_t gfp_mask)
-{
-	struct page *page = NULL;
-	gfp_t gfp = gfp_mask;
-
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-	gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP |
-		   __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
-	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
-				PAGE_FRAG_CACHE_MAX_ORDER);
-	nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
-#endif
-	if (unlikely(!page))
-		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
-
-	nc->va = page ? page_address(page) : NULL;
-
-	return page;
-}
-
-void page_frag_cache_drain(struct page_frag_cache *nc)
-{
-	if (!nc->va)
-		return;
-
-	__page_frag_cache_drain(virt_to_head_page(nc->va), nc->pagecnt_bias);
-	nc->va = NULL;
-}
-EXPORT_SYMBOL(page_frag_cache_drain);
-
-void __page_frag_cache_drain(struct page *page, unsigned int count)
-{
-	VM_BUG_ON_PAGE(page_ref_count(page) == 0, page);
-
-	if (page_ref_sub_and_test(page, count))
-		free_unref_page(page, compound_order(page));
-}
-EXPORT_SYMBOL(__page_frag_cache_drain);
-
-void *__page_frag_alloc_align(struct page_frag_cache *nc,
-			      unsigned int fragsz, gfp_t gfp_mask,
-			      unsigned int align_mask)
-{
-	unsigned int size = PAGE_SIZE;
-	struct page *page;
-	int offset;
-
-	if (unlikely(!nc->va)) {
-refill:
-		page = __page_frag_cache_refill(nc, gfp_mask);
-		if (!page)
-			return NULL;
-
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-		/* if size can vary use size else just use PAGE_SIZE */
-		size = nc->size;
-#endif
-		/* Even if we own the page, we do not use atomic_set().
-		 * This would break get_page_unless_zero() users.
-		 */
-		page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
-
-		/* reset page count bias and offset to start of new frag */
-		nc->pfmemalloc = page_is_pfmemalloc(page);
-		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-		nc->offset = size;
-	}
-
-	offset = nc->offset - fragsz;
-	if (unlikely(offset < 0)) {
-		page = virt_to_page(nc->va);
-
-		if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
-			goto refill;
-
-		if (unlikely(nc->pfmemalloc)) {
-			free_unref_page(page, compound_order(page));
-			goto refill;
-		}
-
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-		/* if size can vary use size else just use PAGE_SIZE */
-		size = nc->size;
-#endif
-		/* OK, page count is 0, we can safely set it */
-		set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
-
-		/* reset page count bias and offset to start of new frag */
-		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-		offset = size - fragsz;
-		if (unlikely(offset < 0)) {
-			/*
-			 * The caller is trying to allocate a fragment
-			 * with fragsz > PAGE_SIZE but the cache isn't big
-			 * enough to satisfy the request, this may
-			 * happen in low memory conditions.
-			 * We don't release the cache page because
-			 * it could make memory pressure worse
-			 * so we simply return NULL here.
-			 */
-			return NULL;
-		}
-	}
-
-	nc->pagecnt_bias--;
-	offset &= align_mask;
-	nc->offset = offset;
-
-	return nc->va + offset;
-}
-EXPORT_SYMBOL(__page_frag_alloc_align);
-
-/*
- * Frees a page fragment allocated out of either a compound or order 0 page.
- */
-void page_frag_free(void *addr)
-{
-	struct page *page = virt_to_head_page(addr);
-
-	if (unlikely(put_page_testzero(page)))
-		free_unref_page(page, compound_order(page));
-}
-EXPORT_SYMBOL(page_frag_free);
-
 static void *make_alloc_exact(unsigned long addr, unsigned int order,
 			      size_t size)
 {

diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
new file mode 100644
index 000000000000..609a485cd02a
--- /dev/null
+++ b/mm/page_frag_cache.c
@@ -0,0 +1,145 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Page fragment allocator
+ *
+ * Page Fragment:
+ *  An arbitrary-length arbitrary-offset area of memory which resides within a
+ *  0 or higher order page.  Multiple fragments within that page are
+ *  individually refcounted, in the page's reference counter.
+ *
+ * The page_frag functions provide a simple allocation framework for page
+ * fragments.  This is used by the network stack and network device drivers to
+ * provide a backing region of memory for use as either an sk_buff->head, or to
+ * be used in the "frags" portion of skb_shared_info.
+ */
+
+#include <...>
+#include <...>
+#include <...>
+#include <...>
+#include <...>
+#include "internal.h"
+
+static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
+					     gfp_t gfp_mask)
+{
+	struct page *page = NULL;
+	gfp_t gfp = gfp_mask;
+
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+	gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP |
+		   __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
+	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
+				PAGE_FRAG_CACHE_MAX_ORDER);
+	nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
+#endif
+	if (unlikely(!page))
+		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
+
+	nc->va = page ? page_address(page) : NULL;
+
+	return page;
+}
+
+void page_frag_cache_drain(struct page_frag_cache *nc)
+{
+	if (!nc->va)
+		return;
+
+	__page_frag_cache_drain(virt_to_head_page(nc->va), nc->pagecnt_bias);
+	nc->va = NULL;
+}
+EXPORT_SYMBOL(page_frag_cache_drain);
+
+void __page_frag_cache_drain(struct page *page, unsigned int count)
+{
+	VM_BUG_ON_PAGE(page_ref_count(page) == 0, page);
+
+	if (page_ref_sub_and_test(page, count))
+		free_unref_page(page, compound_order(page));
+}
+EXPORT_SYMBOL(__page_frag_cache_drain);
+
+void *__page_frag_alloc_align(struct page_frag_cache *nc,
+			      unsigned int fragsz, gfp_t gfp_mask,
+			      unsigned int align_mask)
+{
+	unsigned int size = PAGE_SIZE;
+	struct page *page;
+	int offset;
+
+	if (unlikely(!nc->va)) {
+refill:
+		page = __page_frag_cache_refill(nc, gfp_mask);
+		if (!page)
+			return NULL;
+
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+		/* if size can vary use size else just use PAGE_SIZE */
+		size = nc->size;
+#endif
+		/* Even if we own the page, we do not use atomic_set().
+		 * This would break get_page_unless_zero() users.
+		 */
+		page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
+
+		/* reset page count bias and offset to start of new frag */
+		nc->pfmemalloc = page_is_pfmemalloc(page);
+		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
+		nc->offset = size;
+	}
+
+	offset = nc->offset - fragsz;
+	if (unlikely(offset < 0)) {
+		page = virt_to_page(nc->va);
+
+		if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
+			goto refill;
+
+		if (unlikely(nc->pfmemalloc)) {
+			free_unref_page(page, compound_order(page));
+			goto refill;
+		}
+
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+		/* if size can vary use size else just use PAGE_SIZE */
+		size = nc->size;
+#endif
+		/* OK, page count is 0, we can safely set it */
+		set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
+
+		/* reset page count bias and offset to start of new frag */
+		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
+		offset = size - fragsz;
+		if (unlikely(offset < 0)) {
+			/*
+			 * The caller is trying to allocate a fragment
+			 * with fragsz > PAGE_SIZE but the cache isn't big
+			 * enough to satisfy the request, this may
+			 * happen in low memory conditions.
+			 * We don't release the cache page because
+			 * it could make memory pressure worse
+			 * so we simply return NULL here.
+			 */
+			return NULL;
+		}
+	}
+
+	nc->pagecnt_bias--;
+	offset &= align_mask;
+	nc->offset = offset;
+
+	return nc->va + offset;
+}
+EXPORT_SYMBOL(__page_frag_alloc_align);
+
+/*
+ * Frees a page fragment allocated out of either a compound or order 0 page.
+ */
+void page_frag_free(void *addr)
+{
+	struct page *page = virt_to_head_page(addr);
+
+	if (unlikely(put_page_testzero(page)))
+		free_unref_page(page, compound_order(page));
+}
+EXPORT_SYMBOL(page_frag_free);

diff --git a/tools/testing/selftests/mm/page_frag/page_frag_test.c b/tools/testing/selftests/mm/page_frag/page_frag_test.c
index 6d6f31936b10..5395a36e4030 100644
--- a/tools/testing/selftests/mm/page_frag/page_frag_test.c
+++ b/tools/testing/selftests/mm/page_frag/page_frag_test.c
@@ -6,12 +6,12 @@
  * Copyright (C) 2024 Yunsheng Lin
  */
 
-#include <...>
 #include <...>
 #include <...>
 #include <...>
 #include <...>
 #include <...>
+#include <linux/page_frag_cache.h>
 
 static struct ptr_ring ptr_ring;
 static int nr_objs = 512;
From patchwork Fri Sep 6 07:36:35 2024
From: Yunsheng Lin <linyunsheng@huawei.com>
X-Patchwork-Id: 13793529
Subject: [PATCH net-next v18 03/14] mm: page_frag: use initial zero offset for page_frag_alloc_align()
Date: Fri, 6 Sep 2024 15:36:35 +0800
Message-ID: <20240906073646.2930809-4-linyunsheng@huawei.com>
In-Reply-To: <20240906073646.2930809-1-linyunsheng@huawei.com>
References: <20240906073646.2930809-1-linyunsheng@huawei.com>
Cc: Alexander Duyck, Andrew Morton

We are about to use the page_frag_alloc_*() APIs not just to allocate
memory for skb->data, but also to do the memory allocation for skb frags.
Currently the page_frag implementation in the mm subsystem runs the
offset as a countdown rather than a count-up value. That has several
possible advantages, as mentioned in [1], but it also has some
disadvantages: for example, it may defeat skb frag coalescing and correct
cache prefetching.

We have a trade-off to make in order to have a unified implementation and
API for page_frag, so use an initial zero offset in this patch; the
following patch will try to optimize away the disadvantages as much as
possible.

1. https://lore.kernel.org/all/f4abe71b3439b39d17a6fb2d410180f367cadf5c.camel@gmail.com/
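A worked example of the new count-up arithmetic (values illustrative):
page_frag_alloc_align() passes align_mask = -align, so ~align_mask equals
align - 1 and __ALIGN_KERNEL_MASK() rounds the current offset up to the
next aligned boundary before the fragment is carved out:

        #include <linux/align.h>

        /* nc->offset == 100 after earlier allocations */
        unsigned int offset = 100;
        unsigned int align_mask = -64;  /* from page_frag_alloc_align(.., 64) */
        unsigned int fragsz = 32;

        offset = __ALIGN_KERNEL_MASK(offset, ~align_mask);
        /* (100 + 63) & ~63 == 128: the fragment occupies [128, 160) */
        /* nc->offset then advances to offset + fragsz == 160 */

With the old countdown scheme the same request would have been carved
from the end of the page, so consecutively allocated fragments ran
backwards in memory, which is what the changelog means by disabled skb
frag coalescing.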
CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
Reviewed-by: Alexander Duyck
---
 mm/page_frag_cache.c | 46 +++++++++++++++++++++++-----------------------
 1 file changed, 23 insertions(+), 23 deletions(-)

diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index 609a485cd02a..4c8e04379cb3 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -63,9 +63,13 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
 			      unsigned int fragsz, gfp_t gfp_mask,
 			      unsigned int align_mask)
 {
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+	unsigned int size = nc->size;
+#else
 	unsigned int size = PAGE_SIZE;
+#endif
+	unsigned int offset;
 	struct page *page;
-	int offset;
 
 	if (unlikely(!nc->va)) {
 refill:
@@ -85,11 +89,24 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
 		/* reset page count bias and offset to start of new frag */
 		nc->pfmemalloc = page_is_pfmemalloc(page);
 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-		nc->offset = size;
+		nc->offset = 0;
 	}
 
-	offset = nc->offset - fragsz;
-	if (unlikely(offset < 0)) {
+	offset = __ALIGN_KERNEL_MASK(nc->offset, ~align_mask);
+	if (unlikely(offset + fragsz > size)) {
+		if (unlikely(fragsz > PAGE_SIZE)) {
+			/*
+			 * The caller is trying to allocate a fragment
+			 * with fragsz > PAGE_SIZE but the cache isn't big
+			 * enough to satisfy the request, this may
+			 * happen in low memory conditions.
+			 * We don't release the cache page because
+			 * it could make memory pressure worse
+			 * so we simply return NULL here.
+			 */
+			return NULL;
+		}
+
 		page = virt_to_page(nc->va);
 
 		if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
@@ -100,33 +117,16 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
 			goto refill;
 		}
 
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-		/* if size can vary use size else just use PAGE_SIZE */
-		size = nc->size;
-#endif
 		/* OK, page count is 0, we can safely set it */
 		set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
 
 		/* reset page count bias and offset to start of new frag */
 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-		offset = size - fragsz;
-		if (unlikely(offset < 0)) {
-			/*
-			 * The caller is trying to allocate a fragment
-			 * with fragsz > PAGE_SIZE but the cache isn't big
-			 * enough to satisfy the request, this may
-			 * happen in low memory conditions.
-			 * We don't release the cache page because
-			 * it could make memory pressure worse
-			 * so we simply return NULL here.
-			 */
-			return NULL;
-		}
+		offset = 0;
 	}
 
 	nc->pagecnt_bias--;
-	offset &= align_mask;
-	nc->offset = offset;
+	nc->offset = offset + fragsz;
 
 	return nc->va + offset;
 }
From patchwork Fri Sep 6 07:36:36 2024
From: Yunsheng Lin <linyunsheng@huawei.com>
X-Patchwork-Id: 13793530
Subject: [PATCH net-next v18 04/14] mm: page_frag: avoid caller accessing 'page_frag_cache' directly
Date: Fri, 6 Sep 2024 15:36:36 +0800
Message-ID: <20240906073646.2930809-5-linyunsheng@huawei.com>
In-Reply-To: <20240906073646.2930809-1-linyunsheng@huawei.com>
References: <20240906073646.2930809-1-linyunsheng@huawei.com>
Cc: Alexander Duyck, Chuck Lever, Michael S. Tsirkin, Jason Wang,
    Eugenio Pérez, Andrew Morton, Eric Dumazet, David Howells,
    Marc Dionne, Jeff Layton, Neil Brown, Olga Kornievskaia, Dai Ngo,
    Tom Talpey, Trond Myklebust, Anna Schumaker, Shuah Khan

Use the appropriate page_frag API instead of having callers access
'page_frag_cache' internals directly.
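A hedged before/after sketch of what this means for a caller ('cache' is
an illustrative instance, not taken from the patch):

        #include <linux/page_frag_cache.h>

        static struct page_frag_cache cache;

        static void example(void)
        {
                page_frag_cache_init(&cache);   /* was: cache.va = NULL; */

                /* was: pfmemalloc = cache.pfmemalloc; */
                if (page_frag_cache_is_pfmemalloc(&cache))
                        ;       /* propagate to skb->pfmemalloc, as skbuff.c does */

                /* was: open-coded __page_frag_cache_drain() on cache.va */
                page_frag_cache_drain(&cache);
        }

Keeping callers behind these helpers is what lets the later patches
change the struct's internal layout without touching every user again.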
CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
Reviewed-by: Alexander Duyck
Acked-by: Chuck Lever
---
 drivers/vhost/net.c                                   |  2 +-
 include/linux/page_frag_cache.h                       | 10 ++++++++++
 net/core/skbuff.c                                     |  6 +++---
 net/rxrpc/conn_object.c                               |  4 +---
 net/rxrpc/local_object.c                              |  4 +---
 net/sunrpc/svcsock.c                                  |  6 ++----
 tools/testing/selftests/mm/page_frag/page_frag_test.c |  2 +-
 7 files changed, 19 insertions(+), 15 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index f16279351db5..9ad37c012189 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -1325,7 +1325,7 @@ static int vhost_net_open(struct inode *inode, struct file *f)
 		       vqs[VHOST_NET_VQ_RX]);
 
 	f->private_data = n;
-	n->pf_cache.va = NULL;
+	page_frag_cache_init(&n->pf_cache);
 
 	return 0;
 }

diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index 67ac8626ed9b..0a52f7a179c8 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -7,6 +7,16 @@
 #include <...>
 #include <...>
 
+static inline void page_frag_cache_init(struct page_frag_cache *nc)
+{
+	nc->va = NULL;
+}
+
+static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)
+{
+	return !!nc->pfmemalloc;
+}
+
 void page_frag_cache_drain(struct page_frag_cache *nc);
 void __page_frag_cache_drain(struct page *page, unsigned int count);
 void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz,

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index a52638363ea5..a5f8e4e0c649 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -752,14 +752,14 @@ struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int len,
 	if (in_hardirq() || irqs_disabled()) {
 		nc = this_cpu_ptr(&netdev_alloc_cache);
 		data = page_frag_alloc(nc, len, gfp_mask);
-		pfmemalloc = nc->pfmemalloc;
+		pfmemalloc = page_frag_cache_is_pfmemalloc(nc);
 	} else {
 		local_bh_disable();
 		local_lock_nested_bh(&napi_alloc_cache.bh_lock);
 
 		nc = this_cpu_ptr(&napi_alloc_cache.page);
 		data = page_frag_alloc(nc, len, gfp_mask);
-		pfmemalloc = nc->pfmemalloc;
+		pfmemalloc = page_frag_cache_is_pfmemalloc(nc);
 
 		local_unlock_nested_bh(&napi_alloc_cache.bh_lock);
 		local_bh_enable();
@@ -849,7 +849,7 @@ struct sk_buff *napi_alloc_skb(struct napi_struct *napi, unsigned int len)
 		len = SKB_HEAD_ALIGN(len);
 
 		data = page_frag_alloc(&nc->page, len, gfp_mask);
-		pfmemalloc = nc->page.pfmemalloc;
+		pfmemalloc = page_frag_cache_is_pfmemalloc(&nc->page);
 	}
 	local_unlock_nested_bh(&napi_alloc_cache.bh_lock);

diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c
index 1539d315afe7..694c4df7a1a3 100644
--- a/net/rxrpc/conn_object.c
+++ b/net/rxrpc/conn_object.c
@@ -337,9 +337,7 @@ static void rxrpc_clean_up_connection(struct work_struct *work)
 	 */
 	rxrpc_purge_queue(&conn->rx_queue);
 
-	if (conn->tx_data_alloc.va)
-		__page_frag_cache_drain(virt_to_page(conn->tx_data_alloc.va),
-					conn->tx_data_alloc.pagecnt_bias);
+	page_frag_cache_drain(&conn->tx_data_alloc);
 	call_rcu(&conn->rcu, rxrpc_rcu_free_connection);
 }

diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c
index 504453c688d7..a8cffe47cf01 100644
--- a/net/rxrpc/local_object.c
+++ b/net/rxrpc/local_object.c
@@ -452,9 +452,7 @@ void rxrpc_destroy_local(struct rxrpc_local *local)
 #endif
 	rxrpc_purge_queue(&local->rx_queue);
 	rxrpc_purge_client_connections(local);
-	if (local->tx_alloc.va)
-		__page_frag_cache_drain(virt_to_page(local->tx_alloc.va),
-					local->tx_alloc.pagecnt_bias);
+	page_frag_cache_drain(&local->tx_alloc);
 }
 
 /*

diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index 6b3f01beb294..dcfd84cf0694 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -1609,7 +1609,6 @@ static void svc_tcp_sock_detach(struct svc_xprt *xprt)
 static void svc_sock_free(struct svc_xprt *xprt)
 {
 	struct svc_sock *svsk = container_of(xprt, struct svc_sock, sk_xprt);
-	struct page_frag_cache *pfc = &svsk->sk_frag_cache;
 	struct socket *sock = svsk->sk_sock;
 
 	trace_svcsock_free(svsk, sock);
@@ -1619,8 +1618,7 @@ static void svc_sock_free(struct svc_xprt *xprt)
 		sockfd_put(sock);
 	else
 		sock_release(sock);
-	if (pfc->va)
-		__page_frag_cache_drain(virt_to_head_page(pfc->va),
-					pfc->pagecnt_bias);
+
+	page_frag_cache_drain(&svsk->sk_frag_cache);
 	kfree(svsk);
 }

diff --git a/tools/testing/selftests/mm/page_frag/page_frag_test.c b/tools/testing/selftests/mm/page_frag/page_frag_test.c
index 5395a36e4030..a4bd543d6950 100644
--- a/tools/testing/selftests/mm/page_frag/page_frag_test.c
+++ b/tools/testing/selftests/mm/page_frag/page_frag_test.c
@@ -117,7 +117,7 @@ static int __init page_frag_test_init(void)
 	u64 duration;
 	int ret;
 
-	test_nc.va = NULL;
+	page_frag_cache_init(&test_nc);
 	atomic_set(&nthreads, 2);
 	init_completion(&wait);
h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=HLX9bEEtBxA2j3IE9QbpZ1DKN3jZIEZd8xQgchcf6SI=; b=5WjKp8KApDvisJ2DdDZYtY90/xogswyTn8YTmXKLPw6vrYGEmGb+hDjBZW/23+4LhihIYj 7CSoiTZtqHuBDQDyqIXH6M8KDPPi4hRWL9ArMlgTZbcCGaSj4c4df7FNyCV8ISlNJuJ/jk G/mFEpfErUbZ/pEOzVLZOfrbNduRJ9Q= Received: from mail.maildlp.com (unknown [172.19.88.214]) by szxga04-in.huawei.com (SkyGuard) with ESMTP id 4X0Sqm5fm8z2DbqN; Fri, 6 Sep 2024 15:42:28 +0800 (CST) Received: from dggpemf200006.china.huawei.com (unknown [7.185.36.61]) by mail.maildlp.com (Postfix) with ESMTPS id 88E921A016C; Fri, 6 Sep 2024 15:42:51 +0800 (CST) Received: from localhost.localdomain (10.90.30.45) by dggpemf200006.china.huawei.com (7.185.36.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.11; Fri, 6 Sep 2024 15:42:51 +0800 From: Yunsheng Lin To: , , CC: , , Yunsheng Lin , Alexander Duyck , Andrew Morton , Subject: [PATCH net-next v18 06/14] mm: page_frag: reuse existing space for 'size' and 'pfmemalloc' Date: Fri, 6 Sep 2024 15:36:38 +0800 Message-ID: <20240906073646.2930809-7-linyunsheng@huawei.com> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20240906073646.2930809-1-linyunsheng@huawei.com> References: <20240906073646.2930809-1-linyunsheng@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.90.30.45] X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To dggpemf200006.china.huawei.com (7.185.36.61) X-Rspam-User: X-Rspamd-Queue-Id: C3CED40018 X-Rspamd-Server: rspam01 X-Stat-Signature: hdug99dm1ycf5tty1gmz6ta645qwccjn X-HE-Tag: 1725608574-403035 X-HE-Meta: U2FsdGVkX18zgGAwHrqBOFchBJ0WEF/Fl/2a5JOLG/02N3XmMTSh/UiBwrqWloz6jvTzsBH+FfrhO/BmJgAodSJ434Is3HMAnfanUGELiXg6LRVQVu37IBrGlWU5Ec5//iksQ+TxayLSOPvxGofOwyQtR6msr+1Omsjv8gdcABcR3aRmF4KLw+aBbQi+NeRAP1Y48iacbDm+4glwnxT0zRaM0vzigsGyfaPweNdNsQ+IjGb3uQSHgut4D3X4pGQWU6JEcMbU4dBkBRDI1wGwQYYgoI+pyu0F9ImlCW70JCvUyIOg6IYAXhj04/ZwTgZZsjg0jhHNGngYvxKsDNMpqAGvdj0kQMDdw2jjblrOsceaXlybOMLQd4NY6jJQN9tmeqslOPkZ8z4VWvi9SUParjBI2hbL96rhmNvTAmPr/dfLuJpdjfLP6a1Jpgy7vub/khRCJxEL61+2Ca7OIM6mR0n7uX21OqArs7P4/0I6ieonSgrjS4s0Zr2mxGWaC8tIL24t7NNFVBYKKAq4+Z9vDy8kSWErp2QDWZ1IMsOWEByer7EnIZKwbw5ARK1Qcn9lhwzIQskMT7fO1eyDoxulft2rQIQI+oQP3ymlEu3ka13hVp/Vh3LVEVlp4UxWIGcBfLdU3bKIIabqoET95qXIEYha0jupJCsN3ibR/TYSJ1Jq1zk2aseI3AA1xZ2vdjqXlAlQFHqGqz76LdGIm0ixaXg/AN1u/W02NaFrjQ4oC2/KGIcM/7GmUqP6+MN6tU/FGzZUpGPK80WHEE6LrRzUEboXkHvg4G2Dx3LZp/xuu6GTVuuEar6OtXGmol6DquQwbFY7ZxfwXdnrYXQ86b4lqoZdQfXH7n5q0itm2qQkb7+L8QG+Y/rUMVK9bLKl59zJ1IGhNUABzP1QlrKsK6gNfnOmVN8XQntmqax52/1Yz0f5QRqAzdCfmU7GZEuaWNlIuMQ3z4aIHsTV8mhslu+ XmjBUd4h K3coKmddAjjd4JOttAQ0yhQPkBtCI2reEEOdi4mUy3zVvOdjV4QuHQHXLWgrcay754MKsdL4fWPfLrQAHAUzDS2+zTsJAGbDUARU5oXes1798Cex1soyRKhvc6NMJw2XoswxoA9vbAwk5j9qpO80PRjH5yb/Kakg6SUFoZl0MP++w2uuI6zGsEIPYi8RxpuM2F2Uu4gxhgZkKsHsXPXVdSSkCpQKSEjTJwnxRS4DXP+LEU4NdzV5WchxTJVTtzqeQxqf6SB1qHeQ01l0lWQn5zNVtaFzJvtUyKTrg X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Currently there is one 'struct page_frag' for every 'struct sock' and 'struct task_struct', we are about to replace the 'struct page_frag' with 'struct page_frag_cache' for them. 
Before beginning the replacement, we need to ensure that the size of 'struct page_frag_cache' is not bigger than the size of 'struct page_frag', as there may be tens of thousands of 'struct sock' and 'struct task_struct' instances in the system. By OR'ing the page order and the pfmemalloc bit into the lower bits of 'va', instead of using 'u16' or 'u32' for the page size and 'u8' for pfmemalloc, we avoid wasting 3 or 5 bytes of space. And as the page address, pfmemalloc bit and order are unchanged for the same page in the same 'page_frag_cache' instance, it makes sense to fit them together. After this patch, the size of 'struct page_frag_cache' should be the same as the size of 'struct page_frag'.

CC: Alexander Duyck Signed-off-by: Yunsheng Lin --- include/linux/mm_types_task.h | 19 +++++---- include/linux/page_frag_cache.h | 26 +++++++++++- mm/page_frag_cache.c | 75 +++++++++++++++++++++++---------- 3 files changed, 88 insertions(+), 32 deletions(-) diff --git a/include/linux/mm_types_task.h b/include/linux/mm_types_task.h index cdc1e3696439..73a574a0e8f9 100644 --- a/include/linux/mm_types_task.h +++ b/include/linux/mm_types_task.h @@ -50,18 +50,21 @@ struct page_frag { #define PAGE_FRAG_CACHE_MAX_SIZE __ALIGN_MASK(32768, ~PAGE_MASK) #define PAGE_FRAG_CACHE_MAX_ORDER get_order(PAGE_FRAG_CACHE_MAX_SIZE) struct page_frag_cache { - void *va; -#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) + /* encoded_page consists of the virtual address, pfmemalloc bit and + * order of a page. + */ + unsigned long encoded_page; + + /* we maintain a pagecount bias, so that we dont dirty cache line + * containing page->_refcount every time we allocate a fragment. + */ +#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) && (BITS_PER_LONG <= 32) __u16 offset; - __u16 size; + __u16 pagecnt_bias; #else __u32 offset; + __u32 pagecnt_bias; #endif - /* we maintain a pagecount bias, so that we dont dirty cache line - * containing page->_refcount every time we allocate a fragment. - */ - unsigned int pagecnt_bias; - bool pfmemalloc; }; /* Track pages that require TLB flushes */ diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h index 0a52f7a179c8..75aaad6eaea2 100644 --- a/include/linux/page_frag_cache.h +++ b/include/linux/page_frag_cache.h @@ -3,18 +3,40 @@ #ifndef _LINUX_PAGE_FRAG_CACHE_H #define _LINUX_PAGE_FRAG_CACHE_H +#include #include #include #include +#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) +/* Use a full byte here to enable assembler optimization as the shift + * operation is usually expecting a byte. + */ +#define PAGE_FRAG_CACHE_ORDER_MASK GENMASK(7, 0) +#define PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT 8 +#define PAGE_FRAG_CACHE_PFMEMALLOC_BIT BIT(PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT) +#else +/* Compiler should be able to figure out we don't read things as any value + * ANDed with 0 is 0.
+ */ +#define PAGE_FRAG_CACHE_ORDER_MASK 0 +#define PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT 0 +#define PAGE_FRAG_CACHE_PFMEMALLOC_BIT BIT(PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT) +#endif + +static inline bool page_frag_encoded_page_pfmemalloc(unsigned long encoded_page) +{ + return !!(encoded_page & PAGE_FRAG_CACHE_PFMEMALLOC_BIT); +} + static inline void page_frag_cache_init(struct page_frag_cache *nc) { - nc->va = NULL; + nc->encoded_page = 0; } static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc) { - return !!nc->pfmemalloc; + return page_frag_encoded_page_pfmemalloc(nc->encoded_page); } void page_frag_cache_drain(struct page_frag_cache *nc); diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c index 4c8e04379cb3..cf9375a81a64 100644 --- a/mm/page_frag_cache.c +++ b/mm/page_frag_cache.c @@ -12,6 +12,7 @@ * be used in the "frags" portion of skb_shared_info. */ +#include #include #include #include @@ -19,9 +20,41 @@ #include #include "internal.h" +static unsigned long page_frag_encode_page(struct page *page, unsigned int order, + bool pfmemalloc) +{ + BUILD_BUG_ON(PAGE_FRAG_CACHE_MAX_ORDER > PAGE_FRAG_CACHE_ORDER_MASK); + BUILD_BUG_ON(PAGE_FRAG_CACHE_PFMEMALLOC_BIT >= PAGE_SIZE); + + return (unsigned long)page_address(page) | + (order & PAGE_FRAG_CACHE_ORDER_MASK) | + ((unsigned long)pfmemalloc << PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT); +} + +static unsigned long page_frag_encoded_page_order(unsigned long encoded_page) +{ + return encoded_page & PAGE_FRAG_CACHE_ORDER_MASK; +} + +static void *page_frag_encoded_page_address(unsigned long encoded_page) +{ + return (void *)(encoded_page & PAGE_MASK); +} + +static struct page *page_frag_encoded_page_ptr(unsigned long encoded_page) +{ + return virt_to_page((void *)encoded_page); +} + +static unsigned int page_frag_cache_page_size(unsigned long encoded_page) +{ + return PAGE_SIZE << page_frag_encoded_page_order(encoded_page); +} + static struct page *__page_frag_cache_refill(struct page_frag_cache *nc, gfp_t gfp_mask) { + unsigned long order = PAGE_FRAG_CACHE_MAX_ORDER; struct page *page = NULL; gfp_t gfp = gfp_mask; @@ -30,23 +63,26 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc, __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC; page = alloc_pages_node(NUMA_NO_NODE, gfp_mask, PAGE_FRAG_CACHE_MAX_ORDER); - nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE; #endif - if (unlikely(!page)) + if (unlikely(!page)) { page = alloc_pages_node(NUMA_NO_NODE, gfp, 0); + order = 0; + } - nc->va = page ? page_address(page) : NULL; + nc->encoded_page = page ? 
+ page_frag_encode_page(page, order, page_is_pfmemalloc(page)) : 0; return page; } void page_frag_cache_drain(struct page_frag_cache *nc) { - if (!nc->va) + if (!nc->encoded_page) return; - __page_frag_cache_drain(virt_to_head_page(nc->va), nc->pagecnt_bias); - nc->va = NULL; + __page_frag_cache_drain(page_frag_encoded_page_ptr(nc->encoded_page), + nc->pagecnt_bias); + nc->encoded_page = 0; } EXPORT_SYMBOL(page_frag_cache_drain); @@ -63,35 +99,29 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz, gfp_t gfp_mask, unsigned int align_mask) { -#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) - unsigned int size = nc->size; -#else - unsigned int size = PAGE_SIZE; -#endif - unsigned int offset; + unsigned long encoded_page = nc->encoded_page; + unsigned int size, offset; struct page *page; - if (unlikely(!nc->va)) { + if (unlikely(!encoded_page)) { refill: page = __page_frag_cache_refill(nc, gfp_mask); if (!page) return NULL; -#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) - /* if size can vary use size else just use PAGE_SIZE */ - size = nc->size; -#endif + encoded_page = nc->encoded_page; + /* Even if we own the page, we do not use atomic_set(). * This would break get_page_unless_zero() users. */ page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE); /* reset page count bias and offset to start of new frag */ - nc->pfmemalloc = page_is_pfmemalloc(page); nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; nc->offset = 0; } + size = page_frag_cache_page_size(encoded_page); offset = __ALIGN_KERNEL_MASK(nc->offset, ~align_mask); if (unlikely(offset + fragsz > size)) { if (unlikely(fragsz > PAGE_SIZE)) { @@ -107,13 +137,14 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc, return NULL; } - page = virt_to_page(nc->va); + page = page_frag_encoded_page_ptr(encoded_page); if (!page_ref_sub_and_test(page, nc->pagecnt_bias)) goto refill; - if (unlikely(nc->pfmemalloc)) { - free_unref_page(page, compound_order(page)); + if (unlikely(page_frag_encoded_page_pfmemalloc(encoded_page))) { + free_unref_page(page, + page_frag_encoded_page_order(encoded_page)); goto refill; } @@ -128,7 +159,7 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc, nc->pagecnt_bias--; nc->offset = offset + fragsz; - return nc->va + offset; + return page_frag_encoded_page_address(encoded_page) + offset; } EXPORT_SYMBOL(__page_frag_alloc_align);
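To make the bit-packing above concrete, here is a minimal user-space sketch of the encoded_page layout this patch introduces, assuming a 4K PAGE_SIZE for illustration; the constants mirror the kernel ones (which use GENMASK()/BIT()), but this is a standalone illustration, not kernel code:

```c
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SHIFT 12			/* assuming 4K pages for illustration */
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))

#define ORDER_MASK      0xffUL		/* mirrors GENMASK(7, 0) */
#define PFMEMALLOC_BIT  (1UL << 8)	/* mirrors BIT(PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT) */

/* a page-aligned va leaves the PAGE_SHIFT low bits free for order + pfmemalloc */
static unsigned long encode(void *va, unsigned int order, bool pfmemalloc)
{
	return (unsigned long)va | (order & ORDER_MASK) |
	       ((unsigned long)pfmemalloc << 8);
}

int main(void)
{
	void *va = (void *)0x7f0000000000UL;	/* hypothetical page address */
	unsigned long ep = encode(va, 3, true);

	printf("address:    %#lx\n", ep & PAGE_MASK);	/* decoding is just masking */
	printf("order:      %lu\n", ep & ORDER_MASK);
	printf("pfmemalloc: %d\n", !!(ep & PFMEMALLOC_BIT));
	return 0;
}
```

The decode helpers in the diff above (page_frag_encoded_page_address(), page_frag_encoded_page_order() and page_frag_encoded_page_pfmemalloc()) are exactly these mask operations.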
From patchwork Fri Sep 6 07:36:39 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13793532
From: Yunsheng Lin
Subject: [PATCH net-next v18 07/14] mm: page_frag: some minor refactoring before adding new API
Date: Fri, 6 Sep 2024 15:36:39 +0800
Message-ID: <20240906073646.2930809-8-linyunsheng@huawei.com>
In-Reply-To: <20240906073646.2930809-1-linyunsheng@huawei.com>
Refactor the common code from __page_frag_alloc_align() into __page_frag_cache_prepare() and __page_frag_cache_commit(), so that the new APIs can make use of them.

CC: Alexander Duyck Signed-off-by: Yunsheng Lin --- include/linux/page_frag_cache.h | 36 +++++++++++++++++++++++++++-- mm/page_frag_cache.c | 40 ++++++++++++++++++++++++++------- 2 files changed, 66 insertions(+), 10 deletions(-) diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h index 75aaad6eaea2..b634e1338741 100644 --- a/include/linux/page_frag_cache.h +++ b/include/linux/page_frag_cache.h @@ -5,6 +5,7 @@ #include #include +#include #include #include @@ -41,8 +42,39 @@ static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc) void page_frag_cache_drain(struct page_frag_cache *nc); void __page_frag_cache_drain(struct page *page, unsigned int count); -void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz, - gfp_t gfp_mask, unsigned int align_mask); +void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz, + struct page_frag *pfrag, gfp_t gfp_mask, + unsigned int align_mask); +unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc, + struct page_frag *pfrag, + unsigned int used_sz); + +static inline unsigned int __page_frag_cache_commit(struct page_frag_cache *nc, + struct page_frag *pfrag, + unsigned int used_sz) +{ + VM_BUG_ON(!nc->pagecnt_bias); + nc->pagecnt_bias--; + + return __page_frag_cache_commit_noref(nc, pfrag, used_sz); +} + +static inline void *__page_frag_alloc_align(struct page_frag_cache *nc, + unsigned int fragsz, gfp_t gfp_mask, + unsigned int align_mask) +{ + struct page_frag page_frag; + void *va; + + va = __page_frag_cache_prepare(nc, fragsz, &page_frag, gfp_mask, + align_mask); + if (unlikely(!va)) + return NULL; + + __page_frag_cache_commit(nc, &page_frag, fragsz); + + return va; +} static inline void *page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz, gfp_t gfp_mask, diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c index
cf9375a81a64..6f6e47bbdc8d 100644 --- a/mm/page_frag_cache.c +++ b/mm/page_frag_cache.c @@ -95,9 +95,31 @@ void __page_frag_cache_drain(struct page *page, unsigned int count) } EXPORT_SYMBOL(__page_frag_cache_drain); -void *__page_frag_alloc_align(struct page_frag_cache *nc, - unsigned int fragsz, gfp_t gfp_mask, - unsigned int align_mask) +unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc, + struct page_frag *pfrag, + unsigned int used_sz) +{ + unsigned int orig_offset; + + VM_BUG_ON(used_sz > pfrag->size); + VM_BUG_ON(pfrag->page != page_frag_encoded_page_ptr(nc->encoded_page)); + VM_BUG_ON(pfrag->offset + pfrag->size > + page_frag_cache_page_size(nc->encoded_page)); + + /* pfrag->offset might be bigger than the nc->offset due to alignment */ + VM_BUG_ON(nc->offset > pfrag->offset); + + orig_offset = nc->offset; + nc->offset = pfrag->offset + used_sz; + + /* Return true size back to caller considering the offset alignment */ + return nc->offset - orig_offset; +} +EXPORT_SYMBOL(__page_frag_cache_commit_noref); + +void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz, + struct page_frag *pfrag, gfp_t gfp_mask, + unsigned int align_mask) { unsigned long encoded_page = nc->encoded_page; unsigned int size, offset; @@ -119,6 +141,8 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc, /* reset page count bias and offset to start of new frag */ nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; nc->offset = 0; + } else { + page = page_frag_encoded_page_ptr(encoded_page); } size = page_frag_cache_page_size(encoded_page); @@ -137,8 +161,6 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc, return NULL; } - page = page_frag_encoded_page_ptr(encoded_page); - if (!page_ref_sub_and_test(page, nc->pagecnt_bias)) goto refill; @@ -153,15 +175,17 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc, /* reset page count bias and offset to start of new frag */ nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; + nc->offset = 0; offset = 0; } - nc->pagecnt_bias--; - nc->offset = offset + fragsz; + pfrag->page = page; + pfrag->offset = offset; + pfrag->size = size - offset; return page_frag_encoded_page_address(encoded_page) + offset; } -EXPORT_SYMBOL(__page_frag_alloc_align); +EXPORT_SYMBOL(__page_frag_cache_prepare); /* * Frees a page fragment allocated out of either a compound or order 0 page. 
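As a rough illustration of the contract this refactoring establishes, the sketch below shows a hypothetical caller reserving space with __page_frag_cache_prepare() and then reporting the actually consumed size with __page_frag_cache_commit(); the helper copy_some() and its sizing policy are made up for illustration and are not part of the patch:

```c
#include <linux/errno.h>
#include <linux/minmax.h>
#include <linux/page_frag_cache.h>
#include <linux/string.h>

/*
 * Hedged sketch: prepare reserves at least min_sz bytes and describes the
 * whole usable tail of the page in pfrag; commit reports back how much of
 * that reservation was actually consumed, so the rest stays available.
 */
static int copy_some(struct page_frag_cache *nc, const void *src,
		     unsigned int src_sz, unsigned int min_sz, gfp_t gfp)
{
	struct page_frag pfrag;
	unsigned int used;
	void *va;

	va = __page_frag_cache_prepare(nc, min_sz, &pfrag, gfp, ~0U);
	if (!va)
		return -ENOMEM;

	/* pfrag.size may well be larger than min_sz; use whatever fits */
	used = min(pfrag.size, src_sz);
	memcpy(va, src, used);

	/* takes one pagecnt_bias reference and advances nc->offset */
	__page_frag_cache_commit(nc, &pfrag, used);

	return used;
}
```

Committing less than was prepared is the whole point of the split: the remainder of the fragment is handed out by the next prepare call.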
From patchwork Fri Sep 6 07:36:40 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13793533
From: Yunsheng Lin
Subject: [PATCH net-next v18 08/14] mm: page_frag: use __alloc_pages() to replace alloc_pages_node()
Date: Fri, 6 Sep 2024 15:36:40 +0800
Message-ID: <20240906073646.2930809-9-linyunsheng@huawei.com>
In-Reply-To: <20240906073646.2930809-1-linyunsheng@huawei.com>

There is about a 24-byte binary size increase for __page_frag_cache_refill() after the refactoring on an arm64 system with 64K PAGE_SIZE. Going by the gdb disassembly, it seems we can get a decrease of more than 100 bytes in binary size by using __alloc_pages() to replace alloc_pages_node(), as the latter does some unnecessary checking for nid being NUMA_NO_NODE, which is avoidable here, especially as page_frag is part of the mm system.
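For context, before the diff, here is a paraphrased (not verbatim) sketch of the helper being bypassed: alloc_pages_node() has to resolve NUMA_NO_NODE at runtime, while a direct __alloc_pages() call passes an already-resolved node:

```c
#include <linux/gfp.h>
#include <linux/numa.h>
#include <linux/topology.h>

/* paraphrased sketch of the generic helper; not the exact in-tree code */
static inline struct page *alloc_pages_node_sketch(int nid, gfp_t gfp,
						   unsigned int order)
{
	if (nid == NUMA_NO_NODE)	/* the runtime branch the patch avoids */
		nid = numa_mem_id();

	return __alloc_pages(gfp, order, nid, NULL);
}

/* the patch instead resolves the node once at the call site */
static struct page *refill_sketch(gfp_t gfp_mask, unsigned int order)
{
	return __alloc_pages(gfp_mask, order, numa_mem_id(), NULL);
}
```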
CC: Alexander Duyck Signed-off-by: Yunsheng Lin --- mm/page_frag_cache.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c index 6f6e47bbdc8d..a5448b44068a 100644 --- a/mm/page_frag_cache.c +++ b/mm/page_frag_cache.c @@ -61,11 +61,11 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc, #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC; - page = alloc_pages_node(NUMA_NO_NODE, gfp_mask, - PAGE_FRAG_CACHE_MAX_ORDER); + page = __alloc_pages(gfp_mask, PAGE_FRAG_CACHE_MAX_ORDER, + numa_mem_id(), NULL); #endif if (unlikely(!page)) { - page = alloc_pages_node(NUMA_NO_NODE, gfp, 0); + page = __alloc_pages(gfp, 0, numa_mem_id(), NULL); order = 0; }
From patchwork Fri Sep 6 07:36:42 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13793534
From: Yunsheng Lin
Subject: [PATCH net-next v18 10/14] mm: page_frag: introduce prepare/probe/commit API
Date: Fri, 6 Sep 2024 15:36:42 +0800
Message-ID: <20240906073646.2930809-11-linyunsheng@huawei.com>
In-Reply-To: <20240906073646.2930809-1-linyunsheng@huawei.com>

There are many use cases that need a minimum amount of memory in order to make forward progress, but perform better if more memory is available, or that need to probe the cache info so that any available memory can be used for fragment coalescing. Currently the skb_page_frag_refill() API is used to solve the above use cases, but the caller needs to know about the internal details and access the data fields of 'struct page_frag' to meet the requirements of those use cases, and its implementation is similar to the one in the mm subsystem.
To unify those two page_frag implementations, introduce a prepare API to ensure that the minimum memory is satisfied and to return how much memory is actually available to the caller, and a probe API to report the currently available memory to the caller without doing any cache refilling. The caller then needs to either call the commit API to report how much memory it actually used, or not do so if it decides not to use any memory.

CC: Alexander Duyck Signed-off-by: Yunsheng Lin --- include/linux/page_frag_cache.h | 135 ++++++++++++++++++++++++++++++++ mm/page_frag_cache.c | 21 +++++ 2 files changed, 156 insertions(+) diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h index b634e1338741..4e9018051956 100644 --- a/include/linux/page_frag_cache.h +++ b/include/linux/page_frag_cache.h @@ -40,6 +40,11 @@ static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc) return page_frag_encoded_page_pfmemalloc(nc->encoded_page); } +static inline unsigned int page_frag_cache_page_offset(const struct page_frag_cache *nc) +{ + return nc->offset; +} + void page_frag_cache_drain(struct page_frag_cache *nc); void __page_frag_cache_drain(struct page *page, unsigned int count); void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz, @@ -48,6 +53,10 @@ void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz, unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc, struct page_frag *pfrag, unsigned int used_sz); +void *__page_frag_alloc_refill_probe_align(struct page_frag_cache *nc, + unsigned int fragsz, + struct page_frag *pfrag, + unsigned int align_mask); static inline unsigned int __page_frag_cache_commit(struct page_frag_cache *nc, struct page_frag *pfrag, @@ -90,6 +99,132 @@ static inline void *page_frag_alloc(struct page_frag_cache *nc, return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u); } +static inline bool __page_frag_refill_align(struct page_frag_cache *nc, + unsigned int fragsz, + struct page_frag *pfrag, + gfp_t gfp_mask, + unsigned int align_mask) +{ + if (unlikely(!__page_frag_cache_prepare(nc, fragsz, pfrag, gfp_mask, + align_mask))) + return false; + + __page_frag_cache_commit(nc, pfrag, fragsz); + return true; +} + +static inline bool page_frag_refill_align(struct page_frag_cache *nc, + unsigned int fragsz, + struct page_frag *pfrag, + gfp_t gfp_mask, unsigned int align) +{ + WARN_ON_ONCE(!is_power_of_2(align)); + return __page_frag_refill_align(nc, fragsz, pfrag, gfp_mask, -align); +} + +static inline bool page_frag_refill(struct page_frag_cache *nc, + unsigned int fragsz, + struct page_frag *pfrag, gfp_t gfp_mask) +{ + return __page_frag_refill_align(nc, fragsz, pfrag, gfp_mask, ~0u); +} + +static inline bool __page_frag_refill_prepare_align(struct page_frag_cache *nc, + unsigned int fragsz, + struct page_frag *pfrag, + gfp_t gfp_mask, + unsigned int align_mask) +{ + return !!__page_frag_cache_prepare(nc, fragsz, pfrag, gfp_mask, + align_mask); +} + +static inline bool page_frag_refill_prepare_align(struct page_frag_cache *nc, + unsigned int fragsz, + struct page_frag *pfrag, + gfp_t gfp_mask, + unsigned int align) +{ + WARN_ON_ONCE(!is_power_of_2(align)); + return __page_frag_refill_prepare_align(nc, fragsz, pfrag, gfp_mask, + -align); +} + +static inline bool page_frag_refill_prepare(struct page_frag_cache *nc, + unsigned int fragsz, + struct page_frag *pfrag, + gfp_t gfp_mask) +{ + return __page_frag_refill_prepare_align(nc, fragsz, pfrag, gfp_mask, + ~0u); +} + +static inline void
*__page_frag_alloc_refill_prepare_align(struct page_frag_cache *nc, + unsigned int fragsz, + struct page_frag *pfrag, + gfp_t gfp_mask, + unsigned int align_mask) +{ + return __page_frag_cache_prepare(nc, fragsz, pfrag, gfp_mask, align_mask); +} + +static inline void *page_frag_alloc_refill_prepare_align(struct page_frag_cache *nc, + unsigned int fragsz, + struct page_frag *pfrag, + gfp_t gfp_mask, + unsigned int align) +{ + WARN_ON_ONCE(!is_power_of_2(align)); + return __page_frag_alloc_refill_prepare_align(nc, fragsz, pfrag, + gfp_mask, -align); +} + +static inline void *page_frag_alloc_refill_prepare(struct page_frag_cache *nc, + unsigned int fragsz, + struct page_frag *pfrag, + gfp_t gfp_mask) +{ + return __page_frag_alloc_refill_prepare_align(nc, fragsz, pfrag, + gfp_mask, ~0u); +} + +static inline void *page_frag_alloc_refill_probe(struct page_frag_cache *nc, + unsigned int fragsz, + struct page_frag *pfrag) +{ + return __page_frag_alloc_refill_probe_align(nc, fragsz, pfrag, ~0u); +} + +static inline bool page_frag_refill_probe(struct page_frag_cache *nc, + unsigned int fragsz, + struct page_frag *pfrag) +{ + return !!page_frag_alloc_refill_probe(nc, fragsz, pfrag); +} + +static inline void page_frag_commit(struct page_frag_cache *nc, + struct page_frag *pfrag, + unsigned int used_sz) +{ + __page_frag_cache_commit(nc, pfrag, used_sz); +} + +static inline void page_frag_commit_noref(struct page_frag_cache *nc, + struct page_frag *pfrag, + unsigned int used_sz) +{ + __page_frag_cache_commit_noref(nc, pfrag, used_sz); +} + +static inline void page_frag_alloc_abort(struct page_frag_cache *nc, + unsigned int fragsz) +{ + VM_BUG_ON(fragsz > nc->offset); + + nc->pagecnt_bias++; + nc->offset -= fragsz; +} + void page_frag_free(void *addr); #endif diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c index a5448b44068a..c052c77a96eb 100644 --- a/mm/page_frag_cache.c +++ b/mm/page_frag_cache.c @@ -117,6 +117,27 @@ unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc, } EXPORT_SYMBOL(__page_frag_cache_commit_noref); +void *__page_frag_alloc_refill_probe_align(struct page_frag_cache *nc, + unsigned int fragsz, + struct page_frag *pfrag, + unsigned int align_mask) +{ + unsigned long encoded_page = nc->encoded_page; + unsigned int size, offset; + + size = page_frag_cache_page_size(encoded_page); + offset = __ALIGN_KERNEL_MASK(nc->offset, ~align_mask); + if (unlikely(!encoded_page || offset + fragsz > size)) + return NULL; + + pfrag->page = page_frag_encoded_page_ptr(encoded_page); + pfrag->size = size - offset; + pfrag->offset = offset; + + return page_frag_encoded_page_address(encoded_page) + offset; +} +EXPORT_SYMBOL(__page_frag_alloc_refill_probe_align); + void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz, struct page_frag *pfrag, gfp_t gfp_mask, unsigned int align_mask)
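The sketch below illustrates the intended calling pattern of the new API: request the minimum needed for forward progress, learn from pfrag.size how much is really available, then commit only what was used. fill_from_cache() and produce_data() are hypothetical placeholders; everything else is the API introduced in this patch:

```c
#include <linux/errno.h>
#include <linux/page_frag_cache.h>

/* hypothetical producer: writes up to size bytes, returns bytes written */
static unsigned int produce_data(void *buf, unsigned int size);

/*
 * Hedged sketch: ask for at least 32 bytes, possibly get much more, and
 * either commit the consumed size or walk away without committing at all.
 */
static int fill_from_cache(struct page_frag_cache *nc, gfp_t gfp)
{
	struct page_frag pfrag;
	unsigned int used;
	void *va;

	/* need at least 32 bytes, but can make use of a bigger fragment */
	va = page_frag_alloc_refill_prepare(nc, 32U, &pfrag, gfp);
	if (!va)
		return -ENOMEM;

	used = produce_data(va, pfrag.size);
	if (!used)
		return 0;	/* nothing consumed, nothing to commit */

	/* report actual usage; this takes one pagecnt_bias reference */
	page_frag_commit(nc, &pfrag, used);

	return used;
}
```

A caller that only wants to inspect the current fragment without triggering a refill can use page_frag_alloc_refill_probe() instead of the prepare variant.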
From patchwork Fri Sep 6 07:36:43 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13793535
From: Yunsheng Lin
Subject: [PATCH net-next v18 11/14] mm: page_frag: add testing for the newly added prepare API
Date: Fri, 6 Sep 2024 15:36:43 +0800
Message-ID: <20240906073646.2930809-12-linyunsheng@huawei.com>
In-Reply-To: <20240906073646.2930809-1-linyunsheng@huawei.com>
Add testing for the newly added prepare API, for both the aligned and non-aligned variants; the probe API is also tested along with the prepare API.

CC: Alexander Duyck Signed-off-by: Yunsheng Lin --- .../selftests/mm/page_frag/page_frag_test.c | 66 +++++++++++++++++-- tools/testing/selftests/mm/run_vmtests.sh | 4 ++ tools/testing/selftests/mm/test_page_frag.sh | 31 +++++++++ 3 files changed, 96 insertions(+), 5 deletions(-) diff --git a/tools/testing/selftests/mm/page_frag/page_frag_test.c b/tools/testing/selftests/mm/page_frag/page_frag_test.c index a4bd543d6950..7cfa896f69cb 100644 --- a/tools/testing/selftests/mm/page_frag/page_frag_test.c +++ b/tools/testing/selftests/mm/page_frag/page_frag_test.c @@ -27,6 +27,10 @@ static bool test_align; module_param(test_align, bool, 0); MODULE_PARM_DESC(test_align, "use align API for testing"); +static bool test_prepare; +module_param(test_prepare, bool, 0); +MODULE_PARM_DESC(test_prepare, "use prepare API for testing"); + static int test_alloc_len = 2048; module_param(test_alloc_len, int, 0); MODULE_PARM_DESC(test_alloc_len, "alloc len for testing"); @@ -67,6 +71,18 @@ static int page_frag_pop_thread(void *arg) return 0; } +static void frag_frag_test_commit(struct page_frag_cache *nc, + struct page_frag *prepare_pfrag, + struct page_frag *probe_pfrag, + unsigned int used_sz) +{ + WARN_ON_ONCE(prepare_pfrag->page != probe_pfrag->page || + prepare_pfrag->offset != probe_pfrag->offset || + prepare_pfrag->size != probe_pfrag->size); + + page_frag_commit(nc, prepare_pfrag, used_sz); +} + static int page_frag_push_thread(void *arg) { struct ptr_ring *ring = arg; @@ -80,13 +96,52 @@ static int page_frag_push_thread(void *arg) int ret; if (test_align) { - va = page_frag_alloc_align(&test_nc, test_alloc_len, - GFP_KERNEL, SMP_CACHE_BYTES); + if (test_prepare) { + struct page_frag prepare_frag, probe_frag; + void *probe_va; + + va = page_frag_alloc_refill_prepare_align(&test_nc, + test_alloc_len, + &prepare_frag, + GFP_KERNEL, + SMP_CACHE_BYTES); + + probe_va = __page_frag_alloc_refill_probe_align(&test_nc,
test_alloc_len, + &probe_frag, + -SMP_CACHE_BYTES); + WARN_ON_ONCE(va != probe_va); + + if (likely(va)) + frag_frag_test_commit(&test_nc, &prepare_frag, + &probe_frag, test_alloc_len); + } else { + va = page_frag_alloc_align(&test_nc, + test_alloc_len, + GFP_KERNEL, + SMP_CACHE_BYTES); + } WARN_ONCE((unsigned long)va & (SMP_CACHE_BYTES - 1), "unaligned va returned\n"); } else { - va = page_frag_alloc(&test_nc, test_alloc_len, GFP_KERNEL); + if (test_prepare) { + struct page_frag prepare_frag, probe_frag; + void *probe_va; + + va = page_frag_alloc_refill_prepare(&test_nc, test_alloc_len, + &prepare_frag, GFP_KERNEL); + + probe_va = page_frag_alloc_refill_probe(&test_nc, test_alloc_len, + &probe_frag); + + WARN_ON_ONCE(va != probe_va); + if (likely(va)) + frag_frag_test_commit(&test_nc, &prepare_frag, + &probe_frag, test_alloc_len); + } else { + va = page_frag_alloc(&test_nc, test_alloc_len, GFP_KERNEL); + } } if (!va) @@ -149,8 +204,9 @@ static int __init page_frag_test_init(void) wait_for_completion(&wait); duration = (u64)ktime_us_delta(ktime_get(), start); - pr_info("%d of iterations for %s testing took: %lluus\n", nr_test, - test_align ? "aligned" : "non-aligned", duration); + pr_info("%d of iterations for %s %s API testing took: %lluus\n", nr_test, + test_align ? "aligned" : "non-aligned", + test_prepare ? "prepare" : "alloc", duration); ptr_ring_cleanup(&ptr_ring, NULL); page_frag_cache_drain(&test_nc); diff --git a/tools/testing/selftests/mm/run_vmtests.sh b/tools/testing/selftests/mm/run_vmtests.sh index 96fd470b9f51..e4a36231bbea 100755 --- a/tools/testing/selftests/mm/run_vmtests.sh +++ b/tools/testing/selftests/mm/run_vmtests.sh @@ -464,6 +464,10 @@ CATEGORY="page_frag" run_test ./test_page_frag.sh aligned CATEGORY="page_frag" run_test ./test_page_frag.sh nonaligned +CATEGORY="page_frag" run_test ./test_page_frag.sh aligned_prepare + +CATEGORY="page_frag" run_test ./test_page_frag.sh nonaligned_prepare + echo "SUMMARY: PASS=${count_pass} SKIP=${count_skip} FAIL=${count_fail}" | tap_prefix echo "1..${count_total}" | tap_output diff --git a/tools/testing/selftests/mm/test_page_frag.sh b/tools/testing/selftests/mm/test_page_frag.sh index d750d910c899..71c3531fa38e 100755 --- a/tools/testing/selftests/mm/test_page_frag.sh +++ b/tools/testing/selftests/mm/test_page_frag.sh @@ -36,6 +36,8 @@ ksft_skip=4 SMOKE_PARAM="test_push_cpu=$TEST_CPU_0 test_pop_cpu=$TEST_CPU_1" NONALIGNED_PARAM="$SMOKE_PARAM test_alloc_len=75 nr_test=$NR_TEST" ALIGNED_PARAM="$NONALIGNED_PARAM test_align=1" +NONALIGNED_PREPARE_PARAM="$NONALIGNED_PARAM test_prepare=1" +ALIGNED_PREPARE_PARAM="$ALIGNED_PARAM test_prepare=1" check_test_requirements() { @@ -74,6 +76,24 @@ run_aligned_check() echo "Check the kernel ring buffer to see the summary." } +run_nonaligned_prepare_check() +{ + echo "Run performance tests to evaluate how fast nonaligned prepare API is." + + insmod $DRIVER $NONALIGNED_PREPARE_PARAM > /dev/null 2>&1 + echo "Done." + echo "Check the kernel ring buffer to see the summary." +} + +run_aligned_prepare_check() +{ + echo "Run performance tests to evaluate how fast aligned prepare API is." + + insmod $DRIVER $ALIGNED_PREPARE_PARAM > /dev/null 2>&1 + echo "Done." + echo "Check the kernel ring buffer to see the summary." +} + run_smoke_check() { echo "Run smoke test."
@@ -86,6 +106,7 @@ run_smoke_check() usage() { echo -n "Usage: $0 [ aligned ] | [ nonaligned ] | | [ smoke ] | " + echo "[ aligned_prepare ] | [ nonaligned_prepare ] | " echo "manual parameters" echo echo "Valid tests and parameters:" @@ -106,6 +127,12 @@ usage() echo "# Performance testing for aligned alloc API" echo "$0 aligned" echo + echo "# Performance testing for nonaligned prepare API" echo "$0 nonaligned_prepare" + echo + echo "# Performance testing for aligned prepare API" + echo "$0 aligned_prepare" + echo exit 0 } @@ -159,6 +186,10 @@ function run_test() run_nonaligned_check elif [[ "$1" = "aligned" ]]; then run_aligned_check + elif [[ "$1" = "nonaligned_prepare" ]]; then + run_nonaligned_prepare_check + elif [[ "$1" = "aligned_prepare" ]]; then + run_aligned_prepare_check else run_manual_check $@ fi
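For reference, a hedged standalone sketch of the invariant the new test exercises: probing right after a successful prepare must observe the same fragment, since the probe API never refills the cache. The in-tree module checks this with WARN_ON_ONCE() in frag_frag_test_commit(); the helper below only illustrates the comparison and is not part of the patch:

```c
#include <linux/page_frag_cache.h>

/*
 * Sketch only: prepare a fragment, then probe and compare.  Committing
 * (or deliberately not committing) afterwards is left to the caller,
 * which the prepare/commit contract explicitly allows.
 */
static bool prepare_probe_agree(struct page_frag_cache *nc,
				unsigned int fragsz, gfp_t gfp)
{
	struct page_frag prepared, probed;
	void *va, *probe_va;

	va = page_frag_alloc_refill_prepare(nc, fragsz, &prepared, gfp);
	if (!va)
		return true;	/* no fragment prepared, nothing to compare */

	probe_va = page_frag_alloc_refill_probe(nc, fragsz, &probed);

	return va == probe_va && prepared.page == probed.page &&
	       prepared.offset == probed.offset &&
	       prepared.size == probed.size;
}
```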
From patchwork Fri Sep 6 07:36:45 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13793536
From: Yunsheng Lin
Subject: [PATCH net-next v18 13/14] mm: page_frag: update documentation for page_frag
Date: Fri, 6 Sep 2024 15:36:45 +0800
Message-ID: <20240906073646.2930809-14-linyunsheng@huawei.com>
In-Reply-To: <20240906073646.2930809-1-linyunsheng@huawei.com>

Update documentation about the design, implementation and API usage of page_frag.

CC: Alexander Duyck Signed-off-by: Yunsheng Lin --- Documentation/mm/page_frags.rst | 177 +++++++++++++++++++++- include/linux/page_frag_cache.h | 259 +++++++++++++++++++++++++++++++- mm/page_frag_cache.c | 26 +++- 3 files changed, 451 insertions(+), 11 deletions(-) diff --git a/Documentation/mm/page_frags.rst b/Documentation/mm/page_frags.rst index 503ca6cdb804..5eec04a3fe90 100644 --- a/Documentation/mm/page_frags.rst +++ b/Documentation/mm/page_frags.rst @@ -1,3 +1,5 @@ +..
+.. SPDX-License-Identifier: GPL-2.0
+
 ==============
 Page fragments
 ==============
@@ -40,4 +42,177 @@ page via a single call. The advantage to doing this is that it allows for
 cleaning up the multiple references that were added to a page in order to
 avoid calling get_page per allocation.
 
-Alexander Duyck, Nov 29, 2016.
+
+Architecture overview
+=====================
+
+.. code-block:: none
+
+    +----------------------+
+    | page_frag API caller |
+    +----------------------+
+               |
+               |
+               v
+    +------------------------------------------------------------------+
+    |                       request page fragment                      |
+    +------------------------------------------------------------------+
+        |                        |                                 |
+        |                        |                                 |
+        |                 Cache not enough                         |
+        |                        |                                 |
+        |               +-----------------+                        |
+        |               | reuse old cache |--Usable-->-------------|
+        |               +-----------------+                        |
+        |                        |                                 |
+        |                    Not usable                            |
+        |                        |                                 |
+        |                        v                                 |
+   Cache empty          +-----------------+                        |
+        |               | drain old cache |                        |
+        |               +-----------------+                        |
+        |                        |                                 |
+        v________________________v                                 |
+                    |                                              |
+                    |                                              |
+         ___________v____________                                  |
+        |                        |                         Cache is enough
+        |                        |                                 |
+ PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE                              |
+        |                        |                                 |
+        |    PAGE_SIZE >= PAGE_FRAG_CACHE_MAX_SIZE                 |
+        v                        |                                 |
+ +----------------------------------+                              |
+ | refill cache with order > 0 page |                              |
+ +----------------------------------+                              |
+        |               |                                          |
+        |               |                                          |
+        |         Refill failed                                    |
+        |               |                                          |
+        |               v                                          |
+        |   +--------------------------------+                     |
+        |   | refill cache with order 0 page |                     |
+        |   +--------------------------------+                     |
+        |                   |                                      |
+ Refill succeed             |                                      |
+        |            Refill succeed                                |
+        |                   |                                      |
+        v                   v                                      v
+    +------------------------------------------------------------------+
+    |                   allocate fragment from cache                   |
+    +------------------------------------------------------------------+
+
+API interface
+=============
+As the design and implementation of the page_frag API implies, the
+allocation side does not allow concurrent calls. Instead, the caller must
+ensure that there are no concurrent alloc calls to the same
+page_frag_cache instance, either by using its own lock or by relying on a
+lockless guarantee such as NAPI softirq context.
+
+Depending on the alignment requirement, a page_frag API caller may call
+page_frag_*_align*() to ensure that the returned virtual address or page
+offset is aligned according to the 'align/alignment' parameter. Note that
+the size of the allocated fragment is not aligned; the caller needs to
+provide an already aligned fragsz if there is an alignment requirement
+for the size of the fragment.
+
+Depending on the use case, callers expecting to deal with the va, the
+page, or both the va and the page may call the page_frag_alloc,
+page_frag_refill, or page_frag_alloc_refill API accordingly.
+
+There is also a use case that needs a minimum amount of memory in order
+to make forward progress, but that can perform better if more memory is
+available. Using the page_frag_*_prepare() and page_frag_commit*()
+related APIs, the caller requests the minimum amount of memory it needs,
+and the prepare API returns the maximum size of the fragment available.
+The caller then either calls a commit API to report how much memory it
+actually used, or skips the commit if it decides not to use any memory.
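For illustration, a minimal sketch of the alignment contract just described;
this is not part of the patch, the cache name and sizes are made up, and only
page_frag_alloc_align() and ALIGN() are taken from the patch and the kernel:

.. code-block:: c

    struct page_frag_cache rx_cache;
    void *va;

    /* The returned va is guaranteed to be 64-byte aligned ... */
    va = page_frag_alloc_align(&rx_cache, 60, GFP_ATOMIC, 64);

    /*
     * ... but 60 is not a multiple of 64, so the next fragment from
     * this cache may start at an unaligned offset. Round fragsz up
     * first if the size itself must also stay aligned.
     */
    va = page_frag_alloc_align(&rx_cache, ALIGN(60, 64), GFP_ATOMIC, 64);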
+
+.. kernel-doc:: include/linux/page_frag_cache.h
+   :identifiers: page_frag_cache_init page_frag_cache_is_pfmemalloc
+                 page_frag_cache_page_offset __page_frag_alloc_align
+                 page_frag_alloc_align page_frag_alloc
+                 __page_frag_refill_align page_frag_refill_align
+                 page_frag_refill __page_frag_refill_prepare_align
+                 page_frag_refill_prepare_align page_frag_refill_prepare
+                 __page_frag_alloc_refill_prepare_align
+                 page_frag_alloc_refill_prepare_align
+                 page_frag_alloc_refill_prepare page_frag_alloc_refill_probe
+                 page_frag_refill_probe page_frag_commit
+                 page_frag_commit_noref page_frag_alloc_abort
+
+.. kernel-doc:: mm/page_frag_cache.c
+   :identifiers: page_frag_cache_drain page_frag_free
+                 __page_frag_alloc_refill_probe_align
+
+Coding examples
+===============
+
+Init & Drain API
+----------------
+
+.. code-block:: c
+
+    page_frag_cache_init(nc);
+    ...
+    page_frag_cache_drain(nc);
+
+
+Alloc & Free API
+----------------
+
+.. code-block:: c
+
+    void *va;
+    int err;
+
+    va = page_frag_alloc_align(nc, size, gfp, align);
+    if (!va)
+        goto do_error;
+
+    err = do_something(va, size);
+    if (err) {
+        page_frag_alloc_abort(nc, size);
+        goto do_error;
+    }
+
+    ...
+
+    page_frag_free(va);
+
+
+Prepare & Commit API
+--------------------
+
+.. code-block:: c
+
+    struct page_frag page_frag, *pfrag;
+    bool merge = true;
+    void *va;
+
+    pfrag = &page_frag;
+    va = page_frag_alloc_refill_prepare(nc, 32U, pfrag, GFP_KERNEL);
+    if (!va)
+        goto wait_for_space;
+
+    copy = min_t(unsigned int, copy, pfrag->size);
+    if (!skb_can_coalesce(skb, i, pfrag->page, pfrag->offset)) {
+        if (i >= max_skb_frags)
+            goto new_segment;
+
+        merge = false;
+    }
+
+    copy = mem_schedule(copy);
+    if (!copy)
+        goto wait_for_space;
+
+    if (!copy_from_iter_full_nocache(va, copy, iter))
+        goto do_error;
+
+    if (merge) {
+        skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy);
+        page_frag_commit_noref(nc, pfrag, copy);
+    } else {
+        skb_fill_page_desc(skb, i, pfrag->page, pfrag->offset, copy);
+        page_frag_commit(nc, pfrag, copy);
+    }
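The examples above cover the alloc and prepare/commit paths but not the plain
refill path, so a minimal sketch follows; it is not part of the patch, the skb
handling is illustrative, and error handling is trimmed:

.. code-block:: c

    struct page_frag frag;

    /*
     * Ask the cache for a page/offset pair instead of a va; on success
     * the fragment's page reference has been taken by the refill.
     */
    if (!page_frag_refill(nc, fragsz, &frag, GFP_ATOMIC))
        return -ENOMEM;

    /* Hand the fragment over to the skb as a page fragment. */
    skb_fill_page_desc(skb, 0, frag.page, frag.offset, fragsz);
    skb->len += fragsz;
    skb->data_len += fragsz;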
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index 4e9018051956..dff68d8e0f30 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -30,16 +30,43 @@ static inline bool page_frag_encoded_page_pfmemalloc(unsigned long encoded_page)
 	return !!(encoded_page & PAGE_FRAG_CACHE_PFMEMALLOC_BIT);
 }
 
+/**
+ * page_frag_cache_init() - Init the page_frag cache.
+ * @nc: page_frag cache to init
+ *
+ * Inline helper to init the page_frag cache.
+ */
 static inline void page_frag_cache_init(struct page_frag_cache *nc)
 {
 	nc->encoded_page = 0;
 }
 
+/**
+ * page_frag_cache_is_pfmemalloc() - Check for pfmemalloc.
+ * @nc: page_frag cache to check
+ *
+ * Used to check whether the current page in the page_frag cache is
+ * pfmemalloc'ed. It has the same calling context expectation as the alloc
+ * API.
+ *
+ * Return:
+ * true if the current page in the page_frag cache is pfmemalloc'ed,
+ * otherwise return false.
+ */
 static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)
 {
 	return page_frag_encoded_page_pfmemalloc(nc->encoded_page);
 }
 
+/**
+ * page_frag_cache_page_offset() - Return the current page fragment's offset.
+ * @nc: page_frag cache to check
+ *
+ * The API is only used in net/sched/em_meta.c for historical reasons; do not
+ * use it in new callers unless there is a strong reason to.
+ *
+ * Return:
+ * the offset of the current page fragment in the page_frag cache.
+ */
 static inline unsigned int page_frag_cache_page_offset(const struct page_frag_cache *nc)
 {
 	return nc->offset;
@@ -68,6 +95,19 @@ static inline unsigned int __page_frag_cache_commit(struct page_frag_cache *nc,
 	return __page_frag_cache_commit_noref(nc, pfrag, used_sz);
 }
 
+/**
+ * __page_frag_alloc_align() - Alloc a page fragment with aligning
+ * requirement.
+ * @nc: page_frag cache from which to allocate
+ * @fragsz: the requested fragment size
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align_mask: the requested aligning requirement for the 'va'
+ *
+ * Alloc a page fragment from the page_frag cache with an aligning
+ * requirement.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
 static inline void *__page_frag_alloc_align(struct page_frag_cache *nc,
 					    unsigned int fragsz, gfp_t gfp_mask,
 					    unsigned int align_mask)
@@ -85,6 +125,19 @@ static inline void *__page_frag_alloc_align(struct page_frag_cache *nc,
 	return va;
 }
 
+/**
+ * page_frag_alloc_align() - Alloc a page fragment with aligning requirement.
+ * @nc: page_frag cache from which to allocate
+ * @fragsz: the requested fragment size
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align: the requested aligning requirement for the fragment
+ *
+ * WARN_ON_ONCE() checking for @align before allocating a page fragment from
+ * the page_frag cache with an aligning requirement.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
 static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
 					  unsigned int fragsz,
 					  gfp_t gfp_mask, unsigned int align)
@@ -93,12 +146,36 @@ static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
 	return __page_frag_alloc_align(nc, fragsz, gfp_mask, -align);
 }
 
+/**
+ * page_frag_alloc() - Alloc a page fragment.
+ * @nc: page_frag cache from which to allocate
+ * @fragsz: the requested fragment size
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ *
+ * Alloc a page fragment from the page_frag cache.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
 static inline void *page_frag_alloc(struct page_frag_cache *nc,
 				    unsigned int fragsz, gfp_t gfp_mask)
 {
 	return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
 }
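A short aside, not part of the patch, on the `-align` and `~0u` arguments
used above: for a power-of-two align, -align equals ~(align - 1), i.e. a mask
with the low bits cleared, while ~0u masks nothing. A standalone sketch of
just that arithmetic (plain user-space C, nothing here is kernel API):

.. code-block:: c

    #include <assert.h>

    static unsigned int mask_offset(unsigned int offset, unsigned int align_mask)
    {
        /* The masking idiom the *_align variants build upon. */
        return offset & align_mask;
    }

    int main(void)
    {
        /* -64 == 0xffffffc0: clears the low 6 bits of the offset. */
        assert(mask_offset(4095, -64u) == 4032);
        /* ~0u keeps the offset untouched: no alignment requested. */
        assert(mask_offset(4095, ~0u) == 4095);
        return 0;
    }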
+/**
+ * __page_frag_refill_align() - Refill a page_frag with aligning requirement.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align_mask: the requested aligning requirement for the fragment
+ *
+ * Refill a page_frag from the page_frag cache with an aligning requirement.
+ *
+ * Return:
+ * true if the refill succeeds, otherwise return false.
+ */
 static inline bool __page_frag_refill_align(struct page_frag_cache *nc,
 					    unsigned int fragsz,
 					    struct page_frag *pfrag,
@@ -113,6 +190,20 @@ static inline bool __page_frag_refill_align(struct page_frag_cache *nc,
 	return true;
 }
 
+/**
+ * page_frag_refill_align() - Refill a page_frag with aligning requirement.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align: the requested aligning requirement for the fragment
+ *
+ * WARN_ON_ONCE() checking for @align before refilling a page_frag from the
+ * page_frag cache with an aligning requirement.
+ *
+ * Return:
+ * true if the refill succeeds, otherwise return false.
+ */
 static inline bool page_frag_refill_align(struct page_frag_cache *nc,
 					  unsigned int fragsz,
 					  struct page_frag *pfrag,
@@ -122,6 +213,18 @@ static inline bool page_frag_refill_align(struct page_frag_cache *nc,
 	return __page_frag_refill_align(nc, fragsz, pfrag, gfp_mask, -align);
 }
 
+/**
+ * page_frag_refill() - Refill a page_frag.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ *
+ * Refill a page_frag from the page_frag cache.
+ *
+ * Return:
+ * true if the refill succeeds, otherwise return false.
+ */
 static inline bool page_frag_refill(struct page_frag_cache *nc,
 				    unsigned int fragsz,
 				    struct page_frag *pfrag, gfp_t gfp_mask)
@@ -129,6 +232,20 @@ static inline bool page_frag_refill(struct page_frag_cache *nc,
 	return __page_frag_refill_align(nc, fragsz, pfrag, gfp_mask, ~0u);
 }
 
+/**
+ * __page_frag_refill_prepare_align() - Prepare refilling a page_frag with
+ * aligning requirement.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align_mask: the requested aligning requirement for the fragment
+ *
+ * Prepare refilling a page_frag from the page_frag cache with an aligning
+ * requirement.
+ *
+ * Return:
+ * true if the prepare refilling succeeds, otherwise return false.
+ */
 static inline bool __page_frag_refill_prepare_align(struct page_frag_cache *nc,
 						    unsigned int fragsz,
 						    struct page_frag *pfrag,
@@ -139,6 +256,21 @@ static inline bool __page_frag_refill_prepare_align(struct page_frag_cache *nc,
 					align_mask);
 }
 
+/**
+ * page_frag_refill_prepare_align() - Prepare refilling a page_frag with
+ * aligning requirement.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align: the requested aligning requirement for the fragment
+ *
+ * WARN_ON_ONCE() checking for @align before preparing the refilling of a
+ * page_frag from the page_frag cache with an aligning requirement.
+ *
+ * Return:
+ * true if the prepare refilling succeeds, otherwise return false.
+ */
 static inline bool page_frag_refill_prepare_align(struct page_frag_cache *nc,
 						  unsigned int fragsz,
 						  struct page_frag *pfrag,
@@ -150,6 +282,18 @@ static inline bool page_frag_refill_prepare_align(struct page_frag_cache *nc,
 						-align);
 }
 
+/**
+ * page_frag_refill_prepare() - Prepare refilling a page_frag.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ *
+ * Prepare refilling a page_frag from the page_frag cache.
+ *
+ * Return:
+ * true if the prepare refilling succeeds, otherwise return false.
+ */
 static inline bool page_frag_refill_prepare(struct page_frag_cache *nc,
 					    unsigned int fragsz,
 					    struct page_frag *pfrag,
@@ -159,6 +303,20 @@ static inline bool page_frag_refill_prepare(struct page_frag_cache *nc,
 					~0u);
 }
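A minimal sketch of a prepare-then-commit round trip with the refill-prepare
API; not part of the patch, MIN_RX_SIZE and produce_data() are hypothetical,
and only the page_frag calls are real:

.. code-block:: c

    struct page_frag frag;
    unsigned int used, true_sz;

    /*
     * Request at least MIN_RX_SIZE bytes; on success frag.size holds
     * the maximum the current page can still hand out.
     */
    if (!page_frag_refill_prepare(nc, MIN_RX_SIZE, &frag, GFP_ATOMIC))
        return -ENOMEM;

    used = produce_data(frag.page, frag.offset, frag.size);
    if (!used)
        return 0;   /* nothing consumed, so no commit is needed */

    /*
     * Report the actually consumed size; the cache advances by the
     * true (alignment-adjusted) size, which commit returns.
     */
    true_sz = page_frag_commit(nc, &frag, used);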
+/**
+ * __page_frag_alloc_refill_prepare_align() - Prepare allocating a fragment
+ * and refilling a page_frag with aligning requirement.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align_mask: the requested aligning requirement for the fragment
+ *
+ * Prepare allocating a fragment and refilling a page_frag from the page_frag
+ * cache with an aligning requirement.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
 static inline void *__page_frag_alloc_refill_prepare_align(struct page_frag_cache *nc,
 							   unsigned int fragsz,
 							   struct page_frag *pfrag,
@@ -168,6 +326,21 @@ static inline void *__page_frag_alloc_refill_prepare_align(struct page_frag_cach
 	return __page_frag_cache_prepare(nc, fragsz, pfrag, gfp_mask, align_mask);
 }
 
+/**
+ * page_frag_alloc_refill_prepare_align() - Prepare allocating a fragment and
+ * refilling a page_frag with aligning requirement.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align: the requested aligning requirement for the fragment
+ *
+ * WARN_ON_ONCE() checking for @align before preparing the allocation of a
+ * fragment and the refilling of a page_frag from the page_frag cache.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
 static inline void *page_frag_alloc_refill_prepare_align(struct page_frag_cache *nc,
 							 unsigned int fragsz,
 							 struct page_frag *pfrag,
@@ -179,6 +352,19 @@ static inline void *page_frag_alloc_refill_prepare_align(struct page_frag_cache
 						       gfp_mask, -align);
 }
 
+/**
+ * page_frag_alloc_refill_prepare() - Prepare allocating a fragment and
+ * refilling a page_frag.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ *
+ * Prepare allocating a fragment and refilling a page_frag from the page_frag
+ * cache.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
 static inline void *page_frag_alloc_refill_prepare(struct page_frag_cache *nc,
 						   unsigned int fragsz,
 						   struct page_frag *pfrag,
@@ -188,6 +374,18 @@ static inline void *page_frag_alloc_refill_prepare(struct page_frag_cache *nc,
 						       gfp_mask, ~0u);
 }
 
+/**
+ * page_frag_alloc_refill_probe() - Probe allocating a fragment and refilling
+ * a page_frag.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ *
+ * Probe allocating a fragment and refilling a page_frag from the page_frag
+ * cache.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
 static inline void *page_frag_alloc_refill_probe(struct page_frag_cache *nc,
 						 unsigned int fragsz,
 						 struct page_frag *pfrag)
@@ -195,6 +393,17 @@ static inline void *page_frag_alloc_refill_probe(struct page_frag_cache *nc,
 	return __page_frag_alloc_refill_probe_align(nc, fragsz, pfrag, ~0u);
 }
 
+/**
+ * page_frag_refill_probe() - Probe refilling a page_frag.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ *
+ * Probe refilling a page_frag from the page_frag cache.
+ *
+ * Return:
+ * true if the refill succeeds, otherwise return false.
+ */
 static inline bool page_frag_refill_probe(struct page_frag_cache *nc,
 					  unsigned int fragsz,
 					  struct page_frag *pfrag)
@@ -202,20 +411,54 @@ static inline bool page_frag_refill_probe(struct page_frag_cache *nc,
 	return !!page_frag_alloc_refill_probe(nc, fragsz, pfrag);
 }
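Since the probe variants take no gfp_mask, they can only succeed against the
page already in the cache and never trigger a refill, which makes them a
natural first step on paths that prefer to coalesce with the previous
fragment. A sketch, not part of the patch, with data and fragsz illustrative:

.. code-block:: c

    struct page_frag frag;
    void *va;

    /* Only succeeds if the current page still has fragsz bytes left. */
    va = page_frag_alloc_refill_probe(nc, fragsz, &frag);
    if (va) {
        memcpy(va, data, fragsz);
        /* Coalesce with the previous fragment: share its refcount. */
        page_frag_commit_noref(nc, &frag, fragsz);
    } else {
        /* Fall back to a prepare API, which may refill the cache. */
    }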
-static inline void page_frag_commit(struct page_frag_cache *nc,
-				    struct page_frag *pfrag,
-				    unsigned int used_sz)
+/**
+ * page_frag_commit - Commit the allocation of a page fragment.
+ * @nc: page_frag cache from which to commit
+ * @pfrag: the page_frag to be committed
+ * @used_sz: size of the page fragment that has been used
+ *
+ * Commit the actual used size for the allocation that was either prepared
+ * or probed.
+ *
+ * Return:
+ * the true size of the fragment considering the offset alignment.
+ */
+static inline unsigned int page_frag_commit(struct page_frag_cache *nc,
+					    struct page_frag *pfrag,
+					    unsigned int used_sz)
 {
-	__page_frag_cache_commit(nc, pfrag, used_sz);
+	return __page_frag_cache_commit(nc, pfrag, used_sz);
 }
 
-static inline void page_frag_commit_noref(struct page_frag_cache *nc,
-					  struct page_frag *pfrag,
-					  unsigned int used_sz)
+/**
+ * page_frag_commit_noref - Commit the allocation of a page fragment without
+ * taking a page refcount.
+ * @nc: page_frag cache from which to commit
+ * @pfrag: the page_frag to be committed
+ * @used_sz: size of the page fragment that has been used
+ *
+ * Commit the alloc preparing or probing by passing the actual used size, but
+ * without taking a refcount. Mostly used for the fragment coalescing case
+ * when the current fragment can share the same refcount with the previous
+ * fragment.
+ *
+ * Return:
+ * the true size of the fragment considering the offset alignment.
+ */
+static inline unsigned int page_frag_commit_noref(struct page_frag_cache *nc,
+						  struct page_frag *pfrag,
+						  unsigned int used_sz)
 {
-	__page_frag_cache_commit_noref(nc, pfrag, used_sz);
+	return __page_frag_cache_commit_noref(nc, pfrag, used_sz);
 }
 
+/**
+ * page_frag_alloc_abort - Abort the page fragment allocation.
+ * @nc: page_frag cache to which the page fragment is aborted back
+ * @fragsz: size of the page fragment to be aborted
+ *
+ * It is expected to be called from the same context as the alloc API.
+ * Mostly used for error handling cases where the fragment is no longer
+ * needed.
+ */
 static inline void page_frag_alloc_abort(struct page_frag_cache *nc,
 					 unsigned int fragsz)
 {
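Before moving on to the mm/page_frag_cache.c side, a sketch of why the commit
return value matters; this is not part of the patch, and the skb truesize
accounting is illustrative:

.. code-block:: c

    unsigned int true_sz;

    true_sz = page_frag_commit(nc, pfrag, used_sz);

    /*
     * true_sz may exceed used_sz by the alignment padding consumed
     * from the cache, so account for the full amount.
     */
    skb->truesize += true_sz;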
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index c052c77a96eb..209cc1e278ab 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -75,6 +75,10 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
 	return page;
 }
 
+/**
+ * page_frag_cache_drain - Drain the current page from the page_frag cache.
+ * @nc: page_frag cache from which to drain
+ */
 void page_frag_cache_drain(struct page_frag_cache *nc)
 {
 	if (!nc->encoded_page)
@@ -117,6 +121,20 @@ unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc,
 }
 EXPORT_SYMBOL(__page_frag_cache_commit_noref);
 
+/**
+ * __page_frag_alloc_refill_probe_align() - Probe allocating a fragment and
+ * refilling a page_frag with aligning requirement.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ * @align_mask: the requested aligning requirement for the fragment
+ *
+ * Probe allocating a fragment and refilling a page_frag from the page_frag
+ * cache with an aligning requirement.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
 void *__page_frag_alloc_refill_probe_align(struct page_frag_cache *nc,
 					   unsigned int fragsz,
 					   struct page_frag *pfrag,
@@ -208,8 +226,12 @@ void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz,
 }
 EXPORT_SYMBOL(__page_frag_cache_prepare);
 
-/*
- * Frees a page fragment allocated out of either a compound or order 0 page.
+/**
+ * page_frag_free - Free a page fragment.
+ * @addr: va of the page fragment to be freed
+ *
+ * Free a page fragment allocated out of either a compound or an order 0
+ * page, by looking up the page from the virtual address.
  */
 void page_frag_free(void *addr)
 {