From patchwork Thu Nov 14 12:16:02 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13875064
From: Yunsheng Lin <linyunsheng@huawei.com>
CC: Yunsheng Lin, Alexander Duyck, Andrew Morton, Linux-MM, Jonathan Corbet
Subject: [PATCH net-next v1 07/10] mm: page_frag: introduce probe related API
Date: Thu, 14 Nov 2024 20:16:02 +0800
Message-ID: <20241114121606.3434517-8-linyunsheng@huawei.com>
In-Reply-To: <20241114121606.3434517-1-linyunsheng@huawei.com>
References: <20241114121606.3434517-1-linyunsheng@huawei.com>

Some use cases may need a bigger fragment when the current fragment can't be
coalesced with the previous one, because a new fragment may need extra space
for its own header. So introduce probe related APIs that tell whether the
cache has at least the requested amount of memory remaining to be coalesced
with the previous fragment, in order to save as much memory as possible.

CC: Alexander Duyck
CC: Andrew Morton
CC: Linux-MM
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
 Documentation/mm/page_frags.rst | 10 +++++++-
 include/linux/page_frag_cache.h | 41 +++++++++++++++++++++++++++++++++
 mm/page_frag_cache.c            | 35 ++++++++++++++++++++++++++++
 3 files changed, 85 insertions(+), 1 deletion(-)

diff --git a/Documentation/mm/page_frags.rst b/Documentation/mm/page_frags.rst
index 1c98f7090d92..3e34831a0029 100644
--- a/Documentation/mm/page_frags.rst
+++ b/Documentation/mm/page_frags.rst
@@ -119,7 +119,13 @@ more performant if more memory is available. By using the prepare and commit
 related API, the caller calls prepare API to requests the minimum memory it
 needs and prepare API will return the maximum size of the fragment returned.
 The caller needs to either call the commit API to report how much memory it actually
-uses, or not do so if deciding to not use any memory.
+uses, or not do so if deciding to not use any memory. Some use cases may need
+a bigger fragment when the current fragment can't be coalesced with the
+previous one, because a new fragment may need extra space for its own header.
+The probe related API can be used to tell whether the cache has at least the
+requested amount of memory remaining to be coalesced with the previous
+fragment, in order to save as much memory as possible.
+
 
 .. kernel-doc:: include/linux/page_frag_cache.h
    :identifiers: page_frag_cache_init page_frag_cache_is_pfmemalloc
@@ -129,9 +135,11 @@ uses, or not do so if deciding to not use any memory.
                  __page_frag_alloc_refill_prepare_align
                  page_frag_alloc_refill_prepare_align
                  page_frag_alloc_refill_prepare
+                 page_frag_alloc_refill_probe page_frag_refill_probe
 
 .. kernel-doc:: mm/page_frag_cache.c
    :identifiers: page_frag_cache_drain page_frag_free page_frag_alloc_abort_ref
+                 __page_frag_alloc_refill_probe_align
 
 Coding examples
 ===============
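To show how the probe API is meant to complement the prepare/commit flow
described above, here is a minimal sketch. It is an illustration only, not
part of this patch: it assumes the prepare and commit helpers from earlier
patches in this series keep the shapes page_frag_alloc_refill_prepare(nc,
fragsz, pfrag, gfp) and page_frag_refill_commit(nc, pfrag, used_sz), and
frag_append() itself is a hypothetical caller:

#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/page_frag_cache.h>
#include <linux/string.h>

static int frag_append(struct page_frag_cache *nc, const void *data,
		       unsigned int datalen, unsigned int hdr_len)
{
	struct page_frag pfrag;
	void *va;

	/* Probe first: if at least datalen bytes remain in the cache,
	 * the new data can be coalesced with the previous fragment and
	 * no new header is needed.
	 */
	if (page_frag_refill_probe(nc, datalen, &pfrag)) {
		memcpy(page_address(pfrag.page) + pfrag.offset, data,
		       datalen);
		page_frag_refill_commit(nc, &pfrag, datalen);
		return 0;
	}

	/* Otherwise a new fragment is started, which needs extra room
	 * for its own header, so prepare a bigger fragment and leave
	 * the first hdr_len bytes for the caller's header.
	 */
	va = page_frag_alloc_refill_prepare(nc, hdr_len + datalen, &pfrag,
					    GFP_ATOMIC);
	if (!va)
		return -ENOMEM;

	memcpy(va + hdr_len, data, datalen);
	page_frag_refill_commit(nc, &pfrag, hdr_len + datalen);
	return 0;
}

The header cost is only paid when a new fragment really has to be started,
which is exactly the memory saving the probe API is after.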
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index 329390afbe78..0f7e8da91a67 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -63,6 +63,10 @@ void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz,
 unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc,
                                             struct page_frag *pfrag,
                                             unsigned int used_sz);
+void *__page_frag_alloc_refill_probe_align(struct page_frag_cache *nc,
+                                           unsigned int fragsz,
+                                           struct page_frag *pfrag,
+                                           unsigned int align_mask);
 
 static inline unsigned int __page_frag_cache_commit(struct page_frag_cache *nc,
                                                     struct page_frag *pfrag,
@@ -282,6 +286,43 @@ static inline void *page_frag_alloc_refill_prepare(struct page_frag_cache *nc,
                                                    gfp_mask, ~0u);
 }
 
+/**
+ * page_frag_alloc_refill_probe() - Probe allocating a fragment and refilling
+ * a page_frag.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ *
+ * Probe allocating a fragment and refilling a page_frag from the page_frag cache.
+ *
+ * Return:
+ * the virtual address of the page fragment, otherwise NULL.
+ */
+static inline void *page_frag_alloc_refill_probe(struct page_frag_cache *nc,
+                                                 unsigned int fragsz,
+                                                 struct page_frag *pfrag)
+{
+	return __page_frag_alloc_refill_probe_align(nc, fragsz, pfrag, ~0u);
+}
+
+/**
+ * page_frag_refill_probe() - Probe refilling a page_frag.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ *
+ * Probe refilling a page_frag from the page_frag cache.
+ *
+ * Return:
+ * true if the refill succeeds, otherwise false.
+ */
+static inline bool page_frag_refill_probe(struct page_frag_cache *nc,
+                                          unsigned int fragsz,
+                                          struct page_frag *pfrag)
+{
+	return !!page_frag_alloc_refill_probe(nc, fragsz, pfrag);
+}
+
 /**
  * page_frag_refill_commit - Commit a prepare refilling.
  * @nc: page_frag cache from which to commit
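A usage sketch for the aligned probe variant declared above, assuming the
align_mask convention of the other *_align() helpers (the negative of a
power-of-two alignment, with ~0u meaning no alignment requirement, as the
plain wrappers above pass):

	struct page_frag pfrag;
	void *va;

	/* probe for at least 128 bytes starting at a 64-byte-aligned
	 * offset; align_mask == -64 (i.e. ~63u) rounds the current
	 * offset up to the next 64-byte boundary before the size check
	 */
	va = __page_frag_alloc_refill_probe_align(nc, 128, &pfrag, -64);
	if (va) {
		/* pfrag.offset is 64-byte aligned and pfrag.size (at
		 * least 128) bytes remain in pfrag.page; nothing is
		 * consumed from the cache until a later commit
		 */
	}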
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index 8c3cfdbe8c2b..ae40520d452a 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -116,6 +116,41 @@ unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc,
 }
 EXPORT_SYMBOL(__page_frag_cache_commit_noref);
 
+/**
+ * __page_frag_alloc_refill_probe_align() - Probe allocating a fragment and
+ * refilling a page_frag with an aligning requirement.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ * @align_mask: the requested aligning requirement for the fragment
+ *
+ * Probe allocating a fragment and refilling a page_frag from the page_frag
+ * cache with an aligning requirement.
+ *
+ * Return:
+ * the virtual address of the page fragment, otherwise NULL.
+ */
+void *__page_frag_alloc_refill_probe_align(struct page_frag_cache *nc,
+                                           unsigned int fragsz,
+                                           struct page_frag *pfrag,
+                                           unsigned int align_mask)
+{
+	unsigned long encoded_page = nc->encoded_page;
+	unsigned int size, offset;
+
+	size = PAGE_SIZE << encoded_page_decode_order(encoded_page);
+	offset = __ALIGN_KERNEL_MASK(nc->offset, ~align_mask);
+	if (unlikely(!encoded_page || offset + fragsz > size))
+		return NULL;
+
+	pfrag->page = encoded_page_decode_page(encoded_page);
+	pfrag->size = size - offset;
+	pfrag->offset = offset;
+
+	return encoded_page_decode_virt(encoded_page) + offset;
+}
+EXPORT_SYMBOL(__page_frag_alloc_refill_probe_align);
+
 void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz,
 				struct page_frag *pfrag, gfp_t gfp_mask,
 				unsigned int align_mask)
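Since the probe only reads nc and consumes nothing until a later commit,
the alignment and bounds arithmetic above is easy to model outside the
kernel. A standalone user-space sketch (an illustrative simplification
that assumes a plain 4K page rather than the encoded_page packing):

#include <stdio.h>

#define MODEL_PAGE_SIZE 4096u

/* mirrors __ALIGN_KERNEL_MASK(x, mask): round x up to a multiple of
 * (mask + 1), where mask == ~align_mask
 */
static unsigned int align_up(unsigned int x, unsigned int align_mask)
{
	unsigned int mask = ~align_mask;

	return (x + mask) & ~mask;
}

/* mirrors the probe's failure check: the aligned fragment must still fit */
static int probe_fits(unsigned int offset, unsigned int fragsz,
		      unsigned int align_mask)
{
	return align_up(offset, align_mask) + fragsz <= MODEL_PAGE_SIZE;
}

int main(void)
{
	/* offset 100 rounds up to the 64-byte boundary at 128, and
	 * 128 + 200 <= 4096, so this probe would succeed
	 */
	printf("aligned=%u fits=%d\n",
	       align_up(100, -64), probe_fits(100, 200, -64));

	/* near the end of the page the same request fails: 3972 rounds
	 * up to 4032 and 4032 + 200 > 4096
	 */
	printf("aligned=%u fits=%d\n",
	       align_up(3972, -64), probe_fits(3972, 200, -64));
	return 0;
}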