From patchwork Wed Jul 31 12:45:01 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13748713
From: Yunsheng Lin <linyunsheng@huawei.com>
CC: Yunsheng Lin, Alexander Duyck, Andrew Morton
Subject: [PATCH net-next v12 11/14] mm: page_frag: introduce prepare/probe/commit API
Date: Wed, 31 Jul 2024 20:45:01 +0800
Message-ID: <20240731124505.2903877-12-linyunsheng@huawei.com>
In-Reply-To: <20240731124505.2903877-1-linyunsheng@huawei.com>
References: <20240731124505.2903877-1-linyunsheng@huawei.com>
There are many use cases that need a minimum amount of memory in order
to make forward progress, but which perform better if more memory is
available, or which need to probe the cache info so that any available
memory can be used for frag coalescing.
Currently the skb_page_frag_refill() API is used to handle the above
use cases, but the caller needs to know about the internal details and
access the data fields of 'struct page_frag' directly, and its
implementation is similar to the one in the mm subsystem.

To unify those two page_frag implementations, introduce a prepare API
to ensure that the minimum amount of memory is satisfied and report
how much memory is actually available to the caller, and a probe API
to report the currently available memory to the caller without
refilling the cache. The caller then either calls the commit API to
report how much memory it actually used, or skips the commit if it
decided not to use any memory.

CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
---
 include/linux/page_frag_cache.h |  75 ++++++++++++++++
 mm/page_frag_cache.c            | 152 ++++++++++++++++++++++++++++----
 2 files changed, 212 insertions(+), 15 deletions(-)

diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index 0abffdd10a1c..ba5d7f8a03cd 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -7,6 +7,8 @@
 #include
 #include
 #include
+#include
+#include
 #include
 
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
@@ -67,6 +69,9 @@ static inline unsigned int page_frag_cache_page_size(unsigned long encoded_va)
 
 void page_frag_cache_drain(struct page_frag_cache *nc);
 void __page_frag_cache_drain(struct page *page, unsigned int count);
+struct page *page_frag_alloc_pg(struct page_frag_cache *nc,
+				unsigned int *offset, unsigned int fragsz,
+				gfp_t gfp);
 void *__page_frag_alloc_va_align(struct page_frag_cache *nc,
 				 unsigned int fragsz, gfp_t gfp_mask,
 				 unsigned int align_mask);
@@ -79,12 +84,82 @@ static inline void *page_frag_alloc_va_align(struct page_frag_cache *nc,
 	return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, -align);
 }
 
+static inline unsigned int page_frag_cache_page_offset(const struct page_frag_cache *nc)
+{
+	return page_frag_cache_page_size(nc->encoded_va) - nc->remaining;
+}
+
 static inline void *page_frag_alloc_va(struct page_frag_cache *nc,
 				       unsigned int fragsz, gfp_t gfp_mask)
 {
 	return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, ~0u);
 }
 
+void *page_frag_alloc_va_prepare(struct page_frag_cache *nc, unsigned int *fragsz,
+				 gfp_t gfp);
+
+static inline void *page_frag_alloc_va_prepare_align(struct page_frag_cache *nc,
+						     unsigned int *fragsz,
+						     gfp_t gfp,
+						     unsigned int align)
+{
+	WARN_ON_ONCE(!is_power_of_2(align) || align > PAGE_SIZE);
+	nc->remaining = nc->remaining & -align;
+	return page_frag_alloc_va_prepare(nc, fragsz, gfp);
+}
+
+struct page *page_frag_alloc_pg_prepare(struct page_frag_cache *nc,
+					unsigned int *offset,
+					unsigned int *fragsz, gfp_t gfp);
+
+struct page *page_frag_alloc_prepare(struct page_frag_cache *nc,
+				     unsigned int *offset,
+				     unsigned int *fragsz,
+				     void **va, gfp_t gfp);
+
+static inline struct page *page_frag_alloc_probe(struct page_frag_cache *nc,
+						 unsigned int *offset,
+						 unsigned int *fragsz,
+						 void **va)
+{
+	unsigned long encoded_va = nc->encoded_va;
+	struct page *page;
+
+	VM_BUG_ON(!*fragsz);
+	if (unlikely(nc->remaining < *fragsz))
+		return NULL;
+
+	*va = encoded_page_address(encoded_va);
+	page = virt_to_page(*va);
+	*fragsz = nc->remaining;
+	*offset = page_frag_cache_page_size(encoded_va) - *fragsz;
+	*va += *offset;
+
+	return page;
+}
+
+static inline void page_frag_alloc_commit(struct page_frag_cache *nc,
+					  unsigned int fragsz)
+{
+	VM_BUG_ON(fragsz > nc->remaining || !nc->pagecnt_bias);
+	nc->pagecnt_bias--;
+	nc->remaining -= fragsz;
+}
+
+static inline void page_frag_alloc_commit_noref(struct page_frag_cache *nc,
+						unsigned int fragsz)
+{
+	VM_BUG_ON(fragsz > nc->remaining);
+	nc->remaining -= fragsz;
+}
+
+static inline void page_frag_alloc_abort(struct page_frag_cache *nc,
+					 unsigned int fragsz)
+{
+	nc->pagecnt_bias++;
+	nc->remaining += fragsz;
+}
+
 void page_frag_free_va(void *addr);
 
 #endif
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index
a24d6d5278d1..6a21d710c0e2 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -19,27 +19,27 @@
 #include
 #include "internal.h"
 
-static bool __page_frag_cache_reuse(unsigned long encoded_va,
-				    unsigned int pagecnt_bias)
+static struct page *__page_frag_cache_reuse(unsigned long encoded_va,
+					    unsigned int pagecnt_bias)
 {
 	struct page *page;
 
 	page = virt_to_page((void *)encoded_va);
 	if (!page_ref_sub_and_test(page, pagecnt_bias))
-		return false;
+		return NULL;
 
 	if (unlikely(encoded_page_pfmemalloc(encoded_va))) {
 		free_unref_page(page, encoded_page_order(encoded_va));
-		return false;
+		return NULL;
 	}
 
 	/* OK, page count is 0, we can safely set it */
 	set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
-	return true;
+	return page;
 }
 
-static bool __page_frag_cache_refill(struct page_frag_cache *nc,
-				     gfp_t gfp_mask)
+static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
+					     gfp_t gfp_mask)
 {
 	unsigned long order = PAGE_FRAG_CACHE_MAX_ORDER;
 	struct page *page = NULL;
@@ -55,7 +55,7 @@ static bool __page_frag_cache_refill(struct page_frag_cache *nc,
 	page = __alloc_pages(gfp, 0, numa_mem_id(), NULL);
 	if (unlikely(!page)) {
 		memset(nc, 0, sizeof(*nc));
-		return false;
+		return NULL;
 	}
 
 	order = 0;
@@ -69,29 +69,151 @@ static bool __page_frag_cache_refill(struct page_frag_cache *nc,
 	 */
 	page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
 
-	return true;
+	return page;
 }
 
 /* Reload cache by reusing the old cache if it is possible, or
  * refilling from the page allocator.
  */
-static bool __page_frag_cache_reload(struct page_frag_cache *nc,
-				     gfp_t gfp_mask)
+static struct page *__page_frag_cache_reload(struct page_frag_cache *nc,
+					     gfp_t gfp_mask)
 {
+	struct page *page;
+
 	if (likely(nc->encoded_va)) {
-		if (__page_frag_cache_reuse(nc->encoded_va, nc->pagecnt_bias))
+		page = __page_frag_cache_reuse(nc->encoded_va, nc->pagecnt_bias);
+		if (page)
 			goto out;
 	}
 
-	if (unlikely(!__page_frag_cache_refill(nc, gfp_mask)))
-		return false;
+	page = __page_frag_cache_refill(nc, gfp_mask);
+	if (unlikely(!page))
+		return NULL;
 
 out:
 	/* reset page count bias and remaining to start of new frag */
 	nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
 	nc->remaining = page_frag_cache_page_size(nc->encoded_va);
-	return true;
+	return page;
+}
+
+void *page_frag_alloc_va_prepare(struct page_frag_cache *nc,
+				 unsigned int *fragsz, gfp_t gfp)
+{
+	unsigned int remaining = nc->remaining;
+
+	VM_BUG_ON(!*fragsz);
+	if (likely(remaining >= *fragsz)) {
+		unsigned long encoded_va = nc->encoded_va;
+
+		*fragsz = remaining;
+
+		return encoded_page_address(encoded_va) +
+				(page_frag_cache_page_size(encoded_va) - remaining);
+	}
+
+	if (unlikely(*fragsz > PAGE_SIZE))
+		return NULL;
+
+	/* When reload fails, nc->encoded_va and nc->remaining are both reset
+	 * to zero, so there is no need to check the return value here.
+	 */
+	__page_frag_cache_reload(nc, gfp);
+
+	*fragsz = nc->remaining;
+	return encoded_page_address(nc->encoded_va);
+}
+EXPORT_SYMBOL(page_frag_alloc_va_prepare);
+
+struct page *page_frag_alloc_pg_prepare(struct page_frag_cache *nc,
+					unsigned int *offset,
+					unsigned int *fragsz, gfp_t gfp)
+{
+	unsigned int remaining = nc->remaining;
+	struct page *page;
+
+	VM_BUG_ON(!*fragsz);
+	if (likely(remaining >= *fragsz)) {
+		unsigned long encoded_va = nc->encoded_va;
+
+		*offset = page_frag_cache_page_size(encoded_va) - remaining;
+		*fragsz = remaining;
+
+		return virt_to_page((void *)encoded_va);
+	}
+
+	if (unlikely(*fragsz > PAGE_SIZE))
+		return NULL;
+
+	page = __page_frag_cache_reload(nc, gfp);
+	*offset = 0;
+	*fragsz = nc->remaining;
+	return page;
+}
+EXPORT_SYMBOL(page_frag_alloc_pg_prepare);
+
+struct page *page_frag_alloc_prepare(struct page_frag_cache *nc,
+				     unsigned int *offset,
+				     unsigned int *fragsz,
+				     void **va, gfp_t gfp)
+{
+	unsigned int remaining = nc->remaining;
+	struct page *page;
+
+	VM_BUG_ON(!*fragsz);
+	if (likely(remaining >= *fragsz)) {
+		unsigned long encoded_va = nc->encoded_va;
+
+		*offset = page_frag_cache_page_size(encoded_va) - remaining;
+		*va = encoded_page_address(encoded_va) + *offset;
+		*fragsz = remaining;
+
+		return virt_to_page((void *)encoded_va);
+	}
+
+	if (unlikely(*fragsz > PAGE_SIZE))
+		return NULL;
+
+	page = __page_frag_cache_reload(nc, gfp);
+	*offset = 0;
+	*fragsz = nc->remaining;
+	*va = encoded_page_address(nc->encoded_va);
+
+	return page;
+}
+EXPORT_SYMBOL(page_frag_alloc_prepare);
+
+struct page *page_frag_alloc_pg(struct page_frag_cache *nc,
+				unsigned int *offset, unsigned int fragsz,
+				gfp_t gfp)
+{
+	unsigned int remaining = nc->remaining;
+	struct page *page;
+
+	VM_BUG_ON(!fragsz);
+	if (likely(remaining >= fragsz)) {
+		unsigned long encoded_va = nc->encoded_va;
+
+		*offset = page_frag_cache_page_size(encoded_va) - remaining;
+
+		return virt_to_page((void *)encoded_va);
+	}
+
+	if (unlikely(fragsz > PAGE_SIZE))
+		return NULL;
+
+	page = __page_frag_cache_reload(nc, gfp);
+	if (unlikely(!page))
+		return NULL;
+
+	*offset = 0;
+	nc->remaining = remaining - fragsz;
+	nc->pagecnt_bias--;
+
+	return page;
 }
+EXPORT_SYMBOL(page_frag_alloc_pg);
 
 void page_frag_cache_drain(struct page_frag_cache *nc)
 {