From patchwork Fri Aug 23 15:00:36 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13775390
From: Yunsheng Lin
To: , ,
CC: , , Yunsheng Lin, Alexander Duyck, Andrew Morton,
Subject: [PATCH net-next v14 08/11] mm: page_frag: introduce prepare/probe/commit API
Date: Fri, 23 Aug 2024 23:00:36 +0800
Message-ID: <20240823150040.1567062-9-linyunsheng@huawei.com>
In-Reply-To: <20240823150040.1567062-1-linyunsheng@huawei.com>
References: <20240823150040.1567062-1-linyunsheng@huawei.com>
MIME-Version: 1.0
There are many use cases that need a minimum amount of memory in order to make
forward progress, but perform better if more memory is available, or need to
probe the cache info in order to use any available memory for frag coalescing
reasons.
Currently the skb_page_frag_refill() API is used to handle the above use
cases, but the caller needs to know about the internal details and access
the data fields of 'struct page_frag' to meet the requirements of the above
use cases, and its implementation is similar to the one in the mm subsystem.

To unify those two page_frag implementations, introduce a prepare API to
ensure the minimum amount of memory is satisfied and report how much memory
is actually available to the caller, and a probe API to report the currently
available memory to the caller without refilling the cache. The caller then
needs to either call the commit API to report how much memory it actually
used, or skip the commit if it decides not to use any memory.

CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
---
 include/linux/page_frag_cache.h | 138 ++++++++++++++++++++++++++++++++
 1 file changed, 138 insertions(+)

diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index 1ed0d1b7014b..5a064f74eac2 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -73,6 +73,11 @@ static inline unsigned int page_frag_cache_page_size(unsigned long encoded_page)
 	return PAGE_SIZE << page_pool_encoded_page_order(encoded_page);
 }
 
+static inline unsigned int page_frag_cache_page_offset(const struct page_frag_cache *nc)
+{
+	return nc->offset;
+}
+
 void page_frag_cache_drain(struct page_frag_cache *nc);
 void __page_frag_cache_drain(struct page *page, unsigned int count);
 void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz,
@@ -126,6 +131,139 @@ static inline void *page_frag_alloc(struct page_frag_cache *nc,
 	return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
 }
 
+static inline bool __page_frag_refill_align(struct page_frag_cache *nc, unsigned int fragsz,
+					    struct page_frag *pfrag, gfp_t gfp_mask,
+					    unsigned int align_mask)
+{
+	if (unlikely(!__page_frag_cache_prepare(nc, fragsz, pfrag, gfp_mask, align_mask)))
+		return false;
+
+	__page_frag_cache_commit(nc, pfrag, true, fragsz);
+	return true;
+}
+
+static inline bool page_frag_refill_align(struct page_frag_cache *nc, unsigned int fragsz,
+					  struct page_frag *pfrag, gfp_t gfp_mask,
+					  unsigned int align)
+{
+	WARN_ON_ONCE(!is_power_of_2(align));
+	return __page_frag_refill_align(nc, fragsz, pfrag, gfp_mask, -align);
+}
+
+static inline bool page_frag_refill(struct page_frag_cache *nc, unsigned int fragsz,
+				    struct page_frag *pfrag, gfp_t gfp_mask)
+{
+	return __page_frag_refill_align(nc, fragsz, pfrag, gfp_mask, ~0u);
+}
+
+static inline bool __page_frag_refill_prepare_align(struct page_frag_cache *nc,
+						    unsigned int fragsz,
+						    struct page_frag *pfrag,
+						    gfp_t gfp_mask,
+						    unsigned int align_mask)
+{
+	return !!__page_frag_cache_prepare(nc, fragsz, pfrag, gfp_mask, align_mask);
+}
+
+static inline bool page_frag_refill_prepare_align(struct page_frag_cache *nc,
+						  unsigned int fragsz,
+						  struct page_frag *pfrag,
+						  gfp_t gfp_mask,
+						  unsigned int align)
+{
+	WARN_ON_ONCE(!is_power_of_2(align));
+	return __page_frag_refill_prepare_align(nc, fragsz, pfrag, gfp_mask, -align);
+}
+
+static inline bool page_frag_refill_prepare(struct page_frag_cache *nc,
+					    unsigned int fragsz,
+					    struct page_frag *pfrag,
+					    gfp_t gfp_mask)
+{
+	return __page_frag_refill_prepare_align(nc, fragsz, pfrag, gfp_mask, ~0u);
+}
+
+static inline void *__page_frag_alloc_refill_prepare_align(struct page_frag_cache *nc,
+							   unsigned int fragsz,
+							   struct page_frag *pfrag,
+							   gfp_t gfp_mask,
+							   unsigned int align_mask)
+{
+	return __page_frag_cache_prepare(nc, fragsz, pfrag, gfp_mask, align_mask);
+}
+
+static inline void *page_frag_alloc_refill_prepare_align(struct page_frag_cache *nc,
+							 unsigned int fragsz,
+							 struct page_frag *pfrag,
+							 gfp_t gfp_mask,
+							 unsigned int align)
+{
+	WARN_ON_ONCE(!is_power_of_2(align));
+	return __page_frag_alloc_refill_prepare_align(nc, fragsz, pfrag, gfp_mask, -align);
+}
+
+static inline void *page_frag_alloc_refill_prepare(struct page_frag_cache *nc,
+						   unsigned int fragsz,
+						   struct page_frag *pfrag,
+						   gfp_t gfp_mask)
+{
+	return __page_frag_alloc_refill_prepare_align(nc, fragsz, pfrag, gfp_mask, ~0u);
+}
+
+static inline void *__page_frag_alloc_refill_probe_align(struct page_frag_cache *nc,
+							 unsigned int fragsz,
+							 struct page_frag *pfrag,
+							 unsigned int align_mask)
+{
+	unsigned long encoded_page = nc->encoded_page;
+	unsigned int size, offset;
+
+	size = page_frag_cache_page_size(encoded_page);
+	offset = __ALIGN_KERNEL_MASK(nc->offset, ~align_mask);
+	if (unlikely(offset + fragsz > size))
+		return NULL;
+
+	pfrag->page = page_pool_encoded_page_ptr(encoded_page);
+	pfrag->size = size - offset;
+	pfrag->offset = offset;
+
+	return page_pool_encoded_page_address(encoded_page) + offset;
+}
+
+static inline void *page_frag_alloc_refill_probe(struct page_frag_cache *nc,
+						 unsigned int fragsz,
+						 struct page_frag *pfrag)
+{
+	return __page_frag_alloc_refill_probe_align(nc, fragsz, pfrag, ~0u);
+}
+
+static inline bool page_frag_refill_probe(struct page_frag_cache *nc,
+					  unsigned int fragsz,
+					  struct page_frag *pfrag)
+{
+	return !!page_frag_alloc_refill_probe(nc, fragsz, pfrag);
+}
+
+static inline void page_frag_commit(struct page_frag_cache *nc, struct page_frag *pfrag,
+				    unsigned int used_sz)
+{
+	__page_frag_cache_commit(nc, pfrag, true, used_sz);
+}
+
+static inline void page_frag_commit_noref(struct page_frag_cache *nc,
+					  struct page_frag *pfrag, unsigned int used_sz)
+{
+	__page_frag_cache_commit(nc, pfrag, false, used_sz);
+}
+
+static inline void page_frag_alloc_abort(struct page_frag_cache *nc, unsigned int fragsz)
+{
+	VM_BUG_ON(fragsz > nc->offset);
+
+	nc->pagecnt_bias++;
+	nc->offset -= fragsz;
+}
+
 void page_frag_free(void *addr);
 
 #endif