From patchwork Mon Sep 2 12:03:09 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13787184
From: Yunsheng Lin
CC: Yunsheng Lin, Alexander Duyck, Andrew Morton
Subject: [PATCH net-next v17 10/14] mm: page_frag: introduce prepare/probe/commit API
Date: Mon, 2 Sep 2024 20:03:09 +0800
Message-ID: <20240902120314.508180-11-linyunsheng@huawei.com>
X-Mailer: git-send-email 2.30.0
In-Reply-To: <20240902120314.508180-1-linyunsheng@huawei.com>
References: <20240902120314.508180-1-linyunsheng@huawei.com>

There are many use cases that need a minimum amount of memory in order
to make forward progress, but can perform better if more memory is
available, or that need to probe the cache info in order to use any
memory already available, e.g. for frag coalescing.

Currently the skb_page_frag_refill() API is used to handle the above
use cases, but the caller needs to know about the internal details and
access the data fields of 'struct page_frag' to meet those
requirements, and its implementation is similar to the one in the mm
subsystem.

To unify those two page_frag implementations, introduce a prepare API
to ensure the minimum memory is satisfied and return how much memory
is actually available to the caller, and a probe API to report the
currently available memory to the caller without refilling the cache.
The caller then either calls the commit API to report how much memory
it actually used, or skips the commit if it decides not to use any
memory.
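As a rough illustration, here is a minimal sketch of the prepare/commit
calling pattern introduced below; MIN_SZ and produce_data() are
hypothetical stand-ins for the caller's own sizing and fill logic:

	struct page_frag pfrag;
	unsigned int sz;
	void *va;

	/* Prepare: ensure at least MIN_SZ bytes are available and
	 * report the whole remainder of the cache via va/pfrag.
	 */
	va = page_frag_alloc_refill_prepare(nc, MIN_SZ, &pfrag, GFP_KERNEL);
	if (!va)
		return -ENOMEM;

	/* Use anywhere between MIN_SZ and pfrag.size bytes. */
	sz = produce_data(va, pfrag.size);

	/* Commit: report back how much memory was actually used,
	 * taking a reference on the page for the data just written.
	 */
	page_frag_commit(nc, &pfrag, sz);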
CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
---
 include/linux/page_frag_cache.h | 151 ++++++++++++++++++++++++++++++++
 1 file changed, 151 insertions(+)

diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index 24835ec8c891..16cc94755dd3 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -61,6 +61,11 @@ static inline unsigned int page_frag_cache_page_size(unsigned long encoded_page)
 	return PAGE_SIZE << page_frag_encoded_page_order(encoded_page);
 }
 
+static inline unsigned int page_frag_cache_page_offset(const struct page_frag_cache *nc)
+{
+	return nc->offset;
+}
+
 void page_frag_cache_drain(struct page_frag_cache *nc);
 void __page_frag_cache_drain(struct page *page, unsigned int count);
 void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz,
@@ -126,6 +131,152 @@ static inline void *page_frag_alloc(struct page_frag_cache *nc,
 	return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
 }
 
+static inline bool __page_frag_refill_align(struct page_frag_cache *nc,
+					    unsigned int fragsz,
+					    struct page_frag *pfrag,
+					    gfp_t gfp_mask,
+					    unsigned int align_mask)
+{
+	if (unlikely(!__page_frag_cache_prepare(nc, fragsz, pfrag, gfp_mask,
+						align_mask)))
+		return false;
+
+	__page_frag_cache_commit(nc, pfrag, true, fragsz);
+	return true;
+}
+
+static inline bool page_frag_refill_align(struct page_frag_cache *nc,
+					  unsigned int fragsz,
+					  struct page_frag *pfrag,
+					  gfp_t gfp_mask, unsigned int align)
+{
+	WARN_ON_ONCE(!is_power_of_2(align));
+	return __page_frag_refill_align(nc, fragsz, pfrag, gfp_mask, -align);
+}
+
+static inline bool page_frag_refill(struct page_frag_cache *nc,
+				    unsigned int fragsz,
+				    struct page_frag *pfrag, gfp_t gfp_mask)
+{
+	return __page_frag_refill_align(nc, fragsz, pfrag, gfp_mask, ~0u);
+}
+
+static inline bool __page_frag_refill_prepare_align(struct page_frag_cache *nc,
+						    unsigned int fragsz,
+						    struct page_frag *pfrag,
+						    gfp_t gfp_mask,
+						    unsigned int align_mask)
+{
+	return !!__page_frag_cache_prepare(nc, fragsz, pfrag, gfp_mask,
+					   align_mask);
+}
+
+static inline bool page_frag_refill_prepare_align(struct page_frag_cache *nc,
+						  unsigned int fragsz,
+						  struct page_frag *pfrag,
+						  gfp_t gfp_mask,
+						  unsigned int align)
+{
+	WARN_ON_ONCE(!is_power_of_2(align));
+	return __page_frag_refill_prepare_align(nc, fragsz, pfrag, gfp_mask,
+						-align);
+}
+
+static inline bool page_frag_refill_prepare(struct page_frag_cache *nc,
+					    unsigned int fragsz,
+					    struct page_frag *pfrag,
+					    gfp_t gfp_mask)
+{
+	return __page_frag_refill_prepare_align(nc, fragsz, pfrag, gfp_mask,
+						~0u);
+}
+
+static inline void *__page_frag_alloc_refill_prepare_align(struct page_frag_cache *nc,
+							   unsigned int fragsz,
+							   struct page_frag *pfrag,
+							   gfp_t gfp_mask,
+							   unsigned int align_mask)
+{
+	return __page_frag_cache_prepare(nc, fragsz, pfrag, gfp_mask, align_mask);
+}
+
+static inline void *page_frag_alloc_refill_prepare_align(struct page_frag_cache *nc,
+							 unsigned int fragsz,
+							 struct page_frag *pfrag,
+							 gfp_t gfp_mask,
+							 unsigned int align)
+{
+	WARN_ON_ONCE(!is_power_of_2(align));
+	return __page_frag_alloc_refill_prepare_align(nc, fragsz, pfrag,
+						      gfp_mask, -align);
+}
+
+static inline void *page_frag_alloc_refill_prepare(struct page_frag_cache *nc,
+						   unsigned int fragsz,
+						   struct page_frag *pfrag,
+						   gfp_t gfp_mask)
+{
+	return __page_frag_alloc_refill_prepare_align(nc, fragsz, pfrag,
+						      gfp_mask, ~0u);
+}
+
+static inline void *__page_frag_alloc_refill_probe_align(struct page_frag_cache *nc,
+							 unsigned int fragsz,
+							 struct page_frag *pfrag,
+							 unsigned int align_mask)
+{
+	unsigned long encoded_page = nc->encoded_page;
+	unsigned int size, offset;
+
+	size = page_frag_cache_page_size(encoded_page);
+	offset = __ALIGN_KERNEL_MASK(nc->offset, ~align_mask);
+	if (unlikely(!encoded_page || offset + fragsz > size))
+		return NULL;
+
+	pfrag->page = page_frag_encoded_page_ptr(encoded_page);
+	pfrag->size = size - offset;
+	pfrag->offset = offset;
+
+	return page_frag_encoded_page_address(encoded_page) + offset;
+}
+
+static inline void *page_frag_alloc_refill_probe(struct page_frag_cache *nc,
+						 unsigned int fragsz,
+						 struct page_frag *pfrag)
+{
+	return __page_frag_alloc_refill_probe_align(nc, fragsz, pfrag, ~0u);
+}
+
+static inline bool page_frag_refill_probe(struct page_frag_cache *nc,
+					  unsigned int fragsz,
+					  struct page_frag *pfrag)
+{
+	return !!page_frag_alloc_refill_probe(nc, fragsz, pfrag);
+}
+
+static inline void page_frag_commit(struct page_frag_cache *nc,
+				    struct page_frag *pfrag,
+				    unsigned int used_sz)
+{
+	__page_frag_cache_commit(nc, pfrag, true, used_sz);
+}
+
+static inline void page_frag_commit_noref(struct page_frag_cache *nc,
+					  struct page_frag *pfrag,
+					  unsigned int used_sz)
+{
+	__page_frag_cache_commit(nc, pfrag, false, used_sz);
+}
+
+static inline void page_frag_alloc_abort(struct page_frag_cache *nc,
+					 unsigned int fragsz)
+{
+	VM_BUG_ON(fragsz > nc->offset);
+
+	nc->pagecnt_bias++;
+	nc->offset -= fragsz;
+}
+
 void page_frag_free(void *addr);
 
 #endif
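
For completeness, a sketch of the probe path described in the commit
message, assuming the caller is coalescing 'fragsz' bytes of 'data'
into a frag for which it already holds a page reference (data and
fragsz are hypothetical caller-side variables):

	struct page_frag pfrag;
	void *va;

	/* Probe: check whether the cache can satisfy fragsz bytes
	 * as-is, without triggering a refill.
	 */
	va = page_frag_alloc_refill_probe(nc, fragsz, &pfrag);
	if (va) {
		memcpy(va, data, fragsz);
		/* Commit without taking a new page reference, as the
		 * frag being extended already holds one.
		 */
		page_frag_commit_noref(nc, &pfrag, fragsz);
	}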