From patchwork Fri Sep 6 07:36:38 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13793531
From: Yunsheng Lin <linyunsheng@huawei.com>
Subject: [PATCH net-next v18 06/14] mm: page_frag: reuse existing space for 'size' and 'pfmemalloc'
Date: Fri, 6 Sep 2024 15:36:38 +0800
Message-ID: <20240906073646.2930809-7-linyunsheng@huawei.com>
In-Reply-To: <20240906073646.2930809-1-linyunsheng@huawei.com>
References: <20240906073646.2930809-1-linyunsheng@huawei.com>

Currently there is one 'struct page_frag' for every 'struct sock' and
'struct task_struct'; we are about to replace that 'struct page_frag'
with 'struct page_frag_cache' for both of them. Before starting the
replacement, we need to ensure that 'struct page_frag_cache' is no
bigger than 'struct page_frag', as there may be tens of thousands of
'struct sock' and 'struct task_struct' instances in a system.

By OR'ing the page order and the pfmemalloc bit into the lower bits of
'va', instead of using a 'u16' or 'u32' for the page size and a 'u8'
for pfmemalloc, we avoid wasting 3 or 5 bytes of space. And since the
page address, pfmemalloc bit and order stay the same for a given page
within a 'page_frag_cache' instance, it makes sense to pack them
together.

After this patch, the size of 'struct page_frag_cache' should be the
same as the size of 'struct page_frag'.
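To make the bit layout concrete, below is a minimal stand-alone sketch
of the encoding idea (editor's illustration, not part of the patch: the
SKETCH_* names, the 4K page-shift value and the sample address are
made up; the real masks are defined in page_frag_cache.h in the diff):

  #include <assert.h>
  #include <stdbool.h>
  #include <stdint.h>

  /* A page-aligned virtual address has its low PAGE_SHIFT bits clear, so
   * the page order (low byte) and the pfmemalloc flag (bit 8) can be
   * stashed in those bits and masked back out on decode.
   */
  #define SKETCH_PAGE_SHIFT	12
  #define SKETCH_PAGE_MASK	(~((1UL << SKETCH_PAGE_SHIFT) - 1))
  #define SKETCH_ORDER_MASK	0xffUL		/* low byte: page order */
  #define SKETCH_PFMEMALLOC_BIT	(1UL << 8)	/* next bit: pfmemalloc */

  static unsigned long sketch_encode(void *va, unsigned long order,
  				   bool pfmemalloc)
  {
  	/* 'va' must be page aligned so the low bits are free for flags */
  	return (uintptr_t)va | (order & SKETCH_ORDER_MASK) |
  	       ((unsigned long)pfmemalloc << 8);
  }

  int main(void)
  {
  	void *va = (void *)(uintptr_t)0x100000UL;	/* pretend page address */
  	unsigned long enc = sketch_encode(va, 3, true);

  	assert((void *)(enc & SKETCH_PAGE_MASK) == va);	/* address survives */
  	assert((enc & SKETCH_ORDER_MASK) == 3);		/* order survives */
  	assert(enc & SKETCH_PFMEMALLOC_BIT);		/* pfmemalloc survives */
  	return 0;
  }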
CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
---
 include/linux/mm_types_task.h   | 19 +++++----
 include/linux/page_frag_cache.h | 26 +++++++++++-
 mm/page_frag_cache.c            | 75 +++++++++++++++++++++++----------
 3 files changed, 88 insertions(+), 32 deletions(-)

diff --git a/include/linux/mm_types_task.h b/include/linux/mm_types_task.h
index cdc1e3696439..73a574a0e8f9 100644
--- a/include/linux/mm_types_task.h
+++ b/include/linux/mm_types_task.h
@@ -50,18 +50,21 @@ struct page_frag {
 #define PAGE_FRAG_CACHE_MAX_SIZE	__ALIGN_MASK(32768, ~PAGE_MASK)
 #define PAGE_FRAG_CACHE_MAX_ORDER	get_order(PAGE_FRAG_CACHE_MAX_SIZE)
 struct page_frag_cache {
-	void *va;
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+	/* encoded_page consists of the virtual address, pfmemalloc bit and
+	 * order of a page.
+	 */
+	unsigned long encoded_page;
+
+	/* we maintain a pagecount bias, so that we dont dirty cache line
+	 * containing page->_refcount every time we allocate a fragment.
+	 */
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) && (BITS_PER_LONG <= 32)
 	__u16 offset;
-	__u16 size;
+	__u16 pagecnt_bias;
 #else
 	__u32 offset;
+	__u32 pagecnt_bias;
 #endif
-	/* we maintain a pagecount bias, so that we dont dirty cache line
-	 * containing page->_refcount every time we allocate a fragment.
-	 */
-	unsigned int pagecnt_bias;
-	bool pfmemalloc;
 };
 
 /* Track pages that require TLB flushes */
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index 0a52f7a179c8..75aaad6eaea2 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -3,18 +3,40 @@
 #ifndef _LINUX_PAGE_FRAG_CACHE_H
 #define _LINUX_PAGE_FRAG_CACHE_H
 
+#include <linux/bits.h>
 #include <linux/log2.h>
 #include <linux/mm_types_task.h>
 #include <linux/types.h>
 
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+/* Use a full byte here to enable assembler optimization as the shift
+ * operation is usually expecting a byte.
+ */
+#define PAGE_FRAG_CACHE_ORDER_MASK		GENMASK(7, 0)
+#define PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT	8
+#define PAGE_FRAG_CACHE_PFMEMALLOC_BIT		BIT(PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT)
+#else
+/* Compiler should be able to figure out we don't read things as any value
+ * ANDed with 0 is 0.
+ */
+#define PAGE_FRAG_CACHE_ORDER_MASK		0
+#define PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT	0
+#define PAGE_FRAG_CACHE_PFMEMALLOC_BIT		BIT(PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT)
+#endif
+
+static inline bool page_frag_encoded_page_pfmemalloc(unsigned long encoded_page)
+{
+	return !!(encoded_page & PAGE_FRAG_CACHE_PFMEMALLOC_BIT);
+}
+
 static inline void page_frag_cache_init(struct page_frag_cache *nc)
 {
-	nc->va = NULL;
+	nc->encoded_page = 0;
 }
 
 static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)
 {
-	return !!nc->pfmemalloc;
+	return page_frag_encoded_page_pfmemalloc(nc->encoded_page);
 }
 
 void page_frag_cache_drain(struct page_frag_cache *nc);
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index 4c8e04379cb3..cf9375a81a64 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -12,6 +12,7 @@
  * be used in the "frags" portion of skb_shared_info.
  */
 
+#include <linux/build_bug.h>
 #include <linux/export.h>
 #include <linux/gfp_types.h>
 #include <linux/init.h>
@@ -19,9 +20,41 @@
 #include <linux/page_frag_cache.h>
 #include "internal.h"
 
+static unsigned long page_frag_encode_page(struct page *page, unsigned int order,
+					   bool pfmemalloc)
+{
+	BUILD_BUG_ON(PAGE_FRAG_CACHE_MAX_ORDER > PAGE_FRAG_CACHE_ORDER_MASK);
+	BUILD_BUG_ON(PAGE_FRAG_CACHE_PFMEMALLOC_BIT >= PAGE_SIZE);
+
+	return (unsigned long)page_address(page) |
+	       (order & PAGE_FRAG_CACHE_ORDER_MASK) |
+	       ((unsigned long)pfmemalloc << PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT);
+}
+
+static unsigned long page_frag_encoded_page_order(unsigned long encoded_page)
+{
+	return encoded_page & PAGE_FRAG_CACHE_ORDER_MASK;
+}
+
+static void *page_frag_encoded_page_address(unsigned long encoded_page)
+{
+	return (void *)(encoded_page & PAGE_MASK);
+}
+
+static struct page *page_frag_encoded_page_ptr(unsigned long encoded_page)
+{
+	return virt_to_page((void *)encoded_page);
+}
+
+static unsigned int page_frag_cache_page_size(unsigned long encoded_page)
+{
+	return PAGE_SIZE << page_frag_encoded_page_order(encoded_page);
+}
+
 static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
 					     gfp_t gfp_mask)
 {
+	unsigned long order = PAGE_FRAG_CACHE_MAX_ORDER;
 	struct page *page = NULL;
 	gfp_t gfp = gfp_mask;
 
@@ -30,23 +63,26 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
 		   __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
 	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
 				PAGE_FRAG_CACHE_MAX_ORDER);
-	nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
 #endif
-	if (unlikely(!page))
+	if (unlikely(!page)) {
 		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
+		order = 0;
+	}
 
-	nc->va = page ? page_address(page) : NULL;
+	nc->encoded_page = page ?
+		page_frag_encode_page(page, order, page_is_pfmemalloc(page)) : 0;
 
 	return page;
 }
 
 void page_frag_cache_drain(struct page_frag_cache *nc)
 {
-	if (!nc->va)
+	if (!nc->encoded_page)
 		return;
 
-	__page_frag_cache_drain(virt_to_head_page(nc->va), nc->pagecnt_bias);
-	nc->va = NULL;
+	__page_frag_cache_drain(page_frag_encoded_page_ptr(nc->encoded_page),
+				nc->pagecnt_bias);
+	nc->encoded_page = 0;
 }
 EXPORT_SYMBOL(page_frag_cache_drain);
 
@@ -63,35 +99,29 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
 			      unsigned int fragsz, gfp_t gfp_mask,
 			      unsigned int align_mask)
 {
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-	unsigned int size = nc->size;
-#else
-	unsigned int size = PAGE_SIZE;
-#endif
-	unsigned int offset;
+	unsigned long encoded_page = nc->encoded_page;
+	unsigned int size, offset;
 	struct page *page;
 
-	if (unlikely(!nc->va)) {
+	if (unlikely(!encoded_page)) {
refill:
 		page = __page_frag_cache_refill(nc, gfp_mask);
 		if (!page)
 			return NULL;
 
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-		/* if size can vary use size else just use PAGE_SIZE */
-		size = nc->size;
-#endif
+		encoded_page = nc->encoded_page;
+
 		/* Even if we own the page, we do not use atomic_set().
 		 * This would break get_page_unless_zero() users.
 		 */
 		page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
 
 		/* reset page count bias and offset to start of new frag */
-		nc->pfmemalloc = page_is_pfmemalloc(page);
 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
 		nc->offset = 0;
 	}
 
+	size = page_frag_cache_page_size(encoded_page);
 	offset = __ALIGN_KERNEL_MASK(nc->offset, ~align_mask);
 	if (unlikely(offset + fragsz > size)) {
 		if (unlikely(fragsz > PAGE_SIZE)) {
@@ -107,13 +137,14 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
 			return NULL;
 		}
 
-		page = virt_to_page(nc->va);
+		page = page_frag_encoded_page_ptr(encoded_page);
 
 		if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
 			goto refill;
 
-		if (unlikely(nc->pfmemalloc)) {
-			free_unref_page(page, compound_order(page));
+		if (unlikely(page_frag_encoded_page_pfmemalloc(encoded_page))) {
+			free_unref_page(page,
+					page_frag_encoded_page_order(encoded_page));
 			goto refill;
 		}
 
@@ -128,7 +159,7 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
 	nc->pagecnt_bias--;
 	nc->offset = offset + fragsz;
 
-	return nc->va + offset;
+	return page_frag_encoded_page_address(encoded_page) + offset;
 }
 EXPORT_SYMBOL(__page_frag_alloc_align);
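Usage note (editor's sketch, not part of the patch): the functions kept
or introduced above are typically driven as below; the wrapper function
name, the 256-byte size, GFP_KERNEL and the 64-byte alignment are
illustrative only, and error handling is omitted.

  #include <linux/gfp.h>
  #include <linux/page_frag_cache.h>

  static void page_frag_cache_usage_sketch(void)
  {
  	struct page_frag_cache nc;
  	void *frag;

  	page_frag_cache_init(&nc);

  	/* carve a 256-byte fragment, 64-byte aligned (align_mask == ~63U) */
  	frag = __page_frag_alloc_align(&nc, 256, GFP_KERNEL, ~63U);
  	if (frag)
  		page_frag_free(frag);	/* drop the fragment's reference */

  	/* release the references still held via the encoded page */
  	page_frag_cache_drain(&nc);
  }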