From patchwork Tue Dec 5 11:34:39 2023
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13480075
From: Yunsheng Lin <linyunsheng@huawei.com>
CC: Yunsheng Lin, Alexander Duyck, Andrew Morton, Eric Dumazet
Subject: [PATCH net-next 1/6] mm/page_alloc: modify page_frag_alloc_align() to accept align as an argument
Date: Tue, 5 Dec 2023 19:34:39 +0800
Message-ID: <20231205113444.63015-2-linyunsheng@huawei.com>
In-Reply-To: <20231205113444.63015-1-linyunsheng@huawei.com>
References: <20231205113444.63015-1-linyunsheng@huawei.com>

napi_alloc_frag_align() and netdev_alloc_frag_align() take 'align' as an
argument, and they are thin wrappers around the __napi_alloc_frag_align()
and __netdev_alloc_frag_align() APIs that only convert that align into an
align_mask before calling page_frag_alloc_align().

As __napi_alloc_frag_align() and __netdev_alloc_frag_align() are used only
by those thin wrappers, it makes more sense to drop the align/align_mask
conversion and call page_frag_alloc_align() directly. Doing so also avoids
the confusion of napi_alloc_frag_align() taking an align argument while
page_frag_alloc_align() takes an align_mask, even though both carry the
'align' suffix.
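To make the conversion concrete, below is a small standalone userspace
sketch (illustration only, not part of the patch; the values and variable
names are made up) showing why masking with -align is equivalent to masking
with the align_mask the removed wrappers used to compute, as long as align
is a power of two:

#include <stdio.h>

int main(void)
{
	unsigned int align = 64;		/* must be a power of 2 */
	unsigned int align_mask = -align;	/* what the old wrappers passed down */
	unsigned int offset = 1000;

	/* both forms round offset down to a 64-byte boundary */
	printf("align_mask  = 0x%x\n", align_mask);		/* 0xffffffc0 */
	printf("old masking = %u\n", offset & align_mask);	/* 960 */
	printf("new masking = %u\n", offset & -align);		/* 960 */
	return 0;
}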
Signed-off-by: Yunsheng Lin
CC: Alexander Duyck
---
 include/linux/gfp.h    |  4 ++--
 include/linux/skbuff.h | 22 ++++------------------
 mm/page_alloc.c        |  6 ++++--
 net/core/skbuff.c      | 14 +++++++-------
 4 files changed, 17 insertions(+), 29 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index de292a007138..bbd75976541e 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -314,12 +314,12 @@ struct page_frag_cache;
 extern void __page_frag_cache_drain(struct page *page, unsigned int count);
 extern void *page_frag_alloc_align(struct page_frag_cache *nc,
 				   unsigned int fragsz, gfp_t gfp_mask,
-				   unsigned int align_mask);
+				   unsigned int align);
 
 static inline void *page_frag_alloc(struct page_frag_cache *nc,
 			     unsigned int fragsz, gfp_t gfp_mask)
 {
-	return page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
+	return page_frag_alloc_align(nc, fragsz, gfp_mask, 1);
 }
 
 extern void page_frag_free(void *addr);
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index b370eb8d70f7..095747c500b6 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -3194,7 +3194,7 @@ static inline void skb_queue_purge(struct sk_buff_head *list)
 unsigned int skb_rbtree_purge(struct rb_root *root);
 void skb_errqueue_purge(struct sk_buff_head *list);
 
-void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask);
+void *netdev_alloc_frag_align(unsigned int fragsz, unsigned int align);
 
 /**
  * netdev_alloc_frag - allocate a page fragment
@@ -3205,14 +3205,7 @@ void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask);
  */
 static inline void *netdev_alloc_frag(unsigned int fragsz)
 {
-	return __netdev_alloc_frag_align(fragsz, ~0u);
-}
-
-static inline void *netdev_alloc_frag_align(unsigned int fragsz,
-					    unsigned int align)
-{
-	WARN_ON_ONCE(!is_power_of_2(align));
-	return __netdev_alloc_frag_align(fragsz, -align);
+	return netdev_alloc_frag_align(fragsz, 1);
 }
 
 struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int length,
@@ -3272,18 +3265,11 @@ static inline void skb_free_frag(void *addr)
 	page_frag_free(addr);
 }
 
-void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align_mask);
+void *napi_alloc_frag_align(unsigned int fragsz, unsigned int align);
 
 static inline void *napi_alloc_frag(unsigned int fragsz)
 {
-	return __napi_alloc_frag_align(fragsz, ~0u);
-}
-
-static inline void *napi_alloc_frag_align(unsigned int fragsz,
-					  unsigned int align)
-{
-	WARN_ON_ONCE(!is_power_of_2(align));
-	return __napi_alloc_frag_align(fragsz, -align);
+	return napi_alloc_frag_align(fragsz, 1);
 }
 
 struct sk_buff *__napi_alloc_skb(struct napi_struct *napi,
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 37ca4f4b62bf..9a16305cf985 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4718,12 +4718,14 @@ EXPORT_SYMBOL(__page_frag_cache_drain);
 
 void *page_frag_alloc_align(struct page_frag_cache *nc,
 		      unsigned int fragsz, gfp_t gfp_mask,
-		      unsigned int align_mask)
+		      unsigned int align)
 {
 	unsigned int size = PAGE_SIZE;
 	struct page *page;
 	int offset;
 
+	WARN_ON_ONCE(!is_power_of_2(align));
+
 	if (unlikely(!nc->va)) {
 refill:
 		page = __page_frag_cache_refill(nc, gfp_mask);
@@ -4782,7 +4784,7 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 	}
 
 	nc->pagecnt_bias--;
-	offset &= align_mask;
+	offset &= -align;
 	nc->offset = offset;
 
 	return nc->va + offset;
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index b157efea5dea..b98d1da4004a 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -291,17 +291,17 @@ void napi_get_frags_check(struct napi_struct *napi)
 	local_bh_enable();
 }
 
-void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align_mask)
+void *napi_alloc_frag_align(unsigned int fragsz, unsigned int align)
 {
 	struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
 
 	fragsz = SKB_DATA_ALIGN(fragsz);
 
-	return page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC, align_mask);
+	return page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC, align);
 }
-EXPORT_SYMBOL(__napi_alloc_frag_align);
+EXPORT_SYMBOL(napi_alloc_frag_align);
 
-void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask)
+void *netdev_alloc_frag_align(unsigned int fragsz, unsigned int align)
 {
 	void *data;
 
@@ -309,18 +309,18 @@ void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask)
 	if (in_hardirq() || irqs_disabled()) {
 		struct page_frag_cache *nc = this_cpu_ptr(&netdev_alloc_cache);
 
-		data = page_frag_alloc_align(nc, fragsz, GFP_ATOMIC, align_mask);
+		data = page_frag_alloc_align(nc, fragsz, GFP_ATOMIC, align);
 	} else {
 		struct napi_alloc_cache *nc;
 
 		local_bh_disable();
 		nc = this_cpu_ptr(&napi_alloc_cache);
-		data = page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC, align_mask);
+		data = page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC, align);
 		local_bh_enable();
 	}
 	return data;
 }
-EXPORT_SYMBOL(__netdev_alloc_frag_align);
+EXPORT_SYMBOL(netdev_alloc_frag_align);
 
 static struct sk_buff *napi_skb_cache_get(void)
 {
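For reference, a hypothetical caller after this change might look like the
sketch below (illustration only; rx_buf_alloc() and its parameter are made
up and not part of the patch). The caller now passes the alignment itself,
and the power-of-2 sanity check lives in page_frag_alloc_align() rather
than in the removed __-prefixed wrappers:

static void *rx_buf_alloc(unsigned int fragsz)
{
	/* alignment is passed directly and must be a power of two */
	return napi_alloc_frag_align(fragsz, SMP_CACHE_BYTES);
}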