From patchwork Tue Dec 5 11:34:39 2023
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13480075
From: Yunsheng Lin <linyunsheng@huawei.com>
CC: Alexander Duyck, Andrew Morton, Eric Dumazet
Subject: [PATCH net-next 1/6] mm/page_alloc: modify page_frag_alloc_align() to accept align as an argument
Date: Tue, 5 Dec 2023 19:34:39 +0800
Message-ID: <20231205113444.63015-2-linyunsheng@huawei.com>
In-Reply-To: <20231205113444.63015-1-linyunsheng@huawei.com>
References: <20231205113444.63015-1-linyunsheng@huawei.com>

napi_alloc_frag_align() and netdev_alloc_frag_align() accept align as an
argument, and they are thin wrappers around __napi_alloc_frag_align() and
__netdev_alloc_frag_align(), doing the align to align_mask conversion
before calling page_frag_alloc_align().

As __napi_alloc_frag_align() and __netdev_alloc_frag_align() are only
used by the above thin wrappers, it makes more sense to drop the align to
align_mask conversion and call page_frag_alloc_align() directly. By doing
that, we can also avoid the confusion between napi_alloc_frag_align()
accepting align as an argument and page_frag_alloc_align() accepting
align_mask as an argument, when they both have the 'align' suffix.
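To illustrate the calling-convention change (hypothetical caller, shown
only as a sketch and not part of this patch):

	/* before: the last argument was a mask, e.g. ~0u for no alignment
	 * or -align (i.e. ~(align - 1)) for an aligned fragment
	 */
	buf = page_frag_alloc_align(&cache, fragsz, GFP_ATOMIC,
				    ~(SMP_CACHE_BYTES - 1));

	/* after: the power-of-2 alignment itself is passed, and the mask
	 * conversion happens inside page_frag_alloc_align()
	 */
	buf = page_frag_alloc_align(&cache, fragsz, GFP_ATOMIC,
				    SMP_CACHE_BYTES);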
Signed-off-by: Yunsheng Lin
CC: Alexander Duyck
---
 include/linux/gfp.h    |  4 ++--
 include/linux/skbuff.h | 22 ++++------------------
 mm/page_alloc.c        |  6 ++++--
 net/core/skbuff.c      | 14 +++++++-------
 4 files changed, 17 insertions(+), 29 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index de292a007138..bbd75976541e 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -314,12 +314,12 @@ struct page_frag_cache;
 extern void __page_frag_cache_drain(struct page *page, unsigned int count);
 extern void *page_frag_alloc_align(struct page_frag_cache *nc,
 				   unsigned int fragsz, gfp_t gfp_mask,
-				   unsigned int align_mask);
+				   unsigned int align);
 
 static inline void *page_frag_alloc(struct page_frag_cache *nc,
 			     unsigned int fragsz, gfp_t gfp_mask)
 {
-	return page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
+	return page_frag_alloc_align(nc, fragsz, gfp_mask, 1);
 }
 
 extern void page_frag_free(void *addr);
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index b370eb8d70f7..095747c500b6 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -3194,7 +3194,7 @@ static inline void skb_queue_purge(struct sk_buff_head *list)
 unsigned int skb_rbtree_purge(struct rb_root *root);
 void skb_errqueue_purge(struct sk_buff_head *list);
 
-void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask);
+void *netdev_alloc_frag_align(unsigned int fragsz, unsigned int align);
 
 /**
  * netdev_alloc_frag - allocate a page fragment
@@ -3205,14 +3205,7 @@ void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask);
  */
 static inline void *netdev_alloc_frag(unsigned int fragsz)
 {
-	return __netdev_alloc_frag_align(fragsz, ~0u);
-}
-
-static inline void *netdev_alloc_frag_align(unsigned int fragsz,
-					    unsigned int align)
-{
-	WARN_ON_ONCE(!is_power_of_2(align));
-	return __netdev_alloc_frag_align(fragsz, -align);
+	return netdev_alloc_frag_align(fragsz, 1);
 }
 
 struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int length,
@@ -3272,18 +3265,11 @@ static inline void skb_free_frag(void *addr)
 	page_frag_free(addr);
 }
 
-void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align_mask);
+void *napi_alloc_frag_align(unsigned int fragsz, unsigned int align);
 
 static inline void *napi_alloc_frag(unsigned int fragsz)
 {
-	return __napi_alloc_frag_align(fragsz, ~0u);
-}
-
-static inline void *napi_alloc_frag_align(unsigned int fragsz,
-					  unsigned int align)
-{
-	WARN_ON_ONCE(!is_power_of_2(align));
-	return __napi_alloc_frag_align(fragsz, -align);
+	return napi_alloc_frag_align(fragsz, 1);
 }
 
 struct sk_buff *__napi_alloc_skb(struct napi_struct *napi,
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 37ca4f4b62bf..9a16305cf985 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4718,12 +4718,14 @@ EXPORT_SYMBOL(__page_frag_cache_drain);
 
 void *page_frag_alloc_align(struct page_frag_cache *nc,
 			    unsigned int fragsz, gfp_t gfp_mask,
-			    unsigned int align_mask)
+			    unsigned int align)
 {
 	unsigned int size = PAGE_SIZE;
 	struct page *page;
 	int offset;
 
+	WARN_ON_ONCE(!is_power_of_2(align));
+
 	if (unlikely(!nc->va)) {
 refill:
 		page = __page_frag_cache_refill(nc, gfp_mask);
@@ -4782,7 +4784,7 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 	}
 
 	nc->pagecnt_bias--;
-	offset &= align_mask;
+	offset &= -align;
 	nc->offset = offset;
 
 	return nc->va + offset;
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index b157efea5dea..b98d1da4004a 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -291,17 +291,17 @@ void napi_get_frags_check(struct napi_struct *napi)
 	local_bh_enable();
 }
 
-void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align_mask)
+void *napi_alloc_frag_align(unsigned int fragsz, unsigned int align)
 {
 	struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
 
 	fragsz = SKB_DATA_ALIGN(fragsz);
 
-	return page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC, align_mask);
+	return page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC, align);
 }
-EXPORT_SYMBOL(__napi_alloc_frag_align);
+EXPORT_SYMBOL(napi_alloc_frag_align);
 
-void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask)
+void *netdev_alloc_frag_align(unsigned int fragsz, unsigned int align)
 {
 	void *data;
 
@@ -309,18 +309,18 @@ void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask)
 	if (in_hardirq() || irqs_disabled()) {
 		struct page_frag_cache *nc = this_cpu_ptr(&netdev_alloc_cache);
 
-		data = page_frag_alloc_align(nc, fragsz, GFP_ATOMIC, align_mask);
+		data = page_frag_alloc_align(nc, fragsz, GFP_ATOMIC, align);
 	} else {
 		struct napi_alloc_cache *nc;
 
 		local_bh_disable();
 		nc = this_cpu_ptr(&napi_alloc_cache);
-		data = page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC, align_mask);
+		data = page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC, align);
 		local_bh_enable();
 	}
 	return data;
 }
-EXPORT_SYMBOL(__netdev_alloc_frag_align);
+EXPORT_SYMBOL(netdev_alloc_frag_align);
 
 static struct sk_buff *napi_skb_cache_get(void)
 {

From patchwork Tue Dec 5 11:34:40 2023
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13480076
From: Yunsheng Lin <linyunsheng@huawei.com>
CC: Alexander Duyck, Michael S. Tsirkin, Jason Wang, Andrew Morton, Eric Dumazet
Subject: [PATCH net-next 2/6] page_frag: unify gfp bit for order 3 page allocation
Date: Tue, 5 Dec 2023 19:34:40 +0800
Message-ID: <20231205113444.63015-3-linyunsheng@huawei.com>
In-Reply-To: <20231205113444.63015-1-linyunsheng@huawei.com>
References: <20231205113444.63015-1-linyunsheng@huawei.com>

Currently there are three page frag implementations, and each of them
tries to allocate an order-3 page first, falling back to an order-0 page
if that fails. Each of them also allows the order-3 allocation to fail
under certain conditions by using specific gfp bits.

However, the gfp bits used for the order-3 allocation differ between the
implementations: __GFP_NOMEMALLOC is or'ed in to forbid access to the
emergency memory reserves in __page_frag_cache_refill(), but not in the
other implementations, while __GFP_DIRECT_RECLAIM is masked off to avoid
direct reclaim in skb_page_frag_refill(), but not in
__page_frag_cache_refill().

This patch unifies the gfp bits used between the implementations by
or'ing in __GFP_NOMEMALLOC and masking off __GFP_DIRECT_RECLAIM for the
order-3 page allocation, to avoid putting possible pressure on mm.
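For reference, the intent of each bit in the unified mask for the
optimistic order-3 allocation can be summarized as below (illustrative
sketch only, mirroring what the diff does):

	/* ~__GFP_DIRECT_RECLAIM: do not direct-reclaim, only wake kswapd
	 * __GFP_COMP:            allocate a compound page
	 * __GFP_NOWARN:          no failure warning, an order-0 fallback exists
	 * __GFP_NORETRY:         fail fast instead of retrying
	 * __GFP_NOMEMALLOC:      do not dip into emergency reserves
	 */
	gfp_t order3_gfp = (gfp & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP |
			   __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;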
Signed-off-by: Yunsheng Lin
CC: Alexander Duyck
---
 drivers/vhost/net.c | 2 +-
 mm/page_alloc.c     | 4 ++--
 net/core/sock.c     | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index f2ed7167c848..e574e21cc0ca 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -670,7 +670,7 @@ static bool vhost_net_page_frag_refill(struct vhost_net *net, unsigned int sz,
 		/* Avoid direct reclaim but allow kswapd to wake */
 		pfrag->page = alloc_pages((gfp & ~__GFP_DIRECT_RECLAIM) |
 					  __GFP_COMP | __GFP_NOWARN |
-					  __GFP_NORETRY,
+					  __GFP_NORETRY | __GFP_NOMEMALLOC,
 					  SKB_FRAG_PAGE_ORDER);
 		if (likely(pfrag->page)) {
 			pfrag->size = PAGE_SIZE << SKB_FRAG_PAGE_ORDER;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9a16305cf985..1f0b36dd81b5 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4693,8 +4693,8 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
 	gfp_t gfp = gfp_mask;
 
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-	gfp_mask |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY |
-		    __GFP_NOMEMALLOC;
+	gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP |
+		   __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
 	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
 				PAGE_FRAG_CACHE_MAX_ORDER);
 	nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
diff --git a/net/core/sock.c b/net/core/sock.c
index fef349dd72fa..4efa9cae4b0d 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -2904,7 +2904,7 @@ bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t gfp)
 		/* Avoid direct reclaim but allow kswapd to wake */
 		pfrag->page = alloc_pages((gfp & ~__GFP_DIRECT_RECLAIM) |
 					  __GFP_COMP | __GFP_NOWARN |
-					  __GFP_NORETRY,
+					  __GFP_NORETRY | __GFP_NOMEMALLOC,
 					  SKB_FRAG_PAGE_ORDER);
 		if (likely(pfrag->page)) {
 			pfrag->size = PAGE_SIZE << SKB_FRAG_PAGE_ORDER;

From patchwork Tue Dec 5 11:34:41 2023
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13480074
From: Yunsheng Lin <linyunsheng@huawei.com>
CC: Alexander Duyck, Andrew Morton
Subject: [PATCH net-next 3/6] mm/page_alloc: use initial zero offset for page_frag_alloc_align()
Date: Tue, 5 Dec 2023 19:34:41 +0800
Message-ID: <20231205113444.63015-4-linyunsheng@huawei.com>
In-Reply-To: <20231205113444.63015-1-linyunsheng@huawei.com>
References: <20231205113444.63015-1-linyunsheng@huawei.com>

The next patch is about to use page_frag_alloc_align() to replace
vhost_net_page_frag_refill(); the main difference between those two page
frag implementations is whether an initial zero offset is used or not.

It seems more natural to use an initial zero offset, as it may enable
more correct cache prefetching and skb frag coalescing in the networking
stack, so change page_frag_alloc_align() to use an initial zero offset.
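In other words, the allocation direction flips from carving fragments
downwards from the end of the page to carving them upwards from offset
zero (illustrative sketch of the two schemes, not part of the diff):

	/* before: count down from the end of the page */
	offset = nc->offset - fragsz;
	offset &= -align;		/* round down to the alignment */
	nc->offset = offset;

	/* after: count up from offset zero */
	offset = ALIGN(nc->offset, align);	/* round up to the alignment */
	nc->offset = offset + fragsz;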
Signed-off-by: Yunsheng Lin
CC: Alexander Duyck
---
 mm/page_alloc.c | 30 ++++++++++++++----------------
 1 file changed, 14 insertions(+), 16 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1f0b36dd81b5..083e0c38fb62 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4720,7 +4720,7 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 			    unsigned int fragsz, gfp_t gfp_mask,
 			    unsigned int align)
 {
-	unsigned int size = PAGE_SIZE;
+	unsigned int size;
 	struct page *page;
 	int offset;
 
@@ -4732,10 +4732,6 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 		if (!page)
 			return NULL;
 
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-		/* if size can vary use size else just use PAGE_SIZE */
-		size = nc->size;
-#endif
 		/* Even if we own the page, we do not use atomic_set().
 		 * This would break get_page_unless_zero() users.
 		 */
@@ -4744,11 +4740,18 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 		/* reset page count bias and offset to start of new frag */
 		nc->pfmemalloc = page_is_pfmemalloc(page);
 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-		nc->offset = size;
+		nc->offset = 0;
 	}
 
-	offset = nc->offset - fragsz;
-	if (unlikely(offset < 0)) {
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+	/* if size can vary use size else just use PAGE_SIZE */
+	size = nc->size;
+#else
+	size = PAGE_SIZE;
+#endif
+
+	offset = ALIGN(nc->offset, align);
+	if (unlikely(offset + fragsz > size)) {
 		page = virt_to_page(nc->va);
 
 		if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
@@ -4759,17 +4762,13 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 			goto refill;
 		}
 
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-		/* if size can vary use size else just use PAGE_SIZE */
-		size = nc->size;
-#endif
 		/* OK, page count is 0, we can safely set it */
 		set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
 
 		/* reset page count bias and offset to start of new frag */
 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-		offset = size - fragsz;
-		if (unlikely(offset < 0)) {
+		offset = 0;
+		if (unlikely(fragsz > size)) {
 			/*
 			 * The caller is trying to allocate a fragment
 			 * with fragsz > PAGE_SIZE but the cache isn't big
@@ -4784,8 +4783,7 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 	}
 
 	nc->pagecnt_bias--;
-	offset &= -align;
-	nc->offset = offset;
+	nc->offset = offset + fragsz;
 
 	return nc->va + offset;
 }

From patchwork Tue Dec 5 11:34:43 2023
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13480077
From: Yunsheng Lin <linyunsheng@huawei.com>
CC: Jeroen de Borst, Praveen Kaligineedi, Shailend Chand, Eric Dumazet, Felix Fietkau, John Crispin, Sean Wang, Mark Lee, Lorenzo Bianconi, Matthias Brugger, AngeloGioacchino Del Regno, Keith Busch, Jens Axboe, Christoph Hellwig, Sagi Grimberg, Chaitanya Kulkarni, Michael S. Tsirkin, Jason Wang, Andrew Morton
Subject: [PATCH net-next 5/6] net: introduce page_frag_cache_drain()
Date: Tue, 5 Dec 2023 19:34:43 +0800
Message-ID: <20231205113444.63015-6-linyunsheng@huawei.com>
In-Reply-To: <20231205113444.63015-1-linyunsheng@huawei.com>
References: <20231205113444.63015-1-linyunsheng@huawei.com>

When draining a page_frag_cache, most users follow the same steps, so
introduce an API to avoid code duplication.
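For illustration, a typical caller (hypothetical driver teardown path)
goes from open-coding the drain to a single call:

	/* before: every user open-codes the same steps */
	if (nc->va) {
		__page_frag_cache_drain(virt_to_head_page(nc->va),
					nc->pagecnt_bias);
		nc->va = NULL;
	}

	/* after */
	page_frag_cache_drain(nc);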
Signed-off-by: Yunsheng Lin
Acked-by: Jason Wang
---
 drivers/net/ethernet/google/gve/gve_main.c | 11 ++---------
 drivers/net/ethernet/mediatek/mtk_wed_wo.c | 17 ++---------------
 drivers/nvme/host/tcp.c                    |  7 +------
 drivers/nvme/target/tcp.c                  |  4 +---
 drivers/vhost/net.c                        |  4 +---
 include/linux/gfp.h                        |  2 ++
 mm/page_alloc.c                            | 10 ++++++++++
 7 files changed, 19 insertions(+), 36 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 619bf63ec935..d976190b0f4d 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -1278,17 +1278,10 @@ static void gve_unreg_xdp_info(struct gve_priv *priv)
 
 static void gve_drain_page_cache(struct gve_priv *priv)
 {
-	struct page_frag_cache *nc;
 	int i;
 
-	for (i = 0; i < priv->rx_cfg.num_queues; i++) {
-		nc = &priv->rx[i].page_cache;
-		if (nc->va) {
-			__page_frag_cache_drain(virt_to_page(nc->va),
-						nc->pagecnt_bias);
-			nc->va = NULL;
-		}
-	}
+	for (i = 0; i < priv->rx_cfg.num_queues; i++)
+		page_frag_cache_drain(&priv->rx[i].page_cache);
 }
 
 static int gve_open(struct net_device *dev)
diff --git a/drivers/net/ethernet/mediatek/mtk_wed_wo.c b/drivers/net/ethernet/mediatek/mtk_wed_wo.c
index 7ffbd4fca881..df0a3ceaf59b 100644
--- a/drivers/net/ethernet/mediatek/mtk_wed_wo.c
+++ b/drivers/net/ethernet/mediatek/mtk_wed_wo.c
@@ -286,7 +286,6 @@ mtk_wed_wo_queue_free(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q)
 static void
 mtk_wed_wo_queue_tx_clean(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q)
 {
-	struct page *page;
 	int i;
 
 	for (i = 0; i < q->n_desc; i++) {
@@ -298,19 +297,12 @@ mtk_wed_wo_queue_tx_clean(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q)
 		entry->buf = NULL;
 	}
 
-	if (!q->cache.va)
-		return;
-
-	page = virt_to_page(q->cache.va);
-	__page_frag_cache_drain(page, q->cache.pagecnt_bias);
-	memset(&q->cache, 0, sizeof(q->cache));
+	page_frag_cache_drain(&q->cache);
 }
 
 static void
 mtk_wed_wo_queue_rx_clean(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q)
 {
-	struct page *page;
-
 	for (;;) {
 		void *buf = mtk_wed_wo_dequeue(wo, q, NULL, true);
 
@@ -320,12 +312,7 @@ mtk_wed_wo_queue_rx_clean(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q)
 		skb_free_frag(buf);
 	}
 
-	if (!q->cache.va)
-		return;
-
-	page = virt_to_page(q->cache.va);
-	__page_frag_cache_drain(page, q->cache.pagecnt_bias);
-	memset(&q->cache, 0, sizeof(q->cache));
+	page_frag_cache_drain(&q->cache);
 }
 
 static void
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index d79811cfa0ce..1c85e1398e4e 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -1344,7 +1344,6 @@ static int nvme_tcp_alloc_async_req(struct nvme_tcp_ctrl *ctrl)
 
 static void nvme_tcp_free_queue(struct nvme_ctrl *nctrl, int qid)
 {
-	struct page *page;
 	struct nvme_tcp_ctrl *ctrl = to_tcp_ctrl(nctrl);
 	struct nvme_tcp_queue *queue = &ctrl->queues[qid];
 	unsigned int noreclaim_flag;
@@ -1355,11 +1354,7 @@ static void nvme_tcp_free_queue(struct nvme_ctrl *nctrl, int qid)
 	if (queue->hdr_digest || queue->data_digest)
 		nvme_tcp_free_crypto(queue);
 
-	if (queue->pf_cache.va) {
-		page = virt_to_head_page(queue->pf_cache.va);
-		__page_frag_cache_drain(page, queue->pf_cache.pagecnt_bias);
-		queue->pf_cache.va = NULL;
-	}
+	page_frag_cache_drain(&queue->pf_cache);
 
 	noreclaim_flag = memalloc_noreclaim_save();
 	/* ->sock will be released by fput() */
diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index 4cc27856aa8f..11237557cfc5 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -1576,7 +1576,6 @@ static void nvmet_tcp_free_cmd_data_in_buffers(struct nvmet_tcp_queue *queue)
 
 static void nvmet_tcp_release_queue_work(struct work_struct *w)
 {
-	struct page *page;
 	struct nvmet_tcp_queue *queue =
 		container_of(w, struct nvmet_tcp_queue, release_work);
 
@@ -1600,8 +1599,7 @@ static void nvmet_tcp_release_queue_work(struct work_struct *w)
 	if (queue->hdr_digest || queue->data_digest)
 		nvmet_tcp_free_crypto(queue);
 	ida_free(&nvmet_tcp_queue_ida, queue->idx);
-	page = virt_to_head_page(queue->pf_cache.va);
-	__page_frag_cache_drain(page, queue->pf_cache.pagecnt_bias);
+	page_frag_cache_drain(&queue->pf_cache);
 	kfree(queue);
 }
 
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 805e11d598e4..4b2fcb228a0a 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -1386,9 +1386,7 @@ static int vhost_net_release(struct inode *inode, struct file *f)
 	kfree(n->vqs[VHOST_NET_VQ_RX].rxq.queue);
 	kfree(n->vqs[VHOST_NET_VQ_TX].xdp);
 	kfree(n->dev.vqs);
-	if (n->pf_cache.va)
-		__page_frag_cache_drain(virt_to_head_page(n->pf_cache.va),
-					n->pf_cache.pagecnt_bias);
+	page_frag_cache_drain(&n->pf_cache);
 	kvfree(n);
 	return 0;
 }
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index bbd75976541e..03ba079655d3 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -316,6 +316,8 @@ extern void *page_frag_alloc_align(struct page_frag_cache *nc,
 				   unsigned int fragsz, gfp_t gfp_mask,
 				   unsigned int align);
 
+void page_frag_cache_drain(struct page_frag_cache *nc);
+
 static inline void *page_frag_alloc(struct page_frag_cache *nc,
 			     unsigned int fragsz, gfp_t gfp_mask)
 {
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 083e0c38fb62..5a0e68edcb05 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4716,6 +4716,16 @@ void __page_frag_cache_drain(struct page *page, unsigned int count)
 }
 EXPORT_SYMBOL(__page_frag_cache_drain);
 
+void page_frag_cache_drain(struct page_frag_cache *nc)
+{
+	if (!nc->va)
+		return;
+
+	__page_frag_cache_drain(virt_to_head_page(nc->va), nc->pagecnt_bias);
+	nc->va = NULL;
+}
+EXPORT_SYMBOL(page_frag_cache_drain);
+
 void *page_frag_alloc_align(struct page_frag_cache *nc,
 			    unsigned int fragsz, gfp_t gfp_mask,
 			    unsigned int align)