From patchwork Wed Aug 18 03:32:21 2021
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 12442571
X-Patchwork-Delegate: kuba@kernel.org
From: Yunsheng Lin
Subject: [PATCH RFC 5/7] sock: support refilling pfrag from pfrag_pool
Date: Wed, 18 Aug 2021 11:32:21 +0800
Message-ID: <1629257542-36145-6-git-send-email-linyunsheng@huawei.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1629257542-36145-1-git-send-email-linyunsheng@huawei.com>
References: <1629257542-36145-1-git-send-email-linyunsheng@huawei.com>
X-Mailing-List: bpf@vger.kernel.org
X-Patchwork-State: RFC

A previous patch added a pfrag pool built on top of the page pool, so
support refilling the pfrag from the new pfrag pool for TCPv4.

Signed-off-by: Yunsheng Lin 
---
 include/net/sock.h |  1 +
 net/core/sock.c    |  9 +++++++++
 net/ipv4/tcp.c     | 34 ++++++++++++++++++++++++++--------
 3 files changed, 36 insertions(+), 8 deletions(-)

diff --git a/include/net/sock.h b/include/net/sock.h
index 6e76145..af40084 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -455,6 +455,7 @@ struct sock {
 	unsigned long		sk_pacing_rate; /* bytes per second */
 	unsigned long		sk_max_pacing_rate;
 	struct page_frag	sk_frag;
+	struct pfrag_pool	*sk_frag_pool;
 	netdev_features_t	sk_route_caps;
 	netdev_features_t	sk_route_nocaps;
 	netdev_features_t	sk_route_forced_caps;
diff --git a/net/core/sock.c b/net/core/sock.c
index aada649..53152c9 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -140,6 +140,7 @@
 #include 
 #include 
+#include 

 static DEFINE_MUTEX(proto_list_mutex);
 static LIST_HEAD(proto_list);
@@ -1934,6 +1935,11 @@ static void __sk_destruct(struct rcu_head *head)
 		put_page(sk->sk_frag.page);
 		sk->sk_frag.page = NULL;
 	}
+	if (sk->sk_frag_pool) {
+		pfrag_pool_flush(sk->sk_frag_pool);
+		kfree(sk->sk_frag_pool);
+		sk->sk_frag_pool = NULL;
+	}

 	if (sk->sk_peer_cred)
 		put_cred(sk->sk_peer_cred);
@@ -3134,6 +3140,9 @@ void sock_init_data(struct socket *sock, struct sock *sk)
 	sk->sk_frag.page	= NULL;
 	sk->sk_frag.offset	= 0;
+
+	sk->sk_frag_pool = kzalloc(sizeof(*sk->sk_frag_pool), sk->sk_allocation);
+
 	sk->sk_peek_off		= -1;

 	sk->sk_peer_pid		= NULL;
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index f931def..992dcbc 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -280,6 +280,7 @@
 #include 
 #include 
 #include 
+#include 

 /* Track pending CMSGs. */
 enum {
@@ -1337,12 +1338,20 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
 			if (err)
 				goto do_fault;
 		} else if (!zc) {
-			bool merge = true;
+			bool merge = true, pfrag_pool = true;
 			int i = skb_shinfo(skb)->nr_frags;
-			struct page_frag *pfrag = sk_page_frag(sk);
+			struct page_frag *pfrag;

-			if (!sk_page_frag_refill(sk, pfrag))
-				goto wait_for_space;
+			pfrag_pool_updata_napi(sk->sk_frag_pool,
+					       READ_ONCE(sk->sk_napi_id));
+			pfrag = pfrag_pool_refill(sk->sk_frag_pool, sk->sk_allocation);
+			if (!pfrag) {
+				pfrag = sk_page_frag(sk);
+				if (!sk_page_frag_refill(sk, pfrag))
+					goto wait_for_space;
+
+				pfrag_pool = false;
+			}

 			if (!skb_can_coalesce(skb, i, pfrag->page,
 					      pfrag->offset)) {
@@ -1369,11 +1378,20 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
 			if (merge) {
 				skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy);
 			} else {
-				skb_fill_page_desc(skb, i, pfrag->page,
-						   pfrag->offset, copy);
-				page_ref_inc(pfrag->page);
+				if (pfrag_pool) {
+					skb_fill_pp_page_desc(skb, i, pfrag->page,
+							      pfrag->offset, copy);
+				} else {
+					page_ref_inc(pfrag->page);
+					skb_fill_page_desc(skb, i, pfrag->page,
+							   pfrag->offset, copy);
+				}
 			}
-			pfrag->offset += copy;
+
+			if (pfrag_pool)
+				pfrag_pool_commit(sk->sk_frag_pool, copy, merge);
+			else
+				pfrag->offset += copy;
 		} else {
 			if (!sk_wmem_schedule(sk, copy))
 				goto wait_for_space;