From patchwork Sat Feb 22 18:30:57 2025
X-Patchwork-Submitter: Cong Wang
X-Patchwork-Id: 13986809
X-Patchwork-Delegate: bpf@iogearbox.net
From: Cong Wang
To: netdev@vger.kernel.org
Cc: bpf@vger.kernel.org, john.fastabend@gmail.com, jakub@cloudflare.com,
	zhoufeng.zf@bytedance.com, zijianzhang@bytedance.com, Amery Hung,
	Cong Wang
Subject: [Patch bpf-next 4/4] tcp_bpf: improve ingress redirection performance with message corking
Date: Sat, 22 Feb 2025 10:30:57 -0800
Message-Id: <20250222183057.800800-5-xiyou.wangcong@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20250222183057.800800-1-xiyou.wangcong@gmail.com>
References: <20250222183057.800800-1-xiyou.wangcong@gmail.com>

From: Zijian Zhang

The TCP_BPF ingress redirection path currently lacks the message corking
mechanism found in standard TCP. As a result, the sender wakes up the
receiver for every message, even a small one, which reduces throughput
compared to regular TCP in certain scenarios.

This change introduces a kernel-worker-based intermediate layer to provide
automatic message corking for TCP_BPF. While this adds a slight latency
overhead, it significantly improves overall throughput by reducing
unnecessary wake-ups and sock lock contention.
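For context only (not part of this patch): the ingress path touched below is
the one taken when an sk_msg verdict program redirects with BPF_F_INGRESS.
A minimal sketch of such a program is shown here; the map and program names
are made up for illustration.

  /* Illustrative only. Redirects every message to the ingress queue of
   * the socket stored at key 0 of "sock_map" (name is an assumption).
   */
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  struct {
  	__uint(type, BPF_MAP_TYPE_SOCKMAP);
  	__uint(max_entries, 2);
  	__type(key, __u32);
  	__type(value, __u64);
  } sock_map SEC(".maps");

  SEC("sk_msg")
  int redirect_ingress(struct sk_msg_md *msg)
  {
  	__u32 key = 0;

  	/* BPF_F_INGRESS queues the data on the peer's ingress queue,
  	 * which is the path this patch adds corking to.
  	 */
  	return bpf_msg_redirect_map(msg, &sock_map, key, BPF_F_INGRESS);
  }

  char _license[] SEC("license") = "GPL";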
Reviewed-by: Amery Hung
Co-developed-by: Cong Wang
Signed-off-by: Cong Wang
Signed-off-by: Zijian Zhang
---
 include/linux/skmsg.h |  19 ++++
 net/core/skmsg.c      | 139 ++++++++++++++++++++++++++++-
 net/ipv4/tcp_bpf.c    | 197 ++++++++++++++++++++++++++++++++++++++++--
 3 files changed, 347 insertions(+), 8 deletions(-)

diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
index beaf79b2b68b..c6e0da4044db 100644
--- a/include/linux/skmsg.h
+++ b/include/linux/skmsg.h
@@ -15,6 +15,8 @@
 #define MAX_MSG_FRAGS			MAX_SKB_FRAGS
 #define NR_MSG_FRAG_IDS			(MAX_MSG_FRAGS + 1)
+/* GSO size for TCP BPF backlog processing */
+#define TCP_BPF_GSO_SIZE		65536
 
 enum __sk_action {
 	__SK_DROP = 0,
@@ -85,8 +87,10 @@ struct sk_psock {
 	struct sock			*sk_redir;
 	u32				apply_bytes;
 	u32				cork_bytes;
+	u32				backlog_since_notify;
 	unsigned int			eval : 8;
 	unsigned int			redir_ingress : 1; /* undefined if sk_redir is null */
+	unsigned int			backlog_work_delayed : 1;
 	struct sk_msg			*cork;
 	struct sk_psock_progs		progs;
 #if IS_ENABLED(CONFIG_BPF_STREAM_PARSER)
@@ -97,6 +101,9 @@ struct sk_psock {
 	struct sk_buff_head		ingress_skb;
 	struct list_head		ingress_msg;
 	spinlock_t			ingress_lock;
+	struct list_head		backlog_msg;
+	/* spin_lock for backlog_msg and backlog_since_notify */
+	spinlock_t			backlog_msg_lock;
 	unsigned long			state;
 	struct list_head		link;
 	spinlock_t			link_lock;
@@ -117,11 +124,13 @@ struct sk_psock {
 	struct mutex			work_mutex;
 	struct sk_psock_work_state	work_state;
 	struct delayed_work		work;
+	struct delayed_work		backlog_work;
 	struct sock			*sk_pair;
 	struct rcu_work			rwork;
 };
 
 struct sk_msg *sk_msg_alloc(gfp_t gfp);
+bool sk_msg_try_coalesce_ok(struct sk_msg *msg, int elem_first_coalesce);
 int sk_msg_expand(struct sock *sk, struct sk_msg *msg, int len,
 		  int elem_first_coalesce);
 int sk_msg_clone(struct sock *sk, struct sk_msg *dst, struct sk_msg *src,
@@ -396,9 +405,19 @@ static inline void sk_psock_report_error(struct sk_psock *psock, int err)
 	sk_error_report(sk);
 }
 
+void sk_psock_backlog_msg(struct sk_psock *psock);
 struct sk_psock *sk_psock_init(struct sock *sk, int node);
 void sk_psock_stop(struct sk_psock *psock);
 
+static inline void sk_psock_run_backlog_work(struct sk_psock *psock,
+					     bool delayed)
+{
+	if (!sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED))
+		return;
+	psock->backlog_work_delayed = delayed;
+	schedule_delayed_work(&psock->backlog_work, delayed ? 1 : 0);
+}
+
 #if IS_ENABLED(CONFIG_BPF_STREAM_PARSER)
 int sk_psock_init_strp(struct sock *sk, struct sk_psock *psock);
 void sk_psock_start_strp(struct sock *sk, struct sk_psock *psock);
diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index 25c53c8c9857..32507163fd2d 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -12,7 +12,7 @@
 
 struct kmem_cache *sk_msg_cachep;
 
-static bool sk_msg_try_coalesce_ok(struct sk_msg *msg, int elem_first_coalesce)
+bool sk_msg_try_coalesce_ok(struct sk_msg *msg, int elem_first_coalesce)
 {
 	if (msg->sg.end > msg->sg.start &&
 	    elem_first_coalesce < msg->sg.end)
@@ -707,6 +707,118 @@ static void sk_psock_backlog(struct work_struct *work)
 	mutex_unlock(&psock->work_mutex);
 }
 
+static bool backlog_notify(struct sk_psock *psock, bool m_sched_failed,
+			   bool ingress_empty)
+{
+	/* Notify if:
+	 * 1. We have corked enough bytes
+	 * 2. We have already delayed notification
+	 * 3. Memory allocation failed
+	 * 4. Ingress queue was empty and we're about to add data
+	 */
+	return psock->backlog_since_notify >= TCP_BPF_GSO_SIZE ||
+	       psock->backlog_work_delayed ||
+	       m_sched_failed ||
+	       ingress_empty;
+}
+
+static bool backlog_xfer_to_local(struct sk_psock *psock, struct sock *sk_from,
+				  struct list_head *local_head, u32 *tot_size)
+{
+	struct sock *sk = psock->sk;
+	struct sk_msg *msg, *tmp;
+	u32 size = 0;
+
+	list_for_each_entry_safe(msg, tmp, &psock->backlog_msg, list) {
+		if (msg->sk != sk_from)
+			break;
+
+		if (!__sk_rmem_schedule(sk, msg->sg.size, false))
+			return true;
+
+		list_move_tail(&msg->list, local_head);
+		sk_wmem_queued_add(msg->sk, -msg->sg.size);
+		sock_put(msg->sk);
+		msg->sk = NULL;
+		psock->backlog_since_notify += msg->sg.size;
+		size += msg->sg.size;
+	}
+
+	*tot_size = size;
+	return false;
+}
+
+/* This function handles the transfer of backlogged messages from the sender
+ * backlog queue to the ingress queue of the peer socket. Notification of data
+ * availability will be sent under some conditions.
+ */
+void sk_psock_backlog_msg(struct sk_psock *psock)
+{
+	bool rmem_schedule_failed = false;
+	struct sock *sk_from = NULL;
+	struct sock *sk = psock->sk;
+	LIST_HEAD(local_head);
+	struct sk_msg *msg;
+	bool should_notify;
+	u32 tot_size = 0;
+
+	if (!sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED))
+		return;
+
+	lock_sock(sk);
+	spin_lock(&psock->backlog_msg_lock);
+
+	msg = list_first_entry_or_null(&psock->backlog_msg,
+				       struct sk_msg, list);
+	if (!msg) {
+		should_notify = !list_empty(&psock->ingress_msg);
+		spin_unlock(&psock->backlog_msg_lock);
+		goto notify;
+	}
+
+	sk_from = msg->sk;
+	sock_hold(sk_from);
+
+	rmem_schedule_failed = backlog_xfer_to_local(psock, sk_from,
+						     &local_head, &tot_size);
+	should_notify = backlog_notify(psock, rmem_schedule_failed,
+				       list_empty(&psock->ingress_msg));
+	spin_unlock(&psock->backlog_msg_lock);
+
+	spin_lock_bh(&psock->ingress_lock);
+	list_splice_tail_init(&local_head, &psock->ingress_msg);
+	spin_unlock_bh(&psock->ingress_lock);
+
+	atomic_add(tot_size, &sk->sk_rmem_alloc);
+	sk_mem_charge(sk, tot_size);
+
+notify:
+	if (should_notify) {
+		psock->backlog_since_notify = 0;
+		sk_psock_data_ready(sk, psock);
+		if (!list_empty(&psock->backlog_msg))
+			sk_psock_run_backlog_work(psock, rmem_schedule_failed);
+	} else {
+		sk_psock_run_backlog_work(psock, true);
+	}
+	release_sock(sk);
+
+	if (sk_from) {
+		bool slow = lock_sock_fast(sk_from);
+
+		sk_mem_uncharge(sk_from, tot_size);
+		unlock_sock_fast(sk_from, slow);
+		sock_put(sk_from);
+	}
+}
+
+static void sk_psock_backlog_msg_work(struct work_struct *work)
+{
+	struct delayed_work *dwork = to_delayed_work(work);
+
+	sk_psock_backlog_msg(container_of(dwork, struct sk_psock, backlog_work));
+}
+
 struct sk_psock *sk_psock_init(struct sock *sk, int node)
 {
 	struct sk_psock *psock;
@@ -744,8 +856,11 @@ struct sk_psock *sk_psock_init(struct sock *sk, int node)
 	INIT_DELAYED_WORK(&psock->work, sk_psock_backlog);
 	mutex_init(&psock->work_mutex);
+	INIT_DELAYED_WORK(&psock->backlog_work, sk_psock_backlog_msg_work);
 	INIT_LIST_HEAD(&psock->ingress_msg);
 	spin_lock_init(&psock->ingress_lock);
+	INIT_LIST_HEAD(&psock->backlog_msg);
+	spin_lock_init(&psock->backlog_msg_lock);
 	skb_queue_head_init(&psock->ingress_skb);
 
 	sk_psock_set_state(psock, SK_PSOCK_TX_ENABLED);
@@ -799,6 +914,26 @@ static void __sk_psock_zap_ingress(struct sk_psock *psock)
 	__sk_psock_purge_ingress_msg(psock);
 }
 
+static void __sk_psock_purge_backlog_msg(struct sk_psock *psock)
+{
+	struct sk_msg *msg, *tmp;
+
+	spin_lock(&psock->backlog_msg_lock);
+	list_for_each_entry_safe(msg, tmp, &psock->backlog_msg, list) {
+		struct sock *sk_from = msg->sk;
+		bool slow;
+
+		list_del(&msg->list);
+		slow = lock_sock_fast(sk_from);
+		sk_wmem_queued_add(sk_from, -msg->sg.size);
+		sock_put(sk_from);
+		sk_msg_free(sk_from, msg);
+		unlock_sock_fast(sk_from, slow);
+		kfree_sk_msg(msg);
+	}
+	spin_unlock(&psock->backlog_msg_lock);
+}
+
 static void sk_psock_link_destroy(struct sk_psock *psock)
 {
 	struct sk_psock_link *link, *tmp;
@@ -828,7 +963,9 @@ static void sk_psock_destroy(struct work_struct *work)
 	sk_psock_done_strp(psock);
 
 	cancel_delayed_work_sync(&psock->work);
+	cancel_delayed_work_sync(&psock->backlog_work);
 	__sk_psock_zap_ingress(psock);
+	__sk_psock_purge_backlog_msg(psock);
 	mutex_destroy(&psock->work_mutex);
 
 	psock_progs_drop(&psock->progs);
diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
index f0ef41c951e2..82d437210f6f 100644
--- a/net/ipv4/tcp_bpf.c
+++ b/net/ipv4/tcp_bpf.c
@@ -381,6 +381,183 @@ static int tcp_bpf_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
 	return ret;
 }
 
+static int tcp_bpf_coalesce_msg(struct sk_msg *last, struct sk_msg *msg,
+				u32 *apply_bytes_ptr, u32 *tot_size)
+{
+	struct scatterlist *sge_from, *sge_to;
+	u32 apply_bytes = *apply_bytes_ptr;
+	bool apply = apply_bytes;
+	int i = msg->sg.start;
+	u32 size;
+
+	while (i != msg->sg.end) {
+		int last_sge_idx = last->sg.end;
+
+		if (sk_msg_full(last))
+			break;
+
+		sge_from = sk_msg_elem(msg, i);
+		sk_msg_iter_var_prev(last_sge_idx);
+		sge_to = &last->sg.data[last_sge_idx];
+
+		size = (apply && apply_bytes < sge_from->length) ?
+			apply_bytes : sge_from->length;
+		if (sk_msg_try_coalesce_ok(last, last_sge_idx) &&
+		    sg_page(sge_to) == sg_page(sge_from) &&
+		    sge_to->offset + sge_to->length == sge_from->offset) {
+			sge_to->length += size;
+		} else {
+			sge_to = &last->sg.data[last->sg.end];
+			sg_unmark_end(sge_to);
+			sg_set_page(sge_to, sg_page(sge_from), size,
+				    sge_from->offset);
+			get_page(sg_page(sge_to));
+			sk_msg_iter_next(last, end);
+		}
+
+		sge_from->length -= size;
+		sge_from->offset += size;
+
+		if (sge_from->length == 0) {
+			put_page(sg_page(sge_to));
+			sk_msg_iter_var_next(i);
+		}
+
+		msg->sg.size -= size;
+		last->sg.size += size;
+		*tot_size += size;
+
+		if (apply) {
+			apply_bytes -= size;
+			if (!apply_bytes)
+				break;
+		}
+	}
+
+	if (apply)
+		*apply_bytes_ptr = apply_bytes;
+
+	msg->sg.start = i;
+	return i;
+}
+
+static void tcp_bpf_xfer_msg(struct sk_msg *dst, struct sk_msg *msg,
+			     u32 *apply_bytes_ptr, u32 *tot_size)
+{
+	u32 apply_bytes = *apply_bytes_ptr;
+	bool apply = apply_bytes;
+	struct scatterlist *sge;
+	int i = msg->sg.start;
+	u32 size;
+
+	do {
+		sge = sk_msg_elem(msg, i);
+		size = (apply && apply_bytes < sge->length) ?
+			apply_bytes : sge->length;
+
+		sk_msg_xfer(dst, msg, i, size);
+		*tot_size += size;
+		if (sge->length)
+			get_page(sk_msg_page(dst, i));
+		sk_msg_iter_var_next(i);
+		dst->sg.end = i;
+		if (apply) {
+			apply_bytes -= size;
+			if (!apply_bytes) {
+				if (sge->length)
+					sk_msg_iter_var_prev(i);
+				break;
+			}
+		}
+	} while (i != msg->sg.end);
+
+	if (apply)
+		*apply_bytes_ptr = apply_bytes;
+	msg->sg.start = i;
+}
+
+static int tcp_bpf_ingress_backlog(struct sock *sk, struct sock *sk_redir,
+				   struct sk_msg *msg, u32 apply_bytes)
+{
+	bool ingress_msg_empty = false;
+	bool apply = apply_bytes;
+	struct sk_psock *psock;
+	struct sk_msg *tmp;
+	u32 tot_size = 0;
+	int ret = 0;
+	u8 nonagle;
+
+	psock = sk_psock_get(sk_redir);
+	if (unlikely(!psock))
+		return -EPIPE;
+
+	spin_lock(&psock->backlog_msg_lock);
+	/* If possible, coalesce the curr sk_msg to the last sk_msg from the
+	 * psock->backlog_msg.
+	 */
+	if (!list_empty(&psock->backlog_msg)) {
+		struct sk_msg *last;
+
+		last = list_last_entry(&psock->backlog_msg, struct sk_msg, list);
+		if (last->sk == sk) {
+			int i = tcp_bpf_coalesce_msg(last, msg, &apply_bytes,
+						     &tot_size);
+
+			if (i == msg->sg.end || (apply && !apply_bytes))
+				goto out_unlock;
+		}
+	}
+
+	/* Otherwise, allocate a new sk_msg and transfer the data from the
+	 * passed in msg to it.
+	 */
+	tmp = sk_msg_alloc(GFP_ATOMIC);
+	if (!tmp) {
+		ret = -ENOMEM;
+		spin_unlock(&psock->backlog_msg_lock);
+		goto error;
+	}
+
+	tmp->sk = sk;
+	sock_hold(tmp->sk);
+	tmp->sg.start = msg->sg.start;
+	tcp_bpf_xfer_msg(tmp, msg, &apply_bytes, &tot_size);
+
+	ingress_msg_empty = list_empty(&psock->ingress_msg);
+	list_add_tail(&tmp->list, &psock->backlog_msg);
+
+out_unlock:
+	spin_unlock(&psock->backlog_msg_lock);
+	sk_wmem_queued_add(sk, tot_size);
+
+	/* At this point, the data has been handled well. If one of the
+	 * following conditions is met, we can notify the peer socket in
+	 * the context of this system call immediately.
+	 * 1. If the write buffer has been used up;
+	 * 2. Or, the message size is larger than TCP_BPF_GSO_SIZE;
+	 * 3. Or, the ingress queue was empty;
+	 * 4. Or, the tcp socket is set to no_delay.
+	 * Otherwise, kick off the backlog work so that we can have some
+	 * time to wait for any incoming messages before sending a
	 * notification to the peer socket.
+	 */
+	nonagle = tcp_sk(sk)->nonagle;
+	if (!sk_stream_memory_free(sk) ||
+	    tot_size >= TCP_BPF_GSO_SIZE || ingress_msg_empty ||
+	    (!(nonagle & TCP_NAGLE_CORK) && (nonagle & TCP_NAGLE_OFF))) {
+		release_sock(sk);
+		psock->backlog_work_delayed = false;
+		sk_psock_backlog_msg(psock);
+		lock_sock(sk);
+	} else {
+		sk_psock_run_backlog_work(psock, false);
+	}
+
+error:
+	sk_psock_put(sk_redir, psock);
+	return ret;
+}
+
 static int tcp_bpf_send_verdict(struct sock *sk, struct sk_psock *psock,
 				struct sk_msg *msg, int *copied, int flags)
 {
@@ -442,18 +619,24 @@ static int tcp_bpf_send_verdict(struct sock *sk, struct sk_psock *psock,
 			cork = true;
 			psock->cork = NULL;
 		}
-		release_sock(sk);
 
-		origsize = msg->sg.size;
-		ret = tcp_bpf_sendmsg_redir(sk_redir, redir_ingress,
-					    msg, tosend, flags);
-		sent = origsize - msg->sg.size;
+		if (redir_ingress) {
+			ret = tcp_bpf_ingress_backlog(sk, sk_redir, msg, tosend);
+		} else {
+			release_sock(sk);
+
+			origsize = msg->sg.size;
+			ret = tcp_bpf_sendmsg_redir(sk_redir, redir_ingress,
+						    msg, tosend, flags);
+			sent = origsize - msg->sg.size;
+
+			lock_sock(sk);
+			sk_mem_uncharge(sk, sent);
+		}
 
 		if (eval == __SK_REDIRECT)
 			sock_put(sk_redir);
 
-		lock_sock(sk);
-		sk_mem_uncharge(sk, sent);
 		if (unlikely(ret < 0)) {
 			int free = sk_msg_free(sk, msg);