
[net] kcm: fix a race condition in kcm_recvmsg()

Message ID 20221023023044.149357-1-xiyou.wangcong@gmail.com (mailing list archive)
State Changes Requested
Delegated to: Netdev Maintainers
Series [net] kcm: fix a race condition in kcm_recvmsg()

Checks

Context Check Description
netdev/tree_selection success Clearly marked for net
netdev/fixes_present success Fixes tag present in non-next series
netdev/subject_prefix success Link
netdev/cover_letter success Single patches do not need cover letters
netdev/patch_count success Link
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 0 this patch: 0
netdev/cc_maintainers fail 1 blamed authors not CCed: davem@davemloft.net; 6 maintainers not CCed: sgarzare@redhat.com kuba@kernel.org nikolay@nvidia.com davem@davemloft.net mkl@pengutronix.de edumazet@google.com
netdev/build_clang success Errors and warnings before: 0 this patch: 0
netdev/module_param success Was 0 now: 0
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success Fixes tag looks correct
netdev/build_allmodconfig_warn success Errors and warnings before: 0 this patch: 0
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 17 lines checked
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Cong Wang Oct. 23, 2022, 2:30 a.m. UTC
From: Cong Wang <cong.wang@bytedance.com>

sk->sk_receive_queue is protected by the skb queue lock, but for KCM
sockets the RX path takes mux->rx_lock to protect more than just the
skb queue, so grabbing the skb queue lock is not necessary when
mux->rx_lock is already held. kcm_recvmsg(), however, still grabs only
the skb queue lock, so race conditions remain.

Close this race by taking mux->rx_lock in kcm_recvmsg() too. This is
much simpler than enforcing the skb queue lock everywhere.

Fixes: ab7ac4eb9832 ("kcm: Kernel Connection Multiplexor module")
Tested-by: shaozhengchao <shaozhengchao@huawei.com>
Cc: Paolo Abeni <pabeni@redhat.com>
Cc: Tom Herbert <tom@herbertland.com>
Signed-off-by: Cong Wang <cong.wang@bytedance.com>
---
 net/kcm/kcmsock.c | 3 +++
 1 file changed, 3 insertions(+)
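
For readers less familiar with KCM, here is a deliberately simplified
sketch of the locking pattern the commit message describes; the names
(my_mux, my_rx_enqueue, ...) are made up and this is not the actual
kcmsock.c code:

/* Hypothetical sketch of the two lock domains discussed above. */
#include <linux/skbuff.h>
#include <linux/spinlock.h>
#include <net/sock.h>

struct my_mux {
	spinlock_t rx_lock;	/* outer lock assumed to cover the RX state */
};

/* RX side: already under rx_lock, so it may touch the receive queue
 * with the unlocked helpers and never take the queue's own ->lock.
 */
static void my_rx_enqueue(struct my_mux *mux, struct sock *sk,
			  struct sk_buff *skb)
{
	spin_lock_bh(&mux->rx_lock);
	__skb_queue_tail(&sk->sk_receive_queue, skb);
	spin_unlock_bh(&mux->rx_lock);
}

/* recvmsg side before the fix: skb_unlink() only takes the queue's
 * own lock, so nothing serializes it against my_rx_enqueue() above.
 */
static void my_recv_dequeue_racy(struct sock *sk, struct sk_buff *skb)
{
	skb_unlink(skb, &sk->sk_receive_queue);
}

/* With the fix: take mux->rx_lock here too, as the patch below does. */
static void my_recv_dequeue_fixed(struct my_mux *mux, struct sock *sk,
				  struct sk_buff *skb)
{
	spin_lock_bh(&mux->rx_lock);
	skb_unlink(skb, &sk->sk_receive_queue);
	spin_unlock_bh(&mux->rx_lock);
}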

Comments

Jakub Kicinski Oct. 25, 2022, 11:02 p.m. UTC | #1
On Sat, 22 Oct 2022 19:30:44 -0700 Cong Wang wrote:
> +			spin_lock_bh(&mux->rx_lock);
>  			KCM_STATS_INCR(kcm->stats.rx_msgs);
>  			skb_unlink(skb, &sk->sk_receive_queue);
> +			spin_unlock_bh(&mux->rx_lock);

Why not switch to __skb_unlink() at the same time?
Abundance of caution?

Adding Eric who was fixing KCM bugs recently.
Eric Dumazet Oct. 25, 2022, 11:49 p.m. UTC | #2
On Tue, Oct 25, 2022 at 4:02 PM Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Sat, 22 Oct 2022 19:30:44 -0700 Cong Wang wrote:
> > +                     spin_lock_bh(&mux->rx_lock);
> >                       KCM_STATS_INCR(kcm->stats.rx_msgs);
> >                       skb_unlink(skb, &sk->sk_receive_queue);
> > +                     spin_unlock_bh(&mux->rx_lock);
>
> Why not switch to __skb_unlink() at the same time?
> Abundance of caution?
>
> Adding Eric who was fixing KCM bugs recently.

I think kcm_queue_rcv_skb() might have a similar problem if/when
called from requeue_rx_msgs()

(The mux->rx_lock spinlock is not acquired, and skb_queue_tail() is used)

I agree we should stick to one lock, and if this is not the standard
skb head lock, we should not use it at all
(ie use __skb_queue_tail() and friends)
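
For reference, the distinction Jakub and Eric are drawing:
skb_queue_tail()/skb_unlink() take the queue's own spinlock internally,
while the __-prefixed variants leave all locking to the caller. A
generic sketch (not KCM code):

#include <linux/skbuff.h>
#include <linux/spinlock.h>

/* skb_queue_tail() grabs list->lock itself, so it is self-contained
 * but redundant when an outer lock already serializes all users of
 * the list.
 */
static void enqueue_with_queue_lock(struct sk_buff_head *list,
				    struct sk_buff *skb)
{
	skb_queue_tail(list, skb);	/* takes list->lock internally */
}

/* __skb_queue_tail() does no locking at all; the caller must hold
 * whatever single lock protects the list (mux->rx_lock in the
 * "stick to one lock" scheme Eric describes).
 */
static void enqueue_under_outer_lock(spinlock_t *outer_lock,
				     struct sk_buff_head *list,
				     struct sk_buff *skb)
{
	spin_lock_bh(outer_lock);
	__skb_queue_tail(list, skb);
	spin_unlock_bh(outer_lock);
}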
Cong Wang Oct. 28, 2022, 7:21 p.m. UTC | #3
On Tue, Oct 25, 2022 at 04:02:22PM -0700, Jakub Kicinski wrote:
> On Sat, 22 Oct 2022 19:30:44 -0700 Cong Wang wrote:
> > +			spin_lock_bh(&mux->rx_lock);
> >  			KCM_STATS_INCR(kcm->stats.rx_msgs);
> >  			skb_unlink(skb, &sk->sk_receive_queue);
> > +			spin_unlock_bh(&mux->rx_lock);
> 
> Why not switch to __skb_unlink() at the same time?
> Abundance of caution?

What gain do we have? Since we have rx_lock, skb queue lock should never
be contended?

Thanks.
Cong Wang Oct. 28, 2022, 7:24 p.m. UTC | #4
On Tue, Oct 25, 2022 at 04:49:48PM -0700, Eric Dumazet wrote:
> On Tue, Oct 25, 2022 at 4:02 PM Jakub Kicinski <kuba@kernel.org> wrote:
> >
> > On Sat, 22 Oct 2022 19:30:44 -0700 Cong Wang wrote:
> > > +                     spin_lock_bh(&mux->rx_lock);
> > >                       KCM_STATS_INCR(kcm->stats.rx_msgs);
> > >                       skb_unlink(skb, &sk->sk_receive_queue);
> > > +                     spin_unlock_bh(&mux->rx_lock);
> >
> > Why not switch to __skb_unlink() at the same time?
> > Abundance of caution?
> >
> > Adding Eric who was fixing KCM bugs recently.
> 
> I think kcm_queue_rcv_skb() might have a similar problem if/when
> called from requeue_rx_msgs()
> 
> (The mux->rx_lock spinlock is not acquired, and skb_queue_tail() is used)

rx_lock is acquired by at least two of its callers, requeue_rx_msgs() and
kcm_rcv_ready(). kcm_rcv_strparser() seems to be missing it; I can fix that
in a separate patch, since no one has actually reported a bug there.

Thanks.
Jakub Kicinski Oct. 28, 2022, 11:27 p.m. UTC | #5
On Fri, 28 Oct 2022 12:21:11 -0700 Cong Wang wrote:
> On Tue, Oct 25, 2022 at 04:02:22PM -0700, Jakub Kicinski wrote:
> > On Sat, 22 Oct 2022 19:30:44 -0700 Cong Wang wrote:  
> > > +			spin_lock_bh(&mux->rx_lock);
> > >  			KCM_STATS_INCR(kcm->stats.rx_msgs);
> > >  			skb_unlink(skb, &sk->sk_receive_queue);
> > > +			spin_unlock_bh(&mux->rx_lock);  
> > 
> > Why not switch to __skb_unlink() at the same time?
> > Abundance of caution?  
> 
> What gain do we have? Since we have rx_lock, skb queue lock should never
> be contended?

I was thinking mostly about readability; the performance is secondary.
Other parts of the code use the unlocked skb queue helpers, so it may be
confusing to a reader why this one isn't, and therefore which lock
protects the queue. But no strong feelings.
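
Spelled out, the variant Jakub is hinting at would look roughly like
this in the msg_finished hunk of the patch under review (untested
sketch, not a posted patch):

			/* Untested sketch: mux->rx_lock is the only lock
			 * involved, making the locking scheme explicit.
			 */
			spin_lock_bh(&mux->rx_lock);
			KCM_STATS_INCR(kcm->stats.rx_msgs);
			__skb_unlink(skb, &sk->sk_receive_queue);
			spin_unlock_bh(&mux->rx_lock);
			kfree_skb(skb);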
Cong Wang Nov. 1, 2022, 8:52 p.m. UTC | #6
On Sat, Oct 22, 2022 at 07:30:44PM -0700, Cong Wang wrote:
> From: Cong Wang <cong.wang@bytedance.com>
> 
> sk->sk_receive_queue is protected by the skb queue lock, but for KCM
> sockets the RX path takes mux->rx_lock to protect more than just the
> skb queue, so grabbing the skb queue lock is not necessary when
> mux->rx_lock is already held. kcm_recvmsg(), however, still grabs only
> the skb queue lock, so race conditions remain.
> 
> Close this race by taking mux->rx_lock in kcm_recvmsg() too. This is
> much simpler than enforcing the skb queue lock everywhere.
> 

On second thought, this could actually introduce a performance
regression, as a struct kcm_mux can be shared by multiple KCM sockets.

So I am afraid we have to use the skb queue lock. Fortunately, I found
an easier way (compared to Paolo's) to solve the skb peek race.

Zhengchao, could you please test the following patch?

Thanks!

---------------->

diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
index a5004228111d..890a2423f559 100644
--- a/net/kcm/kcmsock.c
+++ b/net/kcm/kcmsock.c
@@ -222,7 +222,7 @@ static void requeue_rx_msgs(struct kcm_mux *mux, struct sk_buff_head *head)
 	struct sk_buff *skb;
 	struct kcm_sock *kcm;
 
-	while ((skb = __skb_dequeue(head))) {
+	while ((skb = skb_dequeue(head))) {
 		/* Reset destructor to avoid calling kcm_rcv_ready */
 		skb->destructor = sock_rfree;
 		skb_orphan(skb);
@@ -1085,53 +1085,17 @@ static int kcm_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
 	return err;
 }
 
-static struct sk_buff *kcm_wait_data(struct sock *sk, int flags,
-				     long timeo, int *err)
-{
-	struct sk_buff *skb;
-
-	while (!(skb = skb_peek(&sk->sk_receive_queue))) {
-		if (sk->sk_err) {
-			*err = sock_error(sk);
-			return NULL;
-		}
-
-		if (sock_flag(sk, SOCK_DONE))
-			return NULL;
-
-		if ((flags & MSG_DONTWAIT) || !timeo) {
-			*err = -EAGAIN;
-			return NULL;
-		}
-
-		sk_wait_data(sk, &timeo, NULL);
-
-		/* Handle signals */
-		if (signal_pending(current)) {
-			*err = sock_intr_errno(timeo);
-			return NULL;
-		}
-	}
-
-	return skb;
-}
-
 static int kcm_recvmsg(struct socket *sock, struct msghdr *msg,
 		       size_t len, int flags)
 {
 	struct sock *sk = sock->sk;
 	struct kcm_sock *kcm = kcm_sk(sk);
 	int err = 0;
-	long timeo;
 	struct strp_msg *stm;
 	int copied = 0;
 	struct sk_buff *skb;
 
-	timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);
-
-	lock_sock(sk);
-
-	skb = kcm_wait_data(sk, flags, timeo, &err);
+	skb = skb_recv_datagram(sk, flags, &err);
 	if (!skb)
 		goto out;
 
@@ -1162,14 +1126,11 @@ static int kcm_recvmsg(struct socket *sock, struct msghdr *msg,
 			/* Finished with message */
 			msg->msg_flags |= MSG_EOR;
 			KCM_STATS_INCR(kcm->stats.rx_msgs);
-			skb_unlink(skb, &sk->sk_receive_queue);
-			kfree_skb(skb);
 		}
 	}
 
 out:
-	release_sock(sk);
-
+	skb_free_datagram(sk, skb);
 	return copied ? : err;
 }
 
@@ -1179,7 +1140,6 @@ static ssize_t kcm_splice_read(struct socket *sock, loff_t *ppos,
 {
 	struct sock *sk = sock->sk;
 	struct kcm_sock *kcm = kcm_sk(sk);
-	long timeo;
 	struct strp_msg *stm;
 	int err = 0;
 	ssize_t copied;
@@ -1187,11 +1147,7 @@ static ssize_t kcm_splice_read(struct socket *sock, loff_t *ppos,
 
 	/* Only support splice for SOCKSEQPACKET */
 
-	timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);
-
-	lock_sock(sk);
-
-	skb = kcm_wait_data(sk, flags, timeo, &err);
+	skb = skb_recv_datagram(sk, flags, &err);
 	if (!skb)
 		goto err_out;
 
@@ -1219,13 +1175,11 @@ static ssize_t kcm_splice_read(struct socket *sock, loff_t *ppos,
 	 * finish reading the message.
 	 */
 
-	release_sock(sk);
-
+	skb_free_datagram(sk, skb);
 	return copied;
 
 err_out:
-	release_sock(sk);
-
+	skb_free_datagram(sk, skb);
 	return err;
 }
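
For context, a minimal sketch of the datagram-helper receive pattern
the proposal above moves to; the handler name is hypothetical, this is
not the KCM code itself, and it assumes a kernel where
skb_recv_datagram() takes (sk, flags, &err), i.e. v5.19 or later:

#include <linux/net.h>
#include <linux/skbuff.h>
#include <net/sock.h>

static int example_recvmsg(struct socket *sock, struct msghdr *msg,
			   size_t len, int flags)
{
	struct sock *sk = sock->sk;
	struct sk_buff *skb;
	int copied, err = 0;

	/* Waits (honouring MSG_DONTWAIT and the socket timeout) and
	 * dequeues under sk_receive_queue's own lock, which is what
	 * makes the open-coded kcm_wait_data() removable.
	 */
	skb = skb_recv_datagram(sk, flags, &err);
	if (!skb)
		return err;

	copied = min_t(int, len, skb->len);
	err = skb_copy_datagram_msg(skb, 0, msg, copied);

	/* Drops the reference taken by skb_recv_datagram(). */
	skb_free_datagram(sk, skb);
	return err ? err : copied;
}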

Patch

diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
index 27725464ec08..8b4e5d0ab2b6 100644
--- a/net/kcm/kcmsock.c
+++ b/net/kcm/kcmsock.c
@@ -1116,6 +1116,7 @@ static int kcm_recvmsg(struct socket *sock, struct msghdr *msg,
 {
 	struct sock *sk = sock->sk;
 	struct kcm_sock *kcm = kcm_sk(sk);
+	struct kcm_mux *mux = kcm->mux;
 	int err = 0;
 	long timeo;
 	struct strp_msg *stm;
@@ -1156,8 +1157,10 @@ static int kcm_recvmsg(struct socket *sock, struct msghdr *msg,
 msg_finished:
 			/* Finished with message */
 			msg->msg_flags |= MSG_EOR;
+			spin_lock_bh(&mux->rx_lock);
 			KCM_STATS_INCR(kcm->stats.rx_msgs);
 			skb_unlink(skb, &sk->sk_receive_queue);
+			spin_unlock_bh(&mux->rx_lock);
 			kfree_skb(skb);
 		}
 	}