diff mbox series

[v3,1/2] wifi: ath9k: fix races between ath9k_wmi_cmd and ath9k_wmi_ctrl_rx

Message ID 20230425192607.18015-1-pchelkin@ispras.ru (mailing list archive)
State Awaiting Upstream
Delegated to: Netdev Maintainers
Series [v3,1/2] wifi: ath9k: fix races between ath9k_wmi_cmd and ath9k_wmi_ctrl_rx

Checks

Context Check Description
netdev/tree_selection success Not a local patch

Commit Message

Fedor Pchelkin April 25, 2023, 7:26 p.m. UTC
Currently, the synchronization between ath9k_wmi_cmd() and
ath9k_wmi_ctrl_rx() is exposed to a race condition which, although rather
unlikely, can lead to invalid behaviour of ath9k_wmi_cmd().

Consider the following scenario:

CPU0					CPU1

ath9k_wmi_cmd(...)
  mutex_lock(&wmi->op_mutex)
  ath9k_wmi_cmd_issue(...)
  wait_for_completion_timeout(...)
  ---
  timeout
  ---
					/* the callback is being processed
					 * before last_seq_id became zero
					 */
					ath9k_wmi_ctrl_rx(...)
					  spin_lock_irqsave(...)
					  /* wmi->last_seq_id check here
					   * doesn't detect timeout yet
					   */
					  spin_unlock_irqrestore(...)
  /* last_seq_id is zeroed to
   * indicate there was a timeout
   */
  wmi->last_seq_id = 0
  mutex_unlock(&wmi->op_mutex)
  return -ETIMEDOUT

ath9k_wmi_cmd(...)
  mutex_lock(&wmi->op_mutex)
  /* the buffer is replaced with
   * another one
   */
  wmi->cmd_rsp_buf = rsp_buf
  wmi->cmd_rsp_len = rsp_len
  ath9k_wmi_cmd_issue(...)
    spin_lock_irqsave(...)
    spin_unlock_irqrestore(...)
  wait_for_completion_timeout(...)
					/* the continuation of the
					 * callback left after the first
					 * ath9k_wmi_cmd call
					 */
					  ath9k_wmi_rsp_callback(...)
					    /* copying data designated
					     * for an already timed-out
					     * WMI command into an
					     * inappropriate wmi_cmd_buf
					     */
					    memcpy(...)
					    complete(&wmi->cmd_wait)
  /* awakened by the bogus callback
   * => invalid return result
   */
  mutex_unlock(&wmi->op_mutex)
  return 0

To fix this, update last_seq_id on the timeout path inside ath9k_wmi_cmd()
under the wmi_lock. Move ath9k_wmi_rsp_callback() under wmi_lock inside
ath9k_wmi_ctrl_rx() so that wmi->cmd_wait can be completed only for the
initially designated wmi_cmd call; otherwise the path is rejected by the
last_seq_id check.

Found by Linux Verification Center (linuxtesting.org) with Syzkaller.

Fixes: fb9987d0f748 ("ath9k_htc: Support for AR9271 chipset.")
Signed-off-by: Fedor Pchelkin <pchelkin@ispras.ru>
---
v2: do not extract ath9k_wmi_rsp_callback() internals, rephrase
description
v3: per Hillf Danton's comment, protect last_seq_id with wmi_lock on the
timeout path; split the v2 version into two separate patches: the first
one is concerned with last_seq_id and completion under wmi_lock, the
second one moves the rsp buffer and length recording under wmi_lock.
Note that it has only been tested with Syzkaller and on basic driver
scenarios with real hardware.

 drivers/net/wireless/ath/ath9k/wmi.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

Comments

Fedor Pchelkin April 26, 2023, 7:02 p.m. UTC | #1
On Wed, Apr 26, 2023 at 07:07:08AM +0800, Hillf Danton wrote: 
> Given similar wait timeout[1], just taking lock on the waiter side is not
> enough wrt fixing the race, because in case job done on the waker side,
> waiter needs to wait again after timeout.
> 

If I understand you correctly, you mean the case when a timeout occurs
during ath9k_wmi_ctrl_rx() callback execution. I suppose that if a timeout
has occurred on the waiter's side, it should return immediately and does
not have to care what state the callback was in at that moment.

AFAICS, this is controlled properly by taking wmi_lock on the waiter and
waker sides, and there is no data corruption.

If a callback has not managed to do its work entirely (which includes
performing the completion and subsequently waking the waiting thread),
then, well, it is considered a timeout, in my opinion.

Your suggestion makes a wmi_cmd call give the belated callback a little
more chance to complete (although the timeout has actually expired). That
is probably good, but increasing the timeout value does that job, too. I
don't think it makes any sense on real hardware.

Or do you mean there is data corruption that is properly fixed with your
patch?

That is, I agree there can be a situation when a callback does all the
logical work it should and just hasn't had enough time to perform the
completion before the timeout on the waiter's side occurs. And this
behaviour can be called "racy". But, technically, this seems to be a
rather valid timeout.

> [1] https://lore.kernel.org/lkml/9d9b9652-c1ac-58e9-2eab-9256c17b1da2@I-love.SAKURA.ne.jp/
> 

I don't think it's a similar case because wait_for_completion_state() is
interruptible while wait_for_completion_timeout() is not.

> A correct fix looks like after putting pieces together
> 
> +++ b/drivers/net/wireless/ath/ath9k/wmi.c
> @@ -238,6 +238,7 @@ static void ath9k_wmi_ctrl_rx(void *priv
>  		spin_unlock_irqrestore(&wmi->wmi_lock, flags);
>  		goto free_skb;
>  	}
> +	wmi->last_seq_id = 0;
>  	spin_unlock_irqrestore(&wmi->wmi_lock, flags);
>  
>  	/* WMI command response */
> @@ -339,9 +340,20 @@ int ath9k_wmi_cmd(struct wmi *wmi, enum
>  
>  	time_left = wait_for_completion_timeout(&wmi->cmd_wait, timeout);
>  	if (!time_left) {
> +		unsigned long flags;
> +		int wait = 0;
> +
>  		ath_dbg(common, WMI, "Timeout waiting for WMI command: %s\n",
>  			wmi_cmd_to_name(cmd_id));
> -		wmi->last_seq_id = 0;
> +
> +		spin_lock_irqsave(&wmi->wmi_lock, flags);
> +		if (wmi->last_seq_id == 0) /* job done on the waker side? */
> +			wait = 1;
> +		else
> +			wmi->last_seq_id = 0;
> +		spin_unlock_irqrestore(&wmi->wmi_lock, flags);
> +		if (wait)
> +			wait_for_completion(&wmi->cmd_wait);
>  		mutex_unlock(&wmi->op_mutex);
>  		return -ETIMEDOUT;
>  	}
Toke Høiland-Jørgensen May 15, 2023, 12:06 p.m. UTC | #2
Fedor Pchelkin <pchelkin@ispras.ru> writes:

> On Wed, Apr 26, 2023 at 07:07:08AM +0800, Hillf Danton wrote: 
>> Given similar wait timeout[1], just taking lock on the waiter side is not
>> enough wrt fixing the race, because in case job done on the waker side,
>> waiter needs to wait again after timeout.
>> 
>
> [...]

Ping, Hillf?

-Toke
Hillf Danton May 18, 2023, 10:24 a.m. UTC | #3
Fedor Pchelkin <pchelkin@ispras.ru> writes:

> On Wed, Apr 26, 2023 at 07:07:08AM +0800, Hillf Danton wrote: 
>> Given similar wait timeout[1], just taking lock on the waiter side is not
>> enough wrt fixing the race, because in case job done on the waker side,
>> waiter needs to wait again after timeout.
>> 
>
> [...]
>
> Or do you mean there is data corruption that is properly fixed with your patch?

Given complete() not paired with wait_for_completion(), what is the
difference after this patch?
Fedor Pchelkin May 18, 2023, 3:44 p.m. UTC | #4
On Thu, May 18, 2023 at 06:24:37PM +0800, Hillf Danton wrote:
> Fedor Pchelkin <pchelkin@ispras.ru> writes:
> 
> > On Wed, Apr 26, 2023 at 07:07:08AM +0800, Hillf Danton wrote: 
> >> Given similar wait timeout[1], just taking lock on the waiter side is not
> >> enough wrt fixing the race, because in case job done on the waker side,
> >> waiter needs to wait again after timeout.
> >> 
> >
> > [...]
> >
> > Or do you mean there is data corruption that is properly fixed with your patch?
> 
> Given complete() not paired with wait_for_completion(), what is the
> difference after this patch?

The main thing in the patch is making ath9k_wmi_ctrl_rx() release wmi_lock
only after calling ath9k_wmi_rsp_callback(), which copies data into the
shared wmi->cmd_rsp_buf buffer. Otherwise the data-corrupting scenario
outlined in the patch description can occur (added here, too).

On Tue, 25 Apr 2023 22:26:06 +0300, Fedor Pchelkin wrote:
> CPU0					CPU1
> 
> ath9k_wmi_cmd(...)
>   mutex_lock(&wmi->op_mutex)
>   ath9k_wmi_cmd_issue(...)
>   wait_for_completion_timeout(...)
>   ---
>   timeout
>   ---
> 					/* the callback is being processed
> 					 * before last_seq_id became zero
> 					 */
> 					ath9k_wmi_ctrl_rx(...)
> 					  spin_lock_irqsave(...)
> 					  /* wmi->last_seq_id check here
> 					   * doesn't detect timeout yet
> 					   */
> 					  spin_unlock_irqrestore(...)
>   /* last_seq_id is zeroed to
>    * indicate there was a timeout
>    */
>   wmi->last_seq_id = 0
>   mutex_unlock(&wmi->op_mutex)
>   return -ETIMEDOUT
> 
> ath9k_wmi_cmd(...)
>   mutex_lock(&wmi->op_mutex)
>   /* the buffer is replaced with
>    * another one
>    */
>   wmi->cmd_rsp_buf = rsp_buf
>   wmi->cmd_rsp_len = rsp_len
>   ath9k_wmi_cmd_issue(...)
>     spin_lock_irqsave(...)
>     spin_unlock_irqrestore(...)
>   wait_for_completion_timeout(...)
> 					/* the continuation of the
> 					 * callback left after the first
> 					 * ath9k_wmi_cmd call
> 					 */
> 					  ath9k_wmi_rsp_callback(...)
> 					    /* copying data designated
> 					     * for an already timed-out
> 					     * WMI command into an
> 					     * inappropriate wmi_cmd_buf
> 					     */
> 					    memcpy(...)
> 					    complete(&wmi->cmd_wait)
>   /* awakened by the bogus callback
>    * => invalid return result
>    */
>   mutex_unlock(&wmi->op_mutex)
>   return 0

So before the patch the wmi->last_seq_id check in ath9k_wmi_ctrl_rx()
was not helpful when the wmi->last_seq_id value was changed during
ath9k_wmi_rsp_callback() execution by the next ath9k_wmi_cmd() call.

With the proposed patch the wmi->last_seq_id check in ath9k_wmi_ctrl_rx()
accomplishes its job:
 - the next ath9k_wmi_cmd() call changes the last_seq_id value under the
   lock, so it either waits for a belated ath9k_wmi_ctrl_rx() to finish or
   updates last_seq_id so that the timeout check in ath9k_wmi_ctrl_rx()
   detects that the waiter side has timed out and the callback should not
   be processed further;
 - the memcpy in ath9k_wmi_rsp_callback() targets a valid buffer when the
   last_seq_id check succeeded under the lock.
Toke Høiland-Jørgensen Aug. 8, 2023, 2:06 p.m. UTC | #5
Fedor Pchelkin <pchelkin@ispras.ru> writes:

> Currently, the synchronization between ath9k_wmi_cmd() and
> ath9k_wmi_ctrl_rx() is exposed to a race condition which, although being
> rather unlikely, can lead to invalid behaviour of ath9k_wmi_cmd().
>
> [...]
>
> Found by Linux Verification Center (linuxtesting.org) with Syzkaller.
>
> Fixes: fb9987d0f748 ("ath9k_htc: Support for AR9271 chipset.")
> Signed-off-by: Fedor Pchelkin <pchelkin@ispras.ru>

Alright, I finally took the time to dig into this and convince myself that
the fix is correct. Sorry for taking so long!

Acked-by: Toke Høiland-Jørgensen <toke@toke.dk>
Kalle Valo Aug. 22, 2023, 1:35 p.m. UTC | #6
Fedor Pchelkin <pchelkin@ispras.ru> wrote:

> Currently, the synchronization between ath9k_wmi_cmd() and
> ath9k_wmi_ctrl_rx() is exposed to a race condition which, although being
> rather unlikely, can lead to invalid behaviour of ath9k_wmi_cmd().
>
> [...]
>
> Found by Linux Verification Center (linuxtesting.org) with Syzkaller.
> 
> Fixes: fb9987d0f748 ("ath9k_htc: Support for AR9271 chipset.")
> Signed-off-by: Fedor Pchelkin <pchelkin@ispras.ru>
> Acked-by: Toke Høiland-Jørgensen <toke@toke.dk>
> Signed-off-by: Kalle Valo <quic_kvalo@quicinc.com>

2 patches applied to ath-next branch of ath.git, thanks.

b674fb513e2e wifi: ath9k: fix races between ath9k_wmi_cmd and ath9k_wmi_ctrl_rx
454994cfa9e4 wifi: ath9k: protect WMI command response buffer replacement with a lock

Patch

diff --git a/drivers/net/wireless/ath/ath9k/wmi.c b/drivers/net/wireless/ath/ath9k/wmi.c
index d652c647d56b..04f363cb90fe 100644
--- a/drivers/net/wireless/ath/ath9k/wmi.c
+++ b/drivers/net/wireless/ath/ath9k/wmi.c
@@ -242,10 +242,10 @@  static void ath9k_wmi_ctrl_rx(void *priv, struct sk_buff *skb,
 		spin_unlock_irqrestore(&wmi->wmi_lock, flags);
 		goto free_skb;
 	}
-	spin_unlock_irqrestore(&wmi->wmi_lock, flags);
 
 	/* WMI command response */
 	ath9k_wmi_rsp_callback(wmi, skb);
+	spin_unlock_irqrestore(&wmi->wmi_lock, flags);
 
 free_skb:
 	kfree_skb(skb);
@@ -308,8 +308,8 @@  int ath9k_wmi_cmd(struct wmi *wmi, enum wmi_cmd_id cmd_id,
 	struct ath_common *common = ath9k_hw_common(ah);
 	u16 headroom = sizeof(struct htc_frame_hdr) +
 		       sizeof(struct wmi_cmd_hdr);
+	unsigned long time_left, flags;
 	struct sk_buff *skb;
-	unsigned long time_left;
 	int ret = 0;
 
 	if (ah->ah_flags & AH_UNPLUGGED)
@@ -345,7 +345,9 @@  int ath9k_wmi_cmd(struct wmi *wmi, enum wmi_cmd_id cmd_id,
 	if (!time_left) {
 		ath_dbg(common, WMI, "Timeout waiting for WMI command: %s\n",
 			wmi_cmd_to_name(cmd_id));
+		spin_lock_irqsave(&wmi->wmi_lock, flags);
 		wmi->last_seq_id = 0;
+		spin_unlock_irqrestore(&wmi->wmi_lock, flags);
 		mutex_unlock(&wmi->op_mutex);
 		return -ETIMEDOUT;
 	}