
[net-next,v2] net: wwan: t7xx: fix GFP_KERNEL usage in spin_lock context

Message ID 20220517064821.3966990-1-william.xuanziyang@huawei.com (mailing list archive)
State Superseded
Delegated to: Netdev Maintainers
Series [net-next,v2] net: wwan: t7xx: fix GFP_KERNEL usage in spin_lock context

Checks

Context Check Description
netdev/tree_selection success Clearly marked for net-next
netdev/fixes_present success Fixes tag not required for -next series
netdev/subject_prefix success Link
netdev/cover_letter success Single patches do not need cover letters
netdev/patch_count success Link
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 0 this patch: 0
netdev/cc_maintainers fail 1 blamed authors not CCed: ilpo.jarvinen@linux.intel.com; 4 maintainers not CCed: linux-mediatek@lists.infradead.org matthias.bgg@gmail.com linux-arm-kernel@lists.infradead.org ilpo.jarvinen@linux.intel.com
netdev/build_clang success Errors and warnings before: 0 this patch: 0
netdev/module_param success Was 0 now: 0
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/verify_fixes success Fixes tag looks correct
netdev/build_allmodconfig_warn success Errors and warnings before: 0 this patch: 0
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 21 lines checked
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Ziyang Xuan (William) May 17, 2022, 6:48 a.m. UTC
t7xx_cldma_clear_rxq() calls t7xx_cldma_alloc_and_map_skb() in spin_lock
context, but __dev_alloc_skb() in t7xx_cldma_alloc_and_map_skb() uses
GFP_KERNEL, which may sleep while the spinlock is held.

Because t7xx_cldma_clear_rxq() is called after stopping CLDMA, we can
remove the spin_lock from t7xx_cldma_clear_rxq().

Fixes: 39d439047f1d ("net: wwan: t7xx: Add control DMA interface")
Signed-off-by: Ziyang Xuan <william.xuanziyang@huawei.com>
---
 drivers/net/wwan/t7xx/t7xx_hif_cldma.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)
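
To make the reported problem concrete, the sketch below (hypothetical code
with generic names, not taken from the t7xx driver) illustrates the bug class
the commit message describes, together with the two usual remedies: switching
the allocation to GFP_ATOMIC so it cannot sleep, or, as this patch does,
dropping the lock once it is known that nothing can race with the caller.

/* Hypothetical illustration of the bug class, not the t7xx code itself. */
#include <linux/errno.h>
#include <linux/skbuff.h>
#include <linux/spinlock.h>

struct demo_queue {
	spinlock_t ring_lock;
	struct sk_buff *skb;
};

/* BUGGY pattern: a GFP_KERNEL allocation may sleep inside the spinlock. */
int demo_refill_locked(struct demo_queue *q, unsigned int size)
{
	unsigned long flags;

	spin_lock_irqsave(&q->ring_lock, flags);
	q->skb = __dev_alloc_skb(size, GFP_KERNEL);	/* may sleep: invalid here */
	spin_unlock_irqrestore(&q->ring_lock, flags);
	return q->skb ? 0 : -ENOMEM;
}

/* Remedy A: keep the lock and use a non-sleeping allocation (which can fail
 * more often under memory pressure).
 */
int demo_refill_atomic(struct demo_queue *q, unsigned int size)
{
	unsigned long flags;

	spin_lock_irqsave(&q->ring_lock, flags);
	q->skb = __dev_alloc_skb(size, GFP_ATOMIC);
	spin_unlock_irqrestore(&q->ring_lock, flags);
	return q->skb ? 0 : -ENOMEM;
}

/* Remedy B (what this patch does): if the hardware is stopped and no IRQ can
 * race with the caller, the lock is unnecessary and GFP_KERNEL is fine.
 */
int demo_refill_unlocked(struct demo_queue *q, unsigned int size)
{
	q->skb = __dev_alloc_skb(size, GFP_KERNEL);
	return q->skb ? 0 : -ENOMEM;
}

The patch takes the second route because CLDMA is already stopped when
t7xx_cldma_clear_rxq() runs, so the ring cannot be touched from IRQ context.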

Comments

Sergey Ryazanov May 17, 2022, 8:35 a.m. UTC | #1
On Tue, May 17, 2022 at 9:30 AM Ziyang Xuan
<william.xuanziyang@huawei.com> wrote:
> t7xx_cldma_clear_rxq() calls t7xx_cldma_alloc_and_map_skb() in spin_lock
> context, but __dev_alloc_skb() in t7xx_cldma_alloc_and_map_skb() uses
> GFP_KERNEL, which may sleep while the spinlock is held.
>
> Because t7xx_cldma_clear_rxq() is called after stopping CLDMA, we can
> remove the spin_lock from t7xx_cldma_clear_rxq().
>
> Fixes: 39d439047f1d ("net: wwan: t7xx: Add control DMA interface")
> Signed-off-by: Ziyang Xuan <william.xuanziyang@huawei.com>

Reviewed-by: Sergey Ryazanov <ryazanov.s.a@gmail.com>
Loic Poulain May 17, 2022, 8:50 a.m. UTC | #2
Hi Ziyang,

On Tue, 17 May 2022 at 08:30, Ziyang Xuan <william.xuanziyang@huawei.com> wrote:
>
> t7xx_cldma_clear_rxq() calls t7xx_cldma_alloc_and_map_skb() in spin_lock
> context, but __dev_alloc_skb() in t7xx_cldma_alloc_and_map_skb() uses
> GFP_KERNEL, which may sleep while the spinlock is held.
>
> Because t7xx_cldma_clear_rxq() is called after stopping CLDMA, we can
> remove the spin_lock from t7xx_cldma_clear_rxq().
>
> Fixes: 39d439047f1d ("net: wwan: t7xx: Add control DMA interface")
> Signed-off-by: Ziyang Xuan <william.xuanziyang@huawei.com>
> ---

You should normally indicate what changed in this v2.

>  drivers/net/wwan/t7xx/t7xx_hif_cldma.c | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
> index 46066dcd2607..7493285a9606 100644
> --- a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
> +++ b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
> @@ -782,10 +782,12 @@ static int t7xx_cldma_clear_rxq(struct cldma_ctrl *md_ctrl, int qnum)
>         struct cldma_queue *rxq = &md_ctrl->rxq[qnum];
>         struct cldma_request *req;
>         struct cldma_gpd *gpd;
> -       unsigned long flags;
>         int ret = 0;
>
> -       spin_lock_irqsave(&rxq->ring_lock, flags);
> +       /* CLDMA has been stopped. There is not any CLDMA IRQ, holding
> +        * ring_lock is not needed.

If it makes sense to explain why we don't need locking, the next
sentence is not needed:


>  Thus we can use functions that may
> +        * introduce scheduling.
> +        */
>         t7xx_cldma_q_reset(rxq);
>         list_for_each_entry(req, &rxq->tr_ring->gpd_ring, entry) {
>                 gpd = req->gpd;
> @@ -808,7 +810,6 @@ static int t7xx_cldma_clear_rxq(struct cldma_ctrl *md_ctrl, int qnum)
>
>                 t7xx_cldma_gpd_set_data_ptr(req->gpd, req->mapped_buff);
>         }
> -       spin_unlock_irqrestore(&rxq->ring_lock, flags);
>
>         return ret;
>  }
> --
> 2.25.1
>
Ziyang Xuan (William) May 18, 2022, 4:39 a.m. UTC | #3
> Hi Ziyang,
> 
> On Tue, 17 May 2022 at 08:30, Ziyang Xuan <william.xuanziyang@huawei.com> wrote:
>>
>> t7xx_cldma_clear_rxq() calls t7xx_cldma_alloc_and_map_skb() in spin_lock
>> context, but __dev_alloc_skb() in t7xx_cldma_alloc_and_map_skb() uses
>> GFP_KERNEL, which may sleep while the spinlock is held.
>>
>> Because t7xx_cldma_clear_rxq() is called after stopping CLDMA, we can
>> remove the spin_lock from t7xx_cldma_clear_rxq().
>>
>> Fixes: 39d439047f1d ("net: wwan: t7xx: Add control DMA interface")
>> Signed-off-by: Ziyang Xuan <william.xuanziyang@huawei.com>
>> ---
> 
> You should normally indicate what changed in this v2.
> 
>>  drivers/net/wwan/t7xx/t7xx_hif_cldma.c | 7 ++++---
>>  1 file changed, 4 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
>> index 46066dcd2607..7493285a9606 100644
>> --- a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
>> +++ b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
>> @@ -782,10 +782,12 @@ static int t7xx_cldma_clear_rxq(struct cldma_ctrl *md_ctrl, int qnum)
>>         struct cldma_queue *rxq = &md_ctrl->rxq[qnum];
>>         struct cldma_request *req;
>>         struct cldma_gpd *gpd;
>> -       unsigned long flags;
>>         int ret = 0;
>>
>> -       spin_lock_irqsave(&rxq->ring_lock, flags);
>> +       /* CLDMA has been stopped. There is not any CLDMA IRQ, holding
>> +        * ring_lock is not needed.
> 
> If it makes sense to explain why we don't need locking, the next
> sentence is not needed:

I want to remind any future developer who wants to add the spin_lock here
again that they should first check whether anything here may schedule.

> 
> 
>>  Thus we can use functions that may
>> +        * introduce scheduling.
>> +        */
>>         t7xx_cldma_q_reset(rxq);
>>         list_for_each_entry(req, &rxq->tr_ring->gpd_ring, entry) {
>>                 gpd = req->gpd;
>> @@ -808,7 +810,6 @@ static int t7xx_cldma_clear_rxq(struct cldma_ctrl *md_ctrl, int qnum)
>>
>>                 t7xx_cldma_gpd_set_data_ptr(req->gpd, req->mapped_buff);
>>         }
>> -       spin_unlock_irqrestore(&rxq->ring_lock, flags);
>>
>>         return ret;
>>  }
>> --
>> 2.25.1
>>
> .
>
Ilpo Järvinen May 18, 2022, 6:09 p.m. UTC | #4
On Tue, 17 May 2022, Ziyang Xuan wrote:

> t7xx_cldma_clear_rxq() calls t7xx_cldma_alloc_and_map_skb() in spin_lock
> context, but __dev_alloc_skb() in t7xx_cldma_alloc_and_map_skb() uses
> GFP_KERNEL, which may sleep while the spinlock is held.
> 
> Because t7xx_cldma_clear_rxq() is called after stopping CLDMA, we can
> remove the spin_lock from t7xx_cldma_clear_rxq().
> 

Perhaps Suggested-by: ... would have been appropriate too.

> Fixes: 39d439047f1d ("net: wwan: t7xx: Add control DMA interface")
> Signed-off-by: Ziyang Xuan <william.xuanziyang@huawei.com>
Ziyang Xuan (William) May 19, 2022, 4:20 a.m. UTC | #5
> On Tue, 17 May 2022, Ziyang Xuan wrote:
> 
>> t7xx_cldma_clear_rxq() calls t7xx_cldma_alloc_and_map_skb() in spin_lock
>> context, but __dev_alloc_skb() in t7xx_cldma_alloc_and_map_skb() uses
>> GFP_KERNEL, which may sleep while the spinlock is held.
>>
>> Because t7xx_cldma_clear_rxq() is called after stopping CLDMA, we can
>> remove the spin_lock from t7xx_cldma_clear_rxq().
>>
> 
> Perhaps Suggested-by: ... would have been appropriate too.

Yes, I will send the v3 patch.

> 
>> Fixes: 39d439047f1d ("net: wwan: t7xx: Add control DMA interface")
>> Signed-off-by: Ziyang Xuan <william.xuanziyang@huawei.com>
>

Patch

diff --git a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
index 46066dcd2607..7493285a9606 100644
--- a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
+++ b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
@@ -782,10 +782,12 @@  static int t7xx_cldma_clear_rxq(struct cldma_ctrl *md_ctrl, int qnum)
 	struct cldma_queue *rxq = &md_ctrl->rxq[qnum];
 	struct cldma_request *req;
 	struct cldma_gpd *gpd;
-	unsigned long flags;
 	int ret = 0;
 
-	spin_lock_irqsave(&rxq->ring_lock, flags);
+	/* CLDMA has been stopped. There is not any CLDMA IRQ, holding
+	 * ring_lock is not needed. Thus we can use functions that may
+	 * introduce scheduling.
+	 */
 	t7xx_cldma_q_reset(rxq);
 	list_for_each_entry(req, &rxq->tr_ring->gpd_ring, entry) {
 		gpd = req->gpd;
@@ -808,7 +810,6 @@  static int t7xx_cldma_clear_rxq(struct cldma_ctrl *md_ctrl, int qnum)
 
 		t7xx_cldma_gpd_set_data_ptr(req->gpd, req->mapped_buff);
 	}
-	spin_unlock_irqrestore(&rxq->ring_lock, flags);
 
 	return ret;
 }
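
As an aside (not part of the thread above): sleep-in-atomic bugs like the one
fixed here are normally caught at runtime by building with
CONFIG_DEBUG_ATOMIC_SLEEP=y, which makes the allocator's might_sleep() check
print a "BUG: sleeping function called from invalid context" splat. A minimal,
hypothetical test module (not related to the t7xx driver) that trips the check
looks roughly like this:

/* Hypothetical out-of-tree test module reproducing the bug class. With
 * CONFIG_DEBUG_ATOMIC_SLEEP=y, the GFP_KERNEL allocation below warns because
 * spin_lock_irqsave() puts the task in atomic context.
 */
#include <linux/module.h>
#include <linux/skbuff.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(demo_lock);

static int __init demo_init(void)
{
	struct sk_buff *skb;
	unsigned long flags;

	spin_lock_irqsave(&demo_lock, flags);
	skb = __dev_alloc_skb(128, GFP_KERNEL);	/* splat expected here */
	spin_unlock_irqrestore(&demo_lock, flags);

	if (skb)
		dev_kfree_skb(skb);
	return 0;
}

static void __exit demo_exit(void)
{
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");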