
[V2,net] ice: Re-organizes reqstd/avail {R,T}XQ check/code for efficiency+readability

Message ID 20210413224446.16612-1-salil.mehta@huawei.com (mailing list archive)
State Awaiting Upstream
Delegated to: Netdev Maintainers
Series [V2,net] ice: Re-organizes reqstd/avail {R,T}XQ check/code for efficiency+readability

Checks

Context Check Description
netdev/cover_letter success
netdev/fixes_present success
netdev/patch_count success
netdev/tree_selection success Clearly marked for net
netdev/subject_prefix success
netdev/cc_maintainers fail 1 blamed authors not CCed: maciej.fijalkowski@intel.com; 1 maintainers not CCed: maciej.fijalkowski@intel.com
netdev/source_inline success Was 0 now: 0
netdev/verify_signedoff success
netdev/module_param success Was 0 now: 0
netdev/build_32bit success Errors and warnings before: 0 this patch: 0
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/verify_fixes success
netdev/checkpatch warning WARNING: line length of 82 exceeds 80 columns
netdev/build_allmodconfig_warn success Errors and warnings before: 0 this patch: 0
netdev/header_inline success

Commit Message

Salil Mehta April 13, 2021, 10:44 p.m. UTC
If the user has explicitly requested the number of {R,T}XQs, then it is
unnecessary to get the count of already available {R,T}XQs from the
PF avail_{r,t}xqs bitmap. This value will get overridden by the
user-specified value in any case.

This patch does a minor re-organization of the code to improve the flow
and readability. This scope for improvement was found during the review of
the ICE driver code.

FYI, I could not test this change due to unavailability of the hardware.
It would be helpful if somebody could test this patch and provide a
Tested-by tag. Many thanks!

Fixes: 87324e747fde ("ice: Implement ethtool ops for channels")
Cc: intel-wired-lan@lists.osuosl.org
Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
--
Change V1->V2
 (*) Fixed the comments from Anthony Nguyen(Intel)
     Link: https://lkml.org/lkml/2021/4/12/1997
---
 drivers/net/ethernet/intel/ice/ice_lib.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)
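
For readers without the driver source at hand, below is a minimal userspace
sketch of the selection logic this patch reorders. Everything in it is
illustrative: count_avail_txqs() and min3() are stand-ins for
ice_get_avail_txq_count() and the kernel's min3() macro, and the numbers are
made up.

/*
 * Toy userspace model of the queue-count selection that this patch
 * reorders. Everything here is illustrative: count_avail_txqs() and
 * min3() are stand-ins for ice_get_avail_txq_count() and the kernel's
 * min3() macro, and the values are made up.
 */
#include <stdio.h>

static unsigned int count_avail_txqs(void)
{
	/*
	 * In the driver this walks the pf->avail_txqs bitmap under a mutex;
	 * it is the work that gets skipped once the user has requested a
	 * queue count.
	 */
	printf("counting available Tx queues\n");
	return 64;
}

static unsigned int min3(unsigned int a, unsigned int b, unsigned int c)
{
	unsigned int m = a < b ? a : b;

	return m < c ? m : c;
}

int main(void)
{
	unsigned int num_lan_msix = 16;	/* stand-in for pf->num_lan_msix */
	unsigned int online_cpus = 8;	/* stand-in for num_online_cpus() */
	unsigned int req_txq = 4;	/* non-zero: user asked for 4 Tx queues */
	unsigned int alloc_txq;

	/* After the patch, the min3() fallback runs only when nothing was requested. */
	if (req_txq)
		alloc_txq = req_txq;
	else
		alloc_txq = min3(num_lan_msix, count_avail_txqs(), online_cpus);

	printf("alloc_txq = %u\n", alloc_txq);
	return 0;
}

With req_txq non-zero the bitmap-walk stand-in never runs; setting req_txq to
0 makes the fallback compute min(16, 64, 8) = 8. In the driver, req_txq and
req_rxq are (as far as I can tell) populated from ethtool's channel
configuration, i.e. the path added by the Fixes: commit.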

Comments

Brelinski, TonyX April 20, 2021, 8:25 p.m. UTC | #1
> -----Original Message-----
> From: Intel-wired-lan <intel-wired-lan-bounces@osuosl.org> On Behalf Of
> Salil Mehta
> Sent: Tuesday, April 13, 2021 3:45 PM
> To: davem@davemloft.net; kuba@kernel.org
> Cc: salil.mehta@huawei.com; linuxarm@openeuler.org;
> netdev@vger.kernel.org; linuxarm@huawei.com; linux-
> kernel@vger.kernel.org; Jeff Kirsher <jeffrey.t.kirsher@intel.com>; intel-
> wired-lan@lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH V2 net] ice: Re-organizes reqstd/avail {R,
> T}XQ check/code for efficiency+readability
> 
> If user has explicitly requested the number of {R,T}XQs, then it is
> unnecessary to get the count of already available {R,T}XQs from the PF
> avail_{r,t}xqs bitmap. This value will get overridden by user specified value in
> any case.
> 
> This patch does minor re-organization of the code for improving the flow and
> readabiltiy. This scope of improvement was found during the review of the
> ICE driver code.
> 
> FYI, I could not test this change due to unavailability of the hardware.
> It would be helpful if somebody can test this patch and provide Tested-by
> Tag. Many thanks!
> 
> Fixes: 87324e747fde ("ice: Implement ethtool ops for channels")
> Cc: intel-wired-lan@lists.osuosl.org
> Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
> Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
> --
> Change V1->V2
>  (*) Fixed the comments from Anthony Nguyen(Intel)
>      Link: https://lkml.org/lkml/2021/4/12/1997
> ---
>  drivers/net/ethernet/intel/ice/ice_lib.c | 14 ++++++++------
>  1 file changed, 8 insertions(+), 6 deletions(-)

Tested-by: Tony Brelinski <tonyx.brelinski@intel.com> (A Contingent Worker at Intel)
Salil Mehta April 20, 2021, 9:28 p.m. UTC | #2
> From: Brelinski, TonyX [mailto:tonyx.brelinski@intel.com]
> Sent: Tuesday, April 20, 2021 9:26 PM
> 
> > From: Intel-wired-lan <intel-wired-lan-bounces@osuosl.org> On Behalf Of
> > Salil Mehta
> > Sent: Tuesday, April 13, 2021 3:45 PM
> > To: davem@davemloft.net; kuba@kernel.org
> > Cc: salil.mehta@huawei.com; linuxarm@openeuler.org;
> > netdev@vger.kernel.org; linuxarm@huawei.com; linux-
> > kernel@vger.kernel.org; Jeff Kirsher <jeffrey.t.kirsher@intel.com>; intel-
> > wired-lan@lists.osuosl.org
> > Subject: [Intel-wired-lan] [PATCH V2 net] ice: Re-organizes reqstd/avail {R,
> > T}XQ check/code for efficiency+readability
> >
> > If user has explicitly requested the number of {R,T}XQs, then it is
> > unnecessary to get the count of already available {R,T}XQs from the PF
> > avail_{r,t}xqs bitmap. This value will get overridden by user specified value
> in
> > any case.
> >
> > This patch does minor re-organization of the code for improving the flow and
> > readabiltiy. This scope of improvement was found during the review of the
> > ICE driver code.
> >
> > FYI, I could not test this change due to unavailability of the hardware.
> > It would be helpful if somebody can test this patch and provide Tested-by
> > Tag. Many thanks!
> >
> > Fixes: 87324e747fde ("ice: Implement ethtool ops for channels")
> > Cc: intel-wired-lan@lists.osuosl.org
> > Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
> > Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
> > --
> > Change V1->V2
> >  (*) Fixed the comments from Anthony Nguyen(Intel)
> >      Link: https://lkml.org/lkml/2021/4/12/1997
> > ---
> >  drivers/net/ethernet/intel/ice/ice_lib.c | 14 ++++++++------
> >  1 file changed, 8 insertions(+), 6 deletions(-)
> 
> Tested-by: Tony Brelinski <tonyx.brelinski@intel.com> (A Contingent Worker at
> Intel)

Many thanks! 

Salil.
Paul Menzel April 21, 2021, 5:35 a.m. UTC | #3
Dear Salil,


Thank you very much for your patch.

In the git commit message summary, could you please use imperative mood [1]?

> Re-organize reqstd/avail {R, T}XQ check/code for efficiency+readability

It’s a bit long though. Maybe:

> Avoid unnecessary assignment with user specified {R,T}XQs

Am 14.04.21 um 00:44 schrieb Salil Mehta:
> If user has explicitly requested the number of {R,T}XQs, then it is
> unnecessary to get the count of already available {R,T}XQs from the
> PF avail_{r,t}xqs bitmap. This value will get overridden by user specified
> value in any case.
> 
> This patch does minor re-organization of the code for improving the flow
> and readabiltiy. This scope of improvement was found during the review of

readabil*it*y

> the ICE driver code.
> 
> FYI, I could not test this change due to unavailability of the hardware.
> It would be helpful if somebody can test this patch and provide Tested-by
> Tag. Many thanks!

This should go outside the commit message (below the --- for example).

> Fixes: 87324e747fde ("ice: Implement ethtool ops for channels")

Did you check the behavior before is actually a bug? Or is it just for 
the detection heuristic for commits to be applied to the stable series?

> Cc: intel-wired-lan@lists.osuosl.org
> Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
> Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
> --
> Change V1->V2
>   (*) Fixed the comments from Anthony Nguyen(Intel)
>       Link: https://lkml.org/lkml/2021/4/12/1997
> ---
>   drivers/net/ethernet/intel/ice/ice_lib.c | 14 ++++++++------
>   1 file changed, 8 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
> index d13c7fc8fb0a..d77133d6baa7 100644
> --- a/drivers/net/ethernet/intel/ice/ice_lib.c
> +++ b/drivers/net/ethernet/intel/ice/ice_lib.c
> @@ -161,12 +161,13 @@ static void ice_vsi_set_num_qs(struct ice_vsi *vsi, u16 vf_id)
>   
>   	switch (vsi->type) {
>   	case ICE_VSI_PF:
> -		vsi->alloc_txq = min3(pf->num_lan_msix,
> -				      ice_get_avail_txq_count(pf),
> -				      (u16)num_online_cpus());
>   		if (vsi->req_txq) {
>   			vsi->alloc_txq = vsi->req_txq;
>   			vsi->num_txq = vsi->req_txq;
> +		} else {
> +			vsi->alloc_txq = min3(pf->num_lan_msix,
> +					      ice_get_avail_txq_count(pf),
> +					      (u16)num_online_cpus());
>   		}

I am curious, did you check the compiler actually creates different 
code, or did it notice the inefficiency by itself and optimized it already?

>   
>   		pf->num_lan_tx = vsi->alloc_txq;
> @@ -175,12 +176,13 @@ static void ice_vsi_set_num_qs(struct ice_vsi *vsi, u16 vf_id)
>   		if (!test_bit(ICE_FLAG_RSS_ENA, pf->flags)) {
>   			vsi->alloc_rxq = 1;
>   		} else {
> -			vsi->alloc_rxq = min3(pf->num_lan_msix,
> -					      ice_get_avail_rxq_count(pf),
> -					      (u16)num_online_cpus());
>   			if (vsi->req_rxq) {
>   				vsi->alloc_rxq = vsi->req_rxq;
>   				vsi->num_rxq = vsi->req_rxq;
> +			} else {
> +				vsi->alloc_rxq = min3(pf->num_lan_msix,
> +						      ice_get_avail_rxq_count(pf),
> +						      (u16)num_online_cpus());
>   			}
>   		}
>   


Kind regards,

Paul
Salil Mehta April 21, 2021, 7:41 a.m. UTC | #4
> From: Paul Menzel [mailto:pmenzel@molgen.mpg.de]
> Sent: Wednesday, April 21, 2021 6:36 AM
> To: Salil Mehta <salil.mehta@huawei.com>
> Cc: linuxarm@openeuler.org; netdev@vger.kernel.org; Linuxarm
> <linuxarm@huawei.com>; linux-kernel@vger.kernel.org; Jeff Kirsher
> <jeffrey.t.kirsher@intel.com>; intel-wired-lan@lists.osuosl.org; David S.
> Miller <davem@davemloft.net>; Jakub Kicinski <kuba@kernel.org>
> Subject: Re: [Intel-wired-lan] [PATCH V2 net] ice: Re-organizes reqstd/avail
> {R, T}XQ check/code for efficiency+readability
> 
> Dear Salil,
> 
> 
> Thank you very much for your patch.

Thanks for the review.

> In the git commit message summary, could you please use imperative mood [1]?

No issues. There is always scope for improvement.


> > Re-organize reqstd/avail {R, T}XQ check/code for efficiency+readability
> 
> It’s a bit long though. Maybe:
> 
> Avoid unnecessary assignment with user specified {R,T}XQs

Umm, the above conveys the wrong meaning, as this is not what the patch is doing.

If you look at the code, in the presence of user-specified {R,T}XQs it
avoids fetching the available {R,T}XQ count.

What about below?

"Avoid unnecessary avail_{r,t}xq assignments if user has specified Qs"


> Am 14.04.21 um 00:44 schrieb Salil Mehta:
> > If user has explicitly requested the number of {R,T}XQs, then it is
> > unnecessary to get the count of already available {R,T}XQs from the
> > PF avail_{r,t}xqs bitmap. This value will get overridden by user specified
> > value in any case.
> >
> > This patch does minor re-organization of the code for improving the flow
> > and readabiltiy. This scope of improvement was found during the review of
> 
> readabil*it*y


Thanks. Missed that earlier. My shaky fingers :(

 
> > the ICE driver code.
> >
> > FYI, I could not test this change due to unavailability of the hardware.
> > It would be helpful if somebody can test this patch and provide Tested-by
> > Tag. Many thanks!
> 
> This should go outside the commit message (below the --- for example).

Agreed.

> > Fixes: 87324e747fde ("ice: Implement ethtool ops for channels")
> 
> Did you check the behavior before is actually a bug? Or is it just for
> the detection heuristic for commits to be applied to the stable series?

Right, the latter was the idea.

 
> > Cc: intel-wired-lan@lists.osuosl.org
> > Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
> > Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
> > --
> > Change V1->V2
> >   (*) Fixed the comments from Anthony Nguyen(Intel)
> >       Link: https://lkml.org/lkml/2021/4/12/1997
> > ---
> >   drivers/net/ethernet/intel/ice/ice_lib.c | 14 ++++++++------
> >   1 file changed, 8 insertions(+), 6 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c
> b/drivers/net/ethernet/intel/ice/ice_lib.c
> > index d13c7fc8fb0a..d77133d6baa7 100644
> > --- a/drivers/net/ethernet/intel/ice/ice_lib.c
> > +++ b/drivers/net/ethernet/intel/ice/ice_lib.c
> > @@ -161,12 +161,13 @@ static void ice_vsi_set_num_qs(struct ice_vsi *vsi, u16
> vf_id)
> >
> >   	switch (vsi->type) {
> >   	case ICE_VSI_PF:
> > -		vsi->alloc_txq = min3(pf->num_lan_msix,
> > -				      ice_get_avail_txq_count(pf),
> > -				      (u16)num_online_cpus());
> >   		if (vsi->req_txq) {
> >   			vsi->alloc_txq = vsi->req_txq;
> >   			vsi->num_txq = vsi->req_txq;
> > +		} else {
> > +			vsi->alloc_txq = min3(pf->num_lan_msix,
> > +					      ice_get_avail_txq_count(pf),
> > +					      (u16)num_online_cpus());
> >   		}
> 
> I am curious, did you check the compiler actually creates different
> code, or did it notice the inefficiency by itself and optimized it already?

I have not looked into that detail, but irrespective of what the compiler
generates I would like to keep the code in a shape which is more efficient
and more readable.

I do understand that in certain cases we have to make a tradeoff between
efficiency and readability, but I do not see that here.


> >   		pf->num_lan_tx = vsi->alloc_txq;
> > @@ -175,12 +176,13 @@ static void ice_vsi_set_num_qs(struct ice_vsi *vsi, u16
> vf_id)
> >   		if (!test_bit(ICE_FLAG_RSS_ENA, pf->flags)) {
> >   			vsi->alloc_rxq = 1;
> >   		} else {
> > -			vsi->alloc_rxq = min3(pf->num_lan_msix,
> > -					      ice_get_avail_rxq_count(pf),
> > -					      (u16)num_online_cpus());
> >   			if (vsi->req_rxq) {
> >   				vsi->alloc_rxq = vsi->req_rxq;
> >   				vsi->num_rxq = vsi->req_rxq;
> > +			} else {
> > +				vsi->alloc_rxq = min3(pf->num_lan_msix,
> > +						      ice_get_avail_rxq_count(pf),
> > +						      (u16)num_online_cpus());
> >   			}
> >   		}
> >
> 
> 
> Kind regards,
> 
> Paul
Paul Menzel April 21, 2021, 7:54 a.m. UTC | #5
[CC: Remove Jeff, as email is rejected]

Dear Salil,


Am 21.04.21 um 09:41 schrieb Salil Mehta:
>> From: Paul Menzel [mailto:pmenzel@molgen.mpg.de]
>> Sent: Wednesday, April 21, 2021 6:36 AM

[…]

>> In the git commit message summary, could you please use imperative mood [1]?
> 
> No issues. There is always a scope of improvement.
> 
>>> Re-organize reqstd/avail {R, T}XQ check/code for efficiency+readability
>>
>> It’s a bit long though. Maybe:
>>
>> Avoid unnecessary assignment with user specified {R,T}XQs
> 
> Umm..above conveys the wrong meaning as this is not what patch is doing.
> 
> If you see the code, in the presence of the user specified {R,T}XQs it
> avoids fetching available {R,T}XQ count.
> 
> What about below?
> 
> "Avoid unnecessary avail_{r,t}xq assignments if user has specified Qs"

Sounds good, still a little long. Maybe:

> Avoid unnecessary avail_{r,t}xq assignments with user specified Qs

>> Am 14.04.21 um 00:44 schrieb Salil Mehta:
>>> If user has explicitly requested the number of {R,T}XQs, then it is
>>> unnecessary to get the count of already available {R,T}XQs from the
>>> PF avail_{r,t}xqs bitmap. This value will get overridden by user specified
>>> value in any case.
>>>
>>> This patch does minor re-organization of the code for improving the flow
>>> and readabiltiy. This scope of improvement was found during the review of
>>
>> readabil*it*y
> 
> Thanks. Missed that earlier. My shaky fingers :(
> 
>>> the ICE driver code.
>>>
>>> FYI, I could not test this change due to unavailability of the hardware.
>>> It would be helpful if somebody can test this patch and provide Tested-by
>>> Tag. Many thanks!
>>
>> This should go outside the commit message (below the --- for example).
> 
> Agreed.
> 
>>> Fixes: 87324e747fde ("ice: Implement ethtool ops for channels")
>>
>> Did you check the behavior before is actually a bug? Or is it just for
>> the detection heuristic for commits to be applied to the stable series?
> 
> Right, later was the idea.
>   
>>> Cc: intel-wired-lan@lists.osuosl.org
>>> Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
>>> Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
>>> --
>>> Change V1->V2
>>>    (*) Fixed the comments from Anthony Nguyen(Intel)
>>>        Link: https://lkml.org/lkml/2021/4/12/1997
>>> ---
>>>    drivers/net/ethernet/intel/ice/ice_lib.c | 14 ++++++++------
>>>    1 file changed, 8 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
>>> index d13c7fc8fb0a..d77133d6baa7 100644
>>> --- a/drivers/net/ethernet/intel/ice/ice_lib.c
>>> +++ b/drivers/net/ethernet/intel/ice/ice_lib.c
>>> @@ -161,12 +161,13 @@ static void ice_vsi_set_num_qs(struct ice_vsi *vsi, u16 vf_id)
>>>
>>>    	switch (vsi->type) {
>>>    	case ICE_VSI_PF:
>>> -		vsi->alloc_txq = min3(pf->num_lan_msix,
>>> -				      ice_get_avail_txq_count(pf),
>>> -				      (u16)num_online_cpus());
>>>    		if (vsi->req_txq) {
>>>    			vsi->alloc_txq = vsi->req_txq;
>>>    			vsi->num_txq = vsi->req_txq;
>>> +		} else {
>>> +			vsi->alloc_txq = min3(pf->num_lan_msix,
>>> +					      ice_get_avail_txq_count(pf),
>>> +					      (u16)num_online_cpus());
>>>    		}
>>
>> I am curious, did you check the compiler actually creates different
>> code, or did it notice the inefficiency by itself and optimized it already?
> 
> I have not looked into that detail but irrespective of what compiler generates
> I would like to keep the code in a shape which is more efficient and more readable.
> 
> I do understand in certain cases we have to do tradeoff between efficiency
> and readability but I do not see that here.

I agree, as *efficiency* is mentioned several times, I assume it was 
tested. Thank you for the clarification.

>>>    		pf->num_lan_tx = vsi->alloc_txq;
>>> @@ -175,12 +176,13 @@ static void ice_vsi_set_num_qs(struct ice_vsi *vsi, u16 vf_id)
>>>    		if (!test_bit(ICE_FLAG_RSS_ENA, pf->flags)) {
>>>    			vsi->alloc_rxq = 1;
>>>    		} else {
>>> -			vsi->alloc_rxq = min3(pf->num_lan_msix,
>>> -					      ice_get_avail_rxq_count(pf),
>>> -					      (u16)num_online_cpus());
>>>    			if (vsi->req_rxq) {
>>>    				vsi->alloc_rxq = vsi->req_rxq;
>>>    				vsi->num_rxq = vsi->req_rxq;
>>> +			} else {
>>> +				vsi->alloc_rxq = min3(pf->num_lan_msix,
>>> +						      ice_get_avail_rxq_count(pf),
>>> +						      (u16)num_online_cpus());
>>>    			}
>>>    		}
>>>


Kind regards,

Paul
Salil Mehta April 21, 2021, 8:08 a.m. UTC | #6
> From: Paul Menzel [mailto:pmenzel@molgen.mpg.de]
> Sent: Wednesday, April 21, 2021 8:54 AM
> 
> [CC: Remove Jeff, as email is rejected]

Yes, thanks for the reminder. I had noticed it earlier.

[...]

> >>> diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c
> b/drivers/net/ethernet/intel/ice/ice_lib.c
> >>> index d13c7fc8fb0a..d77133d6baa7 100644
> >>> --- a/drivers/net/ethernet/intel/ice/ice_lib.c
> >>> +++ b/drivers/net/ethernet/intel/ice/ice_lib.c
> >>> @@ -161,12 +161,13 @@ static void ice_vsi_set_num_qs(struct ice_vsi *vsi,
> u16 vf_id)
> >>>
> >>>    	switch (vsi->type) {
> >>>    	case ICE_VSI_PF:
> >>> -		vsi->alloc_txq = min3(pf->num_lan_msix,
> >>> -				      ice_get_avail_txq_count(pf),
> >>> -				      (u16)num_online_cpus());
> >>>    		if (vsi->req_txq) {
> >>>    			vsi->alloc_txq = vsi->req_txq;
> >>>    			vsi->num_txq = vsi->req_txq;
> >>> +		} else {
> >>> +			vsi->alloc_txq = min3(pf->num_lan_msix,
> >>> +					      ice_get_avail_txq_count(pf),
> >>> +					      (u16)num_online_cpus());
> >>>    		}
> >>
> >> I am curious, did you check the compiler actually creates different
> >> code, or did it notice the inefficiency by itself and optimized it already?
> >
> > I have not looked into that detail but irrespective of what compiler generates
> > I would like to keep the code in a shape which is more efficient and more readable.
> >
> > I do understand in certain cases we have to do tradeoff between efficiency
> > and readability but I do not see that here.
> 
> I agree, as *efficiency* is mentioned several times, I assume it was
> tested. Thank you for the clarification.


I mentioned inefficiency because the code below gets executed unnecessarily.


/**
 * ice_get_avail_q_count - Get count of queues in use
 * @pf_qmap: bitmap to get queue use count from
 * @lock: pointer to a mutex that protects access to pf_qmap
 * @size: size of the bitmap
 */
static u16
ice_get_avail_q_count(unsigned long *pf_qmap, struct mutex *lock, u16 size)
{
	unsigned long bit;
	u16 count = 0;

	mutex_lock(lock);
	for_each_clear_bit(bit, pf_qmap, size)
		count++;
	mutex_unlock(lock);

	return count;
}

/**
 * ice_get_avail_txq_count - Get count of Tx queues in use
 * @pf: pointer to an ice_pf instance
 */
u16 ice_get_avail_txq_count(struct ice_pf *pf)
{
	return ice_get_avail_q_count(pf->avail_txqs, &pf->avail_q_mutex,
				     pf->max_pf_txqs);
}



> >>>    		pf->num_lan_tx = vsi->alloc_txq;
> >>> @@ -175,12 +176,13 @@ static void ice_vsi_set_num_qs(struct ice_vsi *vsi,
> u16 vf_id)
> >>>    		if (!test_bit(ICE_FLAG_RSS_ENA, pf->flags)) {
> >>>    			vsi->alloc_rxq = 1;
> >>>    		} else {
> >>> -			vsi->alloc_rxq = min3(pf->num_lan_msix,
> >>> -					      ice_get_avail_rxq_count(pf),
> >>> -					      (u16)num_online_cpus());
> >>>    			if (vsi->req_rxq) {
> >>>    				vsi->alloc_rxq = vsi->req_rxq;
> >>>    				vsi->num_rxq = vsi->req_rxq;
> >>> +			} else {
> >>> +				vsi->alloc_rxq = min3(pf->num_lan_msix,
> >>> +						      ice_get_avail_rxq_count(pf),
> >>> +						      (u16)num_online_cpus());
> >>>    			}
> >>>    		}
> >>>
> 
> 
> Kind regards,
> 
> Paul

Patch

diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index d13c7fc8fb0a..d77133d6baa7 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -161,12 +161,13 @@  static void ice_vsi_set_num_qs(struct ice_vsi *vsi, u16 vf_id)
 
 	switch (vsi->type) {
 	case ICE_VSI_PF:
-		vsi->alloc_txq = min3(pf->num_lan_msix,
-				      ice_get_avail_txq_count(pf),
-				      (u16)num_online_cpus());
 		if (vsi->req_txq) {
 			vsi->alloc_txq = vsi->req_txq;
 			vsi->num_txq = vsi->req_txq;
+		} else {
+			vsi->alloc_txq = min3(pf->num_lan_msix,
+					      ice_get_avail_txq_count(pf),
+					      (u16)num_online_cpus());
 		}
 
 		pf->num_lan_tx = vsi->alloc_txq;
@@ -175,12 +176,13 @@  static void ice_vsi_set_num_qs(struct ice_vsi *vsi, u16 vf_id)
 		if (!test_bit(ICE_FLAG_RSS_ENA, pf->flags)) {
 			vsi->alloc_rxq = 1;
 		} else {
-			vsi->alloc_rxq = min3(pf->num_lan_msix,
-					      ice_get_avail_rxq_count(pf),
-					      (u16)num_online_cpus());
 			if (vsi->req_rxq) {
 				vsi->alloc_rxq = vsi->req_rxq;
 				vsi->num_rxq = vsi->req_rxq;
+			} else {
+				vsi->alloc_rxq = min3(pf->num_lan_msix,
+						      ice_get_avail_rxq_count(pf),
+						      (u16)num_online_cpus());
 			}
 		}