
[net-next] octeontx2-af: Fix multicast/mirror group lock/unlock issue

Message ID 20231212091558.49579-1-sumang@marvell.com (mailing list archive)
State Superseded
Delegated to: Netdev Maintainers
Series [net-next] octeontx2-af: Fix multicast/mirror group lock/unlock issue

Checks

Context Check Description
netdev/series_format success Single patches do not need cover letters
netdev/tree_selection success Clearly marked for net-next
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 8 this patch: 8
netdev/cc_maintainers fail 2 blamed authors not CCed: horms@kernel.org wojciech.drewek@intel.com; 2 maintainers not CCed: horms@kernel.org wojciech.drewek@intel.com
netdev/build_clang success Errors and warnings before: 1142 this patch: 1142
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success Fixes tag looks correct
netdev/build_allmodconfig_warn success Errors and warnings before: 1142 this patch: 1142
netdev/checkpatch warning WARNING: line length of 81 exceeds 80 columns
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Suman Ghosh Dec. 12, 2023, 9:15 a.m. UTC
As per the existing implementation, there exists a race between finding
a multicast/mirror group entry and deleting that entry. The group lock
was taken and released independently by the rvu_nix_mcast_find_grp_elem()
function, which is incorrect; the group lock should be held across the
entire group update/delete operation. This patch fixes that.

Fixes: 51b2804c19cd ("octeontx2-af: Add new mbox to support multicast/mirror offload")
Signed-off-by: Suman Ghosh <sumang@marvell.com>
---

Note: This is a follow-up to

https://git.kernel.org/netdev/net-next/c/51b2804c19cd

 .../ethernet/marvell/octeontx2/af/rvu_nix.c   | 58 +++++++++++++------
 1 file changed, 40 insertions(+), 18 deletions(-)
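
For context, the race described above exists because the lock scope ends inside the find helper: rvu_nix_mcast_find_grp_elem() takes and drops mcast_grp_lock itself, so the element it returns can be freed by a concurrent destroy before the caller dereferences it. Below is a minimal user-space analogue of the buggy and fixed lock scoping (pthread mutex instead of the kernel mutex, illustrative names; this is a sketch, not the driver code):

#include <pthread.h>

struct grp_elem {
	struct grp_elem *next;
	int grp_idx;
	int mce_start_index;
};

struct grp_elem *grp_list;
pthread_mutex_t grp_lock = PTHREAD_MUTEX_INITIALIZER;	/* stands in for mcast_grp_lock */

/* Buggy pattern: the lock is dropped before the caller uses the element. */
struct grp_elem *find_grp_elem_racy(int grp_idx)
{
	struct grp_elem *iter, *found = NULL;

	pthread_mutex_lock(&grp_lock);
	for (iter = grp_list; iter; iter = iter->next) {
		if (iter->grp_idx == grp_idx) {
			found = iter;
			break;
		}
	}
	pthread_mutex_unlock(&grp_lock);	/* 'found' is unprotected from here on */
	return found;
}

/* Fixed pattern: the caller holds the lock around both the lookup and the use. */
int get_mce_index(int grp_idx)
{
	struct grp_elem *iter;
	int ret = -1;	/* stand-in for NIX_AF_ERR_INVALID_MCAST_GRP */

	pthread_mutex_lock(&grp_lock);
	for (iter = grp_list; iter; iter = iter->next) {
		if (iter->grp_idx == grp_idx) {
			ret = iter->mce_start_index;	/* safe: lock still held */
			break;
		}
	}
	pthread_mutex_unlock(&grp_lock);
	return ret;
}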

Comments

Simon Horman Dec. 12, 2023, 11:16 a.m. UTC | #1
On Tue, Dec 12, 2023 at 02:45:58PM +0530, Suman Ghosh wrote:
> As per the existing implementation, there exists a race between finding
> a multicast/mirror group entry and deleting that entry. The group lock
> was taken and released independently by the rvu_nix_mcast_find_grp_elem()
> function, which is incorrect; the group lock should be held across the
> entire group update/delete operation. This patch fixes that.
> 
> Fixes: 51b2804c19cd ("octeontx2-af: Add new mbox to support multicast/mirror offload")
> Signed-off-by: Suman Ghosh <sumang@marvell.com>

...

> @@ -6306,6 +6310,13 @@ int rvu_mbox_handler_nix_mcast_grp_destroy(struct rvu *rvu,
>  		return err;
>  
>  	mcast_grp = &nix_hw->mcast_grp;
> +
> +	/* If AF is requesting for the deletion,
> +	 * then AF is already taking the lock
> +	 */
> +	if (!req->is_af)
> +		mutex_lock(&mcast_grp->mcast_grp_lock);
> +
>  	elem = rvu_nix_mcast_find_grp_elem(mcast_grp, req->mcast_grp_idx);
>  	if (!elem)

Hi Suman,

Does mcast_grp_lock need to be released here?
If so, I would suggest a goto label, say unlock_grp.

>  		return NIX_AF_ERR_INVALID_MCAST_GRP;
> @@ -6333,12 +6344,6 @@ int rvu_mbox_handler_nix_mcast_grp_destroy(struct rvu *rvu,
>  	mutex_unlock(&mcast->mce_lock);
>  
>  delete_grp:
> -	/* If AF is requesting for the deletion,
> -	 * then AF is already taking the lock
> -	 */
> -	if (!req->is_af)
> -		mutex_lock(&mcast_grp->mcast_grp_lock);
> -
>  	list_del(&elem->list);
>  	kfree(elem);
>  	mcast_grp->count--;
> @@ -6370,9 +6375,20 @@ int rvu_mbox_handler_nix_mcast_grp_update(struct rvu *rvu,
>  		return err;
>  
>  	mcast_grp = &nix_hw->mcast_grp;
> +
> +	/* If AF is requesting for the updation,
> +	 * then AF is already taking the lock
> +	 */
> +	if (!req->is_af)
> +		mutex_lock(&mcast_grp->mcast_grp_lock);
> +
>  	elem = rvu_nix_mcast_find_grp_elem(mcast_grp, req->mcast_grp_idx);
> -	if (!elem)
> +	if (!elem) {
> +		if (!req->is_af)
> +			mutex_unlock(&mcast_grp->mcast_grp_lock);
> +
>  		return NIX_AF_ERR_INVALID_MCAST_GRP;
> +	}
>  
>  	/* If any pcifunc matches the group's pcifunc, then we can
>  	 * delete the entire group.
> @@ -6383,8 +6399,11 @@ int rvu_mbox_handler_nix_mcast_grp_update(struct rvu *rvu,
>  				/* Delete group */
>  				dreq.hdr.pcifunc = elem->pcifunc;
>  				dreq.mcast_grp_idx = elem->mcast_grp_idx;
> -				dreq.is_af = req->is_af;
> +				dreq.is_af = 1;
>  				rvu_mbox_handler_nix_mcast_grp_destroy(rvu, &dreq, NULL);
> +				if (!req->is_af)
> +					mutex_unlock(&mcast_grp->mcast_grp_lock);
> +
>  				return 0;
>  			}
>  		}
> @@ -6467,5 +6486,8 @@ int rvu_mbox_handler_nix_mcast_grp_update(struct rvu *rvu,
>  
>  done:

I think it would be good to rename this label, say unlock_mce;

>  	mutex_unlock(&mcast->mce_lock);

Add a new label here, say unlock_grp;
And jump to this label whenever there is a need for the mutex_unlock() below.

> +	if (!req->is_af)
> +		mutex_unlock(&mcast_grp->mcast_grp_lock);
> +
>  	return ret;
>  }
> -- 
> 2.25.1
>
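
For readers following the suggestion above: the two-label unwind would look roughly like the user-space sketch below (pthread mutexes, illustrative names and error values; a sketch of the suggested structure, not the actual v2 patch). The group lock is only taken and released for non-AF requests, mirroring req->is_af, and every failure after that point funnels through the labels.

#include <errno.h>
#include <pthread.h>
#include <stdbool.h>

pthread_mutex_t grp_lock = PTHREAD_MUTEX_INITIALIZER;	/* stands in for mcast_grp_lock */
pthread_mutex_t mce_lock = PTHREAD_MUTEX_INITIALIZER;	/* stands in for mce_lock */

int update_grp(bool is_af, bool grp_found, bool mce_ok)
{
	int ret = 0;

	/* AF callers already hold the group lock, mirroring req->is_af. */
	if (!is_af)
		pthread_mutex_lock(&grp_lock);

	if (!grp_found) {
		ret = -EINVAL;		/* stand-in for NIX_AF_ERR_INVALID_MCAST_GRP */
		goto unlock_grp;	/* mce_lock not taken yet, so skip unlock_mce */
	}

	pthread_mutex_lock(&mce_lock);

	if (!mce_ok) {
		ret = -ENOSPC;		/* any later failure unwinds both locks */
		goto unlock_mce;
	}

	/* ... the actual group update would happen here ... */

unlock_mce:
	pthread_mutex_unlock(&mce_lock);
unlock_grp:
	if (!is_af)
		pthread_mutex_unlock(&grp_lock);

	return ret;
}
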
Suman Ghosh Dec. 13, 2023, 4:52 a.m. UTC | #2
>
>> @@ -6306,6 +6310,13 @@ int rvu_mbox_handler_nix_mcast_grp_destroy(struct rvu *rvu,
>>  		return err;
>>
>>  	mcast_grp = &nix_hw->mcast_grp;
>> +
>> +	/* If AF is requesting for the deletion,
>> +	 * then AF is already taking the lock
>> +	 */
>> +	if (!req->is_af)
>> +		mutex_lock(&mcast_grp->mcast_grp_lock);
>> +
>>  	elem = rvu_nix_mcast_find_grp_elem(mcast_grp, req->mcast_grp_idx);
>>  	if (!elem)
>
>Hi Suman,
>
>Does mcast_grp_lock need to be released here?
>If so, I would suggest a goto label, say unlock_grp.
[Suman] ack, will update in v2
>
>>  		return NIX_AF_ERR_INVALID_MCAST_GRP;
>> @@ -6333,12 +6344,6 @@ int rvu_mbox_handler_nix_mcast_grp_destroy(struct rvu *rvu,
>>  	mutex_unlock(&mcast->mce_lock);
>>
>>  delete_grp:
>> -	/* If AF is requesting for the deletion,
>> -	 * then AF is already taking the lock
>> -	 */
>> -	if (!req->is_af)
>> -		mutex_lock(&mcast_grp->mcast_grp_lock);
>> -
>>  	list_del(&elem->list);
>>  	kfree(elem);
>>  	mcast_grp->count--;
>> @@ -6370,9 +6375,20 @@ int rvu_mbox_handler_nix_mcast_grp_update(struct rvu *rvu,
>>  		return err;
>>
>>  	mcast_grp = &nix_hw->mcast_grp;
>> +
>> +	/* If AF is requesting for the updation,
>> +	 * then AF is already taking the lock
>> +	 */
>> +	if (!req->is_af)
>> +		mutex_lock(&mcast_grp->mcast_grp_lock);
>> +
>>  	elem = rvu_nix_mcast_find_grp_elem(mcast_grp, req->mcast_grp_idx);
>> -	if (!elem)
>> +	if (!elem) {
>> +		if (!req->is_af)
>> +			mutex_unlock(&mcast_grp->mcast_grp_lock);
>> +
>>  		return NIX_AF_ERR_INVALID_MCAST_GRP;
>> +	}
>>
>>  	/* If any pcifunc matches the group's pcifunc, then we can
>>  	 * delete the entire group.
>> @@ -6383,8 +6399,11 @@ int rvu_mbox_handler_nix_mcast_grp_update(struct rvu *rvu,
>>  				/* Delete group */
>>  				dreq.hdr.pcifunc = elem->pcifunc;
>>  				dreq.mcast_grp_idx = elem->mcast_grp_idx;
>> -				dreq.is_af = req->is_af;
>> +				dreq.is_af = 1;
>>  				rvu_mbox_handler_nix_mcast_grp_destroy(rvu, &dreq, NULL);
>> +				if (!req->is_af)
>> +					mutex_unlock(&mcast_grp->mcast_grp_lock);
>> +
>>  				return 0;
>>  			}
>>  		}
>> @@ -6467,5 +6486,8 @@ int rvu_mbox_handler_nix_mcast_grp_update(struct rvu *rvu,
>>
>>  done:
>
>I think it would be good to rename this label, say unlock_mce;
[Suman] ack, will update in v2
>
>>  	mutex_unlock(&mcast->mce_lock);
>
>Add a new label here, say unlock_grp;
>And jump to this label whenever there is a need for the mutex_unlock()
>below.
[Suman] ack, will update in v2
>
>> +	if (!req->is_af)
>> +		mutex_unlock(&mcast_grp->mcast_grp_lock);
>> +
>>  	return ret;
>>  }
>> --
>> 2.25.1
>>

Patch

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
index b01503acd520..0ab5626380c5 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
@@ -6142,14 +6142,12 @@  static struct nix_mcast_grp_elem *rvu_nix_mcast_find_grp_elem(struct nix_mcast_g
 	struct nix_mcast_grp_elem *iter;
 	bool is_found = false;
 
-	mutex_lock(&mcast_grp->mcast_grp_lock);
 	list_for_each_entry(iter, &mcast_grp->mcast_grp_head, list) {
 		if (iter->mcast_grp_idx == mcast_grp_idx) {
 			is_found = true;
 			break;
 		}
 	}
-	mutex_unlock(&mcast_grp->mcast_grp_lock);
 
 	if (is_found)
 		return iter;
@@ -6162,7 +6160,7 @@  int rvu_nix_mcast_get_mce_index(struct rvu *rvu, u16 pcifunc, u32 mcast_grp_idx)
 	struct nix_mcast_grp_elem *elem;
 	struct nix_mcast_grp *mcast_grp;
 	struct nix_hw *nix_hw;
-	int blkaddr;
+	int blkaddr, ret;
 
 	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc);
 	nix_hw = get_nix_hw(rvu->hw, blkaddr);
@@ -6170,11 +6168,15 @@  int rvu_nix_mcast_get_mce_index(struct rvu *rvu, u16 pcifunc, u32 mcast_grp_idx)
 		return NIX_AF_ERR_INVALID_NIXBLK;
 
 	mcast_grp = &nix_hw->mcast_grp;
+	mutex_lock(&mcast_grp->mcast_grp_lock);
 	elem = rvu_nix_mcast_find_grp_elem(mcast_grp, mcast_grp_idx);
 	if (!elem)
-		return NIX_AF_ERR_INVALID_MCAST_GRP;
+		ret = NIX_AF_ERR_INVALID_MCAST_GRP;
+	else
+		ret = elem->mce_start_index;
 
-	return elem->mce_start_index;
+	mutex_unlock(&mcast_grp->mcast_grp_lock);
+	return ret;
 }
 
 void rvu_nix_mcast_flr_free_entries(struct rvu *rvu, u16 pcifunc)
@@ -6238,7 +6240,7 @@  int rvu_nix_mcast_update_mcam_entry(struct rvu *rvu, u16 pcifunc,
 	struct nix_mcast_grp_elem *elem;
 	struct nix_mcast_grp *mcast_grp;
 	struct nix_hw *nix_hw;
-	int blkaddr;
+	int blkaddr, ret = 0;
 
 	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc);
 	nix_hw = get_nix_hw(rvu->hw, blkaddr);
@@ -6246,13 +6248,15 @@  int rvu_nix_mcast_update_mcam_entry(struct rvu *rvu, u16 pcifunc,
 		return NIX_AF_ERR_INVALID_NIXBLK;
 
 	mcast_grp = &nix_hw->mcast_grp;
+	mutex_lock(&mcast_grp->mcast_grp_lock);
 	elem = rvu_nix_mcast_find_grp_elem(mcast_grp, mcast_grp_idx);
 	if (!elem)
-		return NIX_AF_ERR_INVALID_MCAST_GRP;
-
-	elem->mcam_index = mcam_index;
+		ret = NIX_AF_ERR_INVALID_MCAST_GRP;
+	else
+		elem->mcam_index = mcam_index;
 
-	return 0;
+	mutex_unlock(&mcast_grp->mcast_grp_lock);
+	return ret;
 }
 
 int rvu_mbox_handler_nix_mcast_grp_create(struct rvu *rvu,
@@ -6306,6 +6310,13 @@  int rvu_mbox_handler_nix_mcast_grp_destroy(struct rvu *rvu,
 		return err;
 
 	mcast_grp = &nix_hw->mcast_grp;
+
+	/* If AF is requesting for the deletion,
+	 * then AF is already taking the lock
+	 */
+	if (!req->is_af)
+		mutex_lock(&mcast_grp->mcast_grp_lock);
+
 	elem = rvu_nix_mcast_find_grp_elem(mcast_grp, req->mcast_grp_idx);
 	if (!elem)
 		return NIX_AF_ERR_INVALID_MCAST_GRP;
@@ -6333,12 +6344,6 @@  int rvu_mbox_handler_nix_mcast_grp_destroy(struct rvu *rvu,
 	mutex_unlock(&mcast->mce_lock);
 
 delete_grp:
-	/* If AF is requesting for the deletion,
-	 * then AF is already taking the lock
-	 */
-	if (!req->is_af)
-		mutex_lock(&mcast_grp->mcast_grp_lock);
-
 	list_del(&elem->list);
 	kfree(elem);
 	mcast_grp->count--;
@@ -6370,9 +6375,20 @@  int rvu_mbox_handler_nix_mcast_grp_update(struct rvu *rvu,
 		return err;
 
 	mcast_grp = &nix_hw->mcast_grp;
+
+	/* If AF is requesting for the updation,
+	 * then AF is already taking the lock
+	 */
+	if (!req->is_af)
+		mutex_lock(&mcast_grp->mcast_grp_lock);
+
 	elem = rvu_nix_mcast_find_grp_elem(mcast_grp, req->mcast_grp_idx);
-	if (!elem)
+	if (!elem) {
+		if (!req->is_af)
+			mutex_unlock(&mcast_grp->mcast_grp_lock);
+
 		return NIX_AF_ERR_INVALID_MCAST_GRP;
+	}
 
 	/* If any pcifunc matches the group's pcifunc, then we can
 	 * delete the entire group.
@@ -6383,8 +6399,11 @@  int rvu_mbox_handler_nix_mcast_grp_update(struct rvu *rvu,
 				/* Delete group */
 				dreq.hdr.pcifunc = elem->pcifunc;
 				dreq.mcast_grp_idx = elem->mcast_grp_idx;
-				dreq.is_af = req->is_af;
+				dreq.is_af = 1;
 				rvu_mbox_handler_nix_mcast_grp_destroy(rvu, &dreq, NULL);
+				if (!req->is_af)
+					mutex_unlock(&mcast_grp->mcast_grp_lock);
+
 				return 0;
 			}
 		}
@@ -6467,5 +6486,8 @@  int rvu_mbox_handler_nix_mcast_grp_update(struct rvu *rvu,
 
 done:
 	mutex_unlock(&mcast->mce_lock);
+	if (!req->is_af)
+		mutex_unlock(&mcast_grp->mcast_grp_lock);
+
 	return ret;
 }