
RDMA/srpt: Fix TPG creation

Message ID 20191023204106.23326-1-bvanassche@acm.org (mailing list archive)
State Mainlined
Commit 79d81ef42c9a8feee2f1df5dffa6ac628b71141d
Delegated to: Jason Gunthorpe
Series RDMA/srpt: Fix TPG creation

Commit Message

Bart Van Assche Oct. 23, 2019, 8:41 p.m. UTC
Unlike the iSCSI target driver, for the SRP target driver it is sufficient
if a single TPG can be associated with each RDMA port name. However, users
started associating multiple TPGs with RDMA port names. Support this by
converting the single TPG in struct srpt_port_id into a list. This patch
fixes the following list corruption issue:

list_add corruption. prev->next should be next (ffffffffc0a080c0), but was ffffa08a994ce6f0. (prev=ffffa08a994ce6f0).
WARNING: CPU: 2 PID: 2597 at lib/list_debug.c:28 __list_add_valid+0x6a/0x70
CPU: 2 PID: 2597 Comm: targetcli Not tainted 5.4.0-rc1.3bfa3c9602a7 #1
RIP: 0010:__list_add_valid+0x6a/0x70
Call Trace:
 core_tpg_register+0x116/0x200 [target_core_mod]
 srpt_make_tpg+0x3f/0x60 [ib_srpt]
 target_fabric_make_tpg+0x41/0x290 [target_core_mod]
 configfs_mkdir+0x158/0x3e0
 vfs_mkdir+0x108/0x1a0
 do_mkdirat+0x77/0xe0
 do_syscall_64+0x55/0x1d0
 entry_SYSCALL_64_after_hwframe+0x44/0xa9

Cc: Honggang LI <honli@redhat.com>
Reported-by: Honggang LI <honli@redhat.com>
Fixes: a42d985bd5b2 ("ib_srpt: Initial SRP Target merge for v3.3-rc1")
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/infiniband/ulp/srpt/ib_srpt.c | 77 ++++++++++++++++++---------
 drivers/infiniband/ulp/srpt/ib_srpt.h | 25 +++++++--
 2 files changed, 73 insertions(+), 29 deletions(-)

Comments

Honggang LI Oct. 24, 2019, 3:37 a.m. UTC | #1
On Wed, Oct 23, 2019 at 01:41:06PM -0700, Bart Van Assche wrote:
> Unlike the iSCSI target driver, for the SRP target driver it is sufficient
> if a single TPG can be associated with each RDMA port name. However, users
> started associating multiple TPGs with RDMA port names. Support this by
> converting the single TPG in struct srpt_port_id into a list. This patch
> fixes the following list corruption issue:
> 

First of all, this patch does fix the list corruption issue.
I can't reproduce it anymore after applying this patch.

> list_add corruption. prev->next should be next (ffffffffc0a080c0), but was ffffa08a994ce6f0. (prev=ffffa08a994ce6f0).
> WARNING: CPU: 2 PID: 2597 at lib/list_debug.c:28 __list_add_valid+0x6a/0x70
> CPU: 2 PID: 2597 Comm: targetcli Not tainted 5.4.0-rc1.3bfa3c9602a7 #1
> RIP: 0010:__list_add_valid+0x6a/0x70
> Call Trace:
>  core_tpg_register+0x116/0x200 [target_core_mod]
>  srpt_make_tpg+0x3f/0x60 [ib_srpt]
>  target_fabric_make_tpg+0x41/0x290 [target_core_mod]
>  configfs_mkdir+0x158/0x3e0
>  vfs_mkdir+0x108/0x1a0
>  do_mkdirat+0x77/0xe0
>  do_syscall_64+0x55/0x1d0
>  entry_SYSCALL_64_after_hwframe+0x44/0xa9
> 

> +
> +	mutex_lock(&sport->port_gid_id.mutex);
> +	list_for_each_entry(stpg, &sport->port_gid_id.tpg_list, entry) {
> +		if (!IS_ERR_OR_NULL(ch->sess))
                ^^^^^^^
> +			break;
> +		ch->sess = target_setup_session(&stpg->tpg, tag_num,
                           ^^^^^^^^^^^^^^^^^^^^
>  					tag_size, TARGET_PROT_NORMAL, i_port_id,
>  					ch, NULL);
> -	/* Retry without leading "0x" */
> -	if (sport->port_gid_id.tpg.se_tpg_wwn && IS_ERR_OR_NULL(ch->sess))
> -		ch->sess = target_setup_session(&sport->port_gid_id.tpg, tag_num,
> +		if (!IS_ERR_OR_NULL(ch->sess))
                    ^
> +			break;

I'm confused about this 'if' statement. If it merely repeats the check from
the previous 'if' statement, it is redundant.

If it checks the return value of the first target_setup_session() call, it
seems wrong: we only need to retry when the first target_setup_session()
failed, but here you break out and skip the second target_setup_session()
call.

> +		/* Retry without leading "0x" */
> +		ch->sess = target_setup_session(&stpg->tpg, tag_num,
                           ^^^^^^^^^^^^^^^^^^^^
>  						tag_size, TARGET_PROT_NORMAL,
>  						i_port_id + 2, ch, NULL);

thanks
Bart Van Assche Oct. 24, 2019, 4:01 a.m. UTC | #2
On 2019-10-23 20:37, Honggang LI wrote:
> On Wed, Oct 23, 2019 at 01:41:06PM -0700, Bart Van Assche wrote:
>> +	mutex_lock(&sport->port_gid_id.mutex);
>> +	list_for_each_entry(stpg, &sport->port_gid_id.tpg_list, entry) {
>> +		if (!IS_ERR_OR_NULL(ch->sess))
>                 ^^^^^^^
>> +			break;
>> +		ch->sess = target_setup_session(&stpg->tpg, tag_num,
>                            ^^^^^^^^^^^^^^^^^^^^
>>  					tag_size, TARGET_PROT_NORMAL, i_port_id,
>>  					ch, NULL);
>> -	/* Retry without leading "0x" */
>> -	if (sport->port_gid_id.tpg.se_tpg_wwn && IS_ERR_OR_NULL(ch->sess))
>> -		ch->sess = target_setup_session(&sport->port_gid_id.tpg, tag_num,
>> +		if (!IS_ERR_OR_NULL(ch->sess))
>                     ^
>> +			break;
> 
> I'm confused about this 'if' statement. If it merely repeats the check from
> the previous 'if' statement, it is redundant.
> 
> If it checks the return value of the first target_setup_session() call, it
> seems wrong: we only need to retry when the first target_setup_session()
> failed, but here you break out and skip the second target_setup_session()
> call.
> 
>> +		/* Retry without leading "0x" */
>> +		ch->sess = target_setup_session(&stpg->tpg, tag_num,
>                            ^^^^^^^^^^^^^^^^^^^^
>>  						tag_size, TARGET_PROT_NORMAL,
>>  						i_port_id + 2, ch, NULL);

Hi Honggang,

The purpose of this code is to keep iterating until a session has been
created. The "if (!IS_ERR_OR_NULL(ch->sess)) break" code prevents
further target_setup_session() calls after a session has been created
successfully.

Bart.
Honggang LI Oct. 24, 2019, 1:07 p.m. UTC | #3
On Wed, Oct 23, 2019 at 01:41:06PM -0700, Bart Van Assche wrote:
> Unlike the iSCSI target driver, for the SRP target driver it is sufficient
> if a single TPG can be associated with each RDMA port name. However, users
> started associating multiple TPGs with RDMA port names. Support this by
> converting the single TPG in struct srpt_port_id into a list. This patch
> fixes the following list corruption issue:
> 
> list_add corruption. prev->next should be next (ffffffffc0a080c0), but was ffffa08a994ce6f0. (prev=ffffa08a994ce6f0).
> WARNING: CPU: 2 PID: 2597 at lib/list_debug.c:28 __list_add_valid+0x6a/0x70
> CPU: 2 PID: 2597 Comm: targetcli Not tainted 5.4.0-rc1.3bfa3c9602a7 #1
> RIP: 0010:__list_add_valid+0x6a/0x70
> Call Trace:
>  core_tpg_register+0x116/0x200 [target_core_mod]
>  srpt_make_tpg+0x3f/0x60 [ib_srpt]
>  target_fabric_make_tpg+0x41/0x290 [target_core_mod]
>  configfs_mkdir+0x158/0x3e0
>  vfs_mkdir+0x108/0x1a0
>  do_mkdirat+0x77/0xe0
>  do_syscall_64+0x55/0x1d0
>  entry_SYSCALL_64_after_hwframe+0x44/0xa9
> 
> Cc: Honggang LI <honli@redhat.com>
> Reported-by: Honggang LI <honli@redhat.com>
> Fixes: a42d985bd5b2 ("ib_srpt: Initial SRP Target merge for v3.3-rc1")
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
> ---
>  drivers/infiniband/ulp/srpt/ib_srpt.c | 77 ++++++++++++++++++---------
>  drivers/infiniband/ulp/srpt/ib_srpt.h | 25 +++++++--
>  2 files changed, 73 insertions(+), 29 deletions(-)
> 
> diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c
> index daf811abf40a..a278e76b9e02 100644
> --- a/drivers/infiniband/ulp/srpt/ib_srpt.c
> +++ b/drivers/infiniband/ulp/srpt/ib_srpt.c
> @@ -2131,6 +2131,7 @@ static int srpt_cm_req_recv(struct srpt_device *const sdev,
>  	char i_port_id[36];
>  	u32 it_iu_len;
>  	int i, tag_num, tag_size, ret;
> +	struct srpt_tpg *stpg;
>  
>  	WARN_ON_ONCE(irqs_disabled());
>  
> @@ -2288,19 +2289,33 @@ static int srpt_cm_req_recv(struct srpt_device *const sdev,
>  
>  	tag_num = ch->rq_size;
>  	tag_size = 1; /* ib_srpt does not use se_sess->sess_cmd_map */
> -	if (sport->port_guid_id.tpg.se_tpg_wwn)
> -		ch->sess = target_setup_session(&sport->port_guid_id.tpg, tag_num,
> +
> +	mutex_lock(&sport->port_guid_id.mutex);
> +	list_for_each_entry(stpg, &sport->port_guid_id.tpg_list, entry) {
> +		if (!IS_ERR_OR_NULL(ch->sess))
> +			break;
> +		ch->sess = target_setup_session(&stpg->tpg, tag_num,
>  						tag_size, TARGET_PROT_NORMAL,
>  						ch->sess_name, ch, NULL);
> -	if (sport->port_gid_id.tpg.se_tpg_wwn && IS_ERR_OR_NULL(ch->sess))
> -		ch->sess = target_setup_session(&sport->port_gid_id.tpg, tag_num,
> +	}
> +	mutex_unlock(&sport->port_guid_id.mutex);
> +
> +	mutex_lock(&sport->port_gid_id.mutex);
> +	list_for_each_entry(stpg, &sport->port_gid_id.tpg_list, entry) {
> +		if (!IS_ERR_OR_NULL(ch->sess))
> +			break;
> +		ch->sess = target_setup_session(&stpg->tpg, tag_num,
>  					tag_size, TARGET_PROT_NORMAL, i_port_id,
>  					ch, NULL);
> -	/* Retry without leading "0x" */
> -	if (sport->port_gid_id.tpg.se_tpg_wwn && IS_ERR_OR_NULL(ch->sess))
> -		ch->sess = target_setup_session(&sport->port_gid_id.tpg, tag_num,
> +		if (!IS_ERR_OR_NULL(ch->sess))
> +			break;
> +		/* Retry without leading "0x" */
> +		ch->sess = target_setup_session(&stpg->tpg, tag_num,
>  						tag_size, TARGET_PROT_NORMAL,
>  						i_port_id + 2, ch, NULL);
> +	}
> +	mutex_unlock(&sport->port_gid_id.mutex);
> +
>  	if (IS_ERR_OR_NULL(ch->sess)) {
>  		WARN_ON_ONCE(ch->sess == NULL);
>  		ret = PTR_ERR(ch->sess);
> @@ -3140,6 +3155,10 @@ static void srpt_add_one(struct ib_device *device)
>  		sport->port_attrib.srp_sq_size = DEF_SRPT_SQ_SIZE;
>  		sport->port_attrib.use_srq = false;
>  		INIT_WORK(&sport->work, srpt_refresh_port_work);
> +		mutex_init(&sport->port_guid_id.mutex);
> +		INIT_LIST_HEAD(&sport->port_guid_id.tpg_list);
> +		mutex_init(&sport->port_gid_id.mutex);
> +		INIT_LIST_HEAD(&sport->port_gid_id.tpg_list);
>  
>  		if (srpt_refresh_port(sport)) {
>  			pr_err("MAD registration failed for %s-%d.\n",
> @@ -3242,18 +3261,6 @@ static struct srpt_port *srpt_tpg_to_sport(struct se_portal_group *tpg)
>  	return tpg->se_tpg_wwn->priv;
>  }
>  
> -static struct srpt_port_id *srpt_tpg_to_sport_id(struct se_portal_group *tpg)
> -{
> -	struct srpt_port *sport = srpt_tpg_to_sport(tpg);
> -
> -	if (tpg == &sport->port_guid_id.tpg)
> -		return &sport->port_guid_id;
> -	if (tpg == &sport->port_gid_id.tpg)
> -		return &sport->port_gid_id;
> -	WARN_ON_ONCE(true);
> -	return NULL;
> -}
> -
>  static struct srpt_port_id *srpt_wwn_to_sport_id(struct se_wwn *wwn)
>  {
>  	struct srpt_port *sport = wwn->priv;
> @@ -3268,7 +3275,9 @@ static struct srpt_port_id *srpt_wwn_to_sport_id(struct se_wwn *wwn)
>  
>  static char *srpt_get_fabric_wwn(struct se_portal_group *tpg)
>  {
> -	return srpt_tpg_to_sport_id(tpg)->name;
> +	struct srpt_tpg *stpg = container_of(tpg, typeof(*stpg), tpg);
> +
> +	return stpg->sport_id->name;
>  }
>  
>  static u16 srpt_get_tag(struct se_portal_group *tpg)
> @@ -3725,16 +3734,27 @@ static struct se_portal_group *srpt_make_tpg(struct se_wwn *wwn,
>  					     const char *name)
>  {
>  	struct srpt_port *sport = wwn->priv;
> -	struct se_portal_group *tpg = &srpt_wwn_to_sport_id(wwn)->tpg;
> -	int res;
> +	struct srpt_port_id *sport_id = srpt_wwn_to_sport_id(wwn);
> +	struct srpt_tpg *stpg;
> +	int res = -ENOMEM;
>  
> -	res = core_tpg_register(wwn, tpg, SCSI_PROTOCOL_SRP);
> -	if (res)
> +	stpg = kzalloc(sizeof(*stpg), GFP_KERNEL);
> +	if (!stpg)
> +		return ERR_PTR(res);
> +	stpg->sport_id = sport_id;
> +	res = core_tpg_register(wwn, &stpg->tpg, SCSI_PROTOCOL_SRP);
> +	if (res) {
> +		kfree(stpg);
>  		return ERR_PTR(res);
> +	}
> +
> +	mutex_lock(&sport_id->mutex);
> +	list_add_tail(&stpg->entry, &sport_id->tpg_list);
> +	mutex_unlock(&sport_id->mutex);
>  
>  	atomic_inc(&sport->refcount);
>  
> -	return tpg;
> +	return &stpg->tpg;
>  }
>  
>  /**
> @@ -3743,10 +3763,17 @@ static struct se_portal_group *srpt_make_tpg(struct se_wwn *wwn,
>   */
>  static void srpt_drop_tpg(struct se_portal_group *tpg)
>  {
> +	struct srpt_tpg *stpg = container_of(tpg, typeof(*stpg), tpg);
> +	struct srpt_port_id *sport_id = stpg->sport_id;
>  	struct srpt_port *sport = srpt_tpg_to_sport(tpg);
>  
> +	mutex_lock(&sport_id->mutex);
> +	list_del(&stpg->entry);
> +	mutex_unlock(&sport_id->mutex);
> +
>  	sport->enabled = false;
>  	core_tpg_deregister(tpg);
> +	kfree(stpg);
>  	srpt_drop_sport_ref(sport);
>  }
>  
> diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.h b/drivers/infiniband/ulp/srpt/ib_srpt.h
> index f8bd95302ac0..27a54f777e3b 100644
> --- a/drivers/infiniband/ulp/srpt/ib_srpt.h
> +++ b/drivers/infiniband/ulp/srpt/ib_srpt.h
> @@ -363,17 +363,34 @@ struct srpt_port_attrib {
>  	bool			use_srq;
>  };
>  
> +/**
> + * struct srpt_tpg - information about a single "target portal group"
> + * @entry:	Entry in @sport_id->tpg_list.
> + * @sport_id:	Port name this TPG is associated with.
> + * @tpg:	LIO TPG data structure.
> + *
> + * Zero or more target portal groups are associated with each port name
> + * (srpt_port_id). With each TPG an ACL list is associated.
> + */
> +struct srpt_tpg {
> +	struct list_head	entry;
> +	struct srpt_port_id	*sport_id;
> +	struct se_portal_group	tpg;
> +};
> +
>  /**
>   * struct srpt_port_id - information about an RDMA port name
> - * @tpg: TPG associated with the RDMA port.
> - * @wwn: WWN associated with the RDMA port.
> - * @name: ASCII representation of the port name.
> + * @mutex:	Protects @tpg_list changes.
> + * @tpg_list:	TPGs associated with the RDMA port name.
> + * @wwn:	WWN associated with the RDMA port name.
> + * @name:	ASCII representation of the port name.
>   *
>   * Multiple sysfs directories can be associated with a single RDMA port. This
>   * data structure represents a single (port, name) pair.
>   */
>  struct srpt_port_id {
> -	struct se_portal_group	tpg;
> +	struct mutex		mutex;
> +	struct list_head	tpg_list;
>  	struct se_wwn		wwn;
>  	char			name[64];
>  };

Acked-by: Honggang Li <honli@redhat.com>

Thanks
Jason Gunthorpe Oct. 28, 2019, 4:32 p.m. UTC | #4
On Wed, Oct 23, 2019 at 01:41:06PM -0700, Bart Van Assche wrote:
> Unlike the iSCSI target driver, for the SRP target driver it is sufficient
> if a single TPG can be associated with each RDMA port name. However, users
> started associating multiple TPGs with RDMA port names. Support this by
> converting the single TPG in struct srpt_port_id into a list. This patch
> fixes the following list corruption issue:
> 
> list_add corruption. prev->next should be next (ffffffffc0a080c0), but was ffffa08a994ce6f0. (prev=ffffa08a994ce6f0).
> WARNING: CPU: 2 PID: 2597 at lib/list_debug.c:28 __list_add_valid+0x6a/0x70
> CPU: 2 PID: 2597 Comm: targetcli Not tainted 5.4.0-rc1.3bfa3c9602a7 #1
> RIP: 0010:__list_add_valid+0x6a/0x70
> Call Trace:
>  core_tpg_register+0x116/0x200 [target_core_mod]
>  srpt_make_tpg+0x3f/0x60 [ib_srpt]
>  target_fabric_make_tpg+0x41/0x290 [target_core_mod]
>  configfs_mkdir+0x158/0x3e0
>  vfs_mkdir+0x108/0x1a0
>  do_mkdirat+0x77/0xe0
>  do_syscall_64+0x55/0x1d0
>  entry_SYSCALL_64_after_hwframe+0x44/0xa9
> 
> Cc: Honggang LI <honli@redhat.com>
> Reported-by: Honggang LI <honli@redhat.com>
> Fixes: a42d985bd5b2 ("ib_srpt: Initial SRP Target merge for v3.3-rc1")
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
> Acked-by: Honggang Li <honli@redhat.com>
> ---
>  drivers/infiniband/ulp/srpt/ib_srpt.c | 77 ++++++++++++++++++---------
>  drivers/infiniband/ulp/srpt/ib_srpt.h | 25 +++++++--
>  2 files changed, 73 insertions(+), 29 deletions(-)

Applied to for-next, thanks

Jason

Patch

diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c
index daf811abf40a..a278e76b9e02 100644
--- a/drivers/infiniband/ulp/srpt/ib_srpt.c
+++ b/drivers/infiniband/ulp/srpt/ib_srpt.c
@@ -2131,6 +2131,7 @@  static int srpt_cm_req_recv(struct srpt_device *const sdev,
 	char i_port_id[36];
 	u32 it_iu_len;
 	int i, tag_num, tag_size, ret;
+	struct srpt_tpg *stpg;
 
 	WARN_ON_ONCE(irqs_disabled());
 
@@ -2288,19 +2289,33 @@  static int srpt_cm_req_recv(struct srpt_device *const sdev,
 
 	tag_num = ch->rq_size;
 	tag_size = 1; /* ib_srpt does not use se_sess->sess_cmd_map */
-	if (sport->port_guid_id.tpg.se_tpg_wwn)
-		ch->sess = target_setup_session(&sport->port_guid_id.tpg, tag_num,
+
+	mutex_lock(&sport->port_guid_id.mutex);
+	list_for_each_entry(stpg, &sport->port_guid_id.tpg_list, entry) {
+		if (!IS_ERR_OR_NULL(ch->sess))
+			break;
+		ch->sess = target_setup_session(&stpg->tpg, tag_num,
 						tag_size, TARGET_PROT_NORMAL,
 						ch->sess_name, ch, NULL);
-	if (sport->port_gid_id.tpg.se_tpg_wwn && IS_ERR_OR_NULL(ch->sess))
-		ch->sess = target_setup_session(&sport->port_gid_id.tpg, tag_num,
+	}
+	mutex_unlock(&sport->port_guid_id.mutex);
+
+	mutex_lock(&sport->port_gid_id.mutex);
+	list_for_each_entry(stpg, &sport->port_gid_id.tpg_list, entry) {
+		if (!IS_ERR_OR_NULL(ch->sess))
+			break;
+		ch->sess = target_setup_session(&stpg->tpg, tag_num,
 					tag_size, TARGET_PROT_NORMAL, i_port_id,
 					ch, NULL);
-	/* Retry without leading "0x" */
-	if (sport->port_gid_id.tpg.se_tpg_wwn && IS_ERR_OR_NULL(ch->sess))
-		ch->sess = target_setup_session(&sport->port_gid_id.tpg, tag_num,
+		if (!IS_ERR_OR_NULL(ch->sess))
+			break;
+		/* Retry without leading "0x" */
+		ch->sess = target_setup_session(&stpg->tpg, tag_num,
 						tag_size, TARGET_PROT_NORMAL,
 						i_port_id + 2, ch, NULL);
+	}
+	mutex_unlock(&sport->port_gid_id.mutex);
+
 	if (IS_ERR_OR_NULL(ch->sess)) {
 		WARN_ON_ONCE(ch->sess == NULL);
 		ret = PTR_ERR(ch->sess);
@@ -3140,6 +3155,10 @@  static void srpt_add_one(struct ib_device *device)
 		sport->port_attrib.srp_sq_size = DEF_SRPT_SQ_SIZE;
 		sport->port_attrib.use_srq = false;
 		INIT_WORK(&sport->work, srpt_refresh_port_work);
+		mutex_init(&sport->port_guid_id.mutex);
+		INIT_LIST_HEAD(&sport->port_guid_id.tpg_list);
+		mutex_init(&sport->port_gid_id.mutex);
+		INIT_LIST_HEAD(&sport->port_gid_id.tpg_list);
 
 		if (srpt_refresh_port(sport)) {
 			pr_err("MAD registration failed for %s-%d.\n",
@@ -3242,18 +3261,6 @@  static struct srpt_port *srpt_tpg_to_sport(struct se_portal_group *tpg)
 	return tpg->se_tpg_wwn->priv;
 }
 
-static struct srpt_port_id *srpt_tpg_to_sport_id(struct se_portal_group *tpg)
-{
-	struct srpt_port *sport = srpt_tpg_to_sport(tpg);
-
-	if (tpg == &sport->port_guid_id.tpg)
-		return &sport->port_guid_id;
-	if (tpg == &sport->port_gid_id.tpg)
-		return &sport->port_gid_id;
-	WARN_ON_ONCE(true);
-	return NULL;
-}
-
 static struct srpt_port_id *srpt_wwn_to_sport_id(struct se_wwn *wwn)
 {
 	struct srpt_port *sport = wwn->priv;
@@ -3268,7 +3275,9 @@  static struct srpt_port_id *srpt_wwn_to_sport_id(struct se_wwn *wwn)
 
 static char *srpt_get_fabric_wwn(struct se_portal_group *tpg)
 {
-	return srpt_tpg_to_sport_id(tpg)->name;
+	struct srpt_tpg *stpg = container_of(tpg, typeof(*stpg), tpg);
+
+	return stpg->sport_id->name;
 }
 
 static u16 srpt_get_tag(struct se_portal_group *tpg)
@@ -3725,16 +3734,27 @@  static struct se_portal_group *srpt_make_tpg(struct se_wwn *wwn,
 					     const char *name)
 {
 	struct srpt_port *sport = wwn->priv;
-	struct se_portal_group *tpg = &srpt_wwn_to_sport_id(wwn)->tpg;
-	int res;
+	struct srpt_port_id *sport_id = srpt_wwn_to_sport_id(wwn);
+	struct srpt_tpg *stpg;
+	int res = -ENOMEM;
 
-	res = core_tpg_register(wwn, tpg, SCSI_PROTOCOL_SRP);
-	if (res)
+	stpg = kzalloc(sizeof(*stpg), GFP_KERNEL);
+	if (!stpg)
+		return ERR_PTR(res);
+	stpg->sport_id = sport_id;
+	res = core_tpg_register(wwn, &stpg->tpg, SCSI_PROTOCOL_SRP);
+	if (res) {
+		kfree(stpg);
 		return ERR_PTR(res);
+	}
+
+	mutex_lock(&sport_id->mutex);
+	list_add_tail(&stpg->entry, &sport_id->tpg_list);
+	mutex_unlock(&sport_id->mutex);
 
 	atomic_inc(&sport->refcount);
 
-	return tpg;
+	return &stpg->tpg;
 }
 
 /**
@@ -3743,10 +3763,17 @@  static struct se_portal_group *srpt_make_tpg(struct se_wwn *wwn,
  */
 static void srpt_drop_tpg(struct se_portal_group *tpg)
 {
+	struct srpt_tpg *stpg = container_of(tpg, typeof(*stpg), tpg);
+	struct srpt_port_id *sport_id = stpg->sport_id;
 	struct srpt_port *sport = srpt_tpg_to_sport(tpg);
 
+	mutex_lock(&sport_id->mutex);
+	list_del(&stpg->entry);
+	mutex_unlock(&sport_id->mutex);
+
 	sport->enabled = false;
 	core_tpg_deregister(tpg);
+	kfree(stpg);
 	srpt_drop_sport_ref(sport);
 }
 
diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.h b/drivers/infiniband/ulp/srpt/ib_srpt.h
index f8bd95302ac0..27a54f777e3b 100644
--- a/drivers/infiniband/ulp/srpt/ib_srpt.h
+++ b/drivers/infiniband/ulp/srpt/ib_srpt.h
@@ -363,17 +363,34 @@  struct srpt_port_attrib {
 	bool			use_srq;
 };
 
+/**
+ * struct srpt_tpg - information about a single "target portal group"
+ * @entry:	Entry in @sport_id->tpg_list.
+ * @sport_id:	Port name this TPG is associated with.
+ * @tpg:	LIO TPG data structure.
+ *
+ * Zero or more target portal groups are associated with each port name
+ * (srpt_port_id). With each TPG an ACL list is associated.
+ */
+struct srpt_tpg {
+	struct list_head	entry;
+	struct srpt_port_id	*sport_id;
+	struct se_portal_group	tpg;
+};
+
 /**
  * struct srpt_port_id - information about an RDMA port name
- * @tpg: TPG associated with the RDMA port.
- * @wwn: WWN associated with the RDMA port.
- * @name: ASCII representation of the port name.
+ * @mutex:	Protects @tpg_list changes.
+ * @tpg_list:	TPGs associated with the RDMA port name.
+ * @wwn:	WWN associated with the RDMA port name.
+ * @name:	ASCII representation of the port name.
  *
  * Multiple sysfs directories can be associated with a single RDMA port. This
  * data structure represents a single (port, name) pair.
  */
 struct srpt_port_id {
-	struct se_portal_group	tpg;
+	struct mutex		mutex;
+	struct list_head	tpg_list;
 	struct se_wwn		wwn;
 	char			name[64];
 };