Message ID | 20170202005853.23456-21-bart.vanassche@sandisk.com (mailing list archive) |
---|---|
State | Superseded |
On Wed, Feb 01, 2017 at 04:58:37PM -0800, Bart Van Assche wrote:
> a per-session completion in the latter function. Note: this patch
> changes the behavior from aborting outstanding commands back to
> waiting for command completion. See also commit 0f4a943168f3
> ("target: Fix remote-port TMR ABORT + se_cmd fabric stop").

Please add an explanation of why this is safe and desirable.
On Mon, 2017-02-06 at 01:25 -0800, Christoph Hellwig wrote:
> On Wed, Feb 01, 2017 at 04:58:37PM -0800, Bart Van Assche wrote:
> > a per-session completion in the latter function. Note: this patch
> > changes the behavior from aborting outstanding commands back to
> > waiting for command completion. See also commit 0f4a943168f3
> > ("target: Fix remote-port TMR ABORT + se_cmd fabric stop").
>
> Please add an explanation of why this is safe and desirable.

Hello Christoph,

How about adding the following:

This change is desirable because it makes the session shutdown code simpler. It is safe because once a SCSI initiator system has submitted a command, a target system is always allowed to execute it to completion.

Bart.
> -	if (sess->sess_tearing_down || se_cmd->cmd_wait_set) {
> +	if (sess->sess_tearing_down) {
> 		pr_debug("Attempted to abort io tag: %llu already shutdown,"
> 			" skipping\n", se_cmd->tag);
> 		spin_unlock(&se_cmd->t_state_lock);
> @@ -247,7 +247,7 @@ static void core_tmr_drain_tmr_list(
> 			spin_unlock(&sess->sess_cmd_lock);
> 			continue;
> 		}
> -		if (sess->sess_tearing_down || cmd->cmd_wait_set) {
> +		if (sess->sess_tearing_down) {

If se_cmd->cmd_wait_set is set, sess->sess_tearing_down must always be set as well, so these are a trivially correct cleanup. Maybe split this into a separate prep patch?

> 	init_completion(&cmd->finished);
> 	spin_lock_init(&cmd->t_state_lock);
> 	kref_init(&cmd->cmd_kref);
> @@ -2538,16 +2537,6 @@ int transport_generic_free_cmd(struct se_cmd *cmd, int wait_for_tasks)
> 		if (cmd->se_lun)
> 			transport_lun_remove_cmd(cmd);
> 	}
> -	/*
> -	 * If the task has been internally aborted due to TMR ABORT_TASK
> -	 * or LUN_RESET, target_core_tmr.c is responsible for performing
> -	 * the remaining calls to target_put_sess_cmd(), and not the
> -	 * callers of this function.
> -	 */
> -	if (aborted) {
> -		pr_debug("Detected CMD_T_ABORTED for ITT: %llu\n", cmd->tag);
> -		wait_for_completion(&cmd->cmd_wait_comp);
> -	}

Previously this waited for the command to be freed from the TMR code, which provided a synchronization point. I don't think it's useful, but it should be mentioned in the changelog.

Except for that the patch looks fine to me:

Reviewed-by: Christoph Hellwig <hch@lst.de>
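For reference, the claim that cmd_wait_set implies sess_tearing_down can be seen in the code this patch removes. The following is a condensed, illustrative sketch of the pre-patch target_sess_cmd_list_set_waiting(), reconstructed from the removed lines (the kref_get_unless_zero() failure path and CMD_T_FABRIC_STOP handling are omitted), showing that both flags are written under the same sess_cmd_lock with sess_tearing_down set first:

/*
 * Condensed sketch of the pre-patch logic (not the full implementation).
 * sess_tearing_down is set before any command's cmd_wait_set, under the
 * same sess_cmd_lock, so observing cmd_wait_set implies sess_tearing_down
 * is already set.
 */
void target_sess_cmd_list_set_waiting(struct se_session *se_sess)
{
	struct se_cmd *se_cmd;
	unsigned long flags;

	spin_lock_irqsave(&se_sess->sess_cmd_lock, flags);
	se_sess->sess_tearing_down = 1;			/* set first ... */
	list_splice_init(&se_sess->sess_cmd_list, &se_sess->sess_wait_list);
	list_for_each_entry(se_cmd, &se_sess->sess_wait_list, se_cmd_list)
		if (kref_get_unless_zero(&se_cmd->cmd_kref))
			se_cmd->cmd_wait_set = 1;	/* ... then this */
	spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags);
}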
On Mon, 2017-02-06 at 10:14 -0800, Christoph Hellwig wrote:
> > @@ -2538,16 +2537,6 @@ int transport_generic_free_cmd(struct se_cmd *cmd, int wait_for_tasks)
> > 		if (cmd->se_lun)
> > 			transport_lun_remove_cmd(cmd);
> > 	}
> > -	/*
> > -	 * If the task has been internally aborted due to TMR ABORT_TASK
> > -	 * or LUN_RESET, target_core_tmr.c is responsible for performing
> > -	 * the remaining calls to target_put_sess_cmd(), and not the
> > -	 * callers of this function.
> > -	 */
> > -	if (aborted) {
> > -		pr_debug("Detected CMD_T_ABORTED for ITT: %llu\n", cmd->tag);
> > -		wait_for_completion(&cmd->cmd_wait_comp);
> > -	}
>
> Previously this waited for the command to be freed from the TMR code,
> which provided a synchronization point. I don't think it's useful,
> but it should be mentioned in the changelog.

The session shutdown code still waits until aborted commands have been freed because it waits until the reference count for all commands associated with a session has dropped to zero. Do you really want me to mention this explicitly in the changelog?

Bart.
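To make the per-session waiting scheme concrete, here is a condensed sketch abridged from the patch quoted below; it is not the complete implementation, and the wait side is simplified (the actual patch uses wait_event_interruptible_lock_irq_timeout() with a 180 second timeout and periodic debug logging of still-pending commands):

/*
 * Release side: dropping the last cmd_kref removes the command from the
 * session's command list and wakes the waiter once the list is empty.
 */
static void target_release_cmd_kref(struct kref *kref)
{
	struct se_cmd *se_cmd = container_of(kref, struct se_cmd, cmd_kref);
	struct se_session *se_sess = se_cmd->se_sess;
	unsigned long flags;

	spin_lock_irqsave(&se_sess->sess_cmd_lock, flags);
	list_del(&se_cmd->se_cmd_list);
	if (list_empty(&se_sess->sess_cmd_list))
		wake_up(&se_sess->cmd_list_wq);
	spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags);
	/* ... command memory is freed afterwards ... */
}

/*
 * Wait side (simplified): sleep until the list is empty, which can only
 * happen after every command, aborted or not, has dropped its last
 * reference.
 */
void target_wait_for_sess_cmds(struct se_session *se_sess)
{
	spin_lock_irq(&se_sess->sess_cmd_lock);
	wait_event_lock_irq(se_sess->cmd_list_wq,
			    list_empty(&se_sess->sess_cmd_list),
			    se_sess->sess_cmd_lock);
	spin_unlock_irq(&se_sess->sess_cmd_lock);
}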
On Wed, 2017-02-01 at 16:58 -0800, Bart Van Assche wrote:
> Target drivers must call target_sess_cmd_list_set_waiting() and
> target_wait_for_sess_cmds() before freeing a session. Instead of
> setting a flag in each pending command from the former function
> and waiting in the latter function on a per-command completion,
> only set a per-session flag in the former function and wait on
> a per-session completion in the latter function. Note: this patch
> changes the behavior from aborting outstanding commands back to
> waiting for command completion. See also commit 0f4a943168f3
> ("target: Fix remote-port TMR ABORT + se_cmd fabric stop").
>
> This patch is based on the following two patches:
> * Bart Van Assche, target: Simplify session shutdown code, February 19, 2015
>   (https://github.com/bvanassche/linux/commit/8df5463d7d7619f2f1b70cfe5172eaef0aa52815).
> * Christoph Hellwig, target: Rework session shutdown code, December 7, 2015
>   (http://thread.gmane.org/gmane.linux.scsi.target.devel/10695).
>
> Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
> Reviewed-by: Hannes Reinecke <hare@suse.com>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Sagi Grimberg <sagig@mellanox.com>
> Cc: Andy Grover <agrover@redhat.com>
> ---
>  drivers/target/target_core_tmr.c       |   4 +-
>  drivers/target/target_core_transport.c | 106 ++++++++-------------------------
>  include/target/target_core_base.h      |   4 +-
>  3 files changed, 29 insertions(+), 85 deletions(-)
>
> diff --git a/drivers/target/target_core_tmr.c b/drivers/target/target_core_tmr.c
> index bc07e059fb22..95f8e7d0cf46 100644
> --- a/drivers/target/target_core_tmr.c
> +++ b/drivers/target/target_core_tmr.c
> @@ -114,7 +114,7 @@ static bool __target_check_io_state(struct se_cmd *se_cmd,
> 		spin_unlock(&se_cmd->t_state_lock);
> 		return false;
> 	}
> -	if (sess->sess_tearing_down || se_cmd->cmd_wait_set) {
> +	if (sess->sess_tearing_down) {
> 		pr_debug("Attempted to abort io tag: %llu already shutdown,"
> 			" skipping\n", se_cmd->tag);
> 		spin_unlock(&se_cmd->t_state_lock);
> @@ -247,7 +247,7 @@ static void core_tmr_drain_tmr_list(
> 			spin_unlock(&sess->sess_cmd_lock);
> 			continue;
> 		}
> -		if (sess->sess_tearing_down || cmd->cmd_wait_set) {
> +		if (sess->sess_tearing_down) {
> 			spin_unlock(&cmd->t_state_lock);
> 			spin_unlock(&sess->sess_cmd_lock);
> 			continue;
> diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
> index 8506ecc0d6c0..82930960e722 100644
> --- a/drivers/target/target_core_transport.c
> +++ b/drivers/target/target_core_transport.c
> @@ -237,8 +237,8 @@ struct se_session *transport_init_session(enum target_prot_op sup_prot_ops)
> 	INIT_LIST_HEAD(&se_sess->sess_list);
> 	INIT_LIST_HEAD(&se_sess->sess_acl_list);
> 	INIT_LIST_HEAD(&se_sess->sess_cmd_list);
> -	INIT_LIST_HEAD(&se_sess->sess_wait_list);
> 	spin_lock_init(&se_sess->sess_cmd_lock);
> +	init_waitqueue_head(&se_sess->cmd_list_wq);
> 	se_sess->sup_prot_ops = sup_prot_ops;
>
> 	return se_sess;
> @@ -1237,7 +1237,6 @@ void transport_init_se_cmd(
> 	INIT_LIST_HEAD(&cmd->se_cmd_list);
> 	INIT_LIST_HEAD(&cmd->state_list);
> 	init_completion(&cmd->t_transport_stop_comp);
> -	init_completion(&cmd->cmd_wait_comp);
> 	init_completion(&cmd->finished);
> 	spin_lock_init(&cmd->t_state_lock);
> 	kref_init(&cmd->cmd_kref);
> @@ -2538,16 +2537,6 @@ int transport_generic_free_cmd(struct se_cmd *cmd, int wait_for_tasks)
> 		if (cmd->se_lun)
> 			transport_lun_remove_cmd(cmd);
> 	}
> -	/*
> -	 * If the task has been internally aborted due to TMR ABORT_TASK
> -	 * or LUN_RESET, target_core_tmr.c is responsible for performing
> -	 * the remaining calls to target_put_sess_cmd(), and not the
> -	 * callers of this function.
> -	 */
> -	if (aborted) {
> -		pr_debug("Detected CMD_T_ABORTED for ITT: %llu\n", cmd->tag);
> -		wait_for_completion(&cmd->cmd_wait_comp);
> -	}
> 	return transport_put_cmd(cmd);
> }

As alluded to earlier as the second order issue wrt how concurrent CMD_T_ABORT and fabric se_session shutdown with CMD_T_FABRIC_STOP interact in existing code.

iscsi-target and iser-target currently depend upon transport_generic_free_cmd() to both quiesce, and wait for the se_cmd to complete via se_cmd->cmd_wait_comp in the CMD_T_ABORTED path before returning to the caller.

This is required because as soon as all the transport_generic_free_cmd() callers return, iscsi-target + iser-target expect for it to be safe to release se_session associated memory.

Which means that if transport_generic_free_cmd() doesn't wait during the second order issue when CMD_T_ABORTED is blocked waiting for se_cmd to be quiesced from se_device->tmr_wq context, it will trigger use-after-free OOPsen.

> EXPORT_SYMBOL(transport_generic_free_cmd);
> @@ -2605,25 +2594,15 @@ static void target_release_cmd_kref(struct kref *kref)
> 	struct se_cmd *se_cmd = container_of(kref, struct se_cmd, cmd_kref);
> 	struct se_session *se_sess = se_cmd->se_sess;
> 	unsigned long flags;
> -	bool fabric_stop;
>
> 	complete_all(&se_cmd->finished);
>
> 	spin_lock_irqsave(&se_sess->sess_cmd_lock, flags);
> -
> -	spin_lock(&se_cmd->t_state_lock);
> -	fabric_stop = (se_cmd->transport_state & CMD_T_FABRIC_STOP) &&
> -		      (se_cmd->transport_state & CMD_T_ABORTED);
> -	spin_unlock(&se_cmd->t_state_lock);
> -
> -	if (se_cmd->cmd_wait_set || fabric_stop) {
> -		list_del_init(&se_cmd->se_cmd_list);
> -		spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags);
> -		target_free_cmd_mem(se_cmd);
> -		complete(&se_cmd->cmd_wait_comp);
> -		return;
> +	if (likely(!list_empty(&se_cmd->se_cmd_list))) {
> +		list_del(&se_cmd->se_cmd_list);
> +		if (list_empty(&se_sess->sess_cmd_list))
> +			wake_up(&se_sess->cmd_list_wq);
> 	}
> -	list_del_init(&se_cmd->se_cmd_list);
> 	spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags);
>
> 	target_free_cmd_mem(se_cmd);
> @@ -2646,77 +2625,44 @@ int target_put_sess_cmd(struct se_cmd *se_cmd)
> }
> EXPORT_SYMBOL(target_put_sess_cmd);
>
> -/* target_sess_cmd_list_set_waiting - Flag all commands in
> - *  sess_cmd_list to complete cmd_wait_comp.  Set
> - *  sess_tearing_down so no more commands are queued.
> +/**
> + * target_sess_cmd_list_set_waiting - Prevent new commands to be queued
>  * @se_sess: session to flag
>  */
> void target_sess_cmd_list_set_waiting(struct se_session *se_sess)
> {
> -	struct se_cmd *se_cmd, *tmp_cmd;
> 	unsigned long flags;
> -	int rc;
>
> 	spin_lock_irqsave(&se_sess->sess_cmd_lock, flags);
> -	if (se_sess->sess_tearing_down) {
> -		spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags);
> -		return;
> -	}
> 	se_sess->sess_tearing_down = 1;
> -	list_splice_init(&se_sess->sess_cmd_list, &se_sess->sess_wait_list);
> -
> -	list_for_each_entry_safe(se_cmd, tmp_cmd,
> -				 &se_sess->sess_wait_list, se_cmd_list) {
> -		rc = kref_get_unless_zero(&se_cmd->cmd_kref);
> -		if (rc) {
> -			se_cmd->cmd_wait_set = 1;
> -			spin_lock(&se_cmd->t_state_lock);
> -			se_cmd->transport_state |= CMD_T_FABRIC_STOP;
> -			spin_unlock(&se_cmd->t_state_lock);
> -		} else
> -			list_del_init(&se_cmd->se_cmd_list);
> -	}
> -
> 	spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags);
> }
> EXPORT_SYMBOL(target_sess_cmd_list_set_waiting);
>
> -/* target_wait_for_sess_cmds - Wait for outstanding descriptors
> +/**
> + * target_wait_for_sess_cmds - Wait for outstanding commands
>  * @se_sess: session to wait for active I/O
>  */
> void target_wait_for_sess_cmds(struct se_session *se_sess)
> {
> -	struct se_cmd *se_cmd, *tmp_cmd;
> -	unsigned long flags;
> -	bool tas;
> -
> -	list_for_each_entry_safe(se_cmd, tmp_cmd,
> -				 &se_sess->sess_wait_list, se_cmd_list) {
> -		pr_debug("Waiting for se_cmd: %p t_state: %d, fabric state:"
> -			" %d\n", se_cmd, se_cmd->t_state,
> -			se_cmd->se_tfo->get_cmd_state(se_cmd));
> -
> -		spin_lock_irqsave(&se_cmd->t_state_lock, flags);
> -		tas = (se_cmd->transport_state & CMD_T_TAS);
> -		spin_unlock_irqrestore(&se_cmd->t_state_lock, flags);
> -
> -		if (!target_put_sess_cmd(se_cmd)) {
> -			if (tas)
> -				target_put_sess_cmd(se_cmd);
> -		}
> -
> -		wait_for_completion(&se_cmd->cmd_wait_comp);
> -		pr_debug("After cmd_wait_comp: se_cmd: %p t_state: %d"
> -			" fabric state: %d\n", se_cmd, se_cmd->t_state,
> -			se_cmd->se_tfo->get_cmd_state(se_cmd));
> -
> -		target_put_sess_cmd(se_cmd);
> -	}
> -
> -	spin_lock_irqsave(&se_sess->sess_cmd_lock, flags);
> -	WARN_ON(!list_empty(&se_sess->sess_cmd_list));
> -	spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags);
> +	struct se_cmd *cmd;
> +	int ret;
>
> +	WARN_ON_ONCE(!se_sess->sess_tearing_down);
> +
> +	spin_lock_irq(&se_sess->sess_cmd_lock);
> +	do {
> +		ret = wait_event_interruptible_lock_irq_timeout(
> +				se_sess->cmd_list_wq,
> +				list_empty(&se_sess->sess_cmd_list),
> +				se_sess->sess_cmd_lock, 180 * HZ);
> +		list_for_each_entry(cmd, &se_sess->sess_cmd_list, se_cmd_list)
> +			pr_debug("%s: ITT %#llx dir %d i_state %d t_state %d len %d\n",
> +				 __func__, cmd->tag, cmd->data_direction,
> +				 cmd->se_tfo->get_cmd_state(cmd), cmd->t_state,
> +				 cmd->data_length);
> +	} while (ret <= 0);
> +	spin_unlock_irq(&se_sess->sess_cmd_lock);
> }
> EXPORT_SYMBOL(target_wait_for_sess_cmds);
>
> diff --git a/include/target/target_core_base.h b/include/target/target_core_base.h
> index e690ce36e592..d6d4465b1694 100644
> --- a/include/target/target_core_base.h
> +++ b/include/target/target_core_base.h
> @@ -441,7 +441,6 @@ struct se_cmd {
> 	u8 scsi_asc;
> 	u8 scsi_ascq;
> 	u16 scsi_sense_length;
> -	unsigned cmd_wait_set:1;
> 	unsigned unknown_data_length:1;
> 	bool state_active:1;
> 	bool send_abort_response:1;
> @@ -474,7 +473,6 @@ struct se_cmd {
> 	struct se_session *se_sess;
> 	struct se_tmr_req *se_tmr_req;
> 	struct list_head se_cmd_list;
> -	struct completion cmd_wait_comp;
> 	const struct target_core_fabric_ops *se_tfo;
> 	sense_reason_t (*execute_cmd)(struct se_cmd *);
> 	sense_reason_t (*transport_complete_callback)(struct se_cmd *, bool, int *);
> @@ -605,8 +603,8 @@ struct se_session {
> 	struct list_head sess_list;
> 	struct list_head sess_acl_list;
> 	struct list_head sess_cmd_list;
> -	struct list_head sess_wait_list;
> 	spinlock_t sess_cmd_lock;
> +	wait_queue_head_t cmd_list_wq;
> 	void *sess_cmd_map;
> 	struct percpu_ida sess_tag_pool;
> 	struct workqueue_struct *tmf_wq;
On Tue, 2017-02-07 at 07:53 -0800, Nicholas A. Bellinger wrote:
> iscsi-target and iser-target currently depend upon
> transport_generic_free_cmd() to both quiesce, and wait for the se_cmd to
> complete via se_cmd->cmd_wait_comp in the CMD_T_ABORTED path before
> returning to the caller.
>
> This is required because as soon as all the transport_generic_free_cmd()
> callers return, iscsi-target + iser-target expect for it to be safe to
> release se_session associated memory.
>
> Which means that if transport_generic_free_cmd() doesn't wait during the
> second order issue when CMD_T_ABORTED is blocked waiting for se_cmd to
> be quiesced from se_device->tmr_wq context, it will trigger
> use-after-free OOPsen.

That's useful feedback. Although I have not seen any such OOPSes in my tests, I will make sure that no such use-after-free is triggered.

Bart.
On Tue, 2017-02-07 at 17:21 +0000, Bart Van Assche wrote:
> On Tue, 2017-02-07 at 07:53 -0800, Nicholas A. Bellinger wrote:
> > iscsi-target and iser-target currently depend upon
> > transport_generic_free_cmd() to both quiesce, and wait for the se_cmd to
> > complete via se_cmd->cmd_wait_comp in the CMD_T_ABORTED path before
> > returning to the caller.
> >
> > This is required because as soon as all the transport_generic_free_cmd()
> > callers return, iscsi-target + iser-target expect for it to be safe to
> > release se_session associated memory.
> >
> > Which means that if transport_generic_free_cmd() doesn't wait during the
> > second order issue when CMD_T_ABORTED is blocked waiting for se_cmd to
> > be quiesced from se_device->tmr_wq context, it will trigger
> > use-after-free OOPsen.
>
> That's useful feedback. Although I have not seen any such OOPSes in my
> tests, I will make sure that no such use-after-free is triggered.

Note this requirement is specific to iscsi-target + iser-target only.

All other drivers in modern upstream v4.x mainline code use target_wait_for_sess_cmds() to wait for outstanding se_cmd->cmd_kref to reach zero, and don't have this specific requirement.
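For readers unfamiliar with the fabric driver side, the call sequence the commit message (quoted above, and in the patch below) expects during session teardown is: flag the session, wait for all outstanding commands, then release the session. The sketch below is illustrative only; the driver-side function name is hypothetical and the exact callers differ per fabric.

/* Hypothetical fabric-driver teardown sketch (illustrative, not from the patch). */
static void example_fabric_close_session(struct se_session *se_sess)
{
	target_sess_cmd_list_set_waiting(se_sess);	/* stop accepting new commands */
	target_wait_for_sess_cmds(se_sess);		/* block until sess_cmd_list drains */
	transport_deregister_session(se_sess);		/* session memory may be freed afterwards */
}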
diff --git a/drivers/target/target_core_tmr.c b/drivers/target/target_core_tmr.c
index bc07e059fb22..95f8e7d0cf46 100644
--- a/drivers/target/target_core_tmr.c
+++ b/drivers/target/target_core_tmr.c
@@ -114,7 +114,7 @@ static bool __target_check_io_state(struct se_cmd *se_cmd,
 		spin_unlock(&se_cmd->t_state_lock);
 		return false;
 	}
-	if (sess->sess_tearing_down || se_cmd->cmd_wait_set) {
+	if (sess->sess_tearing_down) {
 		pr_debug("Attempted to abort io tag: %llu already shutdown,"
 			" skipping\n", se_cmd->tag);
 		spin_unlock(&se_cmd->t_state_lock);
@@ -247,7 +247,7 @@ static void core_tmr_drain_tmr_list(
 			spin_unlock(&sess->sess_cmd_lock);
 			continue;
 		}
-		if (sess->sess_tearing_down || cmd->cmd_wait_set) {
+		if (sess->sess_tearing_down) {
 			spin_unlock(&cmd->t_state_lock);
 			spin_unlock(&sess->sess_cmd_lock);
 			continue;
diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
index 8506ecc0d6c0..82930960e722 100644
--- a/drivers/target/target_core_transport.c
+++ b/drivers/target/target_core_transport.c
@@ -237,8 +237,8 @@ struct se_session *transport_init_session(enum target_prot_op sup_prot_ops)
 	INIT_LIST_HEAD(&se_sess->sess_list);
 	INIT_LIST_HEAD(&se_sess->sess_acl_list);
 	INIT_LIST_HEAD(&se_sess->sess_cmd_list);
-	INIT_LIST_HEAD(&se_sess->sess_wait_list);
 	spin_lock_init(&se_sess->sess_cmd_lock);
+	init_waitqueue_head(&se_sess->cmd_list_wq);
 	se_sess->sup_prot_ops = sup_prot_ops;
 
 	return se_sess;
@@ -1237,7 +1237,6 @@ void transport_init_se_cmd(
 	INIT_LIST_HEAD(&cmd->se_cmd_list);
 	INIT_LIST_HEAD(&cmd->state_list);
 	init_completion(&cmd->t_transport_stop_comp);
-	init_completion(&cmd->cmd_wait_comp);
 	init_completion(&cmd->finished);
 	spin_lock_init(&cmd->t_state_lock);
 	kref_init(&cmd->cmd_kref);
@@ -2538,16 +2537,6 @@ int transport_generic_free_cmd(struct se_cmd *cmd, int wait_for_tasks)
 		if (cmd->se_lun)
 			transport_lun_remove_cmd(cmd);
 	}
-	/*
-	 * If the task has been internally aborted due to TMR ABORT_TASK
-	 * or LUN_RESET, target_core_tmr.c is responsible for performing
-	 * the remaining calls to target_put_sess_cmd(), and not the
-	 * callers of this function.
-	 */
-	if (aborted) {
-		pr_debug("Detected CMD_T_ABORTED for ITT: %llu\n", cmd->tag);
-		wait_for_completion(&cmd->cmd_wait_comp);
-	}
 	return transport_put_cmd(cmd);
 }
 EXPORT_SYMBOL(transport_generic_free_cmd);
@@ -2605,25 +2594,15 @@ static void target_release_cmd_kref(struct kref *kref)
 	struct se_cmd *se_cmd = container_of(kref, struct se_cmd, cmd_kref);
 	struct se_session *se_sess = se_cmd->se_sess;
 	unsigned long flags;
-	bool fabric_stop;
 
 	complete_all(&se_cmd->finished);
 
 	spin_lock_irqsave(&se_sess->sess_cmd_lock, flags);
-
-	spin_lock(&se_cmd->t_state_lock);
-	fabric_stop = (se_cmd->transport_state & CMD_T_FABRIC_STOP) &&
-		      (se_cmd->transport_state & CMD_T_ABORTED);
-	spin_unlock(&se_cmd->t_state_lock);
-
-	if (se_cmd->cmd_wait_set || fabric_stop) {
-		list_del_init(&se_cmd->se_cmd_list);
-		spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags);
-		target_free_cmd_mem(se_cmd);
-		complete(&se_cmd->cmd_wait_comp);
-		return;
+	if (likely(!list_empty(&se_cmd->se_cmd_list))) {
+		list_del(&se_cmd->se_cmd_list);
+		if (list_empty(&se_sess->sess_cmd_list))
+			wake_up(&se_sess->cmd_list_wq);
 	}
-	list_del_init(&se_cmd->se_cmd_list);
 	spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags);
 
 	target_free_cmd_mem(se_cmd);
@@ -2646,77 +2625,44 @@ int target_put_sess_cmd(struct se_cmd *se_cmd)
 }
 EXPORT_SYMBOL(target_put_sess_cmd);
 
-/* target_sess_cmd_list_set_waiting - Flag all commands in
- *  sess_cmd_list to complete cmd_wait_comp.  Set
- *  sess_tearing_down so no more commands are queued.
+/**
+ * target_sess_cmd_list_set_waiting - Prevent new commands to be queued
  * @se_sess: session to flag
  */
 void target_sess_cmd_list_set_waiting(struct se_session *se_sess)
 {
-	struct se_cmd *se_cmd, *tmp_cmd;
 	unsigned long flags;
-	int rc;
 
 	spin_lock_irqsave(&se_sess->sess_cmd_lock, flags);
-	if (se_sess->sess_tearing_down) {
-		spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags);
-		return;
-	}
 	se_sess->sess_tearing_down = 1;
-	list_splice_init(&se_sess->sess_cmd_list, &se_sess->sess_wait_list);
-
-	list_for_each_entry_safe(se_cmd, tmp_cmd,
-				 &se_sess->sess_wait_list, se_cmd_list) {
-		rc = kref_get_unless_zero(&se_cmd->cmd_kref);
-		if (rc) {
-			se_cmd->cmd_wait_set = 1;
-			spin_lock(&se_cmd->t_state_lock);
-			se_cmd->transport_state |= CMD_T_FABRIC_STOP;
-			spin_unlock(&se_cmd->t_state_lock);
-		} else
-			list_del_init(&se_cmd->se_cmd_list);
-	}
-
 	spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags);
 }
 EXPORT_SYMBOL(target_sess_cmd_list_set_waiting);
 
-/* target_wait_for_sess_cmds - Wait for outstanding descriptors
+/**
+ * target_wait_for_sess_cmds - Wait for outstanding commands
  * @se_sess: session to wait for active I/O
  */
 void target_wait_for_sess_cmds(struct se_session *se_sess)
 {
-	struct se_cmd *se_cmd, *tmp_cmd;
-	unsigned long flags;
-	bool tas;
-
-	list_for_each_entry_safe(se_cmd, tmp_cmd,
-				 &se_sess->sess_wait_list, se_cmd_list) {
-		pr_debug("Waiting for se_cmd: %p t_state: %d, fabric state:"
-			" %d\n", se_cmd, se_cmd->t_state,
-			se_cmd->se_tfo->get_cmd_state(se_cmd));
-
-		spin_lock_irqsave(&se_cmd->t_state_lock, flags);
-		tas = (se_cmd->transport_state & CMD_T_TAS);
-		spin_unlock_irqrestore(&se_cmd->t_state_lock, flags);
-
-		if (!target_put_sess_cmd(se_cmd)) {
-			if (tas)
-				target_put_sess_cmd(se_cmd);
-		}
-
-		wait_for_completion(&se_cmd->cmd_wait_comp);
-		pr_debug("After cmd_wait_comp: se_cmd: %p t_state: %d"
-			" fabric state: %d\n", se_cmd, se_cmd->t_state,
-			se_cmd->se_tfo->get_cmd_state(se_cmd));
-
-		target_put_sess_cmd(se_cmd);
-	}
-
-	spin_lock_irqsave(&se_sess->sess_cmd_lock, flags);
-	WARN_ON(!list_empty(&se_sess->sess_cmd_list));
-	spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags);
+	struct se_cmd *cmd;
+	int ret;
 
+	WARN_ON_ONCE(!se_sess->sess_tearing_down);
+
+	spin_lock_irq(&se_sess->sess_cmd_lock);
+	do {
+		ret = wait_event_interruptible_lock_irq_timeout(
+				se_sess->cmd_list_wq,
+				list_empty(&se_sess->sess_cmd_list),
+				se_sess->sess_cmd_lock, 180 * HZ);
+		list_for_each_entry(cmd, &se_sess->sess_cmd_list, se_cmd_list)
+			pr_debug("%s: ITT %#llx dir %d i_state %d t_state %d len %d\n",
+				 __func__, cmd->tag, cmd->data_direction,
+				 cmd->se_tfo->get_cmd_state(cmd), cmd->t_state,
+				 cmd->data_length);
+	} while (ret <= 0);
+	spin_unlock_irq(&se_sess->sess_cmd_lock);
 }
 EXPORT_SYMBOL(target_wait_for_sess_cmds);
 
diff --git a/include/target/target_core_base.h b/include/target/target_core_base.h
index e690ce36e592..d6d4465b1694 100644
--- a/include/target/target_core_base.h
+++ b/include/target/target_core_base.h
@@ -441,7 +441,6 @@ struct se_cmd {
 	u8 scsi_asc;
 	u8 scsi_ascq;
 	u16 scsi_sense_length;
-	unsigned cmd_wait_set:1;
 	unsigned unknown_data_length:1;
 	bool state_active:1;
 	bool send_abort_response:1;
@@ -474,7 +473,6 @@ struct se_cmd {
 	struct se_session *se_sess;
 	struct se_tmr_req *se_tmr_req;
 	struct list_head se_cmd_list;
-	struct completion cmd_wait_comp;
 	const struct target_core_fabric_ops *se_tfo;
 	sense_reason_t (*execute_cmd)(struct se_cmd *);
 	sense_reason_t (*transport_complete_callback)(struct se_cmd *, bool, int *);
@@ -605,8 +603,8 @@ struct se_session {
 	struct list_head sess_list;
 	struct list_head sess_acl_list;
 	struct list_head sess_cmd_list;
-	struct list_head sess_wait_list;
 	spinlock_t sess_cmd_lock;
+	wait_queue_head_t cmd_list_wq;
 	void *sess_cmd_map;
 	struct percpu_ida sess_tag_pool;
 	struct workqueue_struct *tmf_wq;