
[V5,10/12] vhost-scsi: flush IO vqs then send TMF rsp

Message ID 20211207025117.23551-11-michael.christie@oracle.com (mailing list archive)
State New, archived
Series vhost: multiple worker support

Commit Message

Mike Christie Dec. 7, 2021, 2:51 a.m. UTC
With one worker, we will always send the scsi cmd responses before the
TMF rsp, because LIO will always complete the scsi cmds first and then
call into us to send the TMF response.

With multiple workers, the IO vq workers could still be running while the
TMF/ctl vq worker is, so this patch has us do a flush before completing
the TMF to make sure the cmds are completed by the time its work is later
queued and run (a sketch of this flush pattern follows the patch).

Signed-off-by: Mike Christie <michael.christie@oracle.com>
---
 drivers/vhost/scsi.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

Patch

diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index 93c6ad1246eb..33e3ff4c1f38 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -413,7 +413,13 @@  static void vhost_scsi_queue_tm_rsp(struct se_cmd *se_cmd)
 {
 	struct vhost_scsi_tmf *tmf = container_of(se_cmd, struct vhost_scsi_tmf,
 						  se_cmd);
-
+	/*
+	 * LIO will complete the cmds this TMF has cleaned up, then call
+	 * this function. If we have vqs that do not share a worker with the
+	 * ctl vq, then those cmds/works could still be completing. Do a
+	 * flush here to make sure when the tmf work runs the cmds are done.
+	 */
+	vhost_work_dev_flush(&tmf->vhost->dev);
 	tmf->scsi_resp = se_cmd->se_tmr_req->response;
 	transport_generic_free_cmd(&tmf->se_cmd, 0);
 }
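
To make the flush ordering concrete, below is a minimal userspace sketch of
the barrier pattern the patch relies on. It is not vhost code: struct worker,
worker_flush(), and the other names are hypothetical stand-ins, and the real
vhost_work_dev_flush() flushes all of a device's workers rather than a single
one. The idea is that a flush queues a marker work behind everything already
queued and waits for it to run; since a worker runs its works in queueing
order, everything queued before the flush (the cmd responses here) has
completed by the time the flush returns.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* One queued work; a worker runs its works strictly in queueing order. */
struct work {
	void (*fn)(struct work *);
	struct work *next;
};

struct worker {
	pthread_mutex_t lock;
	pthread_cond_t cond;
	struct work *head, *tail;
};

static void worker_queue(struct worker *w, struct work *work)
{
	pthread_mutex_lock(&w->lock);
	work->next = NULL;
	if (w->tail)
		w->tail->next = work;
	else
		w->head = work;
	w->tail = work;
	pthread_cond_signal(&w->cond);
	pthread_mutex_unlock(&w->lock);
}

static void *worker_thread(void *arg)
{
	struct worker *w = arg;

	for (;;) {
		struct work *work;

		pthread_mutex_lock(&w->lock);
		while (!w->head)
			pthread_cond_wait(&w->cond, &w->lock);
		work = w->head;
		w->head = work->next;
		if (!w->head)
			w->tail = NULL;
		pthread_mutex_unlock(&w->lock);
		work->fn(work);
	}
	return NULL;
}

/* A flush queues this barrier work and waits for it to run. */
struct flush_work {
	struct work work;
	pthread_mutex_t lock;
	pthread_cond_t cond;
	bool done;
};

static void flush_fn(struct work *work)
{
	struct flush_work *fw = (struct flush_work *)work;

	pthread_mutex_lock(&fw->lock);
	fw->done = true;
	pthread_cond_signal(&fw->cond);
	pthread_mutex_unlock(&fw->lock);
}

/* Returns only after every work queued on @w before this call has run. */
static void worker_flush(struct worker *w)
{
	struct flush_work fw = { .work.fn = flush_fn };

	pthread_mutex_init(&fw.lock, NULL);
	pthread_cond_init(&fw.cond, NULL);
	worker_queue(w, &fw.work);
	pthread_mutex_lock(&fw.lock);
	while (!fw.done)
		pthread_cond_wait(&fw.cond, &fw.lock);
	pthread_mutex_unlock(&fw.lock);
}

static void cmd_resp_fn(struct work *work)
{
	(void)work;
	printf("scsi cmd response sent\n");
}

int main(void)
{
	static struct worker io = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.cond = PTHREAD_COND_INITIALIZER,
	};
	struct work resp = { .fn = cmd_resp_fn };
	pthread_t t;

	pthread_create(&t, NULL, worker_thread, &io);
	worker_queue(&io, &resp);	/* a still-pending IO vq completion */
	worker_flush(&io);		/* like vhost_work_dev_flush() here */
	printf("TMF rsp sent after cmd responses\n");
	return 0;
}

This mirrors the comment added in vhost_scsi_queue_tm_rsp(): once the flush
returns, any completion works the IO vq workers had queued have run, so the
TMF response cannot overtake them.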