
[PATCHv2,4/5] nvme: use return value from blk_execute_rq()

Message ID 20210423220558.40764-5-kbusch@kernel.org
State New, archived
Series block and nvme passthrough error handling

Commit Message

Keith Busch April 23, 2021, 10:05 p.m. UTC
We don't have an nvme status to report if the driver's .queue_rq()
returns an error without dispatching the requested nvme command. Use the
return value from blk_execute_rq() for all passthrough commands so the
caller may know their command was not successful.

If the command is from the target passthrough interface and fails to
dispatch, synthesize the response back to the host as an internal target
error.

Signed-off-by: Keith Busch <kbusch@kernel.org>
---
 drivers/nvme/host/core.c       | 16 ++++++++++++----
 drivers/nvme/host/ioctl.c      |  6 +-----
 drivers/nvme/host/nvme.h       |  2 +-
 drivers/nvme/target/passthru.c |  8 ++++----
 4 files changed, 18 insertions(+), 14 deletions(-)

Comments

Christoph Hellwig April 26, 2021, 2:42 p.m. UTC | #1
On Fri, Apr 23, 2021 at 03:05:57PM -0700, Keith Busch wrote:
> We don't have an nvme status to report if the driver's .queue_rq()
> returns an error without dispatching the requested nvme command. Use the
> return value from blk_execute_rq() for all passthrough commands so the
> caller may know their command was not successful.
> 
> If the command is from the target passthrough interface and fails to
> dispatch, synthesize the response back to the host as a internal target
> error.
> 
> Signed-off-by: Keith Busch <kbusch@kernel.org>
> ---
>  drivers/nvme/host/core.c       | 16 ++++++++++++----
>  drivers/nvme/host/ioctl.c      |  6 +-----
>  drivers/nvme/host/nvme.h       |  2 +-
>  drivers/nvme/target/passthru.c |  8 ++++----
>  4 files changed, 18 insertions(+), 14 deletions(-)
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 10bb8406e067..62af5fe7a0ce 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -972,12 +972,12 @@ int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
>  			goto out;
>  	}
>  
> -	blk_execute_rq(NULL, req, at_head);
> +	ret = blk_execute_rq(NULL, req, at_head);
>  	if (result)
>  		*result = nvme_req(req)->result;
>  	if (nvme_req(req)->flags & NVME_REQ_CANCELLED)
>  		ret = -EINTR;
> -	else
> +	else if (nvme_req(req)->status)
>  		ret = nvme_req(req)->status;

Just cosmetic, and already in the existing code, but I'd prefer if we
could keep the ret assignments together, something like:

	status = blk_execute_rq(NULL, req, at_head);
  	if (nvme_req(req)->flags & NVME_REQ_CANCELLED)
 		ret = -EINTR;
	else if (nvme_req(req)->status)
		ret = nvme_req(req)->status;
	else
		ret = blk_status_to_errno(status);

 	if (result)
 		*result = nvme_req(req)->result;

> +	ret = blk_execute_rq(disk, rq, 0);
>  	if (effects) /* nothing to be done for zero cmd effects */
>  		nvme_passthru_end(ctrl, effects);
> +
> +	if (nvme_req(rq)->flags & NVME_REQ_CANCELLED)
> +		ret = -EINTR;
> +	else if (nvme_req(rq)->status)
> +		ret = nvme_req(rq)->status;
> +

I think we want a helper for all this return value magic instead of
duplicating it.
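
For illustration only, here is a minimal sketch of the kind of helper being
suggested, assembled from the snippet above and the logic in this patch (the
name nvme_execute_rq() and its placement are assumptions, not code posted in
this series):

	/*
	 * Run a passthrough/sync request and collapse the possible outcomes
	 * into a single return value: negative errno, NVMe status, or 0.
	 */
	static int nvme_execute_rq(struct gendisk *disk, struct request *rq,
				   int at_head)
	{
		blk_status_t status;

		status = blk_execute_rq(disk, rq, at_head);
		if (nvme_req(rq)->flags & NVME_REQ_CANCELLED)
			return -EINTR;
		if (nvme_req(rq)->status)
			return nvme_req(rq)->status;
		return blk_status_to_errno(status);
	}

Both __nvme_submit_sync_cmd() and nvme_execute_passthru_rq() could then call
this helper instead of open-coding the same three checks.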
Yuanyuan Zhong April 26, 2021, 5:10 p.m. UTC | #2
On Fri, Apr 23, 2021 at 3:06 PM Keith Busch <kbusch@kernel.org> wrote:
>
> We don't have an nvme status to report if the driver's .queue_rq()
> returns an error without dispatching the requested nvme command. Use the
> return value from blk_execute_rq() for all passthrough commands so the
> caller may know their command was not successful.
>
> If the command is from the target passthrough interface and fails to
> dispatch, synthesize the response back to the host as a internal target
> error.
>
> Signed-off-by: Keith Busch <kbusch@kernel.org>
> ---
>  drivers/nvme/host/core.c       | 16 ++++++++++++----
>  drivers/nvme/host/ioctl.c      |  6 +-----
>  drivers/nvme/host/nvme.h       |  2 +-
>  drivers/nvme/target/passthru.c |  8 ++++----
>  4 files changed, 18 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 10bb8406e067..62af5fe7a0ce 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -972,12 +972,12 @@ int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
>                         goto out;
>         }
>
> -       blk_execute_rq(NULL, req, at_head);
> +       ret = blk_execute_rq(NULL, req, at_head);
>         if (result)
>                 *result = nvme_req(req)->result;
>         if (nvme_req(req)->flags & NVME_REQ_CANCELLED)
>                 ret = -EINTR;
> -       else
> +       else if (nvme_req(req)->status)

Since nvme_req(req)->status is uninitialized for a command that failed to
dispatch, it's valid only if the ret from blk_execute_rq() is 0.
Keith Busch April 26, 2021, 5:15 p.m. UTC | #3
On Mon, Apr 26, 2021 at 10:10:09AM -0700, Yuanyuan Zhong wrote:
> On Fri, Apr 23, 2021 at 3:06 PM Keith Busch <kbusch@kernel.org> wrote:
> >
> > We don't have an nvme status to report if the driver's .queue_rq()
> > returns an error without dispatching the requested nvme command. Use the
> > return value from blk_execute_rq() for all passthrough commands so the
> > caller may know their command was not successful.
> >
> > If the command is from the target passthrough interface and fails to
> > dispatch, synthesize the response back to the host as a internal target
> > error.
> >
> > Signed-off-by: Keith Busch <kbusch@kernel.org>
> > ---
> >  drivers/nvme/host/core.c       | 16 ++++++++++++----
> >  drivers/nvme/host/ioctl.c      |  6 +-----
> >  drivers/nvme/host/nvme.h       |  2 +-
> >  drivers/nvme/target/passthru.c |  8 ++++----
> >  4 files changed, 18 insertions(+), 14 deletions(-)
> >
> > diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> > index 10bb8406e067..62af5fe7a0ce 100644
> > --- a/drivers/nvme/host/core.c
> > +++ b/drivers/nvme/host/core.c
> > @@ -972,12 +972,12 @@ int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
> >                         goto out;
> >         }
> >
> > -       blk_execute_rq(NULL, req, at_head);
> > +       ret = blk_execute_rq(NULL, req, at_head);
> >         if (result)
> >                 *result = nvme_req(req)->result;
> >         if (nvme_req(req)->flags & NVME_REQ_CANCELLED)
> >                 ret = -EINTR;
> > -       else
> > +       else if (nvme_req(req)->status)
> 
> Since nvme_req(req)->status is uninitialized for a command failed to dispatch,
> it's valid only if ret from blk_execute_rq() is 0.

That's not quite right. If queue_rq() succeeds, but the SSD returns an
error, blk_execute_rq() returns a non-zero value with a valid nvme_req
status.
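
To make the resulting convention concrete, here is a sketch of how a caller
can tell the two error classes apart (it mirrors the
drivers/nvme/target/passthru.c hunk below; the function name
example_passthru_status() is made up purely for illustration):

	/* ret < 0: dispatch/transport errno; ret > 0: NVMe status; 0: success */
	static u16 example_passthru_status(int ret)
	{
		if (ret < 0)
			return NVME_SC_INTERNAL;	/* command never reached the device */
		return ret;				/* NVME_SC_SUCCESS or the device's status */
	}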
Yuanyuan Zhong April 26, 2021, 5:39 p.m. UTC | #4
On Mon, Apr 26, 2021 at 10:15 AM Keith Busch <kbusch@kernel.org> wrote:
>
> On Mon, Apr 26, 2021 at 10:10:09AM -0700, Yuanyuan Zhong wrote:
> > On Fri, Apr 23, 2021 at 3:06 PM Keith Busch <kbusch@kernel.org> wrote:
> > >
> > > We don't have an nvme status to report if the driver's .queue_rq()
> > > returns an error without dispatching the requested nvme command. Use the
> > > return value from blk_execute_rq() for all passthrough commands so the
> > > caller may know their command was not successful.
> > >
> > > If the command is from the target passthrough interface and fails to
> > > dispatch, synthesize the response back to the host as a internal target
> > > error.
> > >
> > > Signed-off-by: Keith Busch <kbusch@kernel.org>
> > > ---
> > >  drivers/nvme/host/core.c       | 16 ++++++++++++----
> > >  drivers/nvme/host/ioctl.c      |  6 +-----
> > >  drivers/nvme/host/nvme.h       |  2 +-
> > >  drivers/nvme/target/passthru.c |  8 ++++----
> > >  4 files changed, 18 insertions(+), 14 deletions(-)
> > >
> > > diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> > > index 10bb8406e067..62af5fe7a0ce 100644
> > > --- a/drivers/nvme/host/core.c
> > > +++ b/drivers/nvme/host/core.c
> > > @@ -972,12 +972,12 @@ int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
> > >                         goto out;
> > >         }
> > >
> > > -       blk_execute_rq(NULL, req, at_head);
> > > +       ret = blk_execute_rq(NULL, req, at_head);
> > >         if (result)
> > >                 *result = nvme_req(req)->result;
> > >         if (nvme_req(req)->flags & NVME_REQ_CANCELLED)
> > >                 ret = -EINTR;
> > > -       else
> > > +       else if (nvme_req(req)->status)
> >
> > Since nvme_req(req)->status is uninitialized for a command failed to dispatch,
> > it's valid only if ret from blk_execute_rq() is 0.
>
> That's not quite right. If queue_rq() succeeds, but the SSD returns an
> error, blk_execute_rq() returns a non-zero value with a valid nvme_req
> status.

Agreed. But after that, freeing the req leaves the nvme_req(req)->status
from the SSD in place. If the same req gets re-allocated and then fails to
dispatch, shouldn't the dispatch error take precedence over the stale
nvme_req(req)->status?
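
One possible way to avoid depending on stale per-request state, sketched here
purely for illustration (whether this is the right place is an assumption; it
is not part of this series), would be to zero the NVMe status when the driver
prepares the request, e.g. in core.c's nvme_clear_nvme_request():

	static inline void nvme_clear_nvme_request(struct request *req)
	{
		if (!(req->rq_flags & RQF_DONTPREP)) {
			nvme_req(req)->status = 0;	/* illustrative addition */
			nvme_req(req)->retries = 0;
			nvme_req(req)->flags = 0;
			req->rq_flags |= RQF_DONTPREP;
		}
	}

With that, a request that fails to dispatch would report the dispatch errno
rather than whatever status the previous user of the request left behind.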

Patch

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 10bb8406e067..62af5fe7a0ce 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -972,12 +972,12 @@  int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
 			goto out;
 	}
 
-	blk_execute_rq(NULL, req, at_head);
+	ret = blk_execute_rq(NULL, req, at_head);
 	if (result)
 		*result = nvme_req(req)->result;
 	if (nvme_req(req)->flags & NVME_REQ_CANCELLED)
 		ret = -EINTR;
-	else
+	else if (nvme_req(req)->status)
 		ret = nvme_req(req)->status;
  out:
 	blk_mq_free_request(req);
@@ -1066,18 +1066,26 @@  static void nvme_passthru_end(struct nvme_ctrl *ctrl, u32 effects)
 	}
 }
 
-void nvme_execute_passthru_rq(struct request *rq)
+int nvme_execute_passthru_rq(struct request *rq)
 {
 	struct nvme_command *cmd = nvme_req(rq)->cmd;
 	struct nvme_ctrl *ctrl = nvme_req(rq)->ctrl;
 	struct nvme_ns *ns = rq->q->queuedata;
 	struct gendisk *disk = ns ? ns->disk : NULL;
 	u32 effects;
+	int ret;
 
 	effects = nvme_passthru_start(ctrl, ns, cmd->common.opcode);
-	blk_execute_rq(disk, rq, 0);
+	ret = blk_execute_rq(disk, rq, 0);
 	if (effects) /* nothing to be done for zero cmd effects */
 		nvme_passthru_end(ctrl, effects);
+
+	if (nvme_req(rq)->flags & NVME_REQ_CANCELLED)
+		ret = -EINTR;
+	else if (nvme_req(rq)->status)
+		ret = nvme_req(rq)->status;
+
+	return ret;
 }
 EXPORT_SYMBOL_NS_GPL(nvme_execute_passthru_rq, NVME_TARGET_PASSTHRU);
 
diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
index 8e05d65c9e93..9cdd8bfebb80 100644
--- a/drivers/nvme/host/ioctl.c
+++ b/drivers/nvme/host/ioctl.c
@@ -93,11 +93,7 @@  static int nvme_submit_user_cmd(struct request_queue *q,
 		}
 	}
 
-	nvme_execute_passthru_rq(req);
-	if (nvme_req(req)->flags & NVME_REQ_CANCELLED)
-		ret = -EINTR;
-	else
-		ret = nvme_req(req)->status;
+	ret = nvme_execute_passthru_rq(req);
 	if (result)
 		*result = le64_to_cpu(nvme_req(req)->result.u64);
 	if (meta && !ret && !write) {
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index c8f6ec5b8d2b..76a7ed0728b9 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -847,7 +847,7 @@  static inline void nvme_hwmon_exit(struct nvme_ctrl *ctrl)
 
 u32 nvme_command_effects(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
 			 u8 opcode);
-void nvme_execute_passthru_rq(struct request *rq);
+int nvme_execute_passthru_rq(struct request *rq);
 struct nvme_ctrl *nvme_ctrl_from_file(struct file *file);
 struct nvme_ns *nvme_find_get_ns(struct nvme_ctrl *ctrl, unsigned nsid);
 void nvme_put_ns(struct nvme_ns *ns);
diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index 2798944899b7..d9a649d9903b 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -153,11 +153,10 @@  static void nvmet_passthru_execute_cmd_work(struct work_struct *w)
 {
 	struct nvmet_req *req = container_of(w, struct nvmet_req, p.work);
 	struct request *rq = req->p.rq;
-	u16 status;
+	int status;
 
-	nvme_execute_passthru_rq(rq);
+	status = nvme_execute_passthru_rq(rq);
 
-	status = nvme_req(rq)->status;
 	if (status == NVME_SC_SUCCESS &&
 	    req->cmd->common.opcode == nvme_admin_identify) {
 		switch (req->cmd->identify.cns) {
@@ -168,7 +167,8 @@  static void nvmet_passthru_execute_cmd_work(struct work_struct *w)
 			nvmet_passthru_override_id_ns(req);
 			break;
 		}
-	}
+	} else if (status < 0)
+		status = NVME_SC_INTERNAL;
 
 	req->cqe->result = nvme_req(rq)->result;
 	nvmet_req_complete(req, status);