
[v9,05/12] nvmet-passthru: add passthru code to process commands

Message ID 20191009192530.13079-6-logang@deltatee.com (mailing list archive)
State New, archived

Commit Message

Logan Gunthorpe Oct. 9, 2019, 7:25 p.m. UTC
From: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>

nvmet-passthru: add passthru code to process commands

Add passthru command handling capability for the NVMeOF target and
export passthru APIs that are used to integrate the passthru code with
nvmet-core. A passthru ns member is added to the target request to hold
the namespace reference for the respective commands.

The new file io-cmd-passthru.c handles passthru cmd parsing and execution.
In passthru mode, we create a block layer request from the nvmet request
and map the data onto the block layer request. To handle the side effects
of passthru admin commands, we add two functions similar to the
nvme_passthru_[start|end]() functions present in nvme-core. Only admin
commands on a white list are let through, including vendor unique
commands.

We introduce a new passthru workqueue, similar to the one we have for
the file backend, for the NVMeOF target to execute NVMe admin passthru
commands.

All the new passthru code is enabled or disabled by a new Kconfig option
for the NVMeOF target.

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
[logang@deltatee.com:
   * renamed passthru-cmd.c to io-cmd-passthru.c for consistency
   * squashed "update target makefile for passthru"
   * squashed "integrate passthru request processing"
   * squashed "update KConfig with config passthru option"
   * added appropriate CONFIG_NVME_TARGET_PASSTHRU #ifdefs
   * pushed passthru_wq into passthru.c and introduced
     nvmet_passthru_init() and nvmet_passthru_destroy() to avoid
     inline #ifdef mess
   * renamed nvmet_passthru_ctrl() to nvmet_req_passthru_ctrl()
     and provided nvmet_passthru_ctrl() to get the ctrl from a subsys
   * fixed failure path in nvmet_passthru_execute_cmd() to ensure
     we always complete the request (with an error when appropriate)
   * restructured nvmet_passthru_make_request() and
     nvmet_passthru_execute_cmd() to create nvmet_passthru_map_sg()
     which makes the code simpler and more readable.
   * move call to nvme_find_get_ns() into nvmet_passthru_execute_cmd()
     to prevent a lockdep error. nvme_find_get_ns() takes a
     lock and can sleep but nvme_init_req() is called while
     hctx_lock() is held (in the loop transport) and therefore
     should not sleep.
   * added check in nvmet_passthru_execute_cmd() to ensure we don't
     violate queue_max_segments or queue_max_hw_sectors.
   * added nvmet_passthru_set_mdts() to prevent requests that
     exceed max_segments
   * convert admin command blacklist to a white list with vendor
     unique commands specifically allowed
   * force setting cmic bit to support multipath for connections
     to the target
   * dropped le16_to_cpu() conversion in nvmet_passthru_req_done() as
     it's currently already done in nvme_end_request()
   * unabbreviated 'VUC' in a comment as it's not a commonly known
     acronym
   * removed unnecessary inline tags on static functions
   * minor edits to commit message
]
[logang@deltatee.com: fixed some of the wording in the help message]
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
---
 drivers/nvme/target/Kconfig           |  10 +
 drivers/nvme/target/Makefile          |   1 +
 drivers/nvme/target/core.c            |  11 +-
 drivers/nvme/target/io-cmd-passthru.c | 567 ++++++++++++++++++++++++++
 drivers/nvme/target/nvmet.h           |  46 +++
 include/linux/nvme.h                  |   1 +
 6 files changed, 635 insertions(+), 1 deletion(-)
 create mode 100644 drivers/nvme/target/io-cmd-passthru.c

Comments

Christoph Hellwig Oct. 10, 2019, 12:34 p.m. UTC | #1
Just a first round of comments.

> The new file io-cmd-passthru.c handles passthru cmd parsing and execution.
> In the passthru mode, we create a block layer request from the nvmet
> request and map the data on to the block layer request. For handling the
> side effects of the passthru admin commands we add two functions similar
> to the nvme_passthru[start|end]() functions present in the nvme-core.
> Only admin commands on a white list are let through which includes
> vendor unique commands.

I think we need to do this work in the core instead and offer an
nvme_execute_rq helper that handles all of it and is also used by the
ioctl code.  While we're at it, nvme_submit_io seems to lack the
nvme_passthru_start/end handling, so we'd also fix that.
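
For illustration, a minimal sketch of the kind of shared helper being
suggested here; the name nvme_execute_rq, its exact signature and its
placement are assumptions based on this comment, not existing core code:

static u16 nvme_execute_rq(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
		struct request *rq)
{
	u32 effects;

	/* handle admin command side effects around the actual execution */
	effects = nvme_passthru_start(ctrl, ns, nvme_req(rq)->cmd->common.opcode);
	blk_execute_rq(rq->q, NULL, rq, 0);
	nvme_passthru_end(ctrl, effects);

	return nvme_req(rq)->status;
}

Both the ioctl path and the passthru target could then call this instead
of open-coding the start/end sequence.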

> @@ -896,6 +896,8 @@ bool nvmet_req_init(struct nvmet_req *req, struct nvmet_cq *cq,
>  	if (unlikely(!req->sq->ctrl))
>  		/* will return an error for any Non-connect command: */
>  		status = nvmet_parse_connect_cmd(req);
> +	else if (nvmet_req_passthru_ctrl(req))
> +		status = nvmet_parse_passthru_cmd(req);
>  	else if (likely(req->sq->qid != 0))
>  		status = nvmet_parse_io_cmd(req);
>  	else if (nvme_is_fabrics(req->cmd))

This is turning into a mess (mostly not the fault of this patch, but
this is the final straw).  Moreover there is way too much magic in
nvmet_parse_passthru_cmd that we had better share with the existing
code.  See the patch below for what I'd like to see.  This should
probably be split into a prep patch I can submit directly and the
passthrough bits on top of it.

> diff --git a/drivers/nvme/target/io-cmd-passthru.c b/drivers/nvme/target/io-cmd-passthru.c

This file doesn't contain only I/O command handling, so it should
be renamed to passthru.c.

> +int nvmet_passthru_init(void)
> +{
> +	passthru_wq = alloc_workqueue("nvmet-passthru-wq", WQ_MEM_RECLAIM, 0);
> +	if (!passthru_wq)
> +		return -ENOMEM;
> +
> +	return 0;
> +}
> +
> +void nvmet_passthru_destroy(void)
> +{
> +	destroy_workqueue(passthru_wq);
> +}

Please keep the init/exit code at the end of the file.

> +
> +static void nvmet_passthru_req_complete(struct nvmet_req *req,
> +		struct request *rq, u16 status)
> +{
> +	nvmet_req_complete(req, status);
> +
> +	if (rq)
> +		blk_put_request(rq);
> +}

No real need for the empty line.  Also I can't see how rq could ever
be NULL here.  In fact I don't really see much of a point in this
helper - just calling the two functions directly makes the callers
easier to read for me personally.
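
The open-coded version at the call sites would just be (a sketch):

	nvmet_req_complete(req, status);
	blk_put_request(rq);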

> +
> +static void nvmet_passthru_req_done(struct request *rq,
> +		blk_status_t blk_status)
> +{
> +	struct nvmet_req *req = rq->end_io_data;
> +	u16 status = nvme_req(rq)->status;
> +
> +	req->cqe->result.u32 = nvme_req(rq)->result.u32;

This will lose any 64-bit results (not there yet, but we have
pending TPs).  Just assign the actual result union and we are safe
for any future additions.
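
A sketch of the suggested fix, copying the whole union rather than the
32-bit member:

	req->cqe->result = nvme_req(rq)->result;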

> +static u16 nvmet_passthru_override_format_nvm(struct nvmet_req *req)
> +{
> +	int lbaf = le32_to_cpu(req->cmd->format.cdw10) & 0x0000000F;
> +	int nsid = le32_to_cpu(req->cmd->format.nsid);
> +	u16 status = NVME_SC_SUCCESS;
> +	struct nvme_id_ns *id;
> +	int ret;
> +
> +	ret = nvme_identify_ns(nvmet_req_passthru_ctrl(req), nsid, &id);
> +	if (ret)
> +		return NVME_SC_INTERNAL;
> +	/*
> +	 * XXX: Please update this code once NVMeOF target starts supporting
> +	 * metadata. We don't support ns lba format with metadata over fabrics
> +	 * right now, so report an error if format nvm cmd tries to format
> +	 * a namespace with the LBA format which has metadata.
> +	 */
> +	if (id->lbaf[lbaf].ms)
> +		status = NVME_SC_INVALID_NS;
> +
> +	kfree(id);
> +	return status;

We already filter out all formats with metadata in
nvmet_passthru_override_id_ns, so I see no point in rejecting
formats here that the host can't even have seen.

> +static void nvmet_passthru_set_mdts(struct nvmet_ctrl *ctrl,
> +				    struct nvme_id_ctrl *id)
> +{
> +	struct nvme_ctrl *pctrl = ctrl->subsys->passthru_ctrl;
> +	u32 max_hw_sectors;
> +	int page_shift;
> +
> +	/*
> +	 * The passthru NVMe driver may have a limit on the number
> +	 * of segments which depends on the host's memory fragmentation.
> +	 * To solve this, ensure mdts is limited to the pages equal to
> +	 * the number of segments.
> +	 */
> +
> +	max_hw_sectors = min_not_zero(pctrl->max_segments << (PAGE_SHIFT - 9),
> +				      pctrl->max_hw_sectors);

No need for the blank line, and please use up all 80 chars for comments.

> +	nvmet_passthru_set_mdts(ctrl, id);

Any reason that is a separate function and not inlined here?

> +	/* don't support host memory buffer */
> +	id->hmpre = 0;
> +	id->hmmin = 0;

What about CMB/PMR?

> +	/*
> +	 * When a passthru controller is set up using the nvme-loop transport it
> +	 * will export the passthru ctrl subsysnqn (PCIe NVMe ctrl) and will fail
> +	 * in nvme/host/core.c in the nvme_init_subsystem()->nvme_active_ctrl()
> +	 * code path with a duplicate ctrl subsysnqn. In order to prevent that we
> +	 * mask the passthru-ctrl subsysnqn with the target ctrl subsysnqn.
> +	 */
> +	memcpy(id->subnqn, ctrl->subsysnqn, sizeof(id->subnqn));

I don't think this is a good idea.  It will break multipathing when you
export two ports of the original controller.  The whole idea of
overwriting cntlid and subsysnqn will also lead to huge problems with
persistent reservations.  I think we need to pass through the subnqn
and cntlid unmodified.

> +	/* Support multipath connections with fabrics */
> +	id->cmic |= 1 << 1;

I don't think we can just overwrite this, we need to use the original
controllers values.

> +	if (status)
> +		goto out_free;
> +
> +	for (i = 0; i < (id->nlbaf + 1); i++)
> +		if (id->lbaf[i].ms)
> +			memset(&id->lbaf[i], 0, sizeof(id->lbaf[i]));
> +
> +	id->flbas = id->flbas & ~(1 << 4);
> +	id->mc = 0;

This probably wants a similar annotation as all the other metadata
related bits.  Especially as Max is looking into metadata support for
RDMA, and people are working on PCIe frontends for the target code.

> +	/*
> +	 * Admin Commands have side effects and it is better to handle those
> +	 * side effects in the submission thread context than in the request
> +	 * completion path, which is in the interrupt context. Also in this
> +	 * way, we keep the passthru admin command code path consistent with the
> +	 * nvme/host/core.c sync command submission APIs/IOCTLs and use
> +	 * nvme_passthru_start/end() to handle side effects consistently.
> +	 */
> +	blk_execute_rq(req->p.rq->q, NULL, req->p.rq, 0);

Can we please only do the synchronous execution for the cases where
we actually need to override something?

> +static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
> +{
> +	int sg_cnt = req->sg_cnt;
> +	struct scatterlist *sg;
> +	int op = REQ_OP_READ;
> +	int op_flags = 0;
> +	struct bio *bio;
> +	int i, ret;
> +
> +	if (req->cmd->common.opcode == nvme_cmd_flush) {
> +		op_flags = REQ_FUA;
> +	} else if (nvme_is_write(req->cmd)) {
> +		op = REQ_OP_WRITE;
> +		op_flags = REQ_SYNC | REQ_IDLE;
> +	}

This looks weird.  Normally this should be REQ_OP_DRV_IN/REQ_OP_DRV_OUT.
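
A sketch of what the op selection could look like with the
driver-private ops (an assumption based on this comment, mirroring what
nvme_alloc_request does for the request itself, not the final code):

	int op = nvme_is_write(req->cmd) ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN;
	int op_flags = 0;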

> +	for_each_sg(req->sg, sg, req->sg_cnt, i) {
> +		if (bio_add_page(bio, sg_page(sg), sg->length,
> +				 sg->offset) != sg->length) {
> +			ret = blk_rq_append_bio(rq, &bio);
> +			if (unlikely(ret))
> +				return ret;
> +
> +			bio_set_op_attrs(bio, op, op_flags);

bio_set_op_attrs is deprecated, please just open code it.
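
i.e. a sketch of the open-coded equivalent:

	bio->bi_opf = op | op_flags;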

> +	/*
> +	 * We don't support fused cmds, also nvme-pci driver uses its own
> +	 * sgl_threshold parameter to decide whether to use SGLs or PRPs hence
> +	 * turn off those bits in the flags.
> +	 */
> +	req->cmd->common.flags &= ~(NVME_CMD_FUSE_FIRST | NVME_CMD_FUSE_SECOND |
> +			NVME_CMD_SGL_ALL);

I think this belongs in nvme_setup_cmd as it affects all
pass-through commands.

> +	rq = nvmet_passthru_blk_make_request(req, ns, GFP_KERNEL);
> +	if (unlikely(IS_ERR(rq))) {

IS_ERR contains an implicit unlikely.  But I also don't see why
nvmet_passthru_blk_make_request even exists.  Merging it into the
caller would seem more reasonable to me.

> +	if (unlikely(blk_rq_nr_phys_segments(rq) > queue_max_segments(rq->q) ||
> +	    (blk_rq_payload_bytes(rq) >> 9) > queue_max_hw_sectors(rq->q))) {

I'd split the two unrelated checks into two if statements for
readability.
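
A sketch of the split version:

	if (blk_rq_nr_phys_segments(rq) > queue_max_segments(rq->q)) {
		status = NVME_SC_INVALID_FIELD;
		goto fail_out;
	}

	if ((blk_rq_payload_bytes(rq) >> 9) > queue_max_hw_sectors(rq->q)) {
		status = NVME_SC_INVALID_FIELD;
		goto fail_out;
	}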

> +	case nvme_admin_set_features:
> +		switch (le32_to_cpu(req->cmd->features.fid)) {
> +		case NVME_FEAT_ASYNC_EVENT:
> +		case NVME_FEAT_KATO:
> +		case NVME_FEAT_NUM_QUEUES:
> +			status = nvmet_parse_admin_cmd(req);
> +			break;
> +		default:
> +			req->execute = nvmet_passthru_execute_cmd;
> +		}
> +		break;

We'll need to treat get_features equally.
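
i.e. something along the lines of the existing set_features branch (a
sketch mirroring that code, not part of the posted patch):

	case nvme_admin_get_features:
		switch (le32_to_cpu(req->cmd->features.fid)) {
		case NVME_FEAT_ASYNC_EVENT:
		case NVME_FEAT_KATO:
		case NVME_FEAT_NUM_QUEUES:
			status = nvmet_parse_admin_cmd(req);
			break;
		default:
			req->execute = nvmet_passthru_execute_cmd;
		}
		break;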

> +	/* 4. By default, blacklist all admin commands */
> +	default:
> +
> +		status = NVME_SC_INVALID_OPCODE | NVME_SC_DNR;
> +		req->execute = NULL;
> +		break;
> +	}

That seems odd.  There are plenty of other useful admin commands.

Yes, we need to ignore the PCIe specific ones:

 - Create I/O Completion Queue
 - Create I/O Submission Queue
 - Delete I/O Completion Queue
 - Delete I/O Submission Queue
 - Doorbell Buffer Configuration
 - Virtualization Management

but all others seem perfectly valid to pass through.
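
For illustration, the default case could then become an explicit deny
list along these lines (opcode names as in include/linux/nvme.h; an
untested sketch, not part of the posted patch):

	switch (cmd->common.opcode) {
	case nvme_admin_create_sq:
	case nvme_admin_create_cq:
	case nvme_admin_delete_sq:
	case nvme_admin_delete_cq:
	case nvme_admin_dbbuf:
	case nvme_admin_virtual_mgmt:
		/* PCIe specific, makes no sense on a fabrics controller */
		return NVME_SC_INVALID_OPCODE | NVME_SC_DNR;
	default:
		req->execute = nvmet_passthru_execute_cmd;
		return NVME_SC_SUCCESS;
	}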

>  /* Convert a 32-bit number to a 16-bit 0's based number */
> diff --git a/include/linux/nvme.h b/include/linux/nvme.h
> index f61d6906e59d..94e730b5d0a3 100644
> --- a/include/linux/nvme.h
> +++ b/include/linux/nvme.h
> @@ -816,6 +816,7 @@ enum nvme_admin_opcode {
>  	nvme_admin_security_recv	= 0x82,
>  	nvme_admin_sanitize_nvm		= 0x84,
>  	nvme_admin_get_lba_status	= 0x86,
> +	nvme_admin_vendor_unique_start	= 0xC0,

They are called vendor specific commands.

diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
index 67b6642bb628..de24c140b547 100644
--- a/drivers/nvme/target/admin-cmd.c
+++ b/drivers/nvme/target/admin-cmd.c
@@ -652,7 +652,7 @@ u16 nvmet_set_feat_async_event(struct nvmet_req *req, u32 mask)
 	return 0;
 }
 
-static void nvmet_execute_set_features(struct nvmet_req *req)
+void nvmet_execute_set_features(struct nvmet_req *req)
 {
 	struct nvmet_subsys *subsys = req->sq->ctrl->subsys;
 	u32 cdw10 = le32_to_cpu(req->cmd->common.cdw10);
@@ -813,10 +813,18 @@ u16 nvmet_parse_admin_cmd(struct nvmet_req *req)
 	struct nvme_command *cmd = req->cmd;
 	u16 ret;
 
+	if (nvme_is_fabrics(cmd))
+		return nvmet_parse_fabrics_cmd(req);
+	if (req->sq->ctrl->subsys->type == NVME_NQN_DISC)
+		return nvmet_parse_discovery_cmd(req);
+
 	ret = nvmet_check_ctrl_status(req, cmd);
 	if (unlikely(ret))
 		return ret;
 
+	if (nvmet_req_passthru_ctrl(req))
+		return nvmet_parse_passthru_admin_cmd(req);
+
 	switch (cmd->common.opcode) {
 	case nvme_admin_get_log_page:
 		req->data_len = nvmet_get_log_page_len(cmd);
diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index f9d46354f9ae..3a5b7e42a158 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -839,6 +839,9 @@ static u16 nvmet_parse_io_cmd(struct nvmet_req *req)
 	if (unlikely(ret))
 		return ret;
 
+	if (nvmet_req_passthru_ctrl(req))
+		return nvmet_setup_passthru_command(req);
+
 	req->ns = nvmet_find_namespace(req->sq->ctrl, cmd->rw.nsid);
 	if (unlikely(!req->ns)) {
 		req->error_loc = offsetof(struct nvme_common_command, nsid);
@@ -900,16 +903,10 @@ bool nvmet_req_init(struct nvmet_req *req, struct nvmet_cq *cq,
 	}
 
 	if (unlikely(!req->sq->ctrl))
-		/* will return an error for any Non-connect command: */
+		/* will return an error for any non-connect command: */
 		status = nvmet_parse_connect_cmd(req);
-	else if (nvmet_req_passthru_ctrl(req))
-		status = nvmet_parse_passthru_cmd(req);
 	else if (likely(req->sq->qid != 0))
 		status = nvmet_parse_io_cmd(req);
-	else if (nvme_is_fabrics(req->cmd))
-		status = nvmet_parse_fabrics_cmd(req);
-	else if (req->sq->ctrl->subsys->type == NVME_NQN_DISC)
-		status = nvmet_parse_discovery_cmd(req);
 	else
 		status = nvmet_parse_admin_cmd(req);
 
diff --git a/drivers/nvme/target/io-cmd-passthru.c b/drivers/nvme/target/io-cmd-passthru.c
index 37d06ebcbd0f..fbf3508ea8ea 100644
--- a/drivers/nvme/target/io-cmd-passthru.c
+++ b/drivers/nvme/target/io-cmd-passthru.c
@@ -557,6 +557,12 @@ static void nvmet_passthru_emulate_id_desclist(struct nvmet_req *req)
 	nvmet_req_complete(req, status);
 }
 
+u16 nvmet_setup_passthru_command(struct nvmet_req *req)
+{
+	req->execute = nvmet_passthru_execute_cmd;
+	return NVME_SC_SUCCESS;
+}
+
 /*
  * In the passthru mode we support three types for commands:-
  * 1. Commands which are black-listed.
@@ -564,84 +570,55 @@ static void nvmet_passthru_emulate_id_desclist(struct nvmet_req *req)
  * 3. Commands which are emulated in the target code, since we can't rely
  *    on passthru-ctrl and cannot route through the target code.
  */
-static u16 nvmet_parse_passthru_admin_cmd(struct nvmet_req *req)
+u16 nvmet_parse_passthru_admin_cmd(struct nvmet_req *req)
 {
 	struct nvme_command *cmd = req->cmd;
-	u16 status = 0;
 
-	if (cmd->common.opcode >= nvme_admin_vendor_unique_start) {
-		/*
-		 * Passthru all vendor unique commands
-		 */
-		req->execute = nvmet_passthru_execute_cmd;
-		return status;
-	}
+	/*
+	 * Passthru all vendor unique commands
+	 */
+	if (cmd->common.opcode >= nvme_admin_vendor_unique_start)
+		return nvmet_setup_passthru_command(req);
 
 	switch (cmd->common.opcode) {
-	/* 2. commands which are routed through target code */
 	case nvme_admin_async_event:
-	/*
-	 * Right now we don't monitor any events for the passthru controller.
-	 * Instead generate asyn event notice for the ns-mgmt/format/attach
-	 * commands so that host can update it's ns-inventory.
-	 */
-		/* fallthru */
+		req->execute = nvmet_execute_async_event;
+		req->data_len = 0;
+		return 0;
 	case nvme_admin_keep_alive:
-	/*
-	 * Most PCIe ctrls don't support keep alive cmd, we route keep alive
-	 * to the non-passthru mode. In future please change this code when
-	 * PCIe ctrls with keep alive support available.
-	 */
-		status = nvmet_parse_admin_cmd(req);
-		break;
+		/*
+		 * Most PCIe ctrls don't support keep alive cmd, we route keep
+		 * alive to the non-passthru mode. In future please change this
+		 * code when PCIe ctrls with keep alive support available.
+		 */
+		req->execute = nvmet_execute_keep_alive;
+		req->data_len = 0;
+		return 0;
 	case nvme_admin_set_features:
 		switch (le32_to_cpu(req->cmd->features.fid)) {
 		case NVME_FEAT_ASYNC_EVENT:
 		case NVME_FEAT_KATO:
 		case NVME_FEAT_NUM_QUEUES:
-			status = nvmet_parse_admin_cmd(req);
-			break;
+			/* XXX: we'll need to do the same for get_features! */
+			req->execute = nvmet_execute_set_features;
+			req->data_len = 0;
+			return 0;
 		default:
-			req->execute = nvmet_passthru_execute_cmd;
+			return nvmet_setup_passthru_command(req);
 		}
 		break;
-	/* 3. commands which are emulated in the passthru code */
 	case nvme_admin_identify:
 		switch (req->cmd->identify.cns) {
 		case NVME_ID_CNS_NS_DESC_LIST:
 			req->execute = nvmet_passthru_emulate_id_desclist;
-			break;
+			req->data_len = 0;
+			return 0;
 		default:
-			req->execute = nvmet_passthru_execute_cmd;
+			return nvmet_setup_passthru_command(req);
 		}
 		break;
-	/* 4. By default, blacklist all admin commands */
 	default:
-
-		status = NVME_SC_INVALID_OPCODE | NVME_SC_DNR;
-		req->execute = NULL;
-		break;
+		/* By default, blacklist all admin commands */
+		return NVME_SC_INVALID_OPCODE | NVME_SC_DNR;
 	}
-
-	return status;
-}
-
-u16 nvmet_parse_passthru_cmd(struct nvmet_req *req)
-{
-	int ret;
-
-	if (unlikely(req->cmd->common.opcode == nvme_fabrics_command))
-		return nvmet_parse_fabrics_cmd(req);
-	else if (unlikely(req->sq->ctrl->subsys->type == NVME_NQN_DISC))
-		return nvmet_parse_discovery_cmd(req);
-
-	ret = nvmet_check_ctrl_status(req, req->cmd);
-	if (unlikely(ret))
-		return ret;
-
-	if (unlikely(req->sq->qid == 0))
-		return nvmet_parse_passthru_admin_cmd(req);
-
-	req->execute = nvmet_passthru_execute_cmd;
-	return NVME_SC_SUCCESS;
 }
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index ba7690979661..200ec07507cd 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -390,6 +390,7 @@ void nvmet_req_complete(struct nvmet_req *req, u16 status);
 int nvmet_req_alloc_sgl(struct nvmet_req *req);
 void nvmet_req_free_sgl(struct nvmet_req *req);
 
+void nvmet_execute_set_features(struct nvmet_req *req);
 void nvmet_execute_keep_alive(struct nvmet_req *req);
 
 void nvmet_cq_setup(struct nvmet_ctrl *ctrl, struct nvmet_cq *cq, u16 qid,
@@ -508,22 +509,19 @@ static inline u32 nvmet_rw_len(struct nvmet_req *req)
 }
 
 #ifdef CONFIG_NVME_TARGET_PASSTHRU
-
 int nvmet_passthru_init(void);
 void nvmet_passthru_destroy(void);
 void nvmet_passthru_subsys_free(struct nvmet_subsys *subsys);
 int nvmet_passthru_ctrl_enable(struct nvmet_subsys *subsys);
 void nvmet_passthru_ctrl_disable(struct nvmet_subsys *subsys);
-u16 nvmet_parse_passthru_cmd(struct nvmet_req *req);
+u16 nvmet_parse_passthru_admin_cmd(struct nvmet_req *req);
+u16 nvmet_setup_passthru_command(struct nvmet_req *req);
 
-static inline
-struct nvme_ctrl *nvmet_passthru_ctrl(struct nvmet_subsys *subsys)
+static inline struct nvme_ctrl *nvmet_passthru_ctrl(struct nvmet_subsys *subsys)
 {
 	return subsys->passthru_ctrl;
 }
-
 #else /* CONFIG_NVME_TARGET_PASSTHRU */
-
 static inline int nvmet_passthru_init(void)
 {
 	return 0;
@@ -541,12 +539,10 @@ static inline u16 nvmet_parse_passthru_cmd(struct nvmet_req *req)
 {
 	return 0;
 }
-static inline
-struct nvme_ctrl *nvmet_passthru_ctrl(struct nvmet_subsys *subsys)
+static inline struct nvme_ctrl *nvmet_passthru_ctrl(struct nvmet_subsys *subsys)
 {
 	return NULL;
 }
-
 #endif /* CONFIG_NVME_TARGET_PASSTHRU */
 
 static inline struct nvme_ctrl *nvmet_req_passthru_ctrl(struct nvmet_req *req)

Logan Gunthorpe Oct. 25, 2019, 11:09 p.m. UTC | #2
Hey,

Ok, I've got much of the work in progress: anything I haven't mentioned
below I should be able to get done for the next version of the patchset.

However, I have some remaining comments on the following feedback:

On 2019-10-10 6:34 a.m., Christoph Hellwig wrote:
>> +	/* don't support host memory buffer */
>> +	id->hmpre = 0;
>> +	id->hmmin = 0;
> 
> What about CMB/PMR?

The CMB and PMR are not specified in the identify structure. They are
specified in PCI registers so there's no need to override anything here.
I don't think it makes any sense to try to pass these through fabrics.

>> +	/*
>> +	 * When a passthru controller is set up using the nvme-loop transport it
>> +	 * will export the passthru ctrl subsysnqn (PCIe NVMe ctrl) and will fail
>> +	 * in nvme/host/core.c in the nvme_init_subsystem()->nvme_active_ctrl()
>> +	 * code path with a duplicate ctrl subsysnqn. In order to prevent that we
>> +	 * mask the passthru-ctrl subsysnqn with the target ctrl subsysnqn.
>> +	 */
>> +	memcpy(id->subnqn, ctrl->subsysnqn, sizeof(id->subnqn));
> 
> I don't think this is a good idea.  It will break multipathing when you
> export two ports of the original controller.  The whole idea of
> overwriting ctrlid and subsysnqn will also lead to huge problems with
> persistent reservations.  I think we need to pass through the subnqn
> and cntlid unmodified.

I think trying to clone the cntlid will cause bigger problems with
multipath... I'll inflict some ascii art on you to try and explain.

The fabrics code creates a new controller for every connection, so if
they all had the cntlid of the passed-through controller then we'd have
to restrict each passed-through controller to a single connection, which
means we can't have multipath over the fabrics side, because we'd end up
with something like this:

 +-----------------+      +-----------------+      +----------------+
 |Host A Subsys A  |      |Target Subsys A  |      |Host B Subsys A |
 | +--------------+|      |        +------+ | tcp  |        +------+|
 | | PCI Device   ||  -------------+ Ctrl +-----------------+ Ctrl ||
 | |      +------+||  |   |        |   0  | |      |        |   0  ||
 | |      | Ctrl +----+   |        +------+ |      |        +------+|
 | |      |   0  |||  |   |        +------+ | rdma |        +------+|
 | |      +------+||  +------------+ Ctrl +-----------------+ Ctrl ||
 | |      +------+||  |   |        |   0  | |      |        |   0  ||
 | |      | Ctrl |||  |   |        +------+ |      |        +------+|
 | |      |   1  |||  |   |        +------+ | loop |                |
 | |      +------+||  +------------+ Ctrl +----+   |                |
 | +--------------+|      |        |   0  | |  |   |                |
 |                 |      |        +------+ |  |   |                |
 |                 |      +-----------------+  |   +----------------+
 |                 |                           |
 |     ----------------------------------------+
 |    |            |
 |    |   +------+ |
 |    +---+ Ctrl | |
 |        |   0  | |
 |        +------+ |
 +-----------------+

Multipath doesn't work on Host B because all the controllers have the
same cntlid, and it doesn't work on Host A for the loopback interface
because the cntlid conflicts with the cntlid of the original device. If
we allow the target to assign cntlids from the IDA, per usual, I think
we have a much better situation:

 +-----------------+      +-----------------+      +----------------+
 |Host A Subsys A  |      |Target Subsys A  |      |Host B Subsys A |
 | +--------------+|      |        +------+ | tcp  |        +------+|
 | | PCI Device   ||  -------------+ Ctrl +-----------------+ Ctrl ||
 | |      +------+||  |   |        |   0  | |      |        |   0  ||
 | |      | Ctrl +----+   |        +------+ |      |        +------+|
 | |      |   0  |||  |   |        +------+ | rdma |        +------+|
 | |      +------+||  +------------+ Ctrl +-----------------+ Ctrl ||
 | |      +------+||  |   |        |   1  | |      |        |   1  ||
 | |      | Ctrl |||  |   |        +------+ |      |        +------+|
 | |      |   1  |||  |   |        +------+ | loop |                |
 | |      +------+||  +------------+ Ctrl +----+   |                |
 | +--------------+|      |        |   2  | |  |   |                |
 |                 |      |        +------+ |  |   |                |
 |                 |      +-----------------+  |   +----------------+
 |                 |                           |
 |     ----------------------------------------+
 |    |            |
 |    |   +------+ |
 |    +---+ Ctrl | |
 |        |   2  | |
 |        +------+ |
 +-----------------+

Now multipath works for Host B and will work with the loopback to Host
A, but *only* if the target assigns it a cntlid that doesn't conflict
with one that was in the original device (a conflict is actually quite
common in the simple case of one device and one target). To deal with
this situation I contend it's better to replace the subsysnqn:

 +-----------------+      +-----------------+      +----------------+
 |Host A Subsys A  |      |Target Subsys B  |      | Host B Subsys B|
 | +--------------+|      |        +------+ | tcp  |        +------+|
 | | PCI Device   ||  -------------+ Ctrl +-----------------+ Ctrl ||
 | |      +------+||  |   |        |   2  | |      |        |   2  ||
 | |      | Ctrl +----+   |        +------+ |      |        +------+|
 | |      |   0  |||  |   |        +------+ | rdma |        +------+|
 | |      +------+||  +------------+ Ctrl +-----------------+ Ctrl ||
 | |      +------+||  |   |        |   1  | |      |        |   1  ||
 | |      | Ctrl |||  |   |        +------+ |      |        +------+|
 | |      |   1  |||  |   |        +------+ | loop |                |
 | |      +------+||  +------------+ Ctrl +----+   |                |
 | +--------------+|      |        |   0  | |  |   |                |
 +-----------------+      |        +------+ |  |   |                |
 +-----------------+      +-----------------+  |   +----------------+
 |Host A Subsys B  |                           |
 |     ----------------------------------------+
 |    |            |
 |    |   +------+ |
 |    +---+ Ctrl | |
 |        |   0  | |
 |        +------+ |
 +-----------------+

This way loopback is always guaranteed to work, and multipath over
fabrics will still work as well, because the hosts are never exposed to
the original subsysnqn. Using a loopback device is really only useful
for testing, so I don't think using it as a path in a multipath setup
will ever make sense, and thus we don't lose anything valuable by not
having the same subsysnqn for the looped-back host.

The first problem with this is if someone wants to passthru two ports
of a multiport PCI device. If the cntlids and the subsysnqns were copied
we could theoretically do something like this:

 +-----------------+     +-----------------+       +-----------------+
 |Host A Subsys A  |     |Target Subsys A  |       |Host B Subsys A  |
 | +--------------+|     |        +------+ |       |        +------+ |
 | | PCI Device   ||  ------------+ Ctrl +------------------+ Ctrl | |
 | |      +------+||  |  |        |   0  | |       |        |   0  | |
 | |      | Ctrl +----+  |        +------+ |       |        +------+ |
 | |      |   0  |||     +-----------------+       |        +------+ |
 | |      +------+||     +-----------------+    ------------+ Ctrl | |
 | |      +------+||     | Target Subsys A |    |  |        |   1  | |
 | |      | Ctrl +----+  |        +------+ |    |  |        +------+ |
 | |      |   1  |||  |  |        | Ctrl | |    |  |                 |
 | |      +------+||  -----------+|   1  +------+  +-----------------+
 | +--------------+|     |        +------+ |
 +-----------------+     +-----------------+

But this is awkward because we now have two subsystems that will require
different nqns in configfs but will expose the same nqn as the original
device in the identify command. If we try to make it less awkward by
allowing a target subsystem to point to multiple ctrls (of the same
subsystem) then we end up having to deal with a bunch of multipath
complexity like implementing multipath for admin commands, etc. Not to
mention the current passthru code is pretty much bypassing all the core
multipath code so this would all have to be reworked significantly.

I would argue that if someone wants to create a target for a multi-port
PCI device and have multipath through both ports, then they should use
the regular block device target and not a passthru target -- then it
will all work using the existing multipath support in the core.

The second problem with substituting cntlids is that some admin commands,
like the namespace attach command, take the cntlid as an input, so we'd
have to translate those somehow if we want to pass them through. I think
this should be possible; however, I don't have any hardware that
implements this, so it would be hard for me to test any solution for this
problem. So, for now, we've chosen just to reject such admin commands.

>> +	/* Support multipath connections with fabrics */
>> +	id->cmic |= 1 << 1;
> 
> I don't think we can just overwrite this, we need to use the original
> controllers values.

If we don't overwrite this, then none of the multipath-over-fabrics
scenarios from above (that we do want to support) will work with any
device that doesn't advertise this bit. As long as we set the bit,
multipath over the fabrics connection will work just fine.

>> +	/* 4. By default, blacklist all admin commands */
>> +	default:
>> +
>> +		status = NVME_SC_INVALID_OPCODE | NVME_SC_DNR;
>> +		req->execute = NULL;
>> +		break;
>> +	}
> 
> That seems odd.  There is plenty of other useful admin commands.
> 
> Yes, we need to ignore the PCIe specific ones:
> 
>  - Create I/O Completion Queue
>  - Create I/O Submission Queue
>  - Delete I/O Completion Queue
>  - Delete I/O Submission Queue
>  - Doorbell Buffer Configuration
>  - Virtualization Management
> 
> but all others seem perfectly valid to pass through.

This was based on Sagi's feedback[1]. He contends that the format NVM
command is not safe in an environment where there might be multiple
hosts. For similar reasons, firmware commands and others might be
dangerous too. We also have to reject NS attach commands for the reasons
outlined above. So it certainly seems like there are more admin commands
than not that we need to at least be careful of. Starting with everything
blacklisted and then adding the commands that are interesting to pass
through (and that we can properly reason won't break things) seems like
a prudent approach. For our use cases, we largely only care about
identify commands and vendor specific commands.

Logan


[1]
https://lore.kernel.org/linux-block/e4430207-7def-8776-0289-0d58689dc0cd@grimberg.me/

Patch

diff --git a/drivers/nvme/target/Kconfig b/drivers/nvme/target/Kconfig
index d7f48c0fb311..2478cb5a932d 100644
--- a/drivers/nvme/target/Kconfig
+++ b/drivers/nvme/target/Kconfig
@@ -15,6 +15,16 @@  config NVME_TARGET
 	  To configure the NVMe target you probably want to use the nvmetcli
 	  tool from http://git.infradead.org/users/hch/nvmetcli.git.
 
+config NVME_TARGET_PASSTHRU
+	bool "NVMe Target Passthrough support"
+	depends on NVME_CORE
+	depends on NVME_TARGET
+	help
+	  This enables target side NVMe passthru controller support for the
+	  NVMe Over Fabrics protocol. It allows for hosts to manage and
+	  directly access an actual NVMe controller residing on the target
+	  side, including executing Vendor Unique Commands.
+
 config NVME_TARGET_LOOP
 	tristate "NVMe loopback device support"
 	depends on NVME_TARGET
diff --git a/drivers/nvme/target/Makefile b/drivers/nvme/target/Makefile
index 2b33836f3d3e..bf57799fde63 100644
--- a/drivers/nvme/target/Makefile
+++ b/drivers/nvme/target/Makefile
@@ -11,6 +11,7 @@  obj-$(CONFIG_NVME_TARGET_TCP)		+= nvmet-tcp.o
 
 nvmet-y		+= core.o configfs.o admin-cmd.o fabrics-cmd.o \
 			discovery.o io-cmd-file.o io-cmd-bdev.o
+nvmet-$(CONFIG_NVME_TARGET_PASSTHRU)	+= io-cmd-passthru.o
 nvme-loop-y	+= loop.o
 nvmet-rdma-y	+= rdma.o
 nvmet-fc-y	+= fc.o
diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index d6dcb86d8be7..256f765e772b 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -896,6 +896,8 @@  bool nvmet_req_init(struct nvmet_req *req, struct nvmet_cq *cq,
 	if (unlikely(!req->sq->ctrl))
 		/* will return an error for any Non-connect command: */
 		status = nvmet_parse_connect_cmd(req);
+	else if (nvmet_req_passthru_ctrl(req))
+		status = nvmet_parse_passthru_cmd(req);
 	else if (likely(req->sq->qid != 0))
 		status = nvmet_parse_io_cmd(req);
 	else if (nvme_is_fabrics(req->cmd))
@@ -1463,11 +1465,15 @@  static int __init nvmet_init(void)
 
 	nvmet_ana_group_enabled[NVMET_DEFAULT_ANA_GRPID] = 1;
 
+	error = nvmet_passthru_init();
+	if (error)
+		goto out;
+
 	buffered_io_wq = alloc_workqueue("nvmet-buffered-io-wq",
 			WQ_MEM_RECLAIM, 0);
 	if (!buffered_io_wq) {
 		error = -ENOMEM;
-		goto out;
+		goto out_passthru_destroy;
 	}
 
 	error = nvmet_init_discovery();
@@ -1483,6 +1489,8 @@  static int __init nvmet_init(void)
 	nvmet_exit_discovery();
 out_free_work_queue:
 	destroy_workqueue(buffered_io_wq);
+out_passthru_destroy:
+	nvmet_passthru_destroy();
 out:
 	return error;
 }
@@ -1493,6 +1501,7 @@  static void __exit nvmet_exit(void)
 	nvmet_exit_discovery();
 	ida_destroy(&cntlid_ida);
 	destroy_workqueue(buffered_io_wq);
+	nvmet_passthru_destroy();
 
 	BUILD_BUG_ON(sizeof(struct nvmf_disc_rsp_page_entry) != 1024);
 	BUILD_BUG_ON(sizeof(struct nvmf_disc_rsp_page_hdr) != 1024);
diff --git a/drivers/nvme/target/io-cmd-passthru.c b/drivers/nvme/target/io-cmd-passthru.c
new file mode 100644
index 000000000000..1eb855b4071c
--- /dev/null
+++ b/drivers/nvme/target/io-cmd-passthru.c
@@ -0,0 +1,567 @@ 
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * NVMe Over Fabrics Target Passthrough command implementation.
+ *
+ * Copyright (c) 2017-2018 Western Digital Corporation or its
+ * affiliates.
+ */
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#include <linux/module.h>
+
+#include "../host/nvme.h"
+#include "nvmet.h"
+
+static struct workqueue_struct *passthru_wq;
+
+int nvmet_passthru_init(void)
+{
+	passthru_wq = alloc_workqueue("nvmet-passthru-wq", WQ_MEM_RECLAIM, 0);
+	if (!passthru_wq)
+		return -ENOMEM;
+
+	return 0;
+}
+
+void nvmet_passthru_destroy(void)
+{
+	destroy_workqueue(passthru_wq);
+}
+
+static void nvmet_passthru_req_complete(struct nvmet_req *req,
+		struct request *rq, u16 status)
+{
+	nvmet_req_complete(req, status);
+
+	if (rq)
+		blk_put_request(rq);
+}
+
+static void nvmet_passthru_req_done(struct request *rq,
+		blk_status_t blk_status)
+{
+	struct nvmet_req *req = rq->end_io_data;
+	u16 status = nvme_req(rq)->status;
+
+	req->cqe->result.u32 = nvme_req(rq)->result.u32;
+
+	nvmet_passthru_req_complete(req, rq, status);
+}
+
+static u16 nvmet_passthru_override_format_nvm(struct nvmet_req *req)
+{
+	int lbaf = le32_to_cpu(req->cmd->format.cdw10) & 0x0000000F;
+	int nsid = le32_to_cpu(req->cmd->format.nsid);
+	u16 status = NVME_SC_SUCCESS;
+	struct nvme_id_ns *id;
+	int ret;
+
+	ret = nvme_identify_ns(nvmet_req_passthru_ctrl(req), nsid, &id);
+	if (ret)
+		return NVME_SC_INTERNAL;
+	/*
+	 * XXX: Please update this code once NVMeOF target starts supporting
+	 * metadata. We don't support ns lba format with metadata over fabrics
+	 * right now, so report an error if format nvm cmd tries to format
+	 * a namespace with the LBA format which has metadata.
+	 */
+	if (id->lbaf[lbaf].ms)
+		status = NVME_SC_INVALID_NS;
+
+	kfree(id);
+	return status;
+}
+
+static void nvmet_passthru_set_mdts(struct nvmet_ctrl *ctrl,
+				    struct nvme_id_ctrl *id)
+{
+	struct nvme_ctrl *pctrl = ctrl->subsys->passthru_ctrl;
+	u32 max_hw_sectors;
+	int page_shift;
+
+	/*
+	 * The passthru NVMe driver may have a limit on the number
+	 * of segments which depends on the host's memory fragmentation.
+	 * To solve this, ensure mdts is limited to the pages equal to
+	 * the number of segments.
+	 */
+
+	max_hw_sectors = min_not_zero(pctrl->max_segments << (PAGE_SHIFT - 9),
+				      pctrl->max_hw_sectors);
+
+	page_shift = NVME_CAP_MPSMIN(ctrl->cap) + 12;
+
+	id->mdts = ilog2(max_hw_sectors) + 9 - page_shift;
+}
+
+static u16 nvmet_passthru_override_id_ctrl(struct nvmet_req *req)
+{
+	struct nvmet_ctrl *ctrl = req->sq->ctrl;
+	u16 status = NVME_SC_SUCCESS;
+	struct nvme_id_ctrl *id;
+
+	id = kzalloc(sizeof(*id), GFP_KERNEL);
+	if (!id) {
+		status = NVME_SC_INTERNAL;
+		goto out;
+	}
+	status = nvmet_copy_from_sgl(req, 0, id, sizeof(*id));
+	if (status)
+		goto out_free;
+
+	id->cntlid = cpu_to_le16(ctrl->cntlid);
+	id->ver = cpu_to_le32(ctrl->subsys->ver);
+
+	nvmet_passthru_set_mdts(ctrl, id);
+
+	id->acl = 3;
+	/*
+	 * We export aerl limit for the fabrics controller, update this when
+	 * passthru based aerl support is added.
+	 */
+	id->aerl = NVMET_ASYNC_EVENTS - 1;
+
+	/* emulate kas as most PCIe ctrls don't have support for kas */
+	id->kas = cpu_to_le16(NVMET_KAS);
+
+	/* don't support host memory buffer */
+	id->hmpre = 0;
+	id->hmmin = 0;
+
+	id->sqes = min_t(__u8, ((0x6 << 4) | 0x6), id->sqes);
+	id->cqes = min_t(__u8, ((0x4 << 4) | 0x4), id->cqes);
+	id->maxcmd = cpu_to_le16(NVMET_MAX_CMD);
+
+	/* don't support fuse commands */
+	id->fuses = 0;
+
+	id->sgls = cpu_to_le32(1 << 0); /* we always support SGLs */
+	if (ctrl->ops->has_keyed_sgls)
+		id->sgls |= cpu_to_le32(1 << 2);
+	if (req->port->inline_data_size)
+		id->sgls |= cpu_to_le32(1 << 20);
+
+	/*
+	 * When a passthru controller is set up using the nvme-loop transport it
+	 * will export the passthru ctrl subsysnqn (PCIe NVMe ctrl) and will fail
+	 * in nvme/host/core.c in the nvme_init_subsystem()->nvme_active_ctrl()
+	 * code path with a duplicate ctrl subsysnqn. In order to prevent that we
+	 * mask the passthru-ctrl subsysnqn with the target ctrl subsysnqn.
+	 */
+	memcpy(id->subnqn, ctrl->subsysnqn, sizeof(id->subnqn));
+
+	/* use fabric id-ctrl values */
+	id->ioccsz = cpu_to_le32((sizeof(struct nvme_command) +
+				req->port->inline_data_size) / 16);
+	id->iorcsz = cpu_to_le32(sizeof(struct nvme_completion) / 16);
+
+	id->msdbd = ctrl->ops->msdbd;
+
+	/* Support multipath connections with fabrics */
+	id->cmic |= 1 << 1;
+
+	status = nvmet_copy_to_sgl(req, 0, id, sizeof(struct nvme_id_ctrl));
+
+out_free:
+	kfree(id);
+out:
+	return status;
+}
+
+static u16 nvmet_passthru_override_id_ns(struct nvmet_req *req)
+{
+	u16 status = NVME_SC_SUCCESS;
+	struct nvme_id_ns *id;
+	int i;
+
+	id = kzalloc(sizeof(*id), GFP_KERNEL);
+	if (!id) {
+		status = NVME_SC_INTERNAL;
+		goto out;
+	}
+
+	status = nvmet_copy_from_sgl(req, 0, id, sizeof(struct nvme_id_ns));
+	if (status)
+		goto out_free;
+
+	for (i = 0; i < (id->nlbaf + 1); i++)
+		if (id->lbaf[i].ms)
+			memset(&id->lbaf[i], 0, sizeof(id->lbaf[i]));
+
+	id->flbas = id->flbas & ~(1 << 4);
+	id->mc = 0;
+
+	status = nvmet_copy_to_sgl(req, 0, id, sizeof(*id));
+
+out_free:
+	kfree(id);
+out:
+	return status;
+}
+
+static u16 nvmet_passthru_fixup_identify(struct nvmet_req *req)
+{
+	u16 status = NVME_SC_SUCCESS;
+
+	switch (req->cmd->identify.cns) {
+	case NVME_ID_CNS_CTRL:
+		status = nvmet_passthru_override_id_ctrl(req);
+		break;
+	case NVME_ID_CNS_NS:
+		status = nvmet_passthru_override_id_ns(req);
+		break;
+	}
+	return status;
+}
+
+static u16 nvmet_passthru_admin_passthru_start(struct nvmet_req *req)
+{
+	u16 status = NVME_SC_SUCCESS;
+
+	switch (req->cmd->common.opcode) {
+	case nvme_admin_format_nvm:
+		status = nvmet_passthru_override_format_nvm(req);
+		break;
+	}
+	return status;
+}
+
+static u16 nvmet_passthru_admin_passthru_end(struct nvmet_req *req)
+{
+	u8 aer_type = NVME_AER_TYPE_NOTICE;
+	u16 status = NVME_SC_SUCCESS;
+
+	switch (req->cmd->common.opcode) {
+	case nvme_admin_identify:
+		status = nvmet_passthru_fixup_identify(req);
+		break;
+	case nvme_admin_ns_mgmt:
+	case nvme_admin_ns_attach:
+	case nvme_admin_format_nvm:
+		if (nvmet_add_async_event(req->sq->ctrl, aer_type, 0, 0))
+			status = NVME_SC_INTERNAL;
+		break;
+	}
+	return status;
+}
+
+static void nvmet_passthru_execute_admin_cmd(struct nvmet_req *req)
+{
+	u8 opcode = req->cmd->common.opcode;
+	u32 effects;
+	u16 status;
+
+	status = nvmet_passthru_admin_passthru_start(req);
+	if (status)
+		goto out;
+
+	effects = nvme_passthru_start(nvmet_req_passthru_ctrl(req), NULL,
+				      opcode);
+
+	/*
+	 * Admin Commands have side effects and it is better to handle those
+	 * side effects in the submission thread context than in the request
+	 * completion path, which is in the interrupt context. Also in this
+	 * way, we keep the passthru admin command code path consistent with the
+	 * nvme/host/core.c sync command submission APIs/IOCTLs and use
+	 * nvme_passthru_start/end() to handle side effects consistently.
+	 */
+	blk_execute_rq(req->p.rq->q, NULL, req->p.rq, 0);
+
+	nvme_passthru_end(nvmet_req_passthru_ctrl(req), effects);
+	status = nvmet_passthru_admin_passthru_end(req);
+out:
+	if (status == NVME_SC_SUCCESS) {
+		nvmet_set_result(req, nvme_req(req->p.rq)->result.u32);
+		status = nvme_req(req->p.rq)->status;
+	}
+
+	nvmet_passthru_req_complete(req, req->p.rq, status);
+}
+
+static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
+{
+	int sg_cnt = req->sg_cnt;
+	struct scatterlist *sg;
+	int op = REQ_OP_READ;
+	int op_flags = 0;
+	struct bio *bio;
+	int i, ret;
+
+	if (req->cmd->common.opcode == nvme_cmd_flush) {
+		op_flags = REQ_FUA;
+	} else if (nvme_is_write(req->cmd)) {
+		op = REQ_OP_WRITE;
+		op_flags = REQ_SYNC | REQ_IDLE;
+	}
+
+	bio = bio_alloc(GFP_KERNEL, min(sg_cnt, BIO_MAX_PAGES));
+	bio->bi_end_io = bio_put;
+
+	for_each_sg(req->sg, sg, req->sg_cnt, i) {
+		if (bio_add_page(bio, sg_page(sg), sg->length,
+				 sg->offset) != sg->length) {
+			ret = blk_rq_append_bio(rq, &bio);
+			if (unlikely(ret))
+				return ret;
+
+			bio_set_op_attrs(bio, op, op_flags);
+			bio = bio_alloc(GFP_KERNEL,
+					min(sg_cnt, BIO_MAX_PAGES));
+		}
+		sg_cnt--;
+	}
+
+	ret = blk_rq_append_bio(rq, &bio);
+	if (unlikely(ret))
+		return ret;
+
+	return 0;
+}
+
+static struct request *nvmet_passthru_blk_make_request(struct nvmet_req *req,
+		struct nvme_ns *ns, gfp_t gfp_mask)
+{
+	struct nvme_ctrl *passthru_ctrl = nvmet_req_passthru_ctrl(req);
+	struct nvme_command *cmd = req->cmd;
+	struct request_queue *q;
+	struct request *rq;
+	int ret;
+
+	if (ns)
+		q = ns->queue;
+	else
+		q = passthru_ctrl->admin_q;
+
+	rq = nvme_alloc_request(q, cmd, BLK_MQ_REQ_NOWAIT, NVME_QID_ANY);
+	if (unlikely(IS_ERR(rq)))
+		return rq;
+
+	if (req->sg_cnt) {
+		ret = nvmet_passthru_map_sg(req, rq);
+		if (unlikely(ret)) {
+			blk_put_request(rq);
+			return ERR_PTR(ret);
+		}
+	}
+
+	/*
+	 * We don't support fused cmds, also nvme-pci driver uses its own
+	 * sgl_threshold parameter to decide whether to use SGLs or PRPs hence
+	 * turn off those bits in the flags.
+	 */
+	req->cmd->common.flags &= ~(NVME_CMD_FUSE_FIRST | NVME_CMD_FUSE_SECOND |
+			NVME_CMD_SGL_ALL);
+	return rq;
+}
+
+
+static void nvmet_passthru_execute_admin_work(struct work_struct *w)
+{
+	struct nvmet_req *req = container_of(w, struct nvmet_req, p.work);
+
+	nvmet_passthru_execute_admin_cmd(req);
+}
+
+static void nvmet_passthru_submit_admin_cmd(struct nvmet_req *req)
+{
+	INIT_WORK(&req->p.work, nvmet_passthru_execute_admin_work);
+	queue_work(passthru_wq, &req->p.work);
+}
+
+static void nvmet_passthru_execute_cmd(struct nvmet_req *req)
+{
+	struct request *rq = NULL;
+	struct nvme_ns *ns = NULL;
+	u16 status;
+
+	if (likely(req->sq->qid != 0)) {
+		u32 nsid = le32_to_cpu(req->cmd->common.nsid);
+
+		ns = nvme_find_get_ns(nvmet_req_passthru_ctrl(req), nsid);
+		if (unlikely(!ns)) {
+			pr_err("failed to get passthru ns nsid:%u\n", nsid);
+			status = NVME_SC_INVALID_NS | NVME_SC_DNR;
+			goto fail_out;
+		}
+	}
+
+	rq = nvmet_passthru_blk_make_request(req, ns, GFP_KERNEL);
+	if (unlikely(IS_ERR(rq))) {
+		rq = NULL;
+		status = NVME_SC_INTERNAL;
+		goto fail_out;
+	}
+
+	if (unlikely(blk_rq_nr_phys_segments(rq) > queue_max_segments(rq->q) ||
+	    (blk_rq_payload_bytes(rq) >> 9) > queue_max_hw_sectors(rq->q))) {
+		status = NVME_SC_INVALID_FIELD;
+		goto fail_out;
+	}
+
+	rq->end_io_data = req;
+	if (req->sq->qid != 0) {
+		blk_execute_rq_nowait(rq->q, NULL, rq, 0,
+				      nvmet_passthru_req_done);
+	} else {
+		req->p.rq = rq;
+		nvmet_passthru_submit_admin_cmd(req);
+	}
+
+	if (ns)
+		nvme_put_ns(ns);
+
+	return;
+
+fail_out:
+	if (ns)
+		nvme_put_ns(ns);
+	nvmet_passthru_req_complete(req, rq, status);
+}
+
+/*
+ * We emulate commands which are not routed through the existing target
+ * code and not supported by the passthru ctrl. E.g. consider a scenario
+ * where the passthru ctrl version is < 1.3.0 while the target fabrics
+ * ctrl version is >= 1.3.0; in that case, to be fabrics compliant, we
+ * need to emulate the ns-desc-list command, which exists as of 1.3.0 but
+ * is not present on the passthru ctrl due to its lower version.
+ */
+static void nvmet_passthru_emulate_id_desclist(struct nvmet_req *req)
+{
+	int nsid = le32_to_cpu(req->cmd->common.nsid);
+	u16 status = NVME_SC_SUCCESS;
+	struct nvme_ns_ids *ids;
+	struct nvme_ns *ns;
+	off_t off = 0;
+
+	ns = nvme_find_get_ns(nvmet_req_passthru_ctrl(req), nsid);
+	if (unlikely(!ns)) {
+		pr_err("failed to get passthru ns nsid:%u\n", nsid);
+		status = NVME_SC_INVALID_NS | NVME_SC_DNR;
+		goto out;
+	}
+	/*
+	 * Instead of refactoring and creating helpers, keep it simple and
+	 * just re-use the code from admin-cmd.c ->
+	 * nvmet_execute_identify_ns_desclist().
+	 */
+	ids = &ns->head->ids;
+	if (memchr_inv(ids->eui64, 0, sizeof(ids->eui64))) {
+		status = nvmet_copy_ns_identifier(req, NVME_NIDT_EUI64,
+						  NVME_NIDT_EUI64_LEN,
+						  &ids->eui64, &off);
+		if (status)
+			goto out_put_ns;
+	}
+	if (memchr_inv(&ids->uuid, 0, sizeof(ids->uuid))) {
+		status = nvmet_copy_ns_identifier(req, NVME_NIDT_UUID,
+						  NVME_NIDT_UUID_LEN,
+						  &ids->uuid, &off);
+		if (status)
+			goto out_put_ns;
+	}
+	if (memchr_inv(ids->nguid, 0, sizeof(ids->nguid))) {
+		status = nvmet_copy_ns_identifier(req, NVME_NIDT_NGUID,
+						  NVME_NIDT_NGUID_LEN,
+						  &ids->nguid, &off);
+		if (status)
+			goto out_put_ns;
+	}
+
+	if (sg_zero_buffer(req->sg, req->sg_cnt, NVME_IDENTIFY_DATA_SIZE - off,
+			off) != NVME_IDENTIFY_DATA_SIZE - off)
+		status = NVME_SC_INTERNAL | NVME_SC_DNR;
+out_put_ns:
+	nvme_put_ns(ns);
+out:
+	nvmet_req_complete(req, status);
+}
+
+/*
+ * In the passthru mode we support three types of commands:
+ * 1. Commands which are black-listed.
+ * 2. Commands which are routed through target code.
+ * 3. Commands which are emulated in the target code, since we can't rely
+ *    on passthru-ctrl and cannot route through the target code.
+ */
+static u16 nvmet_parse_passthru_admin_cmd(struct nvmet_req *req)
+{
+	struct nvme_command *cmd = req->cmd;
+	u16 status = 0;
+
+	if (cmd->common.opcode >= nvme_admin_vendor_unique_start) {
+		/*
+		 * Passthru all vendor unique commands
+		 */
+		req->execute = nvmet_passthru_execute_cmd;
+		return status;
+	}
+
+	switch (cmd->common.opcode) {
+	/* 2. commands which are routed through target code */
+	case nvme_admin_async_event:
+	/*
+	 * Right now we don't monitor any events for the passthru controller.
+	 * Instead generate asyn event notice for the ns-mgmt/format/attach
+	 * commands so that host can update it's ns-inventory.
+	 */
+		/* fallthru */
+	case nvme_admin_keep_alive:
+	/*
+	 * Most PCIe ctrls don't support keep alive cmd, we route keep alive
+	 * to the non-passthru mode. In future please change this code when
+	 * PCIe ctrls with keep alive support available.
+	 */
+		status = nvmet_parse_admin_cmd(req);
+		break;
+	case nvme_admin_set_features:
+		switch (le32_to_cpu(req->cmd->features.fid)) {
+		case NVME_FEAT_ASYNC_EVENT:
+		case NVME_FEAT_KATO:
+		case NVME_FEAT_NUM_QUEUES:
+			status = nvmet_parse_admin_cmd(req);
+			break;
+		default:
+			req->execute = nvmet_passthru_execute_cmd;
+		}
+		break;
+	/* 3. commands which are emulated in the passthru code */
+	case nvme_admin_identify:
+		switch (req->cmd->identify.cns) {
+		case NVME_ID_CNS_NS_DESC_LIST:
+			req->execute = nvmet_passthru_emulate_id_desclist;
+			break;
+		default:
+			req->execute = nvmet_passthru_execute_cmd;
+		}
+		break;
+	/* 4. By default, blacklist all admin commands */
+	default:
+
+		status = NVME_SC_INVALID_OPCODE | NVME_SC_DNR;
+		req->execute = NULL;
+		break;
+	}
+
+	return status;
+}
+
+u16 nvmet_parse_passthru_cmd(struct nvmet_req *req)
+{
+	int ret;
+
+	if (unlikely(req->cmd->common.opcode == nvme_fabrics_command))
+		return nvmet_parse_fabrics_cmd(req);
+	else if (unlikely(req->sq->ctrl->subsys->type == NVME_NQN_DISC))
+		return nvmet_parse_discovery_cmd(req);
+
+	ret = nvmet_check_ctrl_status(req, req->cmd);
+	if (unlikely(ret))
+		return ret;
+
+	if (unlikely(req->sq->qid == 0))
+		return nvmet_parse_passthru_admin_cmd(req);
+
+	req->execute = nvmet_passthru_execute_cmd;
+	return NVME_SC_SUCCESS;
+}
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 5dfd4e0ae096..daec1240307c 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -227,6 +227,10 @@  struct nvmet_subsys {
 
 	struct config_group	namespaces_group;
 	struct config_group	allowed_hosts_group;
+
+#ifdef CONFIG_NVME_TARGET_PASSTHRU
+	struct nvme_ctrl	*passthru_ctrl;
+#endif /* CONFIG_NVME_TARGET_PASSTHRU */
 };
 
 static inline struct nvmet_subsys *to_subsys(struct config_item *item)
@@ -302,6 +306,10 @@  struct nvmet_req {
 			struct bio_vec          *bvec;
 			struct work_struct      work;
 		} f;
+		struct {
+			struct request		*rq;
+			struct work_struct      work;
+		} p;
 	};
 	int			sg_cnt;
 	/* data length as parsed from the command: */
@@ -497,6 +505,44 @@  static inline u32 nvmet_rw_len(struct nvmet_req *req)
 			req->ns->blksize_shift;
 }
 
+#ifdef CONFIG_NVME_TARGET_PASSTHRU
+
+int nvmet_passthru_init(void);
+void nvmet_passthru_destroy(void);
+u16 nvmet_parse_passthru_cmd(struct nvmet_req *req);
+
+static inline
+struct nvme_ctrl *nvmet_passthru_ctrl(struct nvmet_subsys *subsys)
+{
+	return subsys->passthru_ctrl;
+}
+
+#else /* CONFIG_NVME_TARGET_PASSTHRU */
+
+static inline int nvmet_passthru_init(void)
+{
+	return 0;
+}
+static inline void nvmet_passthru_destroy(void)
+{
+}
+static inline u16 nvmet_parse_passthru_cmd(struct nvmet_req *req)
+{
+	return 0;
+}
+static inline
+struct nvme_ctrl *nvmet_passthru_ctrl(struct nvmet_subsys *subsys)
+{
+	return NULL;
+}
+
+#endif /* CONFIG_NVME_TARGET_PASSTHRU */
+
+static inline struct nvme_ctrl *nvmet_req_passthru_ctrl(struct nvmet_req *req)
+{
+	return nvmet_passthru_ctrl(req->sq->ctrl->subsys);
+}
+
 u16 errno_to_nvme_status(struct nvmet_req *req, int errno);
 
 /* Convert a 32-bit number to a 16-bit 0's based number */
diff --git a/include/linux/nvme.h b/include/linux/nvme.h
index f61d6906e59d..94e730b5d0a3 100644
--- a/include/linux/nvme.h
+++ b/include/linux/nvme.h
@@ -816,6 +816,7 @@  enum nvme_admin_opcode {
 	nvme_admin_security_recv	= 0x82,
 	nvme_admin_sanitize_nvm		= 0x84,
 	nvme_admin_get_lba_status	= 0x86,
+	nvme_admin_vendor_unique_start	= 0xC0,
 };
 
 #define nvme_admin_opcode_name(opcode)	{ opcode, #opcode }