
[v7,08/14] nvmet-core: allow one host per passthru-ctrl

Message ID 20190801234514.7941-9-logang@deltatee.com (mailing list archive)
State New, archived
Series nvmet: add target passthru commands support

Commit Message

Logan Gunthorpe Aug. 1, 2019, 11:45 p.m. UTC
This patch rejects any new connection to the passthru-ctrl if this
controller is already connected to a different host. At the time of
allocating the controller we check if the subsys associated with
the passthru ctrl is already connected to a host and reject it
if the hostnqn differs.

Connections from the same host (by hostnqn) are supported to allow
for multipath.

[chaitanya.kulkarni@wdc.com: based conceptually on a similar patch but
  different implementation]
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 drivers/nvme/target/core.c            |  4 ++++
 drivers/nvme/target/io-cmd-passthru.c | 31 +++++++++++++++++++++++++++
 drivers/nvme/target/nvmet.h           |  7 ++++++
 3 files changed, 42 insertions(+)

Comments

Max Gurtovoy Aug. 15, 2019, 12:36 p.m. UTC | #1
On 8/2/2019 2:45 AM, Logan Gunthorpe wrote:
> This patch rejects any new connection to the passthru-ctrl if this
> controller is already connected to a different host. At the time of
> allocating the controller we check if the subsys associated with
> the passthru ctrl is already connected to a host and reject it
> if the hostnqn differs.

This is a big limitation.

Are we planning to enable many front-end ctrls to connect to a single
back-end ctrl in the future?

I guess it can be added incrementally on top of this series.
Logan Gunthorpe Aug. 15, 2019, 4:06 p.m. UTC | #2
On 2019-08-15 6:36 a.m., Max Gurtovoy wrote:
> 
> On 8/2/2019 2:45 AM, Logan Gunthorpe wrote:
>> This patch rejects any new connection to the passthru-ctrl if this
>> controller is already connected to a different host. At the time of
>> allocating the controller we check if the subsys associated with
>> the passthru ctrl is already connected to a host and reject it
>> if the hostnqn differs.
> 
> This is a big limitation.
> 
> Are we planning to enable many front-end ctrls to connect to a single
> back-end ctrl in the future?

Honestly, I don't know that it's really necessary, but the limitation
was requested by Sagi the first time this patch-set was submitted[1]
citing unspecified user troubles. If there's consensus to remove the
restriction I certainly can.

Logan

[1] http://lists.infradead.org/pipermail/linux-nvme/2018-April/016588.html
Max Gurtovoy Aug. 18, 2019, 10:33 a.m. UTC | #3
On 8/15/2019 7:06 PM, Logan Gunthorpe wrote:
>
> On 2019-08-15 6:36 a.m., Max Gurtovoy wrote:
>> On 8/2/2019 2:45 AM, Logan Gunthorpe wrote:
>>> This patch rejects any new connection to the passthru-ctrl if this
>>> controller is already connected to a different host. At the time of
>>> allocating the controller we check if the subsys associated with
>>> the passthru ctrl is already connected to a host and reject it
>>> if the hostnqn differs.
>> This is a big limitation.
>>
>> Are we planning to enable many front-end ctrls to connect to a single
>> back-end ctrl in the future?
> Honestly, I don't know that it's really necessary, but the limitation
> was requested by Sagi the first time this patch-set was submitted[1]
> citing unspecified user troubles. If there's consensus to remove the
> restriction I certainly can.
>
> Logan
>
> [1] http://lists.infradead.org/pipermail/linux-nvme/2018-April/016588.html

I don't understand why we don't limit a regular ctrl to single access
but we do it for the PT ctrl.

I guess the block layer helps to synchronize multiple parallel accesses,
but we can do that as well.

Also, say you limit access to this subsystem to one user: the bdev is
still accessible to a local user, and you can also create a different
subsystem that uses the same device (PT and non-PT ctrl).

Sagi,

can you explain the trouble you meant and how this limitation solves it?
Sagi Grimberg Aug. 22, 2019, 12:09 a.m. UTC | #4
> I don't understand why we don't limit a regular ctrl to single access
> but we do it for the PT ctrl.
> 
> I guess the block layer helps to synchronize multiple parallel accesses,
> but we can do that as well.
> 
> Also, say you limit access to this subsystem to one user: the bdev is
> still accessible to a local user, and you can also create a different
> subsystem that uses the same device (PT and non-PT ctrl).
> 
> Sagi,
> 
> can you explain the trouble you meant and how this limitation solves it?

It's different to emulate the controller with all its admin
commands vs. passing them through to the nvme device (think of Format NVM).
Max Gurtovoy Aug. 22, 2019, 8:50 a.m. UTC | #5
On 8/22/2019 3:09 AM, Sagi Grimberg wrote:
>
>> I don't understand why we don't limit a regular ctrl to single access
>> but we do it for the PT ctrl.
>>
>> I guess the block layer helps to synchronize multiple parallel accesses,
>> but we can do that as well.
>>
>> Also, say you limit access to this subsystem to one user: the bdev is
>> still accessible to a local user, and you can also create a different
>> subsystem that uses the same device (PT and non-PT ctrl).
>>
>> Sagi,
>>
>> can you explain the trouble you meant and how this limitation solves it?
>
> It's different to emulate the controller with all its admin
> commands vs. passing them through to the nvme device (think of Format NVM).
>
We don't need to support the Format command for the PT ctrl, just as we
don't support other commands such as create_sq/cq.
Sagi Grimberg Aug. 22, 2019, 5:41 p.m. UTC | #6
>>> I don't understand why we don't limit a regular ctrl to single access
>>> but we do it for the PT ctrl.
>>>
>>> I guess the block layer helps to synchronize multiple parallel accesses,
>>> but we can do that as well.
>>>
>>> Also, say you limit access to this subsystem to one user: the bdev is
>>> still accessible to a local user, and you can also create a different
>>> subsystem that uses the same device (PT and non-PT ctrl).
>>>
>>> Sagi,
>>>
>>> can you explain the trouble you meant and how this limitation solves it?
>>
>> It's different to emulate the controller with all its admin
>> commands vs. passing them through to the nvme device (think of Format NVM).
>>
> We don't need to support the Format command for the PT ctrl, just as we
> don't support other commands such as create_sq/cq.

That is just an example; basically, every command that we are not aware
of is simply passed through to the drive without knowing the implications
in a multi-host environment.
Sagi Grimberg Aug. 22, 2019, 7:17 p.m. UTC | #7
>>>> I don't understand why we don't limit a regular ctrl to single access
>>>> but we do it for the PT ctrl.
>>>>
>>>> I guess the block layer helps to synchronize multiple parallel accesses,
>>>> but we can do that as well.
>>>>
>>>> Also, say you limit access to this subsystem to one user: the bdev is
>>>> still accessible to a local user, and you can also create a different
>>>> subsystem that uses the same device (PT and non-PT ctrl).
>>>>
>>>> Sagi,
>>>>
>>>> can you explain the trouble you meant and how this limitation solves it?
>>>
>>> It's different to emulate the controller with all its admin
>>> commands vs. passing them through to the nvme device (think of Format NVM).
>>>
>> We don't need to support the Format command for the PT ctrl, just as we
>> don't support other commands such as create_sq/cq.
> 
> That is just an example; basically, every command that we are not aware
> of is simply passed through to the drive without knowing the implications
> in a multi-host environment.

If we were to change the logic of nvmet_parse_passthru_admin_cmd to
have the default case call nvmet_parse_admin_cmd, and only have
the vendor-specific opcode space go through nvmet_passthru_execute_cmd,
then I cannot see at the moment how a multi-host export could break...
Logan Gunthorpe Aug. 22, 2019, 7:41 p.m. UTC | #8
On 2019-08-22 1:17 p.m., Sagi Grimberg wrote:
> 
>>>>> I don't understand why we don't limit a regular ctrl to single access
>>>>> but we do it for the PT ctrl.
>>>>>
>>>>> I guess the block layer helps to synchronize multiple parallel accesses,
>>>>> but we can do that as well.
>>>>>
>>>>> Also, say you limit access to this subsystem to one user: the bdev is
>>>>> still accessible to a local user, and you can also create a different
>>>>> subsystem that uses the same device (PT and non-PT ctrl).
>>>>>
>>>>> Sagi,
>>>>>
>>>>> can you explain the trouble you meant and how this limitation solves it?
>>>>
>>>> It's different to emulate the controller with all its admin
>>>> commands vs. passing them through to the nvme device (think of Format NVM).
>>>>
>>> We don't need to support the Format command for the PT ctrl, just as we
>>> don't support other commands such as create_sq/cq.
>>
>> That is just an example; basically, every command that we are not aware
>> of is simply passed through to the drive without knowing the implications
>> in a multi-host environment.
> 
> If we were to change the logic of nvmet_parse_passthru_admin_cmd to
> have the default case call nvmet_parse_admin_cmd, and only have
> the vendor-specific opcode space go through nvmet_passthru_execute_cmd,
> then I cannot see at the moment how a multi-host export could break...

That makes sense. I'll make that change and resend a v8 next week.

Logan
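
For reference, a rough sketch of what the suggested change could look like
(illustration only, not the actual v8 patch): vendor-specific admin opcodes
are passed through to the drive, while everything else falls back to the
regular emulated admin parsing. nvmet_parse_admin_cmd and
nvmet_passthru_execute_cmd are the helpers named in the discussion above;
the execute-callback wiring and the exact function shape are assumptions.
Opcodes C0h and up are vendor specific per the NVMe spec.

u16 nvmet_parse_passthru_admin_cmd(struct nvmet_req *req)
{
	/* Vendor-specific admin opcodes (0xC0 and up) go to the drive. */
	if (req->cmd->common.opcode >= 0xC0) {
		req->execute = nvmet_passthru_execute_cmd;
		return 0;
	}

	/*
	 * Everything else takes the normal emulated admin path, so
	 * commands with multi-host implications (e.g. Format NVM) keep
	 * the standard nvmet behaviour.
	 */
	return nvmet_parse_admin_cmd(req);
}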

Patch

diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index 2e75968af7f4..c655f26db3da 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -1278,6 +1278,10 @@  u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
 	if (!ctrl->sqs)
 		goto out_free_cqs;
 
+	ret = nvmet_passthru_alloc_ctrl(subsys, hostnqn);
+	if (ret)
+		goto out_free_sqs;
+
 	ret = ida_simple_get(&cntlid_ida,
 			     NVME_CNTLID_MIN, NVME_CNTLID_MAX,
 			     GFP_KERNEL);
diff --git a/drivers/nvme/target/io-cmd-passthru.c b/drivers/nvme/target/io-cmd-passthru.c
index b199785500ad..06a919283cc5 100644
--- a/drivers/nvme/target/io-cmd-passthru.c
+++ b/drivers/nvme/target/io-cmd-passthru.c
@@ -104,6 +104,37 @@  void nvmet_passthru_subsys_free(struct nvmet_subsys *subsys)
 	mutex_unlock(&subsys->lock);
 }
 
+int nvmet_passthru_alloc_ctrl(struct nvmet_subsys *subsys,
+			      const char *hostnqn)
+{
+	struct nvmet_ctrl *ctrl;
+
+	/*
+	 * Check here if this subsystem is already connected to the passthru
+	 * ctrl. We allow only one host to connect to a given passthru
+	 * subsystem.
+	 */
+	int rc = 0;
+
+	mutex_lock(&subsys->lock);
+
+	if (!subsys->passthru_ctrl)
+		goto out;
+
+	if (list_empty(&subsys->ctrls))
+		goto out;
+
+	ctrl = list_first_entry(&subsys->ctrls, struct nvmet_ctrl,
+				subsys_entry);
+
+	if (strcmp(hostnqn, ctrl->hostnqn))
+		rc = -ENODEV;
+
+out:
+	mutex_unlock(&subsys->lock);
+	return rc;
+}
+
 static void nvmet_passthru_req_complete(struct nvmet_req *req,
 		struct request *rq, u16 status)
 {
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index aff4db03269d..6436cb990905 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -513,6 +513,8 @@  void nvmet_passthru_destroy(void);
 void nvmet_passthru_subsys_free(struct nvmet_subsys *subsys);
 int nvmet_passthru_ctrl_enable(struct nvmet_subsys *subsys);
 void nvmet_passthru_ctrl_disable(struct nvmet_subsys *subsys);
+int nvmet_passthru_alloc_ctrl(struct nvmet_subsys *subsys,
+			      const char *hostnqn);
 u16 nvmet_parse_passthru_cmd(struct nvmet_req *req);
 
 static inline
@@ -536,6 +538,11 @@  static inline void nvmet_passthru_subsys_free(struct nvmet_subsys *subsys)
 static inline void nvmet_passthru_ctrl_disable(struct nvmet_subsys *subsys)
 {
 }
+static inline int nvmet_passthru_alloc_ctrl(struct nvmet_subsys *subsys,
+					    const char *hostnqn)
+{
+	return 0;
+}
 static inline u16 nvmet_parse_passthru_cmd(struct nvmet_req *req)
 {
 	return 0;