
[RFC,03/11] nvmet: Add nvmet_fabrics_ops flag to indicate SGLs not supported

Message ID 20250313052222.178524-4-michael.christie@oracle.com (mailing list archive)
State New
Series nvmet: Add NVMe target mdev/vfio driver

Commit Message

Mike Christie March 13, 2025, 5:18 a.m. UTC
The nvmet_mdev_pci driver does not initially support SGLs. In some
preliminary testing I did not see a performance gain (the virt-related
interface may be the major bottleneck, so I may not be able to notice
one), so I was not sure whether they will be required. This adds a
nvmet_fabrics_ops flag so the nvmet core can tell the host that we do
not support SGLs.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
---
 drivers/nvme/target/admin-cmd.c | 13 +++++++------
 drivers/nvme/target/nvmet.h     |  1 +
 2 files changed, 8 insertions(+), 6 deletions(-)

Comments

Christoph Hellwig March 13, 2025, 6:37 a.m. UTC | #1
On Thu, Mar 13, 2025 at 12:18:04AM -0500, Mike Christie wrote:
> The nvmet_mdev_pci driver does not initially support SGLs. In some
> prelim testing I don't think there will be a perf gain (the virt related
> interface may be the major bottleneck so I may not notice) so I wasn't
> sure if they will be required/needed. This adds a nvmet_fabrics_ops flag
> so we can tell nvmet core to tell the host we do not supports SGLS.

I'd prefer to be able to support SGLs if we can, but if not I could
live with a flag of some sort.
Damien Le Moal March 13, 2025, 9:02 a.m. UTC | #2
On 3/13/25 14:18, Mike Christie wrote:
> The nvmet_mdev_pci driver does not initially support SGLs. In some
> prelim testing I don't think there will be a perf gain (the virt related
> interface may be the major bottleneck so I may not notice) so I wasn't
> sure if they will be required/needed. This adds a nvmet_fabrics_ops flag
> so we can tell nvmet core to tell the host we do not supports SGLS.

That is a major spec violation as NVMe fabrics mandates SGL support.
So at the very least, this needs a big fat warning comment explaining what this
is about.

> 
> Signed-off-by: Mike Christie <michael.christie@oracle.com>
> ---
>  drivers/nvme/target/admin-cmd.c | 13 +++++++------
>  drivers/nvme/target/nvmet.h     |  1 +
>  2 files changed, 8 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
> index acc138bbf8f2..486ed6f7b717 100644
> --- a/drivers/nvme/target/admin-cmd.c
> +++ b/drivers/nvme/target/admin-cmd.c
> @@ -755,12 +755,13 @@ static void nvmet_execute_identify_ctrl(struct nvmet_req *req)
>  	id->awun = 0;
>  	id->awupf = 0;
>  
> -	/* we always support SGLs */
> -	id->sgls = cpu_to_le32(NVME_CTRL_SGLS_BYTE_ALIGNED);
> -	if (ctrl->ops->flags & NVMF_KEYED_SGLS)
> -		id->sgls |= cpu_to_le32(NVME_CTRL_SGLS_KSDBDS);
> -	if (req->port->inline_data_size)
> -		id->sgls |= cpu_to_le32(NVME_CTRL_SGLS_SAOS);
> +	if (!(ctrl->ops->flags & NVMF_SGLS_NOT_SUPP)) {
> +		id->sgls = cpu_to_le32(NVME_CTRL_SGLS_BYTE_ALIGNED);
> +		if (ctrl->ops->flags & NVMF_KEYED_SGLS)
> +			id->sgls |= cpu_to_le32(NVME_CTRL_SGLS_KSDBDS);
> +		if (req->port->inline_data_size)
> +			id->sgls |= cpu_to_le32(NVME_CTRL_SGLS_SAOS);
> +	}
>  
>  	strscpy(id->subnqn, ctrl->subsys->subsysnqn, sizeof(id->subnqn));
>  
> diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
> index fcf4f460dc9a..ec3d10eb316a 100644
> --- a/drivers/nvme/target/nvmet.h
> +++ b/drivers/nvme/target/nvmet.h
> @@ -403,6 +403,7 @@ struct nvmet_fabrics_ops {
>  	unsigned int flags;
>  #define NVMF_KEYED_SGLS			(1 << 0)
>  #define NVMF_METADATA_SUPPORTED		(1 << 1)
> +#define NVMF_SGLS_NOT_SUPP		(1 << 2)
>  	void (*queue_response)(struct nvmet_req *req);
>  	int (*add_port)(struct nvmet_port *port);
>  	void (*remove_port)(struct nvmet_port *port);
Christoph Hellwig March 13, 2025, 9:13 a.m. UTC | #3
On Thu, Mar 13, 2025 at 06:02:29PM +0900, Damien Le Moal wrote:
> On 3/13/25 14:18, Mike Christie wrote:
> > The nvmet_mdev_pci driver does not initially support SGLs. In some
> > prelim testing I don't think there will be a perf gain (the virt related
> > interface may be the major bottleneck so I may not notice) so I wasn't
> > sure if they will be required/needed. This adds a nvmet_fabrics_ops flag
> > so we can tell nvmet core to tell the host we do not supports SGLS.
> 
> That is a major spec violation as NVMe fabrics mandates SGL support.

But this is a PCIe controller implementation, not fabrics.

Fabrics does not support PRPs and has very different SGLs from the
PCIe ones.  The fact that the spec conflates those in very confusing
ways is one of the big mistakes in the spec.
Damien Le Moal March 13, 2025, 9:16 a.m. UTC | #4
On 3/13/25 18:13, Christoph Hellwig wrote:
> On Thu, Mar 13, 2025 at 06:02:29PM +0900, Damien Le Moal wrote:
>> On 3/13/25 14:18, Mike Christie wrote:
>>> The nvmet_mdev_pci driver does not initially support SGLs. In some
>>> prelim testing I don't think there will be a perf gain (the virt related
>>> interface may be the major bottleneck so I may not notice) so I wasn't
>>> sure if they will be required/needed. This adds a nvmet_fabrics_ops flag
>>> so we can tell nvmet core to tell the host we do not supports SGLS.
>>
>> That is a major spec violation as NVMe fabrics mandates SGL support.
> 
> But this is a PCIe controller implementation, not fabrics.

Ah ! yes !

> Fabrics does not support PRPs and has very different SGLs from the
> PCIe ones.  The fact that the spec conflates those in very confusing
> ways is one of the big mistakes in the spec.

Yes, and despite tripping on this several times with pci-epf, I did it again :)

pci-epf has code for handling both PCI PRPs and SGLs. We can probably make that
common with mdev to facilitate SGL support.
Mike Christie March 13, 2025, 5:19 p.m. UTC | #5
On 3/13/25 4:16 AM, Damien Le Moal wrote:
> On 3/13/25 18:13, Christoph Hellwig wrote:
>> On Thu, Mar 13, 2025 at 06:02:29PM +0900, Damien Le Moal wrote:
>>> On 3/13/25 14:18, Mike Christie wrote:
>>>> The nvmet_mdev_pci driver does not initially support SGLs. In some
>>>> prelim testing I don't think there will be a perf gain (the virt related
>>>> interface may be the major bottleneck so I may not notice) so I wasn't
>>>> sure if they will be required/needed. This adds a nvmet_fabrics_ops flag
>>>> so we can tell nvmet core to tell the host we do not supports SGLS.
>>>
>>> That is a major spec violation as NVMe fabrics mandates SGL support.
>>
>> But this is a PCIe controller implementation, not fabrics.
> 
> Ah ! yes !
> 
>> Fabrics does not support PRPs and has very different SGLs from the
>> PCIe ones.  The fact that the spec conflates those in very confusing
>> ways is one of the big mistakes in the spec.
> 
> Yes, and despite tripping on this several times with pci-epf, I did it again :)
> 
> pci-epf has code for handling both PCI PRPs and SGL. We probably can make that
> common with mdev to facilitate SGL support.

Yes, we can. I have different patches depending on how much you guys
want to integrate things. I'll send them with the next submission.

Patch

diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
index acc138bbf8f2..486ed6f7b717 100644
--- a/drivers/nvme/target/admin-cmd.c
+++ b/drivers/nvme/target/admin-cmd.c
@@ -755,12 +755,13 @@  static void nvmet_execute_identify_ctrl(struct nvmet_req *req)
 	id->awun = 0;
 	id->awupf = 0;
 
-	/* we always support SGLs */
-	id->sgls = cpu_to_le32(NVME_CTRL_SGLS_BYTE_ALIGNED);
-	if (ctrl->ops->flags & NVMF_KEYED_SGLS)
-		id->sgls |= cpu_to_le32(NVME_CTRL_SGLS_KSDBDS);
-	if (req->port->inline_data_size)
-		id->sgls |= cpu_to_le32(NVME_CTRL_SGLS_SAOS);
+	if (!(ctrl->ops->flags & NVMF_SGLS_NOT_SUPP)) {
+		id->sgls = cpu_to_le32(NVME_CTRL_SGLS_BYTE_ALIGNED);
+		if (ctrl->ops->flags & NVMF_KEYED_SGLS)
+			id->sgls |= cpu_to_le32(NVME_CTRL_SGLS_KSDBDS);
+		if (req->port->inline_data_size)
+			id->sgls |= cpu_to_le32(NVME_CTRL_SGLS_SAOS);
+	}
 
 	strscpy(id->subnqn, ctrl->subsys->subsysnqn, sizeof(id->subnqn));
 
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index fcf4f460dc9a..ec3d10eb316a 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -403,6 +403,7 @@  struct nvmet_fabrics_ops {
 	unsigned int flags;
 #define NVMF_KEYED_SGLS			(1 << 0)
 #define NVMF_METADATA_SUPPORTED		(1 << 1)
+#define NVMF_SGLS_NOT_SUPP		(1 << 2)
 	void (*queue_response)(struct nvmet_req *req);
 	int (*add_port)(struct nvmet_port *port);
 	void (*remove_port)(struct nvmet_port *port);