Message ID | 20210519111340.20613-3-smalin@marvell.com (mailing list archive)
---|---
State | Superseded
Series | NVMeTCP Offload ULP and QEDN Device Driver
Context | Check | Description |
---|---|---
netdev/cover_letter | success | Link |
netdev/fixes_present | success | Link |
netdev/patch_count | fail | Series longer than 15 patches |
netdev/tree_selection | success | Guessed tree name to be net-next |
netdev/subject_prefix | success | Link |
netdev/cc_maintainers | success | CCed 5 of 5 maintainers |
netdev/source_inline | success | Was 0 now: 0 |
netdev/verify_signedoff | success | Link |
netdev/module_param | success | Was 0 now: 0 |
netdev/build_32bit | success | Errors and warnings before: 6 this patch: 6 |
netdev/kdoc | success | Errors and warnings before: 2 this patch: 2 |
netdev/verify_fixes | success | Link |
netdev/checkpatch | success | total: 0 errors, 0 warnings, 0 checks, 26 lines checked |
netdev/build_allmodconfig_warn | success | Errors and warnings before: 6 this patch: 6 |
netdev/header_inline | success | Link |
On 5/19/21 6:13 AM, Shai Malin wrote:
> From: Arie Gershberg <agershberg@marvell.com>
>
> Move NVMF_ALLOWED_OPTS and NVMF_REQUIRED_OPTS definitions
> to header file, so it can be used by the different HW devices.
>
> NVMeTCP offload devices might have different limitations of the
> allowed options, for example, a device that does not support all the
> queue types. With tcp and rdma, only the nvme-tcp and nvme-rdma layers
> handle those attributes and the HW devices do not create any limitations
> for the allowed options.
>
> An alternative design could be to add separate fields in nvme_tcp_ofld_ops
> such as max_hw_sectors and max_segments that we already have in this
> series.
>
> Acked-by: Igor Russkikh <irusskikh@marvell.com>
> Signed-off-by: Arie Gershberg <agershberg@marvell.com>
> Signed-off-by: Prabhakar Kushwaha <pkushwaha@marvell.com>
> Signed-off-by: Omkar Kulkarni <okulkarni@marvell.com>
> Signed-off-by: Michal Kalderon <mkalderon@marvell.com>
> Signed-off-by: Ariel Elior <aelior@marvell.com>
> Signed-off-by: Shai Malin <smalin@marvell.com>
> ---
>  drivers/nvme/host/fabrics.c | 7 -------
>  drivers/nvme/host/fabrics.h | 7 +++++++
>  2 files changed, 7 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
> index a2bb7fc63a73..e1e05aa2fada 100644
> --- a/drivers/nvme/host/fabrics.c
> +++ b/drivers/nvme/host/fabrics.c
> @@ -942,13 +942,6 @@ void nvmf_free_options(struct nvmf_ctrl_options *opts)
>  }
>  EXPORT_SYMBOL_GPL(nvmf_free_options);
>
> -#define NVMF_REQUIRED_OPTS	(NVMF_OPT_TRANSPORT | NVMF_OPT_NQN)
> -#define NVMF_ALLOWED_OPTS	(NVMF_OPT_QUEUE_SIZE | NVMF_OPT_NR_IO_QUEUES | \
> -				 NVMF_OPT_KATO | NVMF_OPT_HOSTNQN | \
> -				 NVMF_OPT_HOST_ID | NVMF_OPT_DUP_CONNECT |\
> -				 NVMF_OPT_DISABLE_SQFLOW |\
> -				 NVMF_OPT_FAIL_FAST_TMO)
> -
>  static struct nvme_ctrl *
>  nvmf_create_ctrl(struct device *dev, const char *buf)
>  {
> diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
> index d7f7974dc208..ce7fe3a842b1 100644
> --- a/drivers/nvme/host/fabrics.h
> +++ b/drivers/nvme/host/fabrics.h
> @@ -68,6 +68,13 @@ enum {
>  	NVMF_OPT_FAIL_FAST_TMO	= 1 << 20,
>  };
>
> +#define NVMF_REQUIRED_OPTS	(NVMF_OPT_TRANSPORT | NVMF_OPT_NQN)
> +#define NVMF_ALLOWED_OPTS	(NVMF_OPT_QUEUE_SIZE | NVMF_OPT_NR_IO_QUEUES | \
> +				 NVMF_OPT_KATO | NVMF_OPT_HOSTNQN | \
> +				 NVMF_OPT_HOST_ID | NVMF_OPT_DUP_CONNECT |\
> +				 NVMF_OPT_DISABLE_SQFLOW |\
> +				 NVMF_OPT_FAIL_FAST_TMO)
> +
>  /**
>   * struct nvmf_ctrl_options - Used to hold the options specified
>   * with the parsing opts enum.

Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>
> From: Arie Gershberg <agershberg@marvell.com>
>
> Move NVMF_ALLOWED_OPTS and NVMF_REQUIRED_OPTS definitions
> to header file, so it can be used by the different HW devices.
>
> NVMeTCP offload devices might have different limitations of the
> allowed options, for example, a device that does not support all the
> queue types. With tcp and rdma, only the nvme-tcp and nvme-rdma layers
> handle those attributes and the HW devices do not create any limitations
> for the allowed options.
>
> An alternative design could be to add separate fields in nvme_tcp_ofld_ops
> such as max_hw_sectors and max_segments that we already have in this
> series.

Seems harmless...

Acked-by: Sagi Grimberg <sagi@grimberg.me>
```diff
diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index a2bb7fc63a73..e1e05aa2fada 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -942,13 +942,6 @@ void nvmf_free_options(struct nvmf_ctrl_options *opts)
 }
 EXPORT_SYMBOL_GPL(nvmf_free_options);
 
-#define NVMF_REQUIRED_OPTS	(NVMF_OPT_TRANSPORT | NVMF_OPT_NQN)
-#define NVMF_ALLOWED_OPTS	(NVMF_OPT_QUEUE_SIZE | NVMF_OPT_NR_IO_QUEUES | \
-				 NVMF_OPT_KATO | NVMF_OPT_HOSTNQN | \
-				 NVMF_OPT_HOST_ID | NVMF_OPT_DUP_CONNECT |\
-				 NVMF_OPT_DISABLE_SQFLOW |\
-				 NVMF_OPT_FAIL_FAST_TMO)
-
 static struct nvme_ctrl *
 nvmf_create_ctrl(struct device *dev, const char *buf)
 {
diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
index d7f7974dc208..ce7fe3a842b1 100644
--- a/drivers/nvme/host/fabrics.h
+++ b/drivers/nvme/host/fabrics.h
@@ -68,6 +68,13 @@ enum {
 	NVMF_OPT_FAIL_FAST_TMO	= 1 << 20,
 };
 
+#define NVMF_REQUIRED_OPTS	(NVMF_OPT_TRANSPORT | NVMF_OPT_NQN)
+#define NVMF_ALLOWED_OPTS	(NVMF_OPT_QUEUE_SIZE | NVMF_OPT_NR_IO_QUEUES | \
+				 NVMF_OPT_KATO | NVMF_OPT_HOSTNQN | \
+				 NVMF_OPT_HOST_ID | NVMF_OPT_DUP_CONNECT |\
+				 NVMF_OPT_DISABLE_SQFLOW |\
+				 NVMF_OPT_FAIL_FAST_TMO)
+
 /**
  * struct nvmf_ctrl_options - Used to hold the options specified
  * with the parsing opts enum.
```