
[v7,04/23] Revert "nvme-tcp: remove the unused queue_size member in nvme_tcp_queue"

Message ID 20221025135958.6242-5-aaptel@nvidia.com (mailing list archive)
State Changes Requested
Series nvme-tcp receive offloads

Checks

Context Check Description
netdev/tree_selection success Guessed tree name to be net-next, async
netdev/fixes_present success Fixes tag not required for -next series
netdev/subject_prefix success Link
netdev/cover_letter success Series has a cover letter
netdev/patch_count fail Series longer than 15 patches (and no cover letter)
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 0 this patch: 0
netdev/cc_maintainers success CCed 5 of 5 maintainers
netdev/build_clang success Errors and warnings before: 0 this patch: 0
netdev/module_param success Was 0 now: 0
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 0 this patch: 0
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 39 lines checked
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Aurelien Aptel Oct. 25, 2022, 1:59 p.m. UTC
This reverts commit fb8745d040ef5b9080003325e56b91fefe1022bb.

The newly added NVMe-TCP offload requires the
nvme_tcp_queue->queue_size field in nvme_tcp_offload_socket(),
introduced by the patch "nvme-tcp: Add DDP offload control path".
The queue size is passed as part of the struct ulp_ddp_config
parameters.

Compared to the original patch, the space alignment to the open
parenthesis has been fixed.

Signed-off-by: Shai Malin <smalin@nvidia.com>
Signed-off-by: Aurelien Aptel <aaptel@nvidia.com>
---
 drivers/nvme/host/tcp.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)
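
For context, a minimal sketch of how the restored queue->queue_size could
feed the DDP configuration in nvme_tcp_offload_socket(). The struct
ulp_ddp_config member names below are assumptions used for illustration,
not copied from the control-path patch:

/*
 * Sketch only: the real struct ulp_ddp_config layout is defined in the
 * "nvme-tcp: Add DDP offload control path" patch; the member names used
 * here (type, nvmeotcp.queue_size, nvmeotcp.queue_id) are assumptions.
 */
static void nvme_tcp_fill_ddp_config(struct nvme_tcp_queue *queue,
				     struct ulp_ddp_config *config)
{
	config->type = ULP_DDP_NVME;                     /* assumed enum value */
	config->nvmeotcp.queue_size = queue->queue_size; /* the restored field */
	config->nvmeotcp.queue_id = nvme_tcp_queue_id(queue);
}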

Comments

Christoph Hellwig Oct. 25, 2022, 4:14 p.m. UTC | #1
On Tue, Oct 25, 2022 at 04:59:39PM +0300, Aurelien Aptel wrote:
> This reverts commit fb8745d040ef5b9080003325e56b91fefe1022bb.
> 
> The newly added NVMe-TCP offload requires the
> nvme_tcp_queue->queue_size field in nvme_tcp_offload_socket(),
> introduced by the patch "nvme-tcp: Add DDP offload control path".
> The queue size is passed as part of the struct ulp_ddp_config
> parameters.

Please never do reverts if you just bring something back for an entirely
different reason.  And I think we need a really good justification of
why you have a code path that can get the queue struct and not the
controller, which really should not happen.
Sagi Grimberg Oct. 26, 2022, 11:02 a.m. UTC | #2
>> This reverts commit fb8745d040ef5b9080003325e56b91fefe1022bb.
>>
>> The newly added NVMe-TCP offload requires the
>> nvme_tcp_queue->queue_size field in nvme_tcp_offload_socket(),
>> introduced by the patch "nvme-tcp: Add DDP offload control path".
>> The queue size is passed as part of the struct ulp_ddp_config
>> parameters.
> 
> Please never do reverts if you just bring something back for an entirely
> different reason.

Agreed.

> And I think we need a really good justification of
> why you have a code path that can get the queue struct and not the
> controller, which really should not happen.

What is wrong with just using either ctrl->sqsize/NVME_AQ_DEPTH based
on the qid?
Shai Malin Oct. 26, 2022, 11:52 a.m. UTC | #3
On Wed, 26 Oct 2022 at 12:02, Sagi Grimberg <sagi@grimberg.me> wrote:
> >> This reverts commit fb8745d040ef5b9080003325e56b91fefe1022bb.
> >>
> >> The newly added NVMe-TCP offload requires the
> >> nvme_tcp_queue->queue_size field in nvme_tcp_offload_socket(),
> >> introduced by the patch "nvme-tcp: Add DDP offload control path".
> >> The queue size is passed as part of the struct ulp_ddp_config
> >> parameters.
> >
> > Please never do reverts if you just bring something back for an entirely
> > different reason.
> 
> Agreed.

Sure.

> 
> > And I think we need a really good justification of
> > why you have a code path that can get the queue struct and not the
> > controller, which really should not happen.
> 
> What is wrong with just using either ctrl->sqsize/NVME_AQ_DEPTH based
> on the qid?

Thanks, we will use ctrl->sqsize.
There is no need to use NVME_AQ_DEPTH, as the offload is only used with I/O queues.
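
In other words, the per-queue copy can be dropped and the depth derived
from the controller instead. A rough sketch of that approach (not the
posted patch; nvme_tcp_ddp_queue_depth() is a hypothetical helper):

/*
 * Rough sketch of the suggested approach: derive the queue depth from
 * the controller instead of storing a copy in nvme_tcp_queue.  The
 * offload is only set up for I/O queues, so ctrl->sqsize (0-based) is
 * what actually gets used; the admin-queue branch is shown only for
 * completeness.
 */
static u32 nvme_tcp_ddp_queue_depth(struct nvme_tcp_queue *queue)
{
	struct nvme_ctrl *nctrl = &queue->ctrl->ctrl;

	return nvme_tcp_queue_id(queue) ? nctrl->sqsize + 1 : NVME_AQ_DEPTH;
}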

Patch

diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 1eed0fc26b3a..42b2d86dcfc2 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -133,6 +133,7 @@  struct nvme_tcp_queue {
 	/* send state */
 	struct nvme_tcp_request *request;
 
+	int			queue_size;
 	u32			maxh2cdata;
 	size_t			cmnd_capsule_len;
 	struct nvme_tcp_ctrl	*ctrl;
@@ -1475,7 +1476,8 @@  static void nvme_tcp_set_queue_io_cpu(struct nvme_tcp_queue *queue)
 	queue->io_cpu = cpumask_next_wrap(n - 1, cpu_online_mask, -1, false);
 }
 
-static int nvme_tcp_alloc_queue(struct nvme_ctrl *nctrl, int qid)
+static int nvme_tcp_alloc_queue(struct nvme_ctrl *nctrl,
+				int qid, size_t queue_size)
 {
 	struct nvme_tcp_ctrl *ctrl = to_tcp_ctrl(nctrl);
 	struct nvme_tcp_queue *queue = &ctrl->queues[qid];
@@ -1487,6 +1489,7 @@  static int nvme_tcp_alloc_queue(struct nvme_ctrl *nctrl, int qid)
 	INIT_LIST_HEAD(&queue->send_list);
 	mutex_init(&queue->send_mutex);
 	INIT_WORK(&queue->io_work, nvme_tcp_io_work);
+	queue->queue_size = queue_size;
 
 	if (qid > 0)
 		queue->cmnd_capsule_len = nctrl->ioccsz * 16;
@@ -1734,7 +1737,7 @@  static int nvme_tcp_alloc_admin_queue(struct nvme_ctrl *ctrl)
 {
 	int ret;
 
-	ret = nvme_tcp_alloc_queue(ctrl, 0);
+	ret = nvme_tcp_alloc_queue(ctrl, 0, NVME_AQ_DEPTH);
 	if (ret)
 		return ret;
 
@@ -1754,7 +1757,7 @@  static int __nvme_tcp_alloc_io_queues(struct nvme_ctrl *ctrl)
 	int i, ret;
 
 	for (i = 1; i < ctrl->queue_count; i++) {
-		ret = nvme_tcp_alloc_queue(ctrl, i);
+		ret = nvme_tcp_alloc_queue(ctrl, i, ctrl->sqsize + 1);
 		if (ret)
 			goto out_free_queues;
 	}