
nvmet-rdma: Don't use the inline buffer in order to avoid allocation for small reads

Message ID 1470040599-7294-1-git-send-email-sagi@grimberg.me (mailing list archive)
State Not Applicable

Commit Message

Sagi Grimberg Aug. 1, 2016, 8:36 a.m. UTC
Under extreme conditions this might cause data corruption: we repost
the inline buffer for receive and then post the same buffer for the
device to send. If we happen to use shared receive queues, the device
might write to the buffer before it sends it (there is no ordering
between the send and recv queues). Without SRQs we probably won't hit
this as long as the host doesn't misbehave and send more than we
allowed, but relying on that is not really a good idea.

Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/nvme/target/rdma.c | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

Comments

Christoph Hellwig Aug. 2, 2016, 12:50 p.m. UTC | #1
On Mon, Aug 01, 2016 at 11:36:39AM +0300, Sagi Grimberg wrote:
> Under extreme conditions this might cause data corruption: we repost
> the inline buffer for receive and then post the same buffer for the
> device to send. If we happen to use shared receive queues, the device
> might write to the buffer before it sends it (there is no ordering
> between the send and recv queues). Without SRQs we probably won't hit
> this as long as the host doesn't misbehave and send more than we
> allowed, but relying on that is not really a good idea.

Pity - it seems so wasteful not being able to use these buffers for
anything that isn't an inline write.  I fully agree on the SRQ case,
but I think we should offer it for the non-SRQ case.
--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Sagi Grimberg Aug. 2, 2016, 1:38 p.m. UTC | #2
>> Under extreme conditions this might cause data corruption: we repost
>> the inline buffer for receive and then post the same buffer for the
>> device to send. If we happen to use shared receive queues, the device
>> might write to the buffer before it sends it (there is no ordering
>> between the send and recv queues). Without SRQs we probably won't hit
>> this as long as the host doesn't misbehave and send more than we
>> allowed, but relying on that is not really a good idea.
>
> Pitty - it seems so wasteful not being able to use these buffers for
> anything that isn't an inline write.

Totally agree, I'm open to smart ideas on this...

> I fully agree on the SRQ case, but I think we should offer it for the non-SRQ case.

As I wrote, even in the non-SRQ case, if the host sends a single
write beyond the negotiated queue size, the data can land in the buffer
that is currently being sent (it's a rare race condition, but
theoretically possible). The reason is that we repost the inline data
buffer for receive before we post the send request. We used to have
it the other way around (which eliminates the issue), but we then saw
some latency bubbles due to the HW sending RNR NAKs to the host for
lack of a receive buffer (in iWARP the problem was even worse
because there is no flow control).

Do you think it's OK to risk data corruption if the host is
misbehaving?
Jason Gunthorpe Aug. 2, 2016, 4:15 p.m. UTC | #3
On Tue, Aug 02, 2016 at 04:38:58PM +0300, Sagi Grimberg wrote:
> that is currently being sent (it's a rare race condition, but
> theoretically possible). The reason is that we repost the inline data
> buffer for receive before we post the send request. We used to have

?? The same buffer is posted at the same time for send and recv? That
is never OK, SRQ or not.

Jason
Sagi Grimberg Aug. 3, 2016, 9:48 a.m. UTC | #4
>> that is currently being sent (it's a rare race condition, but
>> theoretically possible). The reason is that we repost the inline data
>> buffer for receive before we post the send request. We used to have
>
> ?? The same buffer is posted at the same time for send and recv? That
> is never OK, SRQ or not.

I agree.

But I agree it's a shame to lose this. Maybe if we over-allocate cmds
and restore the receive repost to be after the send completion? I'm not
too fond of that either (it's not just commands but also inline
pages...)
Christoph Hellwig Aug. 3, 2016, 9:49 a.m. UTC | #5
On Tue, Aug 02, 2016 at 10:15:26AM -0600, Jason Gunthorpe wrote:
> On Tue, Aug 02, 2016 at 04:38:58PM +0300, Sagi Grimberg wrote:
> > that is currently being sent (it's a rare race condition, but
> > theoretically possible). The reason is that we repost the inline data
> > buffer for receive before we post the send request. We used to have
> 
> ?? The same buffer is posted at the same time for send and recv? That
> is never OK, SRQ or not.

We will never POST it for a SEND, but it would be used as the target
of RDMA READ / WRITE operations.
Sagi Grimberg Aug. 3, 2016, 10:37 a.m. UTC | #6
>>> that is currently being sent (it's a rare race condition, but
>>> theoretically possible). The reason is that we repost the inline data
>>> buffer for receive before we post the send request. We used to have
>>
>> ?? The same buffer is posted at the same time for send and recv? That
>> is never OK, SRQ or not.
>
> We will never POST it for a SEND, but it would be used as the target
> of RDMA READ / WRITE operations.

Jason's comment still holds.
Christoph Hellwig Aug. 4, 2016, 11:49 a.m. UTC | #7
On Mon, Aug 01, 2016 at 11:36:39AM +0300, Sagi Grimberg wrote:
> Under extreme conditions this might cause data corruption: we repost
> the inline buffer for receive and then post the same buffer for the
> device to send. If we happen to use shared receive queues, the device
> might write to the buffer before it sends it (there is no ordering
> between the send and recv queues). Without SRQs we probably won't hit
> this as long as the host doesn't misbehave and send more than we
> allowed, but relying on that is not really a good idea.

Allright, after the discussion:

Reviewed-by: Christoph Hellwig <hch@lst.de>

Patch

diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index e06d504bdf0c..4e83d92d6bdd 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -615,15 +615,10 @@  static u16 nvmet_rdma_map_sgl_keyed(struct nvmet_rdma_rsp *rsp,
 	if (!len)
 		return 0;
 
-	/* use the already allocated data buffer if possible */
-	if (len <= NVMET_RDMA_INLINE_DATA_SIZE && rsp->queue->host_qid) {
-		nvmet_rdma_use_inline_sg(rsp, len, 0);
-	} else {
-		status = nvmet_rdma_alloc_sgl(&rsp->req.sg, &rsp->req.sg_cnt,
-				len);
-		if (status)
-			return status;
-	}
+	status = nvmet_rdma_alloc_sgl(&rsp->req.sg, &rsp->req.sg_cnt,
+			len);
+	if (status)
+		return status;
 
 	ret = rdma_rw_ctx_init(&rsp->rw, cm_id->qp, cm_id->port_num,
 			rsp->req.sg, rsp->req.sg_cnt, 0, addr, key,