
RDMA/rxe: Fix data copy for IB_SEND_INLINE

Message ID 20240516095052.542767-1-honggangli@163.com
State Accepted
Series RDMA/rxe: Fix data copy for IB_SEND_INLINE

Commit Message

Honggang LI May 16, 2024, 9:50 a.m. UTC
For RDMA Send and Write with IB_SEND_INLINE, the memory buffers
specified in the sge list will be placed inline in the Send Request.

The data should be copied by the CPU from the virtual addresses
corresponding to the DMA addresses in the sge list.

Fixes: 8d7c7c0eeb74 ("RDMA: Add ib_virt_dma_to_page()")
Signed-off-by: Honggang LI <honggangli@163.com>
---
 drivers/infiniband/sw/rxe/rxe_verbs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
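
For context, the two helpers differ as follows: ib_virt_dma_to_ptr() returns the data pointer itself, while ib_virt_dma_to_page() returns the struct page that backs it, so the old code memcpy()'d from a struct page pointer rather than from the payload. A simplified sketch of the two definitions from include/rdma/ib_verbs.h (as introduced by the Fixes: commit):

static inline void *ib_virt_dma_to_ptr(u64 dma_addr)
{
	/* software devices such as rxe use the kernel virtual address
	 * of a buffer as its "DMA address" */
	return (void *)(uintptr_t)dma_addr;
}

static inline struct page *ib_virt_dma_to_page(u64 dma_addr)
{
	/* the struct page backing the address, not the data itself */
	return virt_to_page(ib_virt_dma_to_ptr(dma_addr));
}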

Comments

Zhu Yanjun May 16, 2024, 1:24 p.m. UTC | #1
On 16.05.24 11:50, Honggang LI wrote:
> For RDMA Send and Write with IB_SEND_INLINE, the memory buffers
> specified in the sge list will be placed inline in the Send Request.
> 
> The data should be copied by the CPU from the virtual addresses
> corresponding to the DMA addresses in the sge list.
> 
> Fixes: 8d7c7c0eeb74 ("RDMA: Add ib_virt_dma_to_page()")
> Signed-off-by: Honggang LI <honggangli@163.com>

Thanks a lot.
Reviewed-by: Zhu Yanjun <yanjun.zhu@linux.dev>

Zhu Yanjun

> ---
>   drivers/infiniband/sw/rxe/rxe_verbs.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
> index 614581989b38..b94d05e9167a 100644
> --- a/drivers/infiniband/sw/rxe/rxe_verbs.c
> +++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
> @@ -812,7 +812,7 @@ static void copy_inline_data_to_wqe(struct rxe_send_wqe *wqe,
>   	int i;
>   
>   	for (i = 0; i < ibwr->num_sge; i++, sge++) {
> -		memcpy(p, ib_virt_dma_to_page(sge->addr), sge->length);
> +		memcpy(p, ib_virt_dma_to_ptr(sge->addr), sge->length);
>   		p += sge->length;
>   	}
>   }
Zhijian Li (Fujitsu) May 17, 2024, 1:43 a.m. UTC | #2
On 16/05/2024 17:50, Honggang LI wrote:
> For RDMA Send and Write with IB_SEND_INLINE, the memory buffers
> specified in the sge list will be placed inline in the Send Request.
> 
> The data should be copied by the CPU from the virtual addresses
> corresponding to the DMA addresses in the sge list.
> 
> Fixes: 8d7c7c0eeb74 ("RDMA: Add ib_virt_dma_to_page()")
> Signed-off-by: Honggang LI <honggangli@163.com>

Good catch.

Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>

(BTW, does this mean the current pyverbs tests in rdma-core do not cover IB_SEND_INLINE?)
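
For reference, a minimal libibverbs sketch of the userspace path that would exercise inline sends (assumes an already-connected QP created with cap.max_inline_data >= len; names and error handling are illustrative only):

#include <stdint.h>
#include <infiniband/verbs.h>

/* Post a small payload inline: the provider copies the bytes into the
 * WQE at post time, so buf may be reused immediately and needs no
 * registered MR when IBV_SEND_INLINE is set. */
static int post_inline_send(struct ibv_qp *qp, const void *buf, uint32_t len)
{
	struct ibv_sge sge = {
		.addr   = (uintptr_t)buf,
		.length = len,
		.lkey   = 0,	/* lkey is ignored for inline data */
	};
	struct ibv_send_wr wr = {
		.wr_id      = 1,
		.sg_list    = &sge,
		.num_sge    = 1,
		.opcode     = IBV_WR_SEND,
		.send_flags = IBV_SEND_SIGNALED | IBV_SEND_INLINE,
	};
	struct ibv_send_wr *bad_wr;

	return ibv_post_send(qp, &wr, &bad_wr);
}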


> ---
>   drivers/infiniband/sw/rxe/rxe_verbs.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
> index 614581989b38..b94d05e9167a 100644
> --- a/drivers/infiniband/sw/rxe/rxe_verbs.c
> +++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
> @@ -812,7 +812,7 @@ static void copy_inline_data_to_wqe(struct rxe_send_wqe *wqe,
>   	int i;
>   
>   	for (i = 0; i < ibwr->num_sge; i++, sge++) {
> -		memcpy(p, ib_virt_dma_to_page(sge->addr), sge->length);
> +		memcpy(p, ib_virt_dma_to_ptr(sge->addr), sge->length);
>   		p += sge->length;
>   	}
>   }
Zhijian Li (Fujitsu) May 17, 2024, 10:47 a.m. UTC | #3
On 17/05/2024 09:43, Zhijian Li (Fujitsu) wrote:
> 
> 
> On 16/05/2024 17:50, Honggang LI wrote:
>> For RDMA Send and Write with IB_SEND_INLINE, the memory buffers
>> specified in the sge list will be placed inline in the Send Request.
>>
>> The data should be copied by the CPU from the virtual addresses
>> corresponding to the DMA addresses in the sge list.
>>
>> Fixes: 8d7c7c0eeb74 ("RDMA: Add ib_virt_dma_to_page()")
>> Signed-off-by: Honggang LI <honggangli@163.com>
> 
> Good catch.
> 
> Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
> 
> (BTW, does this mean the current pyverbs tests in rdma-core do not cover IB_SEND_INLINE?)

At a glance, copy_inline_data_to_wqe() is only called from kernel ULPs, not from rdma-core, so the pyverbs tests would not reach it.
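
To illustrate the kernel-side caller, a hypothetical ULP snippet that would hit this path (a sketch only; the qp/pd/buf names are assumptions, and a portable ULP would obtain the address via ib_dma_map_single()):

#include <rdma/ib_verbs.h>

/* With IB_SEND_INLINE set, rxe's post-send path ends up in
 * copy_inline_data_to_wqe(), which memcpy()s the payload into the
 * WQE through ib_virt_dma_to_ptr(sge.addr). */
static int post_inline_send(struct ib_qp *qp, struct ib_pd *pd,
			    void *buf, u32 len)
{
	struct ib_sge sge = {
		.addr   = (uintptr_t)buf, /* on rxe the kernel va doubles as the dma addr */
		.length = len,
		.lkey   = pd->local_dma_lkey,	/* unused for inline data */
	};
	struct ib_send_wr wr = {
		.wr_id      = 1,
		.sg_list    = &sge,
		.num_sge    = 1,
		.opcode     = IB_WR_SEND,
		.send_flags = IB_SEND_SIGNALED | IB_SEND_INLINE,
	};
	const struct ib_send_wr *bad_wr;

	return ib_post_send(qp, &wr, &bad_wr);
}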



> 
> 
>> ---
>>    drivers/infiniband/sw/rxe/rxe_verbs.c | 2 +-
>>    1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
>> index 614581989b38..b94d05e9167a 100644
>> --- a/drivers/infiniband/sw/rxe/rxe_verbs.c
>> +++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
>> @@ -812,7 +812,7 @@ static void copy_inline_data_to_wqe(struct rxe_send_wqe *wqe,
>>    	int i;
>>    
>>    	for (i = 0; i < ibwr->num_sge; i++, sge++) {
>> -		memcpy(p, ib_virt_dma_to_page(sge->addr), sge->length);
>> +		memcpy(p, ib_virt_dma_to_ptr(sge->addr), sge->length);
>>    		p += sge->length;
>>    	}
>>    }
Jason Gunthorpe May 24, 2024, 3:05 p.m. UTC | #4
On Thu, May 16, 2024 at 05:50:52PM +0800, Honggang LI wrote:
> For RDMA Send and Write with IB_SEND_INLINE, the memory buffers
> specified in the sge list will be placed inline in the Send Request.
> 
> The data should be copied by the CPU from the virtual addresses
> corresponding to the DMA addresses in the sge list.
> 
> Fixes: 8d7c7c0eeb74 ("RDMA: Add ib_virt_dma_to_page()")
> Signed-off-by: Honggang LI <honggangli@163.com>
> ---
>  drivers/infiniband/sw/rxe/rxe_verbs.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

This seems like a serious bug.

Cc: stable@kernel.org
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>

Jason
Leon Romanovsky May 30, 2024, 1:20 p.m. UTC | #5
On Thu, 16 May 2024 17:50:52 +0800, Honggang LI wrote:
> For RDMA Send and Write with IB_SEND_INLINE, the memory buffers
> specified in the sge list will be placed inline in the Send Request.
> 
> The data should be copied by the CPU from the virtual addresses
> corresponding to the DMA addresses in the sge list.
> 
> 
> [...]

Applied, thanks!

[1/1] RDMA/rxe: Fix data copy for IB_SEND_INLINE
      https://git.kernel.org/rdma/rdma/c/03fa18a992d562

Best regards,

Patch

diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 614581989b38..b94d05e9167a 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -812,7 +812,7 @@ static void copy_inline_data_to_wqe(struct rxe_send_wqe *wqe,
 	int i;
 
 	for (i = 0; i < ibwr->num_sge; i++, sge++) {
-		memcpy(p, ib_virt_dma_to_page(sge->addr), sge->length);
+		memcpy(p, ib_virt_dma_to_ptr(sge->addr), sge->length);
 		p += sge->length;
 	}
 }
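
For readability, the fixed function in full. This is a sketch reconstructed around the hunk above; the declarations outside the changed line are assumed from rxe_verbs.c, not quoted verbatim:

static void copy_inline_data_to_wqe(struct rxe_send_wqe *wqe,
				    const struct ib_send_wr *ibwr)
{
	struct ib_sge *sge = ibwr->sg_list;
	u8 *p = wqe->dma.inline_data;
	int i;

	for (i = 0; i < ibwr->num_sge; i++, sge++) {
		/* sge->addr is a kernel virtual address in disguise;
		 * convert it back with ib_virt_dma_to_ptr() and copy
		 * the payload into the WQE's inline-data area */
		memcpy(p, ib_virt_dma_to_ptr(sge->addr), sge->length);
		p += sge->length;
	}
}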