
[rdma-rc,v1] RDMA/core: fix sg_to_page mapping for boundary condition

Message ID 20220816081153.1580612-1-kashyap.desai@broadcom.com (mailing list archive)
State Changes Requested
Series [rdma-rc,v1] RDMA/core: fix sg_to_page mapping for boundary condition

Commit Message

Kashyap Desai Aug. 16, 2022, 8:11 a.m. UTC
This issue hits frequently when the AMD IOMMU is enabled.

For a 1MB data transfer, the ib core is supposed to set 256 4K-page
entries in the MR page table. Because of a defect in ib_sg_to_pages(),
the loop breaks just after setting one entry.
The remaining memory region page table entries are left stale (or NULL
if the memory was memset). Something like this -

crash> x/32a 0xffff9cd9f7f84000
<<< - Only the first entry is valid; the rest are stale entries - >>>
0xffff9cd9f7f84000:     0xfffffffffff00000      0x68d31000
0xffff9cd9f7f84010:     0x68d32000      0x68d33000
0xffff9cd9f7f84020:     0x68d34000      0x975c5000
0xffff9cd9f7f84030:     0x975c6000      0x975c7000
0xffff9cd9f7f84040:     0x975c8000      0x975c9000
0xffff9cd9f7f84050:     0x975ca000      0x975cb000
0xffff9cd9f7f84060:     0x975cc000      0x975cd000
0xffff9cd9f7f84070:     0x975ce000      0x975cf000
0xffff9cd9f7f84080:     0x0     0x0
0xffff9cd9f7f84090:     0x0     0x0
0xffff9cd9f7f840a0:     0x0     0x0
0xffff9cd9f7f840b0:     0x0     0x0
0xffff9cd9f7f840c0:     0x0     0x0
0xffff9cd9f7f840d0:     0x0     0x0
0xffff9cd9f7f840e0:     0x0     0x0
0xffff9cd9f7f840f0:     0x0     0x0

All addresses other than 0xfffffffffff00000 are stale entries.
Once such incorrect page entries are passed to the RDMA h/w, the
AMD IOMMU module detects a page fault whenever the h/w tries to access
addresses that were never correctly populated by the ib stack.
The print below is logged whenever this issue hits.

bnxt_en 0000:21:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x001e address=0x68d31000 flags=0x0050]

The ib_sg_to_pages() function populates the correct page addresses in
most cases, but there is one boundary condition it does not handle
correctly.

Boundary condition explained -
Page addresses are not populated correctly if the dma buffer is mapped
to the very last region of the address space.

For example -
Whenever page_addr is 0xfffffffffff00000 (the last 1MB section of the
address space) and the dma length is 1MB, the end of the dma address
wraps to 0 (derived from 0xfffffffffff00000 + 0x100000).
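
A minimal standalone sketch of the wraparound (illustrative only, not
kernel code):

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		uint64_t dma_addr = 0xfffffffffff00000ULL; /* last 1MB of the space */
		uint64_t dma_len  = 0x100000;              /* 1MB transfer */
		uint64_t end_dma_addr = dma_addr + dma_len;

		/* Wraps to 0, so "page_addr < end_dma_addr" is false on the
		 * first test and only 1 of the expected 256 pages is set. */
		printf("end_dma_addr = 0x%llx\n", (unsigned long long)end_dma_addr);
		return 0;
	}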

Use the dma buffer length instead of end_dma_addr to decide when all
page addresses have been filled.

v0->v1 : Use first_page_off instead of page_off for readability
	 Fix functional issue of not resetting first_page_off

Fixes: 4c67e2bfc8b7 ("IB/core: Introduce new fast registration API")
Signed-off-by: Kashyap Desai <kashyap.desai@broadcom.com>
---
 drivers/infiniband/core/verbs.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

Comments

Jason Gunthorpe Aug. 18, 2022, 11:52 p.m. UTC | #1
On Tue, Aug 16, 2022 at 01:41:53PM +0530, Kashyap Desai wrote:
> This issue hits frequently when the AMD IOMMU is enabled.
> 
> For a 1MB data transfer, the ib core is supposed to set 256 4K-page
> entries in the MR page table. Because of a defect in ib_sg_to_pages(),
> the loop breaks just after setting one entry.
> The remaining memory region page table entries are left stale (or NULL
> if the memory was memset). Something like this -
> 
> crash> x/32a 0xffff9cd9f7f84000
> <<< - Only the first entry is valid; the rest are stale entries - >>>
> 0xffff9cd9f7f84000:     0xfffffffffff00000      0x68d31000
> 0xffff9cd9f7f84010:     0x68d32000      0x68d33000
> 0xffff9cd9f7f84020:     0x68d34000      0x975c5000
> 0xffff9cd9f7f84030:     0x975c6000      0x975c7000
> 0xffff9cd9f7f84040:     0x975c8000      0x975c9000
> 0xffff9cd9f7f84050:     0x975ca000      0x975cb000
> 0xffff9cd9f7f84060:     0x975cc000      0x975cd000
> 0xffff9cd9f7f84070:     0x975ce000      0x975cf000
> 0xffff9cd9f7f84080:     0x0     0x0
> 0xffff9cd9f7f84090:     0x0     0x0
> 0xffff9cd9f7f840a0:     0x0     0x0
> 0xffff9cd9f7f840b0:     0x0     0x0
> 0xffff9cd9f7f840c0:     0x0     0x0
> 0xffff9cd9f7f840d0:     0x0     0x0
> 0xffff9cd9f7f840e0:     0x0     0x0
> 0xffff9cd9f7f840f0:     0x0     0x0
> 
> All addresses other than 0xfffffffffff00000 are stale entries.
> Once such incorrect page entries are passed to the RDMA h/w, the
> AMD IOMMU module detects a page fault whenever the h/w tries to access
> addresses that were never correctly populated by the ib stack.
> The print below is logged whenever this issue hits.

I don't understand this. AFAIK on AMD platforms you can't create an
IOVA mapping at -1 like you are saying above, so how is
0xfffffffffff00000 a valid DMA address?

Or, if the AMD IOMMU HW can actually do this, then I would say it is a
bug in the IOMMU DMA API to allow the aperture used for DMA mapping to
get to the end of ULONG_MAX, it is just asking for overflow bugs.

And if we have to tolerate these addresses then the code should be
designed to avoid the overflow in the first place ie 'end_dma_addr'
should be changed to 'last_dma_addr = dma_addr + (dma_len - 1)' which
does not overflow, and all the logic carefully organized so none of
the math overflows.

Jason
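
A minimal sketch of the last-byte rework described above, for
illustration only - the names are assumptions, not the fix that was
eventually merged:

	#include <stdio.h>
	#include <stdint.h>

	/* Terminate on the last valid byte so no sum can wrap to 0. */
	static void map_pages(uint64_t dma_addr, uint64_t dma_len,
			      uint64_t page_size)
	{
		uint64_t last_dma_addr = dma_addr + (dma_len - 1); /* no wrap for dma_len >= 1 */
		uint64_t page_addr = dma_addr & ~(page_size - 1);

		for (;;) {
			printf("set_page(0x%llx)\n", (unsigned long long)page_addr);
			/* This page covers the last byte: stop without ever
			 * computing end = last + 1. */
			if (last_dma_addr - page_addr < page_size)
				break;
			page_addr += page_size;
		}
	}

	int main(void)
	{
		map_pages(0xfffffffffff00000ULL, 0x100000, 4096); /* 256 pages */
		return 0;
	}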
Kashyap Desai Aug. 19, 2022, 9:43 a.m. UTC | #2
> > All addresses other than 0xfffffffffff00000 are stale entries.
> > Once such incorrect page entries are passed to the RDMA h/w, the
> > AMD IOMMU module detects a page fault whenever the h/w tries to access
> > addresses that were never correctly populated by the ib stack.
> > The print below is logged whenever this issue hits.
>
> I don't understand this. AFAIK on AMD platforms you can't create an IOVA
> mapping at -1 like you are saying above, so how is
> 0xfffffffffff00000 a valid DMA address?

Hi Jason -

Let me simplify - consider a case where 1 SGE has an 8K dma_len and the
starting dma address is <0xffffffffffffe000>.
It is expected to produce two page table entries - <0xffffffffffffe000>
and <0xfffffffffffff000>.
Neither DMA address maps to -1.  The device exposes dma_mask_bits = 64,
so both addresses are valid mappings from the IOMMU's perspective.

Since end_dma_addr will be zero (in the current code) - which is really
the potential next_dma_addr, not the end - we end up making only one
set_page() call.

I think this is a valid mapping request and I don't see anything
incorrect in the IOMMU mapping.

>
> Or, if the AMD IOMMU HW can actually do this, then I would say it is a
> bug in the IOMMU DMA API to allow the aperture used for DMA mapping to
> get to the end of ULONG_MAX, it is just asking for overflow bugs.
>
> And if we have to tolerate these addresses then the code should be
> designed to avoid the overflow in the first place ie 'end_dma_addr'
> should be changed to 'last_dma_addr = dma_addr + (dma_len - 1)' which
> does not overflow, and all the logic carefully organized so none of
> the math overflows.

Making 'last_dma_addr = dma_addr + (dma_len - 1)' will have another side
effect: the driver may get more set_page() calls than max_pg_ptrs.
I have not debugged how that can happen, but wanted to share the result
with you.  I can check if that is the preferred path.
'end_dma_addr' is also used for other arithmetic in this code.

How about just doing the below? This was my initial idea for the fix,
but I am not sure which approach is best.

diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
index e54b3f1b730e..56d1f3b20e98 100644
--- a/drivers/infiniband/core/verbs.c
+++ b/drivers/infiniband/core/verbs.c
@@ -2709,7 +2709,7 @@ int ib_sg_to_pages(struct ib_mr *mr, struct scatterlist *sgl, int sg_nents,
                        prev_addr = page_addr;
 next_page:
                        page_addr += mr->page_size;
-               } while (page_addr < end_dma_addr);
+               } while (page_addr < (end_dma_addr - 1));

                mr->length += dma_len;
                last_end_dma_addr = end_dma_addr;

>
> Jason
Jason Gunthorpe Aug. 19, 2022, 11:48 a.m. UTC | #3
On Fri, Aug 19, 2022 at 03:13:47PM +0530, Kashyap Desai wrote:
> > > All addresses other than 0xfffffffffff00000 are stale entries.
> > > Once such incorrect page entries are passed to the RDMA h/w, the
> > > AMD IOMMU module detects a page fault whenever the h/w tries to access
> > > addresses that were never correctly populated by the ib stack.
> > > The print below is logged whenever this issue hits.
> >
> > I don't understand this. AFAIK on AMD platforms you can't create an IOVA
> > mapping at -1 like you are saying above, so how is
> > 0xfffffffffff00000 a valid DMA address?
> 
> Hi Jason -
> 
> Let me simplify - consider a case where 1 SGE has an 8K dma_len and the
> starting dma address is <0xffffffffffffe000>.
> It is expected to produce two page table entries - <0xffffffffffffe000>
> and <0xfffffffffffff000>.
> Neither DMA address maps to -1.  The device exposes dma_mask_bits = 64,
> so both addresses are valid mappings from the IOMMU's perspective.

That is not my point.

My point is that 0xFFFFFFFFFFFFFFFF should never be used as a DMA
address because it invites overflow on any maths, and we are not
careful about this in the kernel in general.

> Since end_dma_addr will be zero (in the current code) - which is really
> the potential next_dma_addr, not the end - we end up making only one
> set_page() call.

Which is the math overflow.

> I think this is a valid mapping request and I don't see anything
> incorrect in the IOMMU mapping.

It should not create mappings that are so dangerous. There is really
no reason to use the last page of IOVA space that includes -1.

> > And if we have to tolerate these addresses then the code should be
> > designed to avoid the overflow in the first place ie
> > 'end_dma_addr' should be changed to 'last_dma_addr = dma_addr +
> > (dma_len - 1)' which does not overflow, and all the logic
> > carefully organized so none of the math overflows.
> 
> Making 'last_dma_addr = dma_addr + (dma_len - 1)' will have another side
> effect. 

Yes, the patch would have to fix everything about the logic to work
with a last address and avoid overflowing maths.

> How about just doing the below? This was my initial idea for the fix,
> but I am not sure which approach is best.
> 
> diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
> index e54b3f1b730e..56d1f3b20e98 100644
> --- a/drivers/infiniband/core/verbs.c
> +++ b/drivers/infiniband/core/verbs.c
> @@ -2709,7 +2709,7 @@ int ib_sg_to_pages(struct ib_mr *mr, struct scatterlist *sgl, int sg_nents,
>                         prev_addr = page_addr;
>  next_page:
>                         page_addr += mr->page_size;
> -               } while (page_addr < end_dma_addr);
> +               } while (page_addr < (end_dma_addr - 1));

This is now overflowing twice :(

Does this bug even still exist? eg does this revert "fix" it?

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=af3e9579ecfb

It makes me wonder if the use of -1 is why drivers started failing
with this mode.

Jason
Kashyap Desai Aug. 22, 2022, 2:21 p.m. UTC | #4
>
> On Fri, Aug 19, 2022 at 03:13:47PM +0530, Kashyap Desai wrote:
> > > > All addresses other than 0xfffffffffff00000 are stale entries.
> > > > Once such incorrect page entries are passed to the RDMA h/w, the
> > > > AMD IOMMU module detects a page fault whenever the h/w tries to
> > > > access addresses that were never correctly populated by the ib stack.
> > > > The print below is logged whenever this issue hits.
> > >
> > > I don't understand this. AFAIK on AMD platforms you can't create an
> > > IOVA mapping at -1 like you are saying above, so how is
> > > 0xfffffffffff00000 a valid DMA address?
> >
> > Hi Jason -
> >
> > Let me simplify - consider a case where 1 SGE has an 8K dma_len and
> > the starting dma address is <0xffffffffffffe000>.
> > It is expected to produce two page table entries - <0xffffffffffffe000>
> > and <0xfffffffffffff000>.
> > Neither DMA address maps to -1.  The device exposes dma_mask_bits = 64,
> > so both addresses are valid mappings from the IOMMU's perspective.
>
> That is not my point.
>
> My point is that 0xFFFFFFFFFFFFFFFF should never be used as a DMA
> address because it invites overflow on any maths, and we are not
> careful about this in the kernel in general.

I am not seeing an address overflow case.  It is just that the buffer
ends at "0xffffffffffffffff", and it is a genuine dma buffer.
So the worst case scenario is DMA address = 0xfffffffffffffffe with
dma_len = 1 byte.  This must be handled as a genuine dma request.

>
> > Since end_dma_addr will be zero (in the current code) - which is
> > really the potential next_dma_addr, not the end - we end up making
> > only one set_page() call.
>
> Which is the math overflow.
Let's take this case -

ib_sg_to_pages(struct ib_mr *mr, struct scatterlist *sgl, int sg_nents,
               unsigned int *sg_offset_p, int (*set_page)(struct ib_mr *, u64))

sg_nents = 1
struct scatterlist {
    unsigned long page_link;
    unsigned int offset;	= 0
    unsigned int length;	= 8192
    dma_addr_t dma_address;	= 0xffffffffffffe000
    unsigned int dma_length;	= 8192
}

The loop below will run only one time.
        for_each_sg(sgl, sg, sg_nents, i) {

Now, we will enter the inner loop with dma_addr = page_addr =
0xffffffffffffe000, and "end_dma_addr = dma_addr + dma_len" is ZERO.
eval 0xffffffffffffe000 + 8192
hexadecimal: 0

                do {
                        ret = set_page(mr, page_addr);
<<< - This callback will be called one time, with page_addr =
      0xffffffffffffe000
                        if (unlikely(ret < 0)) {
                                sg_offset = prev_addr - sg_dma_address(sg);
                                mr->length += prev_addr - dma_addr;
                                if (sg_offset_p)
                                        *sg_offset_p = sg_offset;
                                return i || sg_offset ? i : ret;
                        }
                        prev_addr = page_addr;
next_page:
                        page_addr += mr->page_size;
<<< - After one iteration, page_addr = 0xfffffffffffff000
                } while (page_addr < end_dma_addr);
<<< - This loop will break because (0xfffffffffffff000 < 0) is not true.
>
> > I think this is a valid mapping request and I don't see anything
> > incorrect in the IOMMU mapping.
>
> It should not create mappings that are so dangerous. There is really no
> reason to use the last page of IOVA space that includes -1.

That is correct, but the APIs which deal with mappings need to handle
this kind of request gracefully. Right?

>
> > > And if we have to tolerate these addresses then the code should be
> > > designed to avoid the overflow in the first place ie 'end_dma_addr'
> > > should be changed to 'last_dma_addr = dma_addr + (dma_len - 1)'
> > > which does not overflow, and all the logic carefully organized so
> > > none of the math overflows.
> >
> > Making 'last_dma_addr = dma_addr + (dma_len - 1)' will have another
> > side effect.
>
> Yes, the patch would have to fix everything about the logic to work
> with a last address and avoid overflowing maths.

Noted.

>
> > How about just doing the below? This was my initial idea for the fix,
> > but I am not sure which approach is best.
> >
> > diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
> > index e54b3f1b730e..56d1f3b20e98 100644
> > --- a/drivers/infiniband/core/verbs.c
> > +++ b/drivers/infiniband/core/verbs.c
> > @@ -2709,7 +2709,7 @@ int ib_sg_to_pages(struct ib_mr *mr, struct scatterlist *sgl, int sg_nents,
> >                         prev_addr = page_addr;
> >  next_page:
> >                         page_addr += mr->page_size;
> > -               } while (page_addr < end_dma_addr);
> > +               } while (page_addr < (end_dma_addr - 1));
>
> This is now overflowing twice :(

I thought about a better approach that avoids regressions, and found
that driving the loop with sg_dma_len avoids such issues gracefully.
How about the original patch?

>
> Does this bug even still exist? eg does this revert "fix" it?
>
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=af3e9579ecfb

The above revert is part of my test. In my setup "iommu_dma_forcedac =
$2 = false".
Without the above revert, it may be possible to hit the issue even more
frequently. Currently I need heavy 1MB IO to hit this issue; almost
~8GB in total is attempted in my test.
In total, 64 NVMe targets at a queue depth of 128 are sending 1MB IOs.
It looks like DMA mapping is first attempted below 4GB, and whenever
that is exhausted and mapping starts in the >4GB memory region, this
kind of IOVA mapping occurs.

Kashyap

>
> It makes me wonder if the use of -1 is why drivers started failing
> with this mode.
>
> Jason
Jason Gunthorpe Aug. 26, 2022, 1:14 p.m. UTC | #5
On Mon, Aug 22, 2022 at 07:51:22PM +0530, Kashyap Desai wrote:

> Now, we will enter the inner loop with dma_addr = page_addr =
> 0xffffffffffffe000, and "end_dma_addr = dma_addr + dma_len" is ZERO.
> eval 0xffffffffffffe000 + 8192
> hexadecimal: 0

This is called overflow.

Anything doing maths on the sgl's is likely to become broken by this -
which is why I think it is unnecessarily dangerous for the iommu code
to generate dma addresses like this. It just shouldn't.

> > It should not create mappings that are so dangerous. There is really no
> > reason to use the last page of IOVA space that includes -1.
>
> That is correct, but the APIs which deal with mappings need to handle
> this kind of request gracefully. Right?

Ideally, but that is a game of whack-a-mole across the kernel, and
redoing algorithms to avoid overflowing addition is tricky stuff.

> I thought about a better approach that avoids regressions, and found
> that driving the loop with sg_dma_len avoids such issues gracefully.
> How about the original patch?

It overflows too.

You need to write the code so you never create the situation where
A+B=0 - don't try to fix things up after that happens.

Usually that means transforming the algorithm so that it works on a
"last byte" so we never need to compute an address that is +1 to the
last byte, which would be the overflown 0.

> The above revert is part of my test. In my setup "iommu_dma_forcedac =
> $2 = false".

So you opt into this behavior, OK

Jason
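
For illustration, the "last byte" transformation Jason describes can
also be used to size a mapping up front - a sketch, with names that are
assumptions rather than kernel API:

	#include <stdint.h>

	/* Pages spanned by [dma_addr, dma_addr + dma_len - 1], computed
	 * without ever forming last + 1.  For dma_addr = 0xfffffffffff00000,
	 * dma_len = 1MB, page_size = 4K:
	 * (0xfffffffffffff000 - 0xfffffffffff00000) / 0x1000 + 1 = 256. */
	static uint64_t pages_spanned(uint64_t dma_addr, uint64_t dma_len,
				      uint64_t page_size)
	{
		uint64_t first_page = dma_addr & ~(page_size - 1);
		uint64_t last_page  = (dma_addr + (dma_len - 1)) & ~(page_size - 1);

		return (last_page - first_page) / page_size + 1;
	}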
Kashyap Desai Sept. 1, 2022, 12:06 p.m. UTC | #6
> -----Original Message-----
> From: Jason Gunthorpe [mailto:jgg@nvidia.com]
> Sent: Friday, August 26, 2022 6:45 PM
> To: Kashyap Desai <kashyap.desai@broadcom.com>
> Cc: linux-rdma@vger.kernel.org; leonro@nvidia.com; Selvin Xavier
> <selvin.xavier@broadcom.com>; Andrew Gospodarek
> <andrew.gospodarek@broadcom.com>
> Subject: Re: [PATCH rdma-rc v1] RDMA/core: fix sg_to_page mapping for
> boundary condition
>
> On Mon, Aug 22, 2022 at 07:51:22PM +0530, Kashyap Desai wrote:
>
> > Now, we will enter the inner loop with dma_addr = page_addr =
> > 0xffffffffffffe000, and "end_dma_addr = dma_addr + dma_len" is ZERO.
> > eval 0xffffffffffffe000 + 8192
> > hexadecimal: 0
>
> This is called overflow.

Is this not DMAable for a 64bit DMA mask device? It is DMAable, so I am
not sure why you call it overflow.
My understanding is -
There is a roll-over (overflow) issue if we get DMA address =
0xffffffffffffe000 and length = 16K. This is the last page of the
possible DMA range.
I found a similar discussion
@https://marc.info/?l=git-commits-head&m=149437079023021&w=2

We need to handle size vs end_address gracefully.

>
> Anything doing maths on the sgl's is likely to become broken by this -
> which is why I think it is unnecessarily dangerous for the iommu code
> to generate dma addresses like this. It just shouldn't.
>
> > > It should not create mappings that are so dangerous. There is
> > > really no reason to use the last page of IOVA space that
> > > includes -1.

I agree that such a mapping is obviously dangerous, but it is not
illegal either.
The same sgl mapping works with direct-attached storage, so there will
be a logical question of why the IB stack is not handling this.

> >
> > That is correct, but the APIs which deal with mappings need to
> > handle this kind of request gracefully. Right?
>
> Ideally, but that is a game of whack-a-mole across the kernel, and
> redoing algorithms to avoid overflowing addition is tricky stuff.
>
> > I thought about a better approach that avoids regressions, and found
> > that driving the loop with sg_dma_len avoids such issues gracefully.
> > How about the original patch?
>
> It overflows too.
>
> You need to write the code so you never create the situation where
> A+B=0 - don't try to fix things up after that happens.

In the proposed patch, A + B = 0 is still possible, but it is treated
as the end of the loop.  So if it was supposed to set up 8 sgl entries,
A + B = 0 is detected only after the 8th entry is set up.
The current code detects A + B = 0 much too early, and that is what I
am trying to fix in this patch (this patch will not fix any roll-over
issue).
At least it will serve the purpose of creating correct sgl entries in
the low level driver's memory region through the set_page() callback.

I am fine with your call, since this is a corner case and the change
touches a core function.

We have two choices -
If ib_sg_to_pages() can't handle such a case, would you like to detect
such a mapping error and at least return -EINVAL?
OR
Just pass whatever mapping is received in the sgl to the low level
driver and don't really care about the overflow case (maybe this patch
can help)?


Kashyap

>
> Usually that means transforming the algorithm so that it works on a
> "last byte" so we never need to compute an address that is +1 to the
> last byte, which would be the overflown 0.
>
> > The above revert is part of my test. In my setup "iommu_dma_forcedac
> > = $2 = false".
>
> So you opt into this behavior, OK
>
> Jason
Jason Gunthorpe Sept. 6, 2022, 5:33 p.m. UTC | #7
On Thu, Sep 01, 2022 at 05:36:57PM +0530, Kashyap Desai wrote:
> > -----Original Message-----
> > From: Jason Gunthorpe [mailto:jgg@nvidia.com]
> > Sent: Friday, August 26, 2022 6:45 PM
> > To: Kashyap Desai <kashyap.desai@broadcom.com>
> > Cc: linux-rdma@vger.kernel.org; leonro@nvidia.com; Selvin Xavier
> > <selvin.xavier@broadcom.com>; Andrew Gospodarek
> > <andrew.gospodarek@broadcom.com>
> > Subject: Re: [PATCH rdma-rc v1] RDMA/core: fix sg_to_page mapping for
> > boundary condition
> >
> > On Mon, Aug 22, 2022 at 07:51:22PM +0530, Kashyap Desai wrote:
> >
> > > Now, we will enter the inner loop with dma_addr = page_addr =
> > > 0xffffffffffffe000, and "end_dma_addr = dma_addr + dma_len" is ZERO.
> > > eval 0xffffffffffffe000 + 8192
> > > hexadecimal: 0
> >
> > This is called overflow.
> 
> > Is this not DMAable for a 64bit DMA mask device? It is DMAable, so I
> > am not sure why you call it overflow.

Because the normal math overflowed.

Should it work? Yes. Is it a special edge case that might have bugs?
Certainly.

So the IOMMU layer shouldn't be stressing this edge case at all. It is
crazy, there is no reason to do this.

> I agree that such a mapping is obviously dangerous, but it is not
> illegal either.
> The same sgl mapping works with direct-attached storage, so there will
> be a logical question of why the IB stack is not handling this.

Oh that is probably very driver dependent.

> > You need to write the code so you never create the situation where
> > A+B=0 - don't try to fix things up after that happens.
> 
> In the proposed patch, A + B = 0 is still possible, but it is treated
> as the end of the loop.

Like I said, don't do that. The end of the loop is -1, which requires a
different loop logic design, so send a patch like that.

But I would still send a patch for iommu to not create this in the
first place.

Jason
Kashyap Desai Sept. 12, 2022, 11:02 a.m. UTC | #8
>
> On Thu, Sep 01, 2022 at 05:36:57PM +0530, Kashyap Desai wrote:
> > > -----Original Message-----
> > > From: Jason Gunthorpe [mailto:jgg@nvidia.com]
> > > Sent: Friday, August 26, 2022 6:45 PM
> > > To: Kashyap Desai <kashyap.desai@broadcom.com>
> > > Cc: linux-rdma@vger.kernel.org; leonro@nvidia.com; Selvin Xavier
> > > <selvin.xavier@broadcom.com>; Andrew Gospodarek
> > > <andrew.gospodarek@broadcom.com>
> > > Subject: Re: [PATCH rdma-rc v1] RDMA/core: fix sg_to_page mapping
> > > for boundary condition
> > >
> > > On Mon, Aug 22, 2022 at 07:51:22PM +0530, Kashyap Desai wrote:
> > >
> > > > Now, we will enter the inner loop with dma_addr = page_addr =
> > > > 0xffffffffffffe000, and "end_dma_addr = dma_addr + dma_len" is ZERO.
> > > > eval 0xffffffffffffe000 + 8192
> > > > hexadecimal: 0
> > >
> > > This is called overflow.
> >
> > Is this not DMAable for a 64bit DMA mask device? It is DMAable, so I
> > am not sure why you call it overflow.
>
> Because the normal math overflowed.
>
> Should it work? Yes. Is it a special edge case that might have bugs?
> Certainly.
>
> So the IOMMU layer shouldn't be stressing this edge case at all. It is
> crazy, there is no reason to do this.
>
> > I agree that such a mapping is obviously dangerous, but it is not
> > illegal either.
> > The same sgl mapping works with direct-attached storage, so there
> > will be a logical question of why the IB stack is not handling this.
>
> Oh that is probably very driver dependent.
>
> > > You need to write the code so you never create the situation where
> > > A+B=0 - don't try to fix things up after that happens.
> >
> > In the proposed patch, A + B = 0 is still possible, but it is
> > treated as the end of the loop.
>
> Like I said, don't do that. The end of the loop is -1, which requires
> a different loop logic design, so send a patch like that.
>
> But I would still send a patch for iommu to not create this in the
> first place.


Jason -
I noted your response.  The issue is possible on any RDMA h/w, and it
is not completely rejected; it just needs a different fix (preferably
handled in the iommu).
We can reopen the discussion if we see another instance from other
vendors.
A quick workaround is to use a 63 bit DMA mask in the rdma low level
driver, if someone really wants to work around this issue until a more
robust fix is committed in the upstream kernel (in either the ib stack
or the iommu stack).

We can close this thread.

Kashyap
>
> Jason
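
For reference, the 63 bit mask workaround mentioned above would look
something like this in a low level driver's probe path - a hypothetical
sketch, where pdev, rc and the error handling are assumptions:

	/* Hypothetical workaround: advertise a 63 bit DMA mask so the IOMMU
	 * never allocates IOVA from the page that ends at ~0ULL. */
	rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(63));
	if (rc)
		dev_err(&pdev->dev, "failed to set 63 bit DMA mask\n");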
Jason Gunthorpe Sept. 20, 2022, 7:14 p.m. UTC | #9
On Mon, Sep 12, 2022 at 04:32:09PM +0530, Kashyap Desai wrote:

> I noted your response.  The issue is possible on any RDMA h/w, and it
> is not completely rejected; it just needs a different fix (preferably
> handled in the iommu).
> We can reopen the discussion if we see another instance from other
> vendors.
> A quick workaround is to use a 63 bit DMA mask in the rdma low level
> driver, if someone really wants to work around this issue until a more
> robust fix is committed in the upstream kernel (in either the ib stack
> or the iommu stack).

That is not quite right. I said it should also be fixed in RDMA; it is
just that none of the patches you are proposing fix it correctly. You
must avoid the mathematical overflow by reworking the logic.

Jason

Patch

diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
index e54b3f1b730e..5e72c44bac3a 100644
--- a/drivers/infiniband/core/verbs.c
+++ b/drivers/infiniband/core/verbs.c
@@ -2676,15 +2676,19 @@  int ib_sg_to_pages(struct ib_mr *mr, struct scatterlist *sgl, int sg_nents,
 		u64 dma_addr = sg_dma_address(sg) + sg_offset;
 		u64 prev_addr = dma_addr;
 		unsigned int dma_len = sg_dma_len(sg) - sg_offset;
+		unsigned int curr_dma_len = 0;
+		unsigned int first_page_off = 0;
 		u64 end_dma_addr = dma_addr + dma_len;
 		u64 page_addr = dma_addr & page_mask;
 
+		if (i == 0)
+			first_page_off = dma_addr - page_addr;
 		/*
 		 * For the second and later elements, check whether either the
 		 * end of element i-1 or the start of element i is not aligned
 		 * on a page boundary.
 		 */
-		if (i && (last_page_off != 0 || page_addr != dma_addr)) {
+		else if (last_page_off != 0 || page_addr != dma_addr) {
 			/* Stop mapping if there is a gap. */
 			if (last_end_dma_addr != dma_addr)
 				break;
@@ -2708,8 +2712,10 @@  int ib_sg_to_pages(struct ib_mr *mr, struct scatterlist *sgl, int sg_nents,
 			}
 			prev_addr = page_addr;
 next_page:
+			curr_dma_len += mr->page_size - first_page_off;
 			page_addr += mr->page_size;
-		} while (page_addr < end_dma_addr);
+			first_page_off = 0;
+		} while (curr_dma_len < dma_len);
 
 		mr->length += dma_len;
 		last_end_dma_addr = end_dma_addr;
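
As a worked example of the patched loop, for the 8K SGE at
0xffffffffffffe000 discussed in the thread - our reading of the diff
above, written as a standalone sketch rather than the kernel function
itself:

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		uint64_t page_size = 4096, dma_len = 8192;
		uint64_t dma_addr = 0xffffffffffffe000ULL;
		uint64_t page_addr = dma_addr & ~(page_size - 1);
		uint64_t curr_dma_len = 0, first_page_off = dma_addr - page_addr;

		do {
			printf("set_page(0x%llx)\n", (unsigned long long)page_addr);
			curr_dma_len += page_size - first_page_off;
			page_addr += page_size; /* may wrap, but is no longer compared */
			first_page_off = 0;
		} while (curr_dma_len < dma_len); /* 4096 < 8192, then 8192 < 8192 stops */

		/* Prints set_page(0xffffffffffffe000) and set_page(0xfffffffffffff000):
		 * both pages are populated, unlike with the old end_dma_addr test. */
		return 0;
	}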