[for-rc] Revert "RDMA/efa: Use API to get contiguous memory blocks aligned to device supported page size"

Message ID 20200120141001.63544-1-galpress@amazon.com (mailing list archive)
State Superseded

Commit Message

Gal Pressman Jan. 20, 2020, 2:10 p.m. UTC
The cited commit leads to register MR failures and random hangs when
running different MPI applications. The exact root cause for the issue
is still not clear; this revert brings us back to a stable state.

This reverts commit 40ddb3f020834f9afb7aab31385994811f4db259.

Fixes: 40ddb3f02083 ("RDMA/efa: Use API to get contiguous memory blocks aligned to device supported page size")
Cc: Shiraz Saleem <shiraz.saleem@intel.com>
Cc: stable@vger.kernel.org # 5.3
Signed-off-by: Gal Pressman <galpress@amazon.com>
---
 drivers/infiniband/hw/efa/efa_verbs.c | 88 ++++++++++++++++++++-------
 1 file changed, 67 insertions(+), 21 deletions(-)

Comments

Gal Pressman Jan. 21, 2020, 9:07 a.m. UTC | #1
On 20/01/2020 16:10, Gal Pressman wrote:
> The cited commit leads to register MR failures and random hangs when
> running different MPI applications. The exact root cause for the issue
> is still not clear, this revert brings us back to a stable state.
> 
> This reverts commit 40ddb3f020834f9afb7aab31385994811f4db259.
> 
> Fixes: 40ddb3f02083 ("RDMA/efa: Use API to get contiguous memory blocks aligned to device supported page size")
> Cc: Shiraz Saleem <shiraz.saleem@intel.com>
> Cc: stable@vger.kernel.org # 5.3
> Signed-off-by: Gal Pressman <galpress@amazon.com>

Shiraz, I think I found the root cause here.
I'm noticing a register MR of size 32k, which is constructed from two sges, the
first sge of size 12k and the second of 20k.

ib_umem_find_best_pgsz returns page shift 13 in the following way:

0x103dcb2000      0x103dcb5000       0x103dd5d000           0x103dd62000
          +----------+                      +------------------+
          |          |                      |                  |
          |  12k     |                      |     20k          |
          +----------+                      +------------------+

          +------+------+                 +------+------+------+
          |      |      |                 |      |      |      |
          | 8k   | 8k   |                 | 8k   | 8k   | 8k   |
          +------+------+                 +------+------+------+
0x103dcb2000       0x103dcb6000   0x103dd5c000              0x103dd62000


The top row is the original umem sgl, and the bottom is the sgl constructed by
rdma_for_each_block with page size of 8k.

Is this the expected output? The 8k pages cover addresses which aren't part of
the MR. This breaks some of the assumptions in the driver (for example, the way
we calculate the number of pages in the MR) and I'm not sure our device can
handle such sgl.
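
As a concrete illustration of the assumption that breaks (a standalone sketch, not
driver code; the start address below reuses the first DMA address as a stand-in for
the unknown iova, so its 8k alignment is an assumption): with page shift 13 the
current efa_reg_mr() sizes the page list from the MR length, while
rdma_for_each_block() walks the SGL above in 8k blocks.

#include <stdio.h>

int main(void)
{
	unsigned long long start = 0x103dcb2000ULL;	/* assumed 8k-aligned */
	unsigned long long length = 0x8000;		/* 32k MR */
	unsigned long long pg_sz = 0x2000;		/* 8k, from page shift 13 */

	/* params.page_num as computed by the current code:
	 * DIV_ROUND_UP(length + (start & (pg_sz - 1)), pg_sz)
	 */
	unsigned long long page_num =
		(length + (start & (pg_sz - 1)) + pg_sz - 1) / pg_sz;

	/* blocks rdma_for_each_block() emits for the SGL above:
	 * the 12k SGE rounds out to two 8k blocks, the 20k SGE to three
	 */
	unsigned long long blocks = 2 + 3;

	printf("page_num=%llu blocks=%llu\n", page_num, blocks);	/* 4 vs 5 */
	return 0;
}

With those assumptions the driver sizes the page list for 4 entries while the block
iterator produces 5 addresses, and the 8k blocks also span address space that is not
part of the MR, as shown in the diagram above.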
Leon Romanovsky Jan. 21, 2020, 4:24 p.m. UTC | #2
On Tue, Jan 21, 2020 at 11:07:21AM +0200, Gal Pressman wrote:
> On 20/01/2020 16:10, Gal Pressman wrote:
> > The cited commit leads to register MR failures and random hangs when
> > running different MPI applications. The exact root cause for the issue
> > is still not clear, this revert brings us back to a stable state.
> >
> > This reverts commit 40ddb3f020834f9afb7aab31385994811f4db259.
> >
> > Fixes: 40ddb3f02083 ("RDMA/efa: Use API to get contiguous memory blocks aligned to device supported page size")
> > Cc: Shiraz Saleem <shiraz.saleem@intel.com>
> > Cc: stable@vger.kernel.org # 5.3
> > Signed-off-by: Gal Pressman <galpress@amazon.com>
>
> Shiraz, I think I found the root cause here.
> I'm noticing a register MR of size 32k, which is constructed from two sges, the
> first sge of size 12k and the second of 20k.
>
> ib_umem_find_best_pgsz returns page shift 13 in the following way:
>
> 0x103dcb2000      0x103dcb5000       0x103dd5d000           0x103dd62000
>           +----------+                      +------------------+
>           |          |                      |                  |
>           |  12k     |                      |     20k          |
>           +----------+                      +------------------+
>
>           +------+------+                 +------+------+------+
>           |      |      |                 |      |      |      |
>           | 8k   | 8k   |                 | 8k   | 8k   | 8k   |
>           +------+------+                 +------+------+------+
> 0x103dcb2000       0x103dcb6000   0x103dd5c000              0x103dd62000
>
>
> The top row is the original umem sgl, and the bottom is the sgl constructed by
> rdma_for_each_block with page size of 8k.
>
> Is this the expected output? The 8k pages cover addresses which aren't part of
> the MR. This breaks some of the assumptions in the driver (for example, the way
> we calculate the number of pages in the MR) and I'm not sure our device can
> handle such sgl.

Artemy wrote this fix that can help you.

commit 60c9fe2d18b657df950a5f4d5a7955694bd08e63
Author: Artemy Kovalyov <artemyko@mellanox.com>
Date:   Sun Dec 15 12:43:13 2019 +0200

    RDMA/umem: Fix ib_umem_find_best_pgsz()

    Except for the last entry, the ending iova alignment sets the maximum
    possible page size as the low bits of the iova must be zero when
    starting the next chunk.

    Fixes: 4a35339958f1 ("RDMA/umem: Add API to find best driver supported page size in an MR")
    Signed-off-by: Artemy Kovalyov <artemyko@mellanox.com>
    Signed-off-by: Leon Romanovsky <leonro@mellanox.com>

diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index c3769a5f096d..06b6125b5ae1 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -166,10 +166,13 @@ unsigned long ib_umem_find_best_pgsz(struct ib_umem *umem,
                 * for any address.
                 */
                mask |= (sg_dma_address(sg) + pgoff) ^ va;
-               if (i && i != (umem->nmap - 1))
-                       /* restrict by length as well for interior SGEs */
-                       mask |= sg_dma_len(sg);
                va += sg_dma_len(sg) - pgoff;
+               /* Except for the last entry, the ending iova alignment sets
+                * the maximum possible page size as the low bits of the iova
+                * must be zero when starting the next chunk.
+                */
+               if (i != (umem->nmap - 1))
+                       mask |= va;
                pgoff = 0;
        }
        best_pg_bit = rdma_find_pg_bit(mask, pgsz_bitmap);
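
For reference, a standalone sketch (not the kernel function itself) of how the fixed
mask accumulation plays out on the SGL reported above, assuming pgoff = 0 and an iova
equal to the first DMA address; the 0x8000 seed is roundup_pow_of_two() of the 32k MR
length:

#include <stdio.h>

int main(void)
{
	unsigned long long sg_addr[] = { 0x103dcb2000ULL, 0x103dd5d000ULL };
	unsigned long long sg_len[]  = { 0x3000, 0x5000 };	/* 12k, 20k */
	unsigned long long va = 0x103dcb2000ULL;		/* assumed iova */
	unsigned long long mask = 0x8000;			/* roundup_pow_of_two(32k) */
	int nmap = 2;

	for (int i = 0; i < nmap; i++) {
		mask |= sg_addr[i] ^ va;
		va += sg_len[i];
		/* fixed logic: for all but the last SGE, the ending iova
		 * alignment caps the page size
		 */
		if (i != nmap - 1)
			mask |= va;
	}

	/* the lowest set bit of the mask is bit 12, so rdma_find_pg_bit()
	 * can no longer return 8k for this layout; 4k is the best that
	 * survives
	 */
	printf("mask=%#llx\n", mask);
	return 0;
}

The first SGE ends at iova 0x103dcb5000, which is only 4k aligned, and ORing that
ending iova into the mask is what prevents the 8k choice.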
Saleem, Shiraz Jan. 21, 2020, 4:39 p.m. UTC | #3
> Subject: Re: [PATCH for-rc] Revert "RDMA/efa: Use API to get contiguous
> memory blocks aligned to device supported page size"
> 
> On 20/01/2020 16:10, Gal Pressman wrote:
> > The cited commit leads to register MR failures and random hangs when
> > running different MPI applications. The exact root cause for the issue
> > is still not clear, this revert brings us back to a stable state.
> >
> > This reverts commit 40ddb3f020834f9afb7aab31385994811f4db259.
> >
> > Fixes: 40ddb3f02083 ("RDMA/efa: Use API to get contiguous memory
> > blocks aligned to device supported page size")
> > Cc: Shiraz Saleem <shiraz.saleem@intel.com>
> > Cc: stable@vger.kernel.org # 5.3
> > Signed-off-by: Gal Pressman <galpress@amazon.com>
> 
> Shiraz, I think I found the root cause here.
> I'm noticing a register MR of size 32k, which is constructed from two sges, the first
> sge of size 12k and the second of 20k.
> 
> ib_umem_find_best_pgsz returns page shift 13 in the following way:
> 
> 0x103dcb2000      0x103dcb5000       0x103dd5d000           0x103dd62000
>           +----------+                      +------------------+
>           |          |                      |                  |
>           |  12k     |                      |     20k          |
>           +----------+                      +------------------+
> 
>           +------+------+                 +------+------+------+
>           |      |      |                 |      |      |      |
>           | 8k   | 8k   |                 | 8k   | 8k   | 8k   |
>           +------+------+                 +------+------+------+
> 0x103dcb2000       0x103dcb6000   0x103dd5c000              0x103dd62000
> 
> 

Gal - would be useful to know the IOVA (virt) and umem->addr also for this MR in ib_umem_find_best_pgsz
Gal Pressman Jan. 22, 2020, 7:57 a.m. UTC | #4
On 21/01/2020 18:24, Leon Romanovsky wrote:
> On Tue, Jan 21, 2020 at 11:07:21AM +0200, Gal Pressman wrote:
>> On 20/01/2020 16:10, Gal Pressman wrote:
>>> The cited commit leads to register MR failures and random hangs when
>>> running different MPI applications. The exact root cause for the issue
>>> is still not clear, this revert brings us back to a stable state.
>>>
>>> This reverts commit 40ddb3f020834f9afb7aab31385994811f4db259.
>>>
>>> Fixes: 40ddb3f02083 ("RDMA/efa: Use API to get contiguous memory blocks aligned to device supported page size")
>>> Cc: Shiraz Saleem <shiraz.saleem@intel.com>
>>> Cc: stable@vger.kernel.org # 5.3
>>> Signed-off-by: Gal Pressman <galpress@amazon.com>
>>
>> Shiraz, I think I found the root cause here.
>> I'm noticing a register MR of size 32k, which is constructed from two sges, the
>> first sge of size 12k and the second of 20k.
>>
>> ib_umem_find_best_pgsz returns page shift 13 in the following way:
>>
>> 0x103dcb2000      0x103dcb5000       0x103dd5d000           0x103dd62000
>>           +----------+                      +------------------+
>>           |          |                      |                  |
>>           |  12k     |                      |     20k          |
>>           +----------+                      +------------------+
>>
>>           +------+------+                 +------+------+------+
>>           |      |      |                 |      |      |      |
>>           | 8k   | 8k   |                 | 8k   | 8k   | 8k   |
>>           +------+------+                 +------+------+------+
>> 0x103dcb2000       0x103dcb6000   0x103dd5c000              0x103dd62000
>>
>>
>> The top row is the original umem sgl, and the bottom is the sgl constructed by
>> rdma_for_each_block with page size of 8k.
>>
>> Is this the expected output? The 8k pages cover addresses which aren't part of
>> the MR. This breaks some of the assumptions in the driver (for example, the way
>> we calculate the number of pages in the MR) and I'm not sure our device can
>> handle such sgl.
> 
> Artemy wrote this fix that can help you.
> 
> commit 60c9fe2d18b657df950a5f4d5a7955694bd08e63
> Author: Artemy Kovalyov <artemyko@mellanox.com>
> Date:   Sun Dec 15 12:43:13 2019 +0200
> 
>     RDMA/umem: Fix ib_umem_find_best_pgsz()
> 
>     Except for the last entry, the ending iova alignment sets the maximum
>     possible page size as the low bits of the iova must be zero when
>     starting the next chunk.
> 
>     Fixes: 4a35339958f1 ("RDMA/umem: Add API to find best driver supported page size in an MR")
>     Signed-off-by: Artemy Kovalyov <artemyko@mellanox.com>
>     Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
> 
> diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
> index c3769a5f096d..06b6125b5ae1 100644
> --- a/drivers/infiniband/core/umem.c
> +++ b/drivers/infiniband/core/umem.c
> @@ -166,10 +166,13 @@ unsigned long ib_umem_find_best_pgsz(struct ib_umem *umem,
>                  * for any address.
>                  */
>                 mask |= (sg_dma_address(sg) + pgoff) ^ va;
> -               if (i && i != (umem->nmap - 1))
> -                       /* restrict by length as well for interior SGEs */
> -                       mask |= sg_dma_len(sg);
>                 va += sg_dma_len(sg) - pgoff;
> +               /* Except for the last entry, the ending iova alignment sets
> +                * the maximum possible page size as the low bits of the iova
> +                * must be zero when starting the next chunk.
> +                */
> +               if (i != (umem->nmap - 1))
> +                       mask |= va;
>                 pgoff = 0;
>         }
>         best_pg_bit = rdma_find_pg_bit(mask, pgsz_bitmap);

Thanks Leon, I'll test this and let you know if it fixes the issue.
When are you planning to submit this?
Gal Pressman Jan. 22, 2020, 7:58 a.m. UTC | #5
On 21/01/2020 18:39, Saleem, Shiraz wrote:
>> Subject: Re: [PATCH for-rc] Revert "RDMA/efa: Use API to get contiguous
>> memory blocks aligned to device supported page size"
>>
>> On 20/01/2020 16:10, Gal Pressman wrote:
>>> The cited commit leads to register MR failures and random hangs when
>>> running different MPI applications. The exact root cause for the issue
>>> is still not clear, this revert brings us back to a stable state.
>>>
>>> This reverts commit 40ddb3f020834f9afb7aab31385994811f4db259.
>>>
>>> Fixes: 40ddb3f02083 ("RDMA/efa: Use API to get contiguous memory
>>> blocks aligned to device supported page size")
>>> Cc: Shiraz Saleem <shiraz.saleem@intel.com>
>>> Cc: stable@vger.kernel.org # 5.3
>>> Signed-off-by: Gal Pressman <galpress@amazon.com>
>>
>> Shiraz, I think I found the root cause here.
>> I'm noticing a register MR of size 32k, which is constructed from two sges, the first
>> sge of size 12k and the second of 20k.
>>
>> ib_umem_find_best_pgsz returns page shift 13 in the following way:
>>
>> 0x103dcb2000      0x103dcb5000       0x103dd5d000           0x103dd62000
>>           +----------+                      +------------------+
>>           |          |                      |                  |
>>           |  12k     |                      |     20k          |
>>           +----------+                      +------------------+
>>
>>           +------+------+                 +------+------+------+
>>           |      |      |                 |      |      |      |
>>           | 8k   | 8k   |                 | 8k   | 8k   | 8k   |
>>           +------+------+                 +------+------+------+
>> 0x103dcb2000       0x103dcb6000   0x103dd5c000              0x103dd62000
>>
>>
> 
> Gal - would be useful to know the IOVA (virt) and umem->addr also for this MR in ib_umem_find_best_pgsz

I'll update my debug prints to include the iova and rerun the tests.
Leon Romanovsky Jan. 23, 2020, 2:24 p.m. UTC | #6
On Wed, Jan 22, 2020 at 09:57:07AM +0200, Gal Pressman wrote:
> On 21/01/2020 18:24, Leon Romanovsky wrote:
> > On Tue, Jan 21, 2020 at 11:07:21AM +0200, Gal Pressman wrote:
> >> On 20/01/2020 16:10, Gal Pressman wrote:
> >>> The cited commit leads to register MR failures and random hangs when
> >>> running different MPI applications. The exact root cause for the issue
> >>> is still not clear, this revert brings us back to a stable state.
> >>>
> >>> This reverts commit 40ddb3f020834f9afb7aab31385994811f4db259.
> >>>
> >>> Fixes: 40ddb3f02083 ("RDMA/efa: Use API to get contiguous memory blocks aligned to device supported page size")
> >>> Cc: Shiraz Saleem <shiraz.saleem@intel.com>
> >>> Cc: stable@vger.kernel.org # 5.3
> >>> Signed-off-by: Gal Pressman <galpress@amazon.com>
> >>
> >> Shiraz, I think I found the root cause here.
> >> I'm noticing a register MR of size 32k, which is constructed from two sges, the
> >> first sge of size 12k and the second of 20k.
> >>
> >> ib_umem_find_best_pgsz returns page shift 13 in the following way:
> >>
> >> 0x103dcb2000      0x103dcb5000       0x103dd5d000           0x103dd62000
> >>           +----------+                      +------------------+
> >>           |          |                      |                  |
> >>           |  12k     |                      |     20k          |
> >>           +----------+                      +------------------+
> >>
> >>           +------+------+                 +------+------+------+
> >>           |      |      |                 |      |      |      |
> >>           | 8k   | 8k   |                 | 8k   | 8k   | 8k   |
> >>           +------+------+                 +------+------+------+
> >> 0x103dcb2000       0x103dcb6000   0x103dd5c000              0x103dd62000
> >>
> >>
> >> The top row is the original umem sgl, and the bottom is the sgl constructed by
> >> rdma_for_each_block with page size of 8k.
> >>
> >> Is this the expected output? The 8k pages cover addresses which aren't part of
> >> the MR. This breaks some of the assumptions in the driver (for example, the way
> >> we calculate the number of pages in the MR) and I'm not sure our device can
> >> handle such sgl.
> >
> > Artemy wrote this fix that can help you.
> >
> > commit 60c9fe2d18b657df950a5f4d5a7955694bd08e63
> > Author: Artemy Kovalyov <artemyko@mellanox.com>
> > Date:   Sun Dec 15 12:43:13 2019 +0200
> >
> >     RDMA/umem: Fix ib_umem_find_best_pgsz()
> >
> >     Except for the last entry, the ending iova alignment sets the maximum
> >     possible page size as the low bits of the iova must be zero when
> >     starting the next chunk.
> >
> >     Fixes: 4a35339958f1 ("RDMA/umem: Add API to find best driver supported page size in an MR")
> >     Signed-off-by: Artemy Kovalyov <artemyko@mellanox.com>
> >     Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
> >
> > diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
> > index c3769a5f096d..06b6125b5ae1 100644
> > --- a/drivers/infiniband/core/umem.c
> > +++ b/drivers/infiniband/core/umem.c
> > @@ -166,10 +166,13 @@ unsigned long ib_umem_find_best_pgsz(struct ib_umem *umem,
> >                  * for any address.
> >                  */
> >                 mask |= (sg_dma_address(sg) + pgoff) ^ va;
> > -               if (i && i != (umem->nmap - 1))
> > -                       /* restrict by length as well for interior SGEs */
> > -                       mask |= sg_dma_len(sg);
> >                 va += sg_dma_len(sg) - pgoff;
> > +               /* Except for the last entry, the ending iova alignment sets
> > +                * the maximum possible page size as the low bits of the iova
> > +                * must be zero when starting the next chunk.
> > +                */
> > +               if (i != (umem->nmap - 1))
> > +                       mask |= va;
> >                 pgoff = 0;
> >         }
> >         best_pg_bit = rdma_find_pg_bit(mask, pgsz_bitmap);
>
> Thanks Leon, I'll test this and let you know if it fixes the issue.
> When are you planning to submit this?

If it fixes your issues, I will be happy to do it.

Thanks
Gal Pressman Jan. 23, 2020, 2:29 p.m. UTC | #7
On 23/01/2020 16:24, Leon Romanovsky wrote:
> On Wed, Jan 22, 2020 at 09:57:07AM +0200, Gal Pressman wrote:
>> On 21/01/2020 18:24, Leon Romanovsky wrote:
>>> On Tue, Jan 21, 2020 at 11:07:21AM +0200, Gal Pressman wrote:
>>>> On 20/01/2020 16:10, Gal Pressman wrote:
>>>>> The cited commit leads to register MR failures and random hangs when
>>>>> running different MPI applications. The exact root cause for the issue
>>>>> is still not clear, this revert brings us back to a stable state.
>>>>>
>>>>> This reverts commit 40ddb3f020834f9afb7aab31385994811f4db259.
>>>>>
>>>>> Fixes: 40ddb3f02083 ("RDMA/efa: Use API to get contiguous memory blocks aligned to device supported page size")
>>>>> Cc: Shiraz Saleem <shiraz.saleem@intel.com>
>>>>> Cc: stable@vger.kernel.org # 5.3
>>>>> Signed-off-by: Gal Pressman <galpress@amazon.com>
>>>>
>>>> Shiraz, I think I found the root cause here.
>>>> I'm noticing a register MR of size 32k, which is constructed from two sges, the
>>>> first sge of size 12k and the second of 20k.
>>>>
>>>> ib_umem_find_best_pgsz returns page shift 13 in the following way:
>>>>
>>>> 0x103dcb2000      0x103dcb5000       0x103dd5d000           0x103dd62000
>>>>           +----------+                      +------------------+
>>>>           |          |                      |                  |
>>>>           |  12k     |                      |     20k          |
>>>>           +----------+                      +------------------+
>>>>
>>>>           +------+------+                 +------+------+------+
>>>>           |      |      |                 |      |      |      |
>>>>           | 8k   | 8k   |                 | 8k   | 8k   | 8k   |
>>>>           +------+------+                 +------+------+------+
>>>> 0x103dcb2000       0x103dcb6000   0x103dd5c000              0x103dd62000
>>>>
>>>>
>>>> The top row is the original umem sgl, and the bottom is the sgl constructed by
>>>> rdma_for_each_block with page size of 8k.
>>>>
>>>> Is this the expected output? The 8k pages cover addresses which aren't part of
>>>> the MR. This breaks some of the assumptions in the driver (for example, the way
>>>> we calculate the number of pages in the MR) and I'm not sure our device can
>>>> handle such sgl.
>>>
>>> Artemy wrote this fix that can help you.
>>>
>>> commit 60c9fe2d18b657df950a5f4d5a7955694bd08e63
>>> Author: Artemy Kovalyov <artemyko@mellanox.com>
>>> Date:   Sun Dec 15 12:43:13 2019 +0200
>>>
>>>     RDMA/umem: Fix ib_umem_find_best_pgsz()
>>>
>>>     Except for the last entry, the ending iova alignment sets the maximum
>>>     possible page size as the low bits of the iova must be zero when
>>>     starting the next chunk.
>>>
>>>     Fixes: 4a35339958f1 ("RDMA/umem: Add API to find best driver supported page size in an MR")
>>>     Signed-off-by: Artemy Kovalyov <artemyko@mellanox.com>
>>>     Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
>>>
>>> diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
>>> index c3769a5f096d..06b6125b5ae1 100644
>>> --- a/drivers/infiniband/core/umem.c
>>> +++ b/drivers/infiniband/core/umem.c
>>> @@ -166,10 +166,13 @@ unsigned long ib_umem_find_best_pgsz(struct ib_umem *umem,
>>>                  * for any address.
>>>                  */
>>>                 mask |= (sg_dma_address(sg) + pgoff) ^ va;
>>> -               if (i && i != (umem->nmap - 1))
>>> -                       /* restrict by length as well for interior SGEs */
>>> -                       mask |= sg_dma_len(sg);
>>>                 va += sg_dma_len(sg) - pgoff;
>>> +               /* Except for the last entry, the ending iova alignment sets
>>> +                * the maximum possible page size as the low bits of the iova
>>> +                * must be zero when starting the next chunk.
>>> +                */
>>> +               if (i != (umem->nmap - 1))
>>> +                       mask |= va;
>>>                 pgoff = 0;
>>>         }
>>>         best_pg_bit = rdma_find_pg_bit(mask, pgsz_bitmap);
>>
>> Thanks Leon, I'll test this and let you know if it fixes the issue.
>> When are you planning to submit this?
> 
> If it fixes your issues, I will be happy to do it.

So far it looks good to me, I'll let it run over the weekend to be on the safe side.

Shiraz, does this fix make sense to you?
Saleem, Shiraz Jan. 24, 2020, 12:40 a.m. UTC | #8
> Subject: Re: [PATCH for-rc] Revert "RDMA/efa: Use API to get contiguous
> memory blocks aligned to device supported page size"
> 
> On 23/01/2020 16:24, Leon Romanovsky wrote:
> > On Wed, Jan 22, 2020 at 09:57:07AM +0200, Gal Pressman wrote:
> >> On 21/01/2020 18:24, Leon Romanovsky wrote:
> >>> On Tue, Jan 21, 2020 at 11:07:21AM +0200, Gal Pressman wrote:
> >>>> On 20/01/2020 16:10, Gal Pressman wrote:
> >>>>> The cited commit leads to register MR failures and random hangs
> >>>>> when running different MPI applications. The exact root cause for
> >>>>> the issue is still not clear, this revert brings us back to a stable state.
> >>>>>
> >>>>> This reverts commit 40ddb3f020834f9afb7aab31385994811f4db259.
> >>>>>
> >>>>> Fixes: 40ddb3f02083 ("RDMA/efa: Use API to get contiguous memory
> >>>>> blocks aligned to device supported page size")
> >>>>> Cc: Shiraz Saleem <shiraz.saleem@intel.com>
> >>>>> Cc: stable@vger.kernel.org # 5.3
> >>>>> Signed-off-by: Gal Pressman <galpress@amazon.com>
> >>>>
> >>>> Shiraz, I think I found the root cause here.
> >>>> I'm noticing a register MR of size 32k, which is constructed from
> >>>> two sges, the first sge of size 12k and the second of 20k.
> >>>>
> >>>> ib_umem_find_best_pgsz returns page shift 13 in the following way:
> >>>>
> >>>> 0x103dcb2000      0x103dcb5000       0x103dd5d000           0x103dd62000
> >>>>           +----------+                      +------------------+
> >>>>           |          |                      |                  |
> >>>>           |  12k     |                      |     20k          |
> >>>>           +----------+                      +------------------+
> >>>>
> >>>>           +------+------+                 +------+------+------+
> >>>>           |      |      |                 |      |      |      |
> >>>>           | 8k   | 8k   |                 | 8k   | 8k   | 8k   |
> >>>>           +------+------+                 +------+------+------+
> >>>> 0x103dcb2000       0x103dcb6000   0x103dd5c000              0x103dd62000
> >>>>
> >>>>
> >>>> The top row is the original umem sgl, and the bottom is the sgl
> >>>> constructed by rdma_for_each_block with page size of 8k.
> >>>>
> >>>> Is this the expected output? The 8k pages cover addresses which
> >>>> aren't part of the MR. This breaks some of the assumptions in the
> >>>> driver (for example, the way we calculate the number of pages in
> >>>> the MR) and I'm not sure our device can handle such sgl.
> >>>
> >>> Artemy wrote this fix that can help you.
> >>>
> >>> commit 60c9fe2d18b657df950a5f4d5a7955694bd08e63
> >>> Author: Artemy Kovalyov <artemyko@mellanox.com>
> >>> Date:   Sun Dec 15 12:43:13 2019 +0200
> >>>
> >>>     RDMA/umem: Fix ib_umem_find_best_pgsz()
> >>>
> >>>     Except for the last entry, the ending iova alignment sets the maximum
> >>>     possible page size as the low bits of the iova must be zero when
> >>>     starting the next chunk.
> >>>
> >>>     Fixes: 4a35339958f1 ("RDMA/umem: Add API to find best driver supported
> page size in an MR")
> >>>     Signed-off-by: Artemy Kovalyov <artemyko@mellanox.com>
> >>>     Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
> >>>
> >>> diff --git a/drivers/infiniband/core/umem.c
> >>> b/drivers/infiniband/core/umem.c index c3769a5f096d..06b6125b5ae1
> >>> 100644
> >>> --- a/drivers/infiniband/core/umem.c
> >>> +++ b/drivers/infiniband/core/umem.c
> >>> @@ -166,10 +166,13 @@ unsigned long ib_umem_find_best_pgsz(struct
> ib_umem *umem,
> >>>                  * for any address.
> >>>                  */
> >>>                 mask |= (sg_dma_address(sg) + pgoff) ^ va;
> >>> -               if (i && i != (umem->nmap - 1))
> >>> -                       /* restrict by length as well for interior SGEs */
> >>> -                       mask |= sg_dma_len(sg);
> >>>                 va += sg_dma_len(sg) - pgoff;
> >>> +               /* Except for the last entry, the ending iova alignment sets
> >>> +                * the maximum possible page size as the low bits of the iova
> >>> +                * must be zero when starting the next chunk.
> >>> +                */
> >>> +               if (i != (umem->nmap - 1))
> >>> +                       mask |= va;
> >>>                 pgoff = 0;
> >>>         }
> >>>         best_pg_bit = rdma_find_pg_bit(mask, pgsz_bitmap);
> >>
> >> Thanks Leon, I'll test this and let you know if it fixes the issue.
> >> When are you planning to submit this?
> >
> > If it fixes your issues, I will be happy to do it.
> 
> So far it looks good to me, I'll let it run over the weekend to be on the safe side.
> 
> Shiraz, does this fix make sense to you?

I 'think' the current algorithm is off because it is not mandating that the page
size be aligned to the end of the first sge.

So assume umem->addr = iova = 0x89002000 and pgoff = 0.
mask = 0x8000 to start with, before we enter the for_each_sg loop.

In your example of the PA layout:

for_each_sg(umem->sg_head.sgl, sg, umem->nmap, i) {
		/* Walk SGL and reduce max page size if VA/PA bits differ
		 * for any address.
		 */
		mask |= (sg_dma_address(sg) + pgoff) ^ va;
			
			/* In first iteration, 
			 * mask = 0x8000 | (0x103dcb2000 + 0) ^ 0x89002000
			 * mask = 0x10b4cb0000
			*/


		if (i && i != (umem->nmap - 1))
			/* restrict by length as well for interior SGEs */
			mask |= sg_dma_len(sg);
		va += sg_dma_len(sg) - pgoff;

			/* In first iteration, va is updated to,
		 	* va = 0x89002000 + 0x3000 = 0x89005000, aligned to 4K
			*/

		pgoff = 0;
	}


But in the second iteration,
0x10b4cb0000 | ((0x103dd5d000 + 0) ^ 0x89005000) sets the mask to 0x10b4df8000,
which leaves the max page size too relaxed (8k still passes).

It would be good to get the debug data to back this or prove it wrong.
But if this is indeed what's happening, then ORing in the sgl->length for the
first sge to restrict the page size might cut it. So something like,

diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index 7a3b995..1aceb1b 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -166,8 +166,8 @@ unsigned long ib_umem_find_best_pgsz(struct ib_umem *umem,
                 * for any address.
                 */
                mask |= (sg_dma_address(sg) + pgoff) ^ va;
-               if (i && i != (umem->nmap - 1))
-                       /* restrict by length as well for interior SGEs */
+               if (i != (umem->nmap - 1))
+                       /* restrict by length for all but last SGE */
                        mask |= sg_dma_len(sg);
                va += sg_dma_len(sg) - pgoff;
                pgoff = 0;


Artemy's fix of using the ending iova alignment to restrict the page size
appears to achieve the same:
> >>>                 va += sg_dma_len(sg) - pgoff;
> >>> +               /* Except for the last entry, the ending iova alignment sets
> >>> +                * the maximum possible page size as the low bits of the iova
> >>> +                * must be zero when starting the next chunk.
> >>> +                */
> >>> +               if (i != (umem->nmap - 1))
> >>> +                       mask |= va;

In our example, at the end of the first iteration, mask = 0x10b4cb0000 | 0x89005000 = 0x10bdcb5000,
so you would restrict to 4K.
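
A quick standalone check of the two final masks above (the lowest set bit of the
mask bounds the page shift rdma_find_pg_bit() may return; the exact pick also
depends on the device's pgsz_bitmap):

#include <stdio.h>

int main(void)
{
	unsigned long long mask_current = 0x10b4df8000ULL;	/* existing code */
	unsigned long long mask_fixed   = 0x10bdcb5000ULL;	/* with mask |= va */

	/* the lowest set bit caps the page shift */
	printf("current: shift capped at %d\n", __builtin_ctzll(mask_current)); /* 15: 8k still allowed */
	printf("fixed:   shift capped at %d\n", __builtin_ctzll(mask_fixed));   /* 12: forced down to 4k */
	return 0;
}

That matches the shift-13 result Gal reported with the current code and the 4K
restriction with the fix.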
Jason Gunthorpe Jan. 24, 2020, 2:52 a.m. UTC | #9
On Fri, Jan 24, 2020 at 12:40:18AM +0000, Saleem, Shiraz wrote:
> It would be good to get the debug data to back this or prove it wrong.
> But if this is indeed what's happening, then ORing in the sgl->length for the
> first sge to restrict the page size might cut it. So something like,

or'ing in the sgl length is a nonsense thing to do, the length has
nothing to do with the restriction, which is entirely based on IOVA
bits which can't be passed through.

Jason
Gal Pressman Jan. 28, 2020, 12:32 p.m. UTC | #10
On 24/01/2020 4:52, Jason Gunthorpe wrote:
> On Fri, Jan 24, 2020 at 12:40:18AM +0000, Saleem, Shiraz wrote:
>> It would be good to get the debug data to back this or prove it wrong.
>> But if this is indeed what's happening, then ORing in the sgl->length for the
>> first sge to restrict the page size might cut it. So something like,
> 
> or'ing in the sgl length is a nonsense thing to do, the length has
> nothing to do with the restriction, which is entirely based on IOVA
> bits which can't be passed through.

The weekend runs passed with Leon's proposed patch.
Leon, can you please submit it so I can drop this revert?

Thanks
Leon Romanovsky Jan. 28, 2020, 1:47 p.m. UTC | #11
On Tue, Jan 28, 2020 at 02:32:19PM +0200, Gal Pressman wrote:
> On 24/01/2020 4:52, Jason Gunthorpe wrote:
> > On Fri, Jan 24, 2020 at 12:40:18AM +0000, Saleem, Shiraz wrote:
> >> It would be good to get the debug data to back this or prove it wrong.
> >> But if this is indeed what's happening, then ORing in the sgl->length for the
> >> first sge to restrict the page size might cut it. So something like,
> >
> > or'ing in the sgl length is a nonsense thing to do, the length has
> > nothing to do with the restriction, which is entirely based on IOVA
> > bits which can't be passed through.
>
> The weekend runs passed with Leon's proposed patch.
> Leon, can you please submit it so I can drop this revert?

I'll do it now, feel free to reply with your tags.

Thanks

>
> Thanks

Patch

diff --git a/drivers/infiniband/hw/efa/efa_verbs.c b/drivers/infiniband/hw/efa/efa_verbs.c
index 50c22575aed6..567797a919e8 100644
--- a/drivers/infiniband/hw/efa/efa_verbs.c
+++ b/drivers/infiniband/hw/efa/efa_verbs.c
@@ -1005,15 +1005,21 @@  static int umem_to_page_list(struct efa_dev *dev,
 			     u8 hp_shift)
 {
 	u32 pages_in_hp = BIT(hp_shift - PAGE_SHIFT);
-	struct ib_block_iter biter;
+	struct sg_dma_page_iter sg_iter;
+	unsigned int page_idx = 0;
 	unsigned int hp_idx = 0;
 
 	ibdev_dbg(&dev->ibdev, "hp_cnt[%u], pages_in_hp[%u]\n",
 		  hp_cnt, pages_in_hp);
 
-	rdma_for_each_block(umem->sg_head.sgl, &biter, umem->nmap,
-			    BIT(hp_shift))
-		page_list[hp_idx++] = rdma_block_iter_dma_address(&biter);
+	for_each_sg_dma_page(umem->sg_head.sgl, &sg_iter, umem->nmap, 0) {
+		if (page_idx % pages_in_hp == 0) {
+			page_list[hp_idx] = sg_page_iter_dma_address(&sg_iter);
+			hp_idx++;
+		}
+
+		page_idx++;
+	}
 
 	return 0;
 }
@@ -1344,6 +1350,56 @@  static int efa_create_pbl(struct efa_dev *dev,
 	return 0;
 }
 
+static void efa_cont_pages(struct ib_umem *umem, u64 addr,
+			   unsigned long max_page_shift,
+			   int *count, u8 *shift, u32 *ncont)
+{
+	struct scatterlist *sg;
+	u64 base = ~0, p = 0;
+	unsigned long tmp;
+	unsigned long m;
+	u64 len, pfn;
+	int i = 0;
+	int entry;
+
+	addr = addr >> PAGE_SHIFT;
+	tmp = (unsigned long)addr;
+	m = find_first_bit(&tmp, BITS_PER_LONG);
+	if (max_page_shift)
+		m = min_t(unsigned long, max_page_shift - PAGE_SHIFT, m);
+
+	for_each_sg(umem->sg_head.sgl, sg, umem->nmap, entry) {
+		len = DIV_ROUND_UP(sg_dma_len(sg), PAGE_SIZE);
+		pfn = sg_dma_address(sg) >> PAGE_SHIFT;
+		if (base + p != pfn) {
+			/*
+			 * If either the offset or the new
+			 * base are unaligned update m
+			 */
+			tmp = (unsigned long)(pfn | p);
+			if (!IS_ALIGNED(tmp, 1 << m))
+				m = find_first_bit(&tmp, BITS_PER_LONG);
+
+			base = pfn;
+			p = 0;
+		}
+
+		p += len;
+		i += len;
+	}
+
+	if (i) {
+		m = min_t(unsigned long, ilog2(roundup_pow_of_two(i)), m);
+		*ncont = DIV_ROUND_UP(i, (1 << m));
+	} else {
+		m = 0;
+		*ncont = 0;
+	}
+
+	*shift = PAGE_SHIFT + m;
+	*count = i;
+}
+
 struct ib_mr *efa_reg_mr(struct ib_pd *ibpd, u64 start, u64 length,
 			 u64 virt_addr, int access_flags,
 			 struct ib_udata *udata)
@@ -1351,11 +1407,12 @@  struct ib_mr *efa_reg_mr(struct ib_pd *ibpd, u64 start, u64 length,
 	struct efa_dev *dev = to_edev(ibpd->device);
 	struct efa_com_reg_mr_params params = {};
 	struct efa_com_reg_mr_result result = {};
+	unsigned long max_page_shift;
 	struct pbl_context pbl;
 	int supp_access_flags;
-	unsigned int pg_sz;
 	struct efa_mr *mr;
 	int inline_size;
+	int npages;
 	int err;
 
 	if (udata->inlen &&
@@ -1396,24 +1453,13 @@  struct ib_mr *efa_reg_mr(struct ib_pd *ibpd, u64 start, u64 length,
 	params.iova = virt_addr;
 	params.mr_length_in_bytes = length;
 	params.permissions = access_flags;
+	max_page_shift = fls64(dev->dev_attr.page_size_cap);
 
-	pg_sz = ib_umem_find_best_pgsz(mr->umem,
-				       dev->dev_attr.page_size_cap,
-				       virt_addr);
-	if (!pg_sz) {
-		err = -EOPNOTSUPP;
-		ibdev_dbg(&dev->ibdev, "Failed to find a suitable page size in page_size_cap %#llx\n",
-			  dev->dev_attr.page_size_cap);
-		goto err_unmap;
-	}
-
-	params.page_shift = __ffs(pg_sz);
-	params.page_num = DIV_ROUND_UP(length + (start & (pg_sz - 1)),
-				       pg_sz);
-
+	efa_cont_pages(mr->umem, start, max_page_shift, &npages,
+		       &params.page_shift, &params.page_num);
 	ibdev_dbg(&dev->ibdev,
-		  "start %#llx length %#llx params.page_shift %u params.page_num %u\n",
-		  start, length, params.page_shift, params.page_num);
+		  "start %#llx length %#llx npages %d params.page_shift %u params.page_num %u\n",
+		  start, length, npages, params.page_shift, params.page_num);
 
 	inline_size = ARRAY_SIZE(params.pbl.inline_pbl_array);
 	if (params.page_num <= inline_size) {
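
For completeness, a standalone trace of the efa_cont_pages() logic restored by this
revert, run over the 12k + 20k SGL reported in the thread (DMA addresses taken from
the report; the permissive starting value of m stands in for the VA alignment and
device cap, which are assumptions), showing why the old path falls back to 4k pages
for this layout:

#include <stdio.h>

int main(void)
{
	/* DMA addresses and lengths from the reported SGL, in 4k pages */
	unsigned long long pfn[] = { 0x103dcb2ULL, 0x103dd5dULL };
	unsigned long long len[] = { 3, 5 };		/* 12k, 20k */
	unsigned long long base = ~0ULL, p = 0, i = 0;
	unsigned long m = 12;	/* assume VA and device cap start out permissive */

	for (int e = 0; e < 2; e++) {
		if (base + p != pfn[e]) {
			unsigned long long tmp = pfn[e] | p;

			if (tmp & ((1ULL << m) - 1))	/* !IS_ALIGNED(tmp, 1 << m) */
				m = __builtin_ctzll(tmp);
			base = pfn[e];
			p = 0;
		}
		p += len[e];
		i += len[e];
	}

	/* the second SGE hits pfn | p == 0x103dd5d | 3, which is odd, so m
	 * collapses to 0: page shift 12 (4k), 8 pages, and the page list
	 * never covers addresses outside the umem
	 */
	printf("shift=%lu pages=%llu\n", 12 + m, i);
	return 0;
}

Unlike the 8k blocks discussed above, these 4k pages cover exactly the umem, which is
why the reverted code behaves correctly for this MR.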