[v2,0/2] mm: fix cma allocation fail sometimes

Message ID: 20220112131552.3329380-1-aisheng.dong@nxp.com

Message

Dong Aisheng Jan. 12, 2022, 1:15 p.m. UTC
We observed an issue with the NXP 5.15 LTS kernel where dma_alloc_coherent()
may sometimes fail when multiple processes try to allocate CMA memory in
parallel.

The issue can easily be reproduced on an MX6Q SDB board with the latest
linux-next kernel by writing a test module that creates 16 or 32 threads
allocating random sizes of CMA memory in parallel in the background
(a minimal sketch of such a module is shown after the note below).
Alternatively, simply enable CONFIG_CMA_DEBUG and you can see endless
CMA alloc retries during boot:
[    1.452124] cma: cma_alloc(): memory range at (ptrval) is busy,retrying
....
(thousands of retries)
NOTE: MX6 has CONFIG_FORCE_MAX_ZONEORDER=14, so MAX_ORDER is 14 and the
maximum allocation granularity is order 13, i.e. 32M with 4K pages.
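
The cma_alloc.ko module referred to above is not included in this series;
purely for illustration, a minimal sketch of what such a stress module
might look like is given here. This is hypothetical code: it assumes
cma_alloc()/cma_release() and dma_contiguous_default_area are reachable
from module code, which may require exporting them in the test kernel.

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/kthread.h>
#include <linux/delay.h>
#include <linux/sched.h>
#include <linux/smp.h>
#include <linux/cma.h>
#include <linux/dma-map-ops.h>
#include <linux/prandom.h>

static int pnum = 16;
module_param(pnum, int, 0644);
MODULE_PARM_DESC(pnum, "number of allocator threads");

/* candidate allocation sizes, in pages (mirrors the log output below) */
static const unsigned long sizes[] = { 7, 38, 44, 128, 144, 148, 512, 757 };

static int cma_stress_thread(void *data)
{
        /* assumes the default CMA area; may need exporting for modules */
        struct cma *cma = dma_contiguous_default_area;
        struct page *page;
        unsigned long count;
        unsigned int idx;

        while (!kthread_should_stop()) {
                idx = prandom_u32() % ARRAY_SIZE(sizes);
                count = sizes[idx];

                pr_info("cpu: %d, pid: %d, index %u pages %lu\n",
                        raw_smp_processor_id(), current->pid, idx, count);

                page = cma_alloc(cma, count, 0, false);
                if (page) {
                        msleep(10);     /* hold the range briefly */
                        cma_release(cma, page, count);
                } else {
                        pr_info("cpu: %d, pid: %d, index %u pages %lu failed\n",
                                raw_smp_processor_id(), current->pid, idx, count);
                }
                msleep(10);
        }
        return 0;
}

static int __init cma_stress_init(void)
{
        int i;

        if (!dma_contiguous_default_area)
                return -ENODEV;

        pr_info("CMA alloc test enter: thread number: %d\n", pnum);
        for (i = 0; i < pnum; i++)
                kthread_run(cma_stress_thread, NULL, "cma_stress/%d", i);
        return 0;       /* no clean unload path: stress-only sketch */
}
module_init(cma_stress_init);

MODULE_LICENSE("GPL");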

The root cause is that since commit a4efc174b382 ("mm/cma.c: remove
redundant cma_mutex lock"), CMA supports concurrent memory allocation.
It is therefore possible that the pageblock which process A is trying to
allocate from has already been isolated by process B's allocation during
memory migration.

When multiple processes allocate CMA memory in parallel, it is likely
that the remaining pageblocks have also been isolated, so the CMA
allocation finally fails during the first scan of the whole available
CMA bitmap.

This patchset introduces a retry mechanism that rescans the CMA bitmap on
-EBUSY, since the target pageblock may only have been temporarily isolated
by others and released later.
It also improves CMA allocation performance by stepping to the next
pageblock during retries rather than looping within the same pageblock
that is in the -EBUSY state.

Theoretically, this issue can easily be reproduced on ARMv7 platforms with
a big MAX_ORDER/pageblock,
e.g. an ARM platform with 1G RAM (320M reserved for CMA) and 32M pageblocks:
Page block order: 13
Pages per block:  8192

The following test is based on linux-next: next-20211213.

Without the fix, it easily fails.
# insmod cma_alloc.ko pnum=16
[  274.322369] CMA alloc test enter: thread number: 16
[  274.329948] cpu: 0, pid: 692, index 4 pages 144
[  274.330143] cpu: 1, pid: 694, index 2 pages 44
[  274.330359] cpu: 2, pid: 695, index 7 pages 757
[  274.330760] cpu: 2, pid: 696, index 4 pages 144
[  274.330974] cpu: 2, pid: 697, index 6 pages 512
[  274.331223] cpu: 2, pid: 698, index 6 pages 512
[  274.331499] cpu: 2, pid: 699, index 2 pages 44
[  274.332228] cpu: 2, pid: 700, index 0 pages 7
[  274.337421] cpu: 0, pid: 701, index 1 pages 38
[  274.337618] cpu: 2, pid: 702, index 0 pages 7
[  274.344669] cpu: 1, pid: 703, index 0 pages 7
[  274.344807] cpu: 3, pid: 704, index 6 pages 512
[  274.348269] cpu: 2, pid: 705, index 5 pages 148
[  274.349490] cma: cma_alloc: reserved: alloc failed, req-size: 38 pages, ret: -16
[  274.366292] cpu: 1, pid: 706, index 4 pages 144
[  274.366562] cpu: 0, pid: 707, index 3 pages 128
[  274.367356] cma: cma_alloc: reserved: alloc failed, req-size: 128 pages, ret: -16
[  274.367370] cpu: 0, pid: 707, index 3 pages 128 failed
[  274.371148] cma: cma_alloc: reserved: alloc failed, req-size: 148 pages, ret: -16
[  274.375348] cma: cma_alloc: reserved: alloc failed, req-size: 144 pages, ret: -16
[  274.384256] cpu: 2, pid: 708, index 0 pages 7
....

With the fix, 32 threads allocating in parallel can pass an overnight
stress test.

root@imx6qpdlsolox:~# insmod cma_alloc.ko pnum=32
[  112.976809] cma_alloc: loading out-of-tree module taints kernel.
[  112.984128] CMA alloc test enter: thread number: 32
[  112.989748] cpu: 2, pid: 707, index 6 pages 512
[  112.994342] cpu: 1, pid: 708, index 6 pages 512
[  112.995162] cpu: 0, pid: 709, index 3 pages 128
[  112.995867] cpu: 2, pid: 710, index 0 pages 7
[  112.995910] cpu: 3, pid: 711, index 2 pages 44
[  112.996005] cpu: 3, pid: 712, index 7 pages 757
[  112.996098] cpu: 3, pid: 713, index 7 pages 757
...
[41877.368163] cpu: 1, pid: 737, index 2 pages 44
[41877.369388] cpu: 1, pid: 736, index 3 pages 128
[41878.486516] cpu: 0, pid: 737, index 2 pages 44
[41878.486515] cpu: 2, pid: 739, index 4 pages 144
[41878.486622] cpu: 1, pid: 736, index 3 pages 128
[41878.486948] cpu: 2, pid: 735, index 7 pages 757
[41878.487279] cpu: 2, pid: 738, index 4 pages 144
[41879.526603] cpu: 1, pid: 739, index 3 pages 128
[41879.606491] cpu: 2, pid: 737, index 3 pages 128
[41879.606550] cpu: 0, pid: 736, index 0 pages 7
[41879.612271] cpu: 2, pid: 738, index 4 pages 144
...

v1:
https://patchwork.kernel.org/project/linux-mm/cover/20211215080242.3034856-1-aisheng.dong@nxp.com/

Dong Aisheng (2):
  mm: cma: fix allocation may fail sometimes
  mm: cma: try next MAX_ORDER_NR_PAGES during retry

 mm/cma.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

Comments

Dong Aisheng Jan. 24, 2022, 2:34 p.m. UTC | #1
Gentle ping...

On Wed, Jan 12, 2022 at 9:17 PM Dong Aisheng <aisheng.dong@nxp.com> wrote:
>
> [cover letter quoted in full above; snipped]
David Hildenbrand Jan. 25, 2022, 4:33 p.m. UTC | #2
On 12.01.22 14:15, Dong Aisheng wrote:
> On an ARMv7 platform with 32M pageblocks (MAX_ORDER 14), we observed a

Did you actually intend to talk about pageblocks here (and below)?

I assume you have to be clearer here that you talk about the maximum
allocation granularity, which is usually bigger than actual pageblock size.

> huge number of repeated CMA allocation retries (1k+) during boot when
> allocating one page for each of 3 mmc instance probes.
>
> This is caused by CMA now supporting concurrent allocation since commit
> a4efc174b382 ("mm/cma.c: remove redundant cma_mutex lock").
> The pageblock or (MAX_ORDER - 1) range from which we are trying to
> allocate memory may have already been acquired and isolated by others.
> Current cma_alloc() will then retry the next area with a step of
> bitmap_no + mask + 1, which is very likely within the same isolated range
> and fails again. So when the pageblock or MAX_ORDER range is big
> (e.g. 8192 pages), retrying in such small steps becomes meaningless,
> because it is known to fail a huge number of times while the pageblock
> stays isolated by others, especially when allocating only one or two pages.
>
> Instead of looping in the same pageblock and wasting a lot of CPU cycles,
> especially on systems with big pageblocks (e.g. 16M or 32M),
> we try the next MAX_ORDER_NR_PAGES directly.
>
> Doing it this way can greatly mitigate the situation.
> 
> Below is the original error log during booting:
> [    2.004804] cma: cma_alloc(cma (ptrval), count 1, align 0)
> [    2.010318] cma: cma_alloc(cma (ptrval), count 1, align 0)
> [    2.010776] cma: cma_alloc(): memory range at (ptrval) is busy, retrying
> [    2.010785] cma: cma_alloc(): memory range at (ptrval) is busy, retrying
> [    2.010793] cma: cma_alloc(): memory range at (ptrval) is busy, retrying
> [    2.010800] cma: cma_alloc(): memory range at (ptrval) is busy, retrying
> [    2.010807] cma: cma_alloc(): memory range at (ptrval) is busy, retrying
> [    2.010814] cma: cma_alloc(): memory range at (ptrval) is busy, retrying
> .... (+1K retries)
> 
> After the fix, the 1200+ retries can be reduced to 0.
> Another test running 8 VPU decoders in parallel shows that 1500+ retries
> dropped to ~145.
>
> IOW this patch can improve the CMA allocation speed a lot when there is
> enough CMA memory, by reducing retries significantly.
> 
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Marek Szyprowski <m.szyprowski@samsung.com>
> Cc: Lecopzer Chen <lecopzer.chen@mediatek.com>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> CC: stable@vger.kernel.org # 5.11+
> Fixes: a4efc174b382 ("mm/cma.c: remove redundant cma_mutex lock")
> Signed-off-by: Dong Aisheng <aisheng.dong@nxp.com>
> ---
> v1->v2:
>  * change to align with MAX_ORDER_NR_PAGES instead of pageblock_nr_pages
> ---
>  mm/cma.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/cma.c b/mm/cma.c
> index 1c13a729d274..1251f65e2364 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -500,7 +500,9 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
>  		trace_cma_alloc_busy_retry(cma->name, pfn, pfn_to_page(pfn),
>  					   count, align);
>  		/* try again with a bit different memory target */
> -		start = bitmap_no + mask + 1;
> +		start = ALIGN(bitmap_no + mask + 1,
> +			      MAX_ORDER_NR_PAGES >> cma->order_per_bit);

Mind giving the reader a hint in the code why we went for
MAX_ORDER_NR_PAGES?

What would happen if the CMA granularity is bigger than
MAX_ORDER_NR_PAGES? I'd assume no harm done, as we'd try aligning to 0.
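
To make the effect of the new step size concrete, here is a small
illustration. It is a userspace-only sketch with a local copy of the
kernel's power-of-two ALIGN(); it assumes order_per_bit = 0 and
MAX_ORDER = 14, as on the i.MX6 configuration discussed above.

#include <stdio.h>

/* local stand-in for the kernel's ALIGN() (power-of-two alignment) */
#define ALIGN(x, a)             (((x) + (a) - 1) & ~((a) - 1))
/* 1 << (MAX_ORDER - 1) with MAX_ORDER = 14: 8192 pages = 32M with 4K pages */
#define MAX_ORDER_NR_PAGES      (1UL << 13)

int main(void)
{
        unsigned long bitmap_no = 100, mask = 0;        /* arbitrary busy position */

        /* old behaviour: advance one bit, still inside the isolated block */
        printf("old next start: %lu\n", bitmap_no + mask + 1);          /* 101  */

        /* patched behaviour: jump to the next MAX_ORDER-aligned candidate */
        printf("new next start: %lu\n",
               ALIGN(bitmap_no + mask + 1, MAX_ORDER_NR_PAGES));        /* 8192 */

        /*
         * Degenerate case asked about above: a CMA granularity bigger than
         * MAX_ORDER_NR_PAGES makes the alignment argument 0, and the macro
         * then yields 0, i.e. the scan simply restarts from the beginning.
         */
        printf("a == 0 case:    %lu\n",
               ALIGN(bitmap_no + mask + 1, 0UL));                       /* 0    */

        return 0;
}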
Dong Aisheng Jan. 28, 2022, 12:20 p.m. UTC | #3
On Wed, Jan 26, 2022 at 12:33 AM David Hildenbrand <david@redhat.com> wrote:
>
> On 12.01.22 14:15, Dong Aisheng wrote:
> > On an ARMv7 platform with 32M pageblock(MAX_ORDER 14), we observed a
>
> Did you actually intend to talk about pageblocks here (and below)?
>
> I assume you have to be clearer here that you talk about the maximum
> allocation granularity, which is usually bigger than actual pageblock size.
>

I'm talking about the ARM32 case where pageblock_order is equal to MAX_ORDER - 1:
/* If huge pages are not used, group by MAX_ORDER_NR_PAGES */
#define pageblock_order         (MAX_ORDER-1)
In order to be clearer, maybe I can add this info into the commit message too.
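
For the i.MX6 configuration above this works out as follows (a hedged
arithmetic note, assuming 4K pages):

  MAX_ORDER        = CONFIG_FORCE_MAX_ZONEORDER  = 14
  pageblock_order  = MAX_ORDER - 1               = 13
  pageblock size   = 2^13 pages x 4K             = 32M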

> > [commit message, Cc list, and diff quoted in full above; snipped]
>
> Mind giving the reader a hint in the code why we went for
> MAX_ORDER_NR_PAGES?
>

Yes, good suggestion.
I could add one more line of code comment as follows:
"As alloc_contig_range() will isolate all pageblocks within the range
which are aligned with max_t(MAX_ORDER_NR_PAGES, pageblock_nr_pages),
here we align with MAX_ORDER_NR_PAGES, which is usually bigger than
the actual pageblock size."
Does this look ok to you?
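
For illustration, folded into the hunk quoted earlier in the thread, the
result might look roughly like this (a sketch only, not an actual respin
of the patch):

                /*
                 * Try again with a bit different memory target.
                 * alloc_contig_range() isolates all pageblocks within a range
                 * aligned with max_t(MAX_ORDER_NR_PAGES, pageblock_nr_pages),
                 * so step to the next MAX_ORDER_NR_PAGES-aligned candidate,
                 * which is usually no smaller than a pageblock.
                 */
                start = ALIGN(bitmap_no + mask + 1,
                              MAX_ORDER_NR_PAGES >> cma->order_per_bit);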

> What would happen if the CMA granularity is bigger than
> MAX_ORDER_NR_PAGES? I'd assume no harm done, as we'd try aligning to 0.
>

I think yes.

Regards
Aisheng

> --
> Thanks,
>
> David / dhildenb
>