Message ID: 20201022225308.2927890-1-guro@fb.com (mailing list archive)
Series: mm: cma: introduce a non-blocking version of cma_release()
On 22 Oct 2020, at 18:53, Roman Gushchin wrote:

> This small patchset introduces a non-blocking version of cma_release()
> and simplifies the code in hugetlbfs, where previously we had to
> temporarily drop hugetlb_lock around the cma_release() call.
>
> It should help Zi Yan on his work on 1 GB THPs: splitting a gigantic
> THP under a memory pressure requires a cma_release() call. If it's

Thanks for the patch. But during 1GB THP split, we only clear
the bitmaps without releasing the pages. Also in cma_release_nowait(),
the first page in the allocated CMA region is reused to store
struct cma_clear_bitmap_work, but the same method cannot be used
during THP split, since the first page is still in-use. We might
need to allocate some new memory for struct cma_clear_bitmap_work,
which might not be successful under memory pressure. Any suggestion
on where to store struct cma_clear_bitmap_work when I only want to
clear bitmap without releasing the pages?

Thanks.

--
Best Regards,
Yan Zi
On Thu, Oct 22, 2020 at 07:42:45PM -0400, Zi Yan wrote:
> On 22 Oct 2020, at 18:53, Roman Gushchin wrote:
>
> > This small patchset introduces a non-blocking version of cma_release()
> > and simplifies the code in hugetlbfs, where previously we had to
> > temporarily drop hugetlb_lock around the cma_release() call.
> >
> > It should help Zi Yan on his work on 1 GB THPs: splitting a gigantic
> > THP under a memory pressure requires a cma_release() call. If it's
>
> Thanks for the patch. But during 1GB THP split, we only clear
> the bitmaps without releasing the pages. Also in cma_release_nowait(),
> the first page in the allocated CMA region is reused to store
> struct cma_clear_bitmap_work, but the same method cannot be used
> during THP split, since the first page is still in-use. We might
> need to allocate some new memory for struct cma_clear_bitmap_work,
> which might not be successful under memory pressure. Any suggestion
> on where to store struct cma_clear_bitmap_work when I only want to
> clear bitmap without releasing the pages?

It means we can't use cma_release() there either, because it does clear
individual pages. We need to clear the cma bitmap without touching pages.

Can you handle an error there?

If so, we can introduce something like int cma_schedule_bitmap_clearance(),
which will allocate a work structure and will be able to return -ENOMEM
in the unlikely case of error.

Will it work for you?

Thanks!
On 22 Oct 2020, at 20:47, Roman Gushchin wrote:

> On Thu, Oct 22, 2020 at 07:42:45PM -0400, Zi Yan wrote:
>> On 22 Oct 2020, at 18:53, Roman Gushchin wrote:
>>
>>> This small patchset introduces a non-blocking version of cma_release()
>>> and simplifies the code in hugetlbfs, where previously we had to
>>> temporarily drop hugetlb_lock around the cma_release() call.
>>>
>>> It should help Zi Yan on his work on 1 GB THPs: splitting a gigantic
>>> THP under a memory pressure requires a cma_release() call. If it's
>>
>> Thanks for the patch. But during 1GB THP split, we only clear
>> the bitmaps without releasing the pages. Also in cma_release_nowait(),
>> the first page in the allocated CMA region is reused to store
>> struct cma_clear_bitmap_work, but the same method cannot be used
>> during THP split, since the first page is still in-use. We might
>> need to allocate some new memory for struct cma_clear_bitmap_work,
>> which might not be successful under memory pressure. Any suggestion
>> on where to store struct cma_clear_bitmap_work when I only want to
>> clear bitmap without releasing the pages?
>
> It means we can't use cma_release() there either, because it does clear
> individual pages. We need to clear the cma bitmap without touching pages.
>
> Can you handle an error there?
>
> If so, we can introduce something like int cma_schedule_bitmap_clearance(),
> which will allocate a work structure and will be able to return -ENOMEM
> in the unlikely case of error.
>
> Will it work for you?

Yes, it works. Thanks.

--
Best Regards,
Yan Zi
On Thu, Oct 22, 2020 at 08:58:10PM -0400, Zi Yan wrote:
> On 22 Oct 2020, at 20:47, Roman Gushchin wrote:
>
> > On Thu, Oct 22, 2020 at 07:42:45PM -0400, Zi Yan wrote:
> >> On 22 Oct 2020, at 18:53, Roman Gushchin wrote:
> >>
> >>> This small patchset introduces a non-blocking version of cma_release()
> >>> and simplifies the code in hugetlbfs, where previously we had to
> >>> temporarily drop hugetlb_lock around the cma_release() call.
> >>>
> >>> It should help Zi Yan on his work on 1 GB THPs: splitting a gigantic
> >>> THP under a memory pressure requires a cma_release() call. If it's
> >>
> >> Thanks for the patch. But during 1GB THP split, we only clear
> >> the bitmaps without releasing the pages. Also in cma_release_nowait(),
> >> the first page in the allocated CMA region is reused to store
> >> struct cma_clear_bitmap_work, but the same method cannot be used
> >> during THP split, since the first page is still in-use. We might
> >> need to allocate some new memory for struct cma_clear_bitmap_work,
> >> which might not be successful under memory pressure. Any suggestion
> >> on where to store struct cma_clear_bitmap_work when I only want to
> >> clear bitmap without releasing the pages?
> >
> > It means we can't use cma_release() there either, because it does clear
> > individual pages. We need to clear the cma bitmap without touching pages.
> >
> > Can you handle an error there?
> >
> > If so, we can introduce something like int cma_schedule_bitmap_clearance(),
> > which will allocate a work structure and will be able to return -ENOMEM
> > in the unlikely case of error.
> >
> > Will it work for you?
>
> Yes, it works. Thanks.

Nice! It means we can keep these two patches as they are (they do make
sense even without THP, because they simplify the hugetlbfs code).
I'll send a patch implementing cma_schedule_bitmap_clearance() to you,
so you'll be able to use it in the patchset.

Thanks!