Message ID: 20201020102241.3729-1-lecopzer.chen@mediatek.com
State: New, archived
Series: mm/cma.c: remove redundant cma_mutex lock
On 20.10.20 12:22, Lecopzer Chen wrote:
> The cma_mutex which protects alloc_contig_range() first appeared in
> commit 7ee793a62fa8c ("cma: Remove potential deadlock situation");
> at that time, there was no guarantee about the behavior of concurrency
> inside alloc_contig_range().
>
> After commit 2c7452a075d4db2dc
> ("mm/page_isolation.c: make start_isolate_page_range() fail if already isolated"),
> which states:
>
> > However, two subsystems (CMA and gigantic
> > huge pages for example) could attempt operations on the same range. If
> > this happens, one thread may 'undo' the work another thread is doing.
> > This can result in pageblocks being incorrectly left marked as
> > MIGRATE_ISOLATE and therefore not available for page allocation.
>
> the concurrency inside alloc_contig_range() was clarified.
>
> Now hugepage and virtio call alloc_contig_range() without any lock,
> so cma_mutex in cma_alloc() is redundant.
>
> Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
> ---
>  mm/cma.c | 4 +---
>  1 file changed, 1 insertion(+), 3 deletions(-)
>
> diff --git a/mm/cma.c b/mm/cma.c
> index 7f415d7cda9f..3692a34e2353 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -38,7 +38,6 @@
>
>  struct cma cma_areas[MAX_CMA_AREAS];
>  unsigned cma_area_count;
> -static DEFINE_MUTEX(cma_mutex);
>
>  phys_addr_t cma_get_base(const struct cma *cma)
>  {
> @@ -454,10 +453,9 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
>  		mutex_unlock(&cma->lock);
>
>  		pfn = cma->base_pfn + (bitmap_no << cma->order_per_bit);
> -		mutex_lock(&cma_mutex);
>  		ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA,
>  					 GFP_KERNEL | (no_warn ? __GFP_NOWARN : 0));
> -		mutex_unlock(&cma_mutex);
> +
>  		if (ret == 0) {
>  			page = pfn_to_page(pfn);
>  			break;

I guess this is fine. In case there is a race we return -EBUSY, which is
suboptimal (as it could just be a temporary issue if the other user
backs off), but should be good enough for now.

Acked-by: David Hildenbrand <david@redhat.com>
On 10/20/20 1:27 PM, David Hildenbrand wrote:
> On 20.10.20 12:22, Lecopzer Chen wrote:
>> The cma_mutex which protects alloc_contig_range() first appeared in
>> commit 7ee793a62fa8c ("cma: Remove potential deadlock situation");
>> at that time, there was no guarantee about the behavior of concurrency
>> inside alloc_contig_range().
>>
>> After commit 2c7452a075d4db2dc
>> ("mm/page_isolation.c: make start_isolate_page_range() fail if already isolated"),
>> which states:
>>
>> > However, two subsystems (CMA and gigantic
>> > huge pages for example) could attempt operations on the same range. If
>> > this happens, one thread may 'undo' the work another thread is doing.
>> > This can result in pageblocks being incorrectly left marked as
>> > MIGRATE_ISOLATE and therefore not available for page allocation.
>>
>> the concurrency inside alloc_contig_range() was clarified.
>>
>> Now hugepage and virtio call alloc_contig_range() without any lock,
>> so cma_mutex in cma_alloc() is redundant.
>>
>> Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
>> ---
>>  mm/cma.c | 4 +---
>>  1 file changed, 1 insertion(+), 3 deletions(-)
>>
>> diff --git a/mm/cma.c b/mm/cma.c
>> index 7f415d7cda9f..3692a34e2353 100644
>> --- a/mm/cma.c
>> +++ b/mm/cma.c
>> @@ -38,7 +38,6 @@
>>
>>  struct cma cma_areas[MAX_CMA_AREAS];
>>  unsigned cma_area_count;
>> -static DEFINE_MUTEX(cma_mutex);
>>
>>  phys_addr_t cma_get_base(const struct cma *cma)
>>  {
>> @@ -454,10 +453,9 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
>>  		mutex_unlock(&cma->lock);
>>
>>  		pfn = cma->base_pfn + (bitmap_no << cma->order_per_bit);
>> -		mutex_lock(&cma_mutex);
>>  		ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA,
>>  					 GFP_KERNEL | (no_warn ? __GFP_NOWARN : 0));
>> -		mutex_unlock(&cma_mutex);
>> +
>>  		if (ret == 0) {
>>  			page = pfn_to_page(pfn);
>>  			break;
>
> I guess this is fine. In case there is a race we return -EBUSY, which is
> suboptimal (as it could just be a temporary issue if the other user
> backs off), but should be good enough for now.

Agreed.

> Acked-by: David Hildenbrand <david@redhat.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>
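[Editor's note] To make the race David describes concrete, here is a minimal
userspace model (plain C11 with pthreads, deliberately not kernel code) of the
guarantee the patch relies on: since commit 2c7452a075d4, isolating a range
fails with -EBUSY if any pageblock in it is already isolated, and a failing
attempt rolls back only the blocks it isolated itself, so concurrent callers
cannot undo each other's work. The names isolate_range, undo_isolate_range,
NBLOCKS, and the isolated[] array are illustrative inventions, not the kernel
API.

/*
 * Userspace model of the post-2c7452a075d4 behavior: all-or-nothing
 * isolation with rollback of only one's own work. Build with:
 *   cc -std=c11 -pthread model.c
 */
#include <errno.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NBLOCKS 8

/* 0 = MIGRATE_CMA, 1 = MIGRATE_ISOLATE (simplified) */
static atomic_int isolated[NBLOCKS];

/* Mimics start_isolate_page_range(): fail if any block is taken. */
static int isolate_range(int lo, int hi)
{
	for (int i = lo; i < hi; i++) {
		int expected = 0;

		if (!atomic_compare_exchange_strong(&isolated[i],
						    &expected, 1)) {
			/* Undo only our own blocks, never the other thread's. */
			while (--i >= lo)
				atomic_store(&isolated[i], 0);
			return -EBUSY;
		}
	}
	return 0;
}

static void undo_isolate_range(int lo, int hi)
{
	for (int i = lo; i < hi; i++)
		atomic_store(&isolated[i], 0);
}

static void *allocator(void *name)
{
	/* Both threads race for blocks 2..5, like CMA vs. gigantic pages. */
	int ret = isolate_range(2, 6);

	if (ret == 0) {
		printf("%s: isolated the range\n", (char *)name);
		undo_isolate_range(2, 6);
	} else {
		printf("%s: lost the race, got -EBUSY\n", (char *)name);
	}
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, allocator, "thread A");
	pthread_create(&b, NULL, allocator, "thread B");
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}

Run repeatedly to observe both interleavings: either both attempts succeed in
turn, or the loser sees -EBUSY with no corruption of the winner's state. That
is exactly David's point: the failure may be transient, and the caller is free
to retry.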
diff --git a/mm/cma.c b/mm/cma.c
index 7f415d7cda9f..3692a34e2353 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -38,7 +38,6 @@
 
 struct cma cma_areas[MAX_CMA_AREAS];
 unsigned cma_area_count;
-static DEFINE_MUTEX(cma_mutex);
 
 phys_addr_t cma_get_base(const struct cma *cma)
 {
@@ -454,10 +453,9 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
 		mutex_unlock(&cma->lock);
 
 		pfn = cma->base_pfn + (bitmap_no << cma->order_per_bit);
-		mutex_lock(&cma_mutex);
 		ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA,
 					 GFP_KERNEL | (no_warn ? __GFP_NOWARN : 0));
-		mutex_unlock(&cma_mutex);
+
 		if (ret == 0) {
 			page = pfn_to_page(pfn);
 			break;
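[Editor's note] A racing alloc_contig_range() surfaces to cma_alloc() callers
as a NULL return, so a sleepable caller that hits David's "temporary issue"
case might simply retry. The sketch below is a hypothetical illustration, not
code from this series: the helper name cma_alloc_retry, the retry count, and
the 10 ms delay are all made up for the example.

/*
 * Hypothetical caller-side sketch (not part of this patch): with
 * cma_mutex gone, cma_alloc() can fail because another
 * alloc_contig_range() user transiently holds part of the range.
 */
#include <linux/cma.h>
#include <linux/delay.h>
#include <linux/mm.h>

#define CMA_ALLOC_RETRIES	5	/* illustrative value */

static struct page *cma_alloc_retry(struct cma *cma, size_t count,
				    unsigned int align)
{
	int tries;

	for (tries = 0; tries < CMA_ALLOC_RETRIES; tries++) {
		struct page *page = cma_alloc(cma, count, align, false);

		if (page)
			return page;
		/* The range may free up once the other user backs off. */
		msleep(10);
	}
	return NULL;
}

Whether a retry loop like this belongs in callers, or whether a future
alloc_contig_range() should distinguish transient from permanent failure, is
left open by the thread above.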