Message ID | 20211025130033.1547667-3-Arunpravin.PaneerSelvam@amd.com (mailing list archive) |
---|---|
State | New, archived |
Series | [v2,1/8] drm: move the buddy allocator from i915 into common drm |
On 25/10/2021 14:00, Arunpravin wrote:
> On contiguous allocation, we round up the size
> to the *next* power of 2, implement a function
> to free the unused pages after the newly allocate block.
>
> Signed-off-by: Arunpravin <Arunpravin.PaneerSelvam@amd.com>

Ideally this gets added with some user, so we can see it in action?
Maybe squash the next patch here?

> ---
>  drivers/gpu/drm/drm_buddy.c | 103 ++++++++++++++++++++++++++++++++++++
>  include/drm/drm_buddy.h     |   4 ++
>  2 files changed, 107 insertions(+)
>
> diff --git a/drivers/gpu/drm/drm_buddy.c b/drivers/gpu/drm/drm_buddy.c
> index 9d3547bcc5da..0da8510736eb 100644
> --- a/drivers/gpu/drm/drm_buddy.c
> +++ b/drivers/gpu/drm/drm_buddy.c
> @@ -284,6 +284,109 @@ static inline bool contains(u64 s1, u64 e1, u64 s2, u64 e2)
>          return s1 <= s2 && e1 >= e2;
>  }
>
> +/**
> + * drm_buddy_free_unused_pages - free unused pages
> + *
> + * @mm: DRM buddy manager
> + * @actual_size: original size requested
> + * @blocks: output list head to add allocated blocks
> + *
> + * For contiguous allocation, we round up the size to the nearest
> + * power of two value, drivers consume *actual* size, so remaining
> + * portions are unused and it can be freed.
> + *
> + * Returns:
> + * 0 on success, error code on failure.
> + */
> +int drm_buddy_free_unused_pages(struct drm_buddy_mm *mm,

drm_buddy_block_trim?

> +                                u64 actual_size,

new_size?

> +                                struct list_head *blocks)
> +{
> +        struct drm_buddy_block *block;
> +        struct drm_buddy_block *buddy;
> +        u64 actual_start;
> +        u64 actual_end;
> +        LIST_HEAD(dfs);
> +        u64 count = 0;
> +        int err;
> +
> +        if (!list_is_singular(blocks))
> +                return -EINVAL;
> +
> +        block = list_first_entry_or_null(blocks,
> +                                         struct drm_buddy_block,
> +                                         link);
> +
> +        if (!block)
> +                return -EINVAL;

list_is_singular() already ensures that I guess?

> +
> +        if (actual_size > drm_buddy_block_size(mm, block))
> +                return -EINVAL;
> +
> +        if (actual_size == drm_buddy_block_size(mm, block))
> +                return 0;

Probably need to check the alignment of the actual_size, and also check
that it is non-zero?

> +
> +        list_del(&block->link);
> +
> +        actual_start = drm_buddy_block_offset(block);
> +        actual_end = actual_start + actual_size - 1;
> +
> +        if (drm_buddy_block_is_allocated(block))

That should rather be a programmer error.

> +                mark_free(mm, block);
> +
> +        list_add(&block->tmp_link, &dfs);
> +
> +        while (1) {
> +                block = list_first_entry_or_null(&dfs,
> +                                                 struct drm_buddy_block,
> +                                                 tmp_link);
> +
> +                if (!block)
> +                        break;
> +
> +                list_del(&block->tmp_link);
> +
> +                if (count == actual_size)
> +                        return 0;

Check for overlaps somewhere here to avoid needless searching and splitting?

> +
> +                if (contains(actual_start, actual_end, drm_buddy_block_offset(block),
> +                             (drm_buddy_block_offset(block) + drm_buddy_block_size(mm, block) - 1))) {

Could maybe record the start/end for better readability?

> +                        BUG_ON(!drm_buddy_block_is_free(block));
> +
> +                        /* Allocate only required blocks */
> +                        mark_allocated(block);
> +                        mm->avail -= drm_buddy_block_size(mm, block);
> +                        list_add_tail(&block->link, blocks);
> +                        count += drm_buddy_block_size(mm, block);
> +                        continue;
> +                }
> +
> +                if (drm_buddy_block_order(block) == 0)
> +                        continue;

Should be impossible with overlaps check added.

> +
> +                if (!drm_buddy_block_is_split(block)) {

That should always be true.

> +                        err = split_block(mm, block);
> +
> +                        if (unlikely(err))
> +                                goto err_undo;
> +                }
> +
> +                list_add(&block->right->tmp_link, &dfs);
> +                list_add(&block->left->tmp_link, &dfs);
> +        }
> +
> +        return -ENOSPC;

Would it make sense to factor out part of the alloc_range for this? It
looks roughly the same.

> +
> +err_undo:
> +        buddy = get_buddy(block);
> +        if (buddy &&
> +            (drm_buddy_block_is_free(block) &&
> +             drm_buddy_block_is_free(buddy)))
> +                __drm_buddy_free(mm, block);
> +        return err;

Where do we add the block back to the original list? Did we not just
leak it?

> +}
> +EXPORT_SYMBOL(drm_buddy_free_unused_pages);
> +
>  static struct drm_buddy_block *
>  alloc_range(struct drm_buddy_mm *mm,
>              u64 start, u64 end,
> diff --git a/include/drm/drm_buddy.h b/include/drm/drm_buddy.h
> index cd8021d2d6e7..1dfc80c88e1f 100644
> --- a/include/drm/drm_buddy.h
> +++ b/include/drm/drm_buddy.h
> @@ -145,6 +145,10 @@ int drm_buddy_alloc(struct drm_buddy_mm *mm,
>                      struct list_head *blocks,
>                      unsigned long flags);
>
> +int drm_buddy_free_unused_pages(struct drm_buddy_mm *mm,
> +                                u64 actual_size,
> +                                struct list_head *blocks);
> +
>  void drm_buddy_free(struct drm_buddy_mm *mm, struct drm_buddy_block *block);
>
>  void drm_buddy_free_list(struct drm_buddy_mm *mm, struct list_head *objects);
>
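
To illustrate the two review points above (skip non-overlapping blocks early, and record the block's start/end once for readability), the loop body could look roughly like the sketch below. This is only a sketch: the overlaps() helper is an assumption here, mirroring the existing contains(); block_start/block_end would be declared alongside the other locals, and everything else reuses accessors already present in the patch.

/* Hypothetical helper next to contains(); not part of the patch as posted. */
static inline bool overlaps(u64 s1, u64 e1, u64 s2, u64 e2)
{
        return s1 <= e2 && e1 >= s2;
}

/* Fragment of the while (1) body, after the count check: */
                block_start = drm_buddy_block_offset(block);
                block_end = block_start + drm_buddy_block_size(mm, block) - 1;

                /* Block lies entirely outside the kept range: leave it on the
                 * free list rather than splitting it any further.
                 */
                if (!overlaps(actual_start, actual_end, block_start, block_end))
                        continue;

                if (contains(actual_start, actual_end, block_start, block_end)) {
                        /* Fully inside the kept range: keep it allocated. */
                        mark_allocated(block);
                        mm->avail -= drm_buddy_block_size(mm, block);
                        list_add_tail(&block->link, blocks);
                        count += drm_buddy_block_size(mm, block);
                        continue;
                }
                /* Otherwise split and revisit the halves, as in the patch. */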
On 04/11/21 12:46 am, Matthew Auld wrote:
> On 25/10/2021 14:00, Arunpravin wrote:
>> On contiguous allocation, we round up the size
>> to the *next* power of 2, implement a function
>> to free the unused pages after the newly allocate block.
>>
>> Signed-off-by: Arunpravin <Arunpravin.PaneerSelvam@amd.com>
>
> Ideally this gets added with some user, so we can see it in action?
> Maybe squash the next patch here?
[Arun] ok
>
>> ---
>>   drivers/gpu/drm/drm_buddy.c | 103 ++++++++++++++++++++++++++++++++++++
>>   include/drm/drm_buddy.h     |   4 ++
>>   2 files changed, 107 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/drm_buddy.c b/drivers/gpu/drm/drm_buddy.c
>> index 9d3547bcc5da..0da8510736eb 100644
>> --- a/drivers/gpu/drm/drm_buddy.c
>> +++ b/drivers/gpu/drm/drm_buddy.c
>> @@ -284,6 +284,109 @@ static inline bool contains(u64 s1, u64 e1, u64 s2, u64 e2)
>>           return s1 <= s2 && e1 >= e2;
>>   }
>>
>> +/**
>> + * drm_buddy_free_unused_pages - free unused pages
>> + *
>> + * @mm: DRM buddy manager
>> + * @actual_size: original size requested
>> + * @blocks: output list head to add allocated blocks
>> + *
>> + * For contiguous allocation, we round up the size to the nearest
>> + * power of two value, drivers consume *actual* size, so remaining
>> + * portions are unused and it can be freed.
>> + *
>> + * Returns:
>> + * 0 on success, error code on failure.
>> + */
>> +int drm_buddy_free_unused_pages(struct drm_buddy_mm *mm,
>
> drm_buddy_block_trim?
[Arun] ok
>
>> +                                u64 actual_size,
>
> new_size?
[Arun] ok
>
>> +                                struct list_head *blocks)
>> +{
>> +        struct drm_buddy_block *block;
>> +        struct drm_buddy_block *buddy;
>> +        u64 actual_start;
>> +        u64 actual_end;
>> +        LIST_HEAD(dfs);
>> +        u64 count = 0;
>> +        int err;
>> +
>> +        if (!list_is_singular(blocks))
>> +                return -EINVAL;
>> +
>> +        block = list_first_entry_or_null(blocks,
>> +                                         struct drm_buddy_block,
>> +                                         link);
>> +
>> +        if (!block)
>> +                return -EINVAL;
>
> list_is_singular() already ensures that I guess?
[Arun] yes it checks the list empty status, I will remove 'if (!block)' check
>
>
>> +
>> +        if (actual_size > drm_buddy_block_size(mm, block))
>> +                return -EINVAL;
>> +
>> +        if (actual_size == drm_buddy_block_size(mm, block))
>> +                return 0;
>
> Probably need to check the alignment of the actual_size, and also check
> that it is non-zero?
[Arun] ok
>
>> +
>> +        list_del(&block->link);
>> +
>> +        actual_start = drm_buddy_block_offset(block);
>> +        actual_end = actual_start + actual_size - 1;
>> +
>> +        if (drm_buddy_block_is_allocated(block))
>
> That should rather be a programmer error.
[Arun] ok, I will check for the allocation status and return -EINVAL
if the block is not allocated.
>
>> +                mark_free(mm, block);
>> +
>> +        list_add(&block->tmp_link, &dfs);
>> +
>> +        while (1) {
>> +                block = list_first_entry_or_null(&dfs,
>> +                                                 struct drm_buddy_block,
>> +                                                 tmp_link);
>> +
>> +                if (!block)
>> +                        break;
>> +
>> +                list_del(&block->tmp_link);
>> +
>> +                if (count == actual_size)
>> +                        return 0;
>
>
> Check for overlaps somewhere here to avoid needless searching and splitting?
[Arun] ok
>
>> +
>> +                if (contains(actual_start, actual_end, drm_buddy_block_offset(block),
>> +                             (drm_buddy_block_offset(block) + drm_buddy_block_size(mm, block) - 1))) {
>
> Could maybe record the start/end for better readability?
[Arun] ok
>
>> +                        BUG_ON(!drm_buddy_block_is_free(block));
>> +
>> +                        /* Allocate only required blocks */
>> +                        mark_allocated(block);
>> +                        mm->avail -= drm_buddy_block_size(mm, block);
>> +                        list_add_tail(&block->link, blocks);
>> +                        count += drm_buddy_block_size(mm, block);
>> +                        continue;
>> +                }
>> +
>> +                if (drm_buddy_block_order(block) == 0)
>> +                        continue;
>
> Should be impossible with overlaps check added.
[Arun] yes, I will remove
>
>> +
>> +                if (!drm_buddy_block_is_split(block)) {
>
> That should always be true.
[Arun] ok
>
>> +                        err = split_block(mm, block);
>> +
>> +                        if (unlikely(err))
>> +                                goto err_undo;
>> +                }
>> +
>> +                list_add(&block->right->tmp_link, &dfs);
>> +                list_add(&block->left->tmp_link, &dfs);
>> +        }
>> +
>> +        return -ENOSPC;
>
>
> Would it make sense to factor out part of the alloc_range for this? It
> looks roughly the same.
[Arun] This function gets called for non-range allocations (0..max_size)
as well on contiguous allocation. alloc_range() is called only for range
allocations.
>
>> +
>> +err_undo:
>> +        buddy = get_buddy(block);
>> +        if (buddy &&
>> +            (drm_buddy_block_is_free(block) &&
>> +             drm_buddy_block_is_free(buddy)))
>> +                __drm_buddy_free(mm, block);
>> +        return err;
>
>
> Where do we add the block back to the original list? Did we not just
> leak it?
[Arun] we are adding back to the original list if contains() check
becomes true. we are adding all the blocks within the actual_start
and actual_end, and remaining blocks are freed (added to free list).
>
>
>> +}
>> +EXPORT_SYMBOL(drm_buddy_free_unused_pages);
>> +
>>   static struct drm_buddy_block *
>>   alloc_range(struct drm_buddy_mm *mm,
>>               u64 start, u64 end,
>> diff --git a/include/drm/drm_buddy.h b/include/drm/drm_buddy.h
>> index cd8021d2d6e7..1dfc80c88e1f 100644
>> --- a/include/drm/drm_buddy.h
>> +++ b/include/drm/drm_buddy.h
>> @@ -145,6 +145,10 @@ int drm_buddy_alloc(struct drm_buddy_mm *mm,
>>                       struct list_head *blocks,
>>                       unsigned long flags);
>>
>> +int drm_buddy_free_unused_pages(struct drm_buddy_mm *mm,
>> +                                u64 actual_size,
>> +                                struct list_head *blocks);
>> +
>>   void drm_buddy_free(struct drm_buddy_mm *mm, struct drm_buddy_block *block);
>>
>>   void drm_buddy_free_list(struct drm_buddy_mm *mm, struct list_head *objects);
>>
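
For reference, the extra validation agreed on above (non-zero size, aligned size, and only trimming an allocated block) might look roughly like the sketch below. Two assumptions are made here: new_size is the renamed actual_size, and mm->chunk_size is the minimum block granularity tracked by the manager, carried over from the i915 buddy allocator; neither appears in the posted patch.

        /* Reject a zero or misaligned trim size up front. */
        if (!new_size || !IS_ALIGNED(new_size, mm->chunk_size))
                return -EINVAL;

        if (new_size > drm_buddy_block_size(mm, block))
                return -EINVAL;

        /* Trimming anything but a freshly allocated block is a caller bug. */
        if (!drm_buddy_block_is_allocated(block))
                return -EINVAL;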

diff --git a/drivers/gpu/drm/drm_buddy.c b/drivers/gpu/drm/drm_buddy.c
index 9d3547bcc5da..0da8510736eb 100644
--- a/drivers/gpu/drm/drm_buddy.c
+++ b/drivers/gpu/drm/drm_buddy.c
@@ -284,6 +284,109 @@ static inline bool contains(u64 s1, u64 e1, u64 s2, u64 e2)
         return s1 <= s2 && e1 >= e2;
 }
 
+/**
+ * drm_buddy_free_unused_pages - free unused pages
+ *
+ * @mm: DRM buddy manager
+ * @actual_size: original size requested
+ * @blocks: output list head to add allocated blocks
+ *
+ * For contiguous allocation, we round up the size to the nearest
+ * power of two value, drivers consume *actual* size, so remaining
+ * portions are unused and it can be freed.
+ *
+ * Returns:
+ * 0 on success, error code on failure.
+ */
+int drm_buddy_free_unused_pages(struct drm_buddy_mm *mm,
+                                u64 actual_size,
+                                struct list_head *blocks)
+{
+        struct drm_buddy_block *block;
+        struct drm_buddy_block *buddy;
+        u64 actual_start;
+        u64 actual_end;
+        LIST_HEAD(dfs);
+        u64 count = 0;
+        int err;
+
+        if (!list_is_singular(blocks))
+                return -EINVAL;
+
+        block = list_first_entry_or_null(blocks,
+                                         struct drm_buddy_block,
+                                         link);
+
+        if (!block)
+                return -EINVAL;
+
+        if (actual_size > drm_buddy_block_size(mm, block))
+                return -EINVAL;
+
+        if (actual_size == drm_buddy_block_size(mm, block))
+                return 0;
+
+        list_del(&block->link);
+
+        actual_start = drm_buddy_block_offset(block);
+        actual_end = actual_start + actual_size - 1;
+
+        if (drm_buddy_block_is_allocated(block))
+                mark_free(mm, block);
+
+        list_add(&block->tmp_link, &dfs);
+
+        while (1) {
+                block = list_first_entry_or_null(&dfs,
+                                                 struct drm_buddy_block,
+                                                 tmp_link);
+
+                if (!block)
+                        break;
+
+                list_del(&block->tmp_link);
+
+                if (count == actual_size)
+                        return 0;
+
+                if (contains(actual_start, actual_end, drm_buddy_block_offset(block),
+                             (drm_buddy_block_offset(block) + drm_buddy_block_size(mm, block) - 1))) {
+                        BUG_ON(!drm_buddy_block_is_free(block));
+
+                        /* Allocate only required blocks */
+                        mark_allocated(block);
+                        mm->avail -= drm_buddy_block_size(mm, block);
+                        list_add_tail(&block->link, blocks);
+                        count += drm_buddy_block_size(mm, block);
+                        continue;
+                }
+
+                if (drm_buddy_block_order(block) == 0)
+                        continue;
+
+                if (!drm_buddy_block_is_split(block)) {
+                        err = split_block(mm, block);
+
+                        if (unlikely(err))
+                                goto err_undo;
+                }
+
+                list_add(&block->right->tmp_link, &dfs);
+                list_add(&block->left->tmp_link, &dfs);
+        }
+
+        return -ENOSPC;
+
+err_undo:
+        buddy = get_buddy(block);
+        if (buddy &&
+            (drm_buddy_block_is_free(block) &&
+             drm_buddy_block_is_free(buddy)))
+                __drm_buddy_free(mm, block);
+        return err;
+}
+EXPORT_SYMBOL(drm_buddy_free_unused_pages);
+
 static struct drm_buddy_block *
 alloc_range(struct drm_buddy_mm *mm,
             u64 start, u64 end,
diff --git a/include/drm/drm_buddy.h b/include/drm/drm_buddy.h
index cd8021d2d6e7..1dfc80c88e1f 100644
--- a/include/drm/drm_buddy.h
+++ b/include/drm/drm_buddy.h
@@ -145,6 +145,10 @@ int drm_buddy_alloc(struct drm_buddy_mm *mm,
                     struct list_head *blocks,
                     unsigned long flags);
 
+int drm_buddy_free_unused_pages(struct drm_buddy_mm *mm,
+                                u64 actual_size,
+                                struct list_head *blocks);
+
 void drm_buddy_free(struct drm_buddy_mm *mm, struct drm_buddy_block *block);
 
 void drm_buddy_free_list(struct drm_buddy_mm *mm, struct list_head *objects);

On contiguous allocation, we round up the size to the *next* power of 2;
implement a function to free the unused pages after the newly allocated
block.

Signed-off-by: Arunpravin <Arunpravin.PaneerSelvam@amd.com>
---
 drivers/gpu/drm/drm_buddy.c | 103 ++++++++++++++++++++++++++++++++++++
 include/drm/drm_buddy.h     |   4 ++
 2 files changed, 107 insertions(+)
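
A rough caller-side sketch of the flow this commit message describes: the contiguous request is padded to a power of two, and the padding beyond the originally requested size is then handed back via drm_buddy_free_unused_pages(). The allocation step itself is elided because drm_buddy_alloc()'s full parameter list is not visible in this hunk; mm and size are placeholders, and roundup_pow_of_two() comes from <linux/log2.h>.

        LIST_HEAD(blocks);
        u64 rounded = roundup_pow_of_two(size); /* padded contiguous request */
        int err;

        /* ... allocate 'rounded' bytes as one contiguous block onto &blocks ... */

        /* Return the pages between 'size' and 'rounded' to the allocator. */
        err = drm_buddy_free_unused_pages(mm, size, &blocks);
        if (err)
                return err;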