
[V2,4/6] mm/vmalloc: Check free space in vmap_block lockless

Message ID: 20230525124504.750481992@linutronix.de (mailing list archive)
State: New
Series: mm/vmalloc: Assorted fixes and improvements

Commit Message

Thomas Gleixner May 25, 2023, 12:57 p.m. UTC
vb_alloc() unconditionally locks a vmap_block on the free list to check the
free space.

This can be done locklessly because vmap_block::free never increases; it is
only decreased on allocations.

Check the free space locklessly, and only if that succeeds, recheck under
the lock.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
 mm/vmalloc.c |    5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

Comments

Christoph Hellwig May 26, 2023, 7:56 a.m. UTC | #1
Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>
Lorenzo Stoakes May 27, 2023, 6:27 p.m. UTC | #2
On Thu, May 25, 2023 at 02:57:07PM +0200, Thomas Gleixner wrote:
> vb_alloc() unconditionally locks a vmap_block on the free list to check the
> free space.
>
> This can be done locklessly because vmap_block::free never increases; it is
> only decreased on allocations.
>
> Check the free space locklessly, and only if that succeeds, recheck under
> the lock.
>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
> ---
>  mm/vmalloc.c |    5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
>
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2168,6 +2168,9 @@ static void *vb_alloc(unsigned long size
>  	list_for_each_entry_rcu(vb, &vbq->free, free_list) {
>  		unsigned long pages_off;
>
> +		if (READ_ONCE(vb->free) < (1UL << order))
> +			continue;
> +
>  		spin_lock(&vb->lock);
>  		if (vb->free < (1UL << order)) {
>  			spin_unlock(&vb->lock);
> @@ -2176,7 +2179,7 @@ static void *vb_alloc(unsigned long size
>
>  		pages_off = VMAP_BBMAP_BITS - vb->free;
>  		vaddr = vmap_block_vaddr(vb->va->va_start, pages_off);
> -		vb->free -= 1UL << order;
> +		WRITE_ONCE(vb->free, vb->free - (1UL << order));
>  		bitmap_set(vb->used_map, pages_off, (1UL << order));
>  		if (vb->free == 0) {
>  			spin_lock(&vbq->lock);
>
>

Looks good to me,

Reviewed-by: Lorenzo Stoakes <lstoakes@gmail.com>

Patch

--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2168,6 +2168,9 @@ static void *vb_alloc(unsigned long size
 	list_for_each_entry_rcu(vb, &vbq->free, free_list) {
 		unsigned long pages_off;
 
+		if (READ_ONCE(vb->free) < (1UL << order))
+			continue;
+
 		spin_lock(&vb->lock);
 		if (vb->free < (1UL << order)) {
 			spin_unlock(&vb->lock);
@@ -2176,7 +2179,7 @@ static void *vb_alloc(unsigned long size
 
 		pages_off = VMAP_BBMAP_BITS - vb->free;
 		vaddr = vmap_block_vaddr(vb->va->va_start, pages_off);
-		vb->free -= 1UL << order;
+		WRITE_ONCE(vb->free, vb->free - (1UL << order));
 		bitmap_set(vb->used_map, pages_off, (1UL << order));
 		if (vb->free == 0) {
 			spin_lock(&vbq->lock);