
[3/6] mm/vmalloc: Prevent flushing dirty space over and over

Message ID: 20230523140002.690874212@linutronix.de
State: New
Series: mm/vmalloc: Assorted fixes and improvements

Commit Message

Thomas Gleixner May 23, 2023, 2:02 p.m. UTC
vmap blocks which have active mappings cannot be purged. Allocations which
have been freed are accounted for in vmap_block::dirty_min/max, so that
they can be detected in _vm_unmap_aliases() as potentially stale TLBs.
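
For illustration, a stripped-down sketch of the accounting side (modeled
on vb_free() in mm/vmalloc.c; the struct and function are heavily
trimmed, the unmap and block-free paths are omitted, and
addr_to_vmap_block() is a hypothetical stand-in for the real lookup):

    /*
     * dirty_min/dirty_max track the page range within the block which
     * has been freed, but whose TLB entries might not be flushed yet.
     * dirty_min == VMAP_BBMAP_BITS and dirty_max == 0 encode "empty".
     */
    struct vmap_block {
            spinlock_t    lock;
            unsigned long dirty;      /* number of freed pages */
            unsigned long dirty_min;  /* first potentially stale page */
            unsigned long dirty_max;  /* one past the last stale page */
            /* ... */
    };

    static void vb_free(unsigned long addr, unsigned long size)
    {
            unsigned long offset = (addr & (VMAP_BLOCK_SIZE - 1)) >> PAGE_SHIFT;
            unsigned int order = get_order(size);
            /* Hypothetical helper; the real lookup is elided here */
            struct vmap_block *vb = addr_to_vmap_block(addr);

            spin_lock(&vb->lock);
            /* Expand the not yet TLB flushed dirty range */
            vb->dirty_min = min(vb->dirty_min, offset);
            vb->dirty_max = max(vb->dirty_max, offset + (1UL << order));
            vb->dirty += 1UL << order;
            spin_unlock(&vb->lock);
    }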

If there are several invocations of _vm_unmap_aliases() then each of them
will flush the dirty range. That's pointless and just increases the
probability of full TLB flushes.

Avoid that by resetting the flush range after accounting for it. That's
safe versus other invocations of _vm_unmap_aliases() because this is all
serialized with vmap_purge_lock.
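
The resulting flow in _vm_unmap_aliases() then looks roughly like this
(a simplified sketch of the per block loop after this patch; the
fragmented block purging and the final TLB flush are omitted). Note
that the guard tests vb->dirty_max rather than vb->dirty: after the
reset dirty_max is 0, so an already accounted block is skipped, while
vb->dirty itself stays nonzero until the block is purged.

    rcu_read_lock();
    list_for_each_entry_rcu(vb, &vbq->free, free_list) {
            spin_lock(&vb->lock);
            if (vb->dirty_max && vb->dirty != VMAP_BBMAP_BITS) {
                    unsigned long va_start = vb->va->va_start;
                    unsigned long s, e;

                    s = va_start + (vb->dirty_min << PAGE_SHIFT);
                    e = va_start + (vb->dirty_max << PAGE_SHIFT);

                    start = min(s, start);
                    end   = max(e, end);

                    /* Prevent this range from being flushed more than once */
                    vb->dirty_min = VMAP_BBMAP_BITS;
                    vb->dirty_max = 0;

                    flush = 1;
            }
            spin_unlock(&vb->lock);
    }
    rcu_read_unlock();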

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 mm/vmalloc.c |    8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

Comments

Christoph Hellwig May 23, 2023, 3:27 p.m. UTC | #1
On Tue, May 23, 2023 at 04:02:13PM +0200, Thomas Gleixner wrote:
> vmap blocks which have active mappings cannot be purged. Allocations which
> have been freed are accounted for in vmap_block::dirty_min/max, so that
> they can be detected in _vm_unmap_aliases() as potentially stale TLBs.
> 
> If there are several invocations of _vm_unmap_aliases() then each of them
> will flush the dirty range. That's pointless and just increases the
> probability of full TLB flushes.
> 
> Avoid that by resetting the flush range after accounting for it. That's
> safe versus other invocations of _vm_unmap_aliases() because this is all
> serialized with vmap_purge_lock.

Just nitpicking, but isn't vb->lock the actually relevant lock here?
vmap_purge_lock is only taken after the loop.

Thomas Gleixner May 23, 2023, 4:10 p.m. UTC | #2
On Tue, May 23 2023 at 17:27, Christoph Hellwig wrote:

> On Tue, May 23, 2023 at 04:02:13PM +0200, Thomas Gleixner wrote:
>> vmap blocks which have active mappings cannot be purged. Allocations which
>> have been freed are accounted for in vmap_block::dirty_min/max, so that
>> they can be detected in _vm_unmap_aliases() as potentially stale TLBs.
>> 
>> If there are several invocations of _vm_unmap_aliases() then each of them
>> will flush the dirty range. That's pointless and just increases the
>> probability of full TLB flushes.
>> 
>> Avoid that by resetting the flush range after accounting for it. That's
>> safe versus other invocations of _vm_unmap_aliases() because this is all
>> serialized with vmap_purge_lock.
>
> Just nitpicking, but isn't vb->lock the actually relevant lock here?
> vmap_purge_lock is only taken after the loop.

No. The "avoid double list iteration" change moves the purge lock up
before the loop, as it needs to protect against a concurrent purge
attempt.
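
Roughly, the nesting after that change is (a sketch, not the exact
code):

    mutex_lock(&vmap_purge_lock);           /* serializes the whole walk */
    for_each_possible_cpu(cpu) {
            rcu_read_lock();
            list_for_each_entry_rcu(vb, &vbq->free, free_list) {
                    spin_lock(&vb->lock);   /* protects dirty_min/dirty_max */
                    /* accumulate and reset the dirty range */
                    spin_unlock(&vb->lock);
            }
            rcu_read_unlock();
    }
    /* purge fragmented blocks, then flush the accumulated range */
    mutex_unlock(&vmap_purge_lock);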

Thanks,

        tglx

Baoquan He May 24, 2023, 9:43 a.m. UTC | #3
On 05/23/23 at 04:02pm, Thomas Gleixner wrote:
> vmap blocks which have active mappings cannot be purged. Allocations which
> have been freed are accounted for in vmap_block::dirty_min/max, so that
> they can be detected in _vm_unmap_aliases() as potentially stale TLBs.
> 
> If there are several invocations of _vm_unmap_aliases() then each of them
> will flush the dirty range. That's pointless and just increases the
> probability of full TLB flushes.
> 
> Avoid that by resetting the flush range after accounting for it. That's
> safe versus other invocations of _vm_unmap_aliases() because this is all
> serialized with vmap_purge_lock.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  mm/vmalloc.c |    8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
> 
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2224,7 +2224,7 @@ static void vb_free(unsigned long addr,
>  
>  	spin_lock(&vb->lock);
>  
> -	/* Expand dirty range */
> +	/* Expand the not yet TLB flushed dirty range */
>  	vb->dirty_min = min(vb->dirty_min, offset);
>  	vb->dirty_max = max(vb->dirty_max, offset + (1UL << order));
>  
> @@ -2262,7 +2262,7 @@ static void _vm_unmap_aliases(unsigned l
>  			 * space to be flushed.
>  			 */
>  			if (!purge_fragmented_block(vb, vbq, &purge_list) &&
> -			    vb->dirty && vb->dirty != VMAP_BBMAP_BITS) {
> +			    vb->dirty_max && vb->dirty != VMAP_BBMAP_BITS) {
>  				unsigned long va_start = vb->va->va_start;
>  				unsigned long s, e;
>  
> @@ -2272,6 +2272,10 @@ static void _vm_unmap_aliases(unsigned l
>  				start = min(s, start);
>  				end   = max(e, end);
>  
> +				/* Prevent this range from being flushed more than once */
> +				vb->dirty_min = VMAP_BBMAP_BITS;
> +				vb->dirty_max = 0;
> +

This is really a great catch and improvement.

Reviewed-by: Baoquan He <bhe@redhat.com>

>  				flush = 1;
>  			}
>  			spin_unlock(&vb->lock);
>

Patch

--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2224,7 +2224,7 @@ static void vb_free(unsigned long addr,
 
 	spin_lock(&vb->lock);
 
-	/* Expand dirty range */
+	/* Expand the not yet TLB flushed dirty range */
 	vb->dirty_min = min(vb->dirty_min, offset);
 	vb->dirty_max = max(vb->dirty_max, offset + (1UL << order));
 
@@ -2262,7 +2262,7 @@ static void _vm_unmap_aliases(unsigned l
 			 * space to be flushed.
 			 */
 			if (!purge_fragmented_block(vb, vbq, &purge_list) &&
-			    vb->dirty && vb->dirty != VMAP_BBMAP_BITS) {
+			    vb->dirty_max && vb->dirty != VMAP_BBMAP_BITS) {
 				unsigned long va_start = vb->va->va_start;
 				unsigned long s, e;
 
@@ -2272,6 +2272,10 @@ static void _vm_unmap_aliases(unsigned l
 				start = min(s, start);
 				end   = max(e, end);
 
+				/* Prevent this range from being flushed more than once */
+				vb->dirty_min = VMAP_BBMAP_BITS;
+				vb->dirty_max = 0;
+
 				flush = 1;
 			}
 			spin_unlock(&vb->lock);