[4/4] mm: Don't hold heap lock in alloc_heap_pages() longer than necessary

Message ID 1503952829-11065-5-git-send-email-boris.ostrovsky@oracle.com (mailing list archive)
State New, archived

Commit Message

Boris Ostrovsky Aug. 28, 2017, 8:40 p.m. UTC
Once pages are removed from the heap, we don't need to hold the heap
lock. Dropping it is especially useful for an unscrubbed buddy, since
we will be scrubbing it.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/common/page_alloc.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)
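
The core idea - scrub the pages outside the lock and fold the accounting
back in afterwards - can be illustrated with a standalone sketch. A
pthread mutex stands in for Xen's heap spin lock; names such as
node_need_scrub and scrub_one_page mirror the patch, everything else is
assumed purely for illustration and is not the actual Xen code:

/* Sketch: narrow the heap_lock critical section around scrubbing. */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define NR_PAGES 8

static pthread_mutex_t heap_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned int node_need_scrub = NR_PAGES;   /* shared, lock-protected */
static char pages[NR_PAGES][4096];                /* stand-in for real pages */

static void scrub_one_page(unsigned int i)
{
    memset(pages[i], 0, sizeof(pages[i]));        /* the expensive work */
}

static void alloc_and_scrub(void)
{
    unsigned int i, dirty_cnt = 0;

    pthread_mutex_lock(&heap_lock);
    /* ... pages would be removed from the free lists here ... */
    pthread_mutex_unlock(&heap_lock);             /* drop the lock before scrubbing */

    for ( i = 0; i < NR_PAGES; i++ )
    {
        scrub_one_page(i);
        dirty_cnt++;                              /* local count, no lock needed */
    }

    pthread_mutex_lock(&heap_lock);               /* brief re-acquire ... */
    node_need_scrub -= dirty_cnt;                 /* ... just for the accounting */
    pthread_mutex_unlock(&heap_lock);
}

int main(void)
{
    alloc_and_scrub();
    printf("pages still needing scrub: %u\n", node_need_scrub);
    return 0;
}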

Comments

Wei Liu Aug. 29, 2017, 12:16 p.m. UTC | #1
On Mon, Aug 28, 2017 at 04:40:29PM -0400, Boris Ostrovsky wrote:
> Once pages are removed from the heap, we don't need to hold the heap
> lock. Dropping it is especially useful for an unscrubbed buddy, since
> we will be scrubbing it.
> 
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

Reviewed-by: Wei Liu <wei.liu2@citrix.com>
Jan Beulich Aug. 29, 2017, 1:33 p.m. UTC | #2
>>> On 28.08.17 at 22:40, <boris.ostrovsky@oracle.com> wrote:
> @@ -943,6 +944,8 @@ static struct page_info *alloc_heap_pages(
>  
>      check_low_mem_virq();
>  
> +    spin_unlock(&heap_lock);
> +
>      if ( d != NULL )
>          d->last_alloc_node = node;

I'm not sure about the placement - as long as there's only a single
heap lock, it certainly also protects the last_alloc_node updates
visible in context here. The consumer of this field also holds the
heap lock afaict, so at least for the moment it would feel safer
if you moved the unlock past that update. With that, feel free to
add
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan
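
Jan's suggestion, sketched as standalone C with assumed names (again a
pthread mutex stands in for the heap spin lock): move the unlock past the
last_alloc_node update so the field is only ever written while the lock is
held. This only illustrates the suggested ordering, not the actual Xen code:

#include <pthread.h>
#include <stddef.h>

typedef unsigned char nodeid_t;

struct domain {
    nodeid_t last_alloc_node;       /* read by other allocations under heap_lock */
};

static pthread_mutex_t heap_lock = PTHREAD_MUTEX_INITIALIZER;

static void finish_alloc(struct domain *d, nodeid_t node)
{
    /* heap_lock is held on entry in this sketch */
    if ( d != NULL )
        d->last_alloc_node = node;  /* still under the lock, as Jan suggests */

    pthread_mutex_unlock(&heap_lock);

    /* scrubbing of the allocated pages would happen here, lock-free */
}

int main(void)
{
    struct domain dom = { .last_alloc_node = 0 };

    pthread_mutex_lock(&heap_lock);
    finish_alloc(&dom, 1);
    return 0;
}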

Patch

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index b93dae9..1ec788e 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -858,6 +858,7 @@ static struct page_info *alloc_heap_pages(
     struct page_info *pg;
     bool need_tlbflush = false;
     uint32_t tlbflush_timestamp = 0;
+    unsigned int dirty_cnt = 0;
 
     /* Make sure there are enough bits in memflags for nodeID. */
     BUILD_BUG_ON((_MEMF_bits - _MEMF_node) < (8 * sizeof(nodeid_t)));
@@ -943,6 +944,8 @@ static struct page_info *alloc_heap_pages(
 
     check_low_mem_virq();
 
+    spin_unlock(&heap_lock);
+
     if ( d != NULL )
         d->last_alloc_node = node;
 
@@ -955,7 +958,7 @@ static struct page_info *alloc_heap_pages(
         {
             if ( !(memflags & MEMF_no_scrub) )
                 scrub_one_page(&pg[i]);
-            node_need_scrub[node]--;
+            dirty_cnt++;
         }
 
         pg[i].count_info = PGC_state_inuse;
@@ -977,6 +980,8 @@ static struct page_info *alloc_heap_pages(
             check_one_page(&pg[i]);
     }
 
+    spin_lock(&heap_lock);
+    node_need_scrub[node] -= dirty_cnt;
     spin_unlock(&heap_lock);
 
     if ( need_tlbflush )