
[v5,2/4] mm: modify domain_adjust_tot_pages() to better handle a zero adjustment

Message ID 20200129101643.1394-3-pdurrant@amazon.com (mailing list archive)
State Superseded
Series purge free_shared_domheap_page()

Commit Message

Paul Durrant Jan. 29, 2020, 10:16 a.m. UTC
Currently the function will pointlessly acquire and release the global
'heap_lock' when the 'pages' adjustment is zero.

NOTE: No caller yet calls domain_adjust_tot_pages() with a zero 'pages'
      argument, but a subsequent patch will make this possible.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Wei Liu <wl@xen.org>

v5:
 - Split out from the subsequent 'make MEMF_no_refcount pages safe to
   assign' patch as requested by Jan
---
 xen/common/page_alloc.c | 3 +++
 1 file changed, 3 insertions(+)
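
[Editorial note: to make the commit message's point concrete, below is a
stand-alone sketch of the before/after control flow. The stub lock counter,
the struct layout, and the elided claim accounting are illustrative
assumptions for demonstration, not Xen's actual primitives.]

#include <stdio.h>

/* Stub stand-ins for Xen's global heap_lock; not the real primitives. */
static int heap_lock_taken;
static void heap_lock(void)   { heap_lock_taken++; }
static void heap_unlock(void) { }

struct domain { long tot_pages; };  /* illustrative, not Xen's struct domain */

/* Pre-patch shape: the lock is taken even when there is nothing to do. */
static long adjust_old(struct domain *d, long pages)
{
    d->tot_pages += pages;
    heap_lock();                    /* taken unconditionally */
    /* ... claim/outstanding-pages accounting elided ... */
    heap_unlock();
    return d->tot_pages;
}

/* Post-patch shape: a zero adjustment bails out before touching the lock. */
static long adjust_new(struct domain *d, long pages)
{
    if ( !pages )
        goto out;
    d->tot_pages += pages;
    heap_lock();
    /* ... claim/outstanding-pages accounting elided ... */
    heap_unlock();
 out:
    return d->tot_pages;
}

int main(void)
{
    struct domain d = { .tot_pages = 4 };

    adjust_old(&d, 0);
    printf("old: heap_lock taken %d time(s) for a no-op\n", heap_lock_taken);

    heap_lock_taken = 0;
    adjust_new(&d, 0);
    printf("new: heap_lock taken %d time(s) for a no-op\n", heap_lock_taken);
    return 0;
}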

Comments

Jan Beulich Jan. 29, 2020, 11:12 a.m. UTC | #1
On 29.01.2020 11:16, Paul Durrant wrote:
> Currently the function will pointlessly acquire and release the global
> 'heap_lock' when the 'pages' adjustment is zero.
> 
> NOTE: No caller yet calls domain_adjust_tot_pages() with a zero 'pages'
>       argument, but a subsequent patch will make this possible.

With this change, memory_exchange(), as previously indicated, now
needlessly prevents the call when !dec_count. I do think, as said
there, that together with the addition here the then-redundant checks
in callers should be dropped (and, as it looks, the named one is the
only one).

Jan
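
[Editorial note: the caller pattern Jan is pointing at looks roughly like the
sketch below. It is a hypothetical paraphrase: the function names
drop_ref_before/drop_ref_after, the struct layout, and the minimal callee are
stand-ins, not the actual memory_exchange() code in xen/common/memory.c.]

#include <stdbool.h>
#include <stdio.h>

struct domain { long tot_pages; };   /* illustrative, not Xen's */

/* Minimal model of the callee with the early exit this patch adds. */
static unsigned long domain_adjust_tot_pages(struct domain *d, long pages)
{
    if ( !pages )
        goto out;
    d->tot_pages += pages;
    /* ... heap_lock section elided ... */
 out:
    return d->tot_pages;
}

/* Before: the caller itself guards against a zero adjustment. */
static bool drop_ref_before(struct domain *d, unsigned long dec_count)
{
    return dec_count && !domain_adjust_tot_pages(d, -(long)dec_count);
}

/* After: the guard goes away, since the callee now handles pages == 0.
 * (Equivalent as long as tot_pages cannot already be zero at this point.) */
static bool drop_ref_after(struct domain *d, unsigned long dec_count)
{
    return !domain_adjust_tot_pages(d, -(long)dec_count);
}

int main(void)
{
    struct domain d = { .tot_pages = 8 };

    printf("%d %d\n", drop_ref_before(&d, 0), drop_ref_after(&d, 0));
    return 0;
}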
Durrant, Paul Jan. 29, 2020, 11:15 a.m. UTC | #2
> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 29 January 2020 11:13
> To: Durrant, Paul <pdurrant@amazon.co.uk>
> Cc: xen-devel@lists.xenproject.org; Andrew Cooper
> <andrew.cooper3@citrix.com>; George Dunlap <George.Dunlap@eu.citrix.com>;
> Ian Jackson <ian.jackson@eu.citrix.com>; Julien Grall <julien@xen.org>;
> Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Wei Liu <wl@xen.org>
> Subject: Re: [PATCH v5 2/4] mm: modify domain_adjust_tot_pages() to better
> handle a zero adjustment
> 
> On 29.01.2020 11:16, Paul Durrant wrote:
> > Currently the function will pointlessly acquire and release the global
> > 'heap_lock' when the 'pages' adjustment is zero.
> >
> > NOTE: No caller yet calls domain_adjust_tot_pages() with a zero 'pages'
> >       argument, but a subsequent patch will make this possible.
> 
> With this change, memory_exchange(), as previously indicated, now
> needlessly prevents the call when !dec_count. I do think, as said
> there, that together with the addition here the then-redundant checks
> in callers should be dropped (and, as it looks, the named one is the
> only one).
> 

Ok, yes I missed that.

  Paul

> Jan

Patch

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 919a270587..135e15bae0 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -460,6 +460,9 @@ unsigned long domain_adjust_tot_pages(struct domain *d, long pages)
 {
     long dom_before, dom_after, dom_claimed, sys_before, sys_after;
 
+    if ( !pages )
+        goto out;
+
     ASSERT(spin_is_locked(&d->page_alloc_lock));
     d->tot_pages += pages;
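
[Editorial note: the hunk is cut short by the archive here. For orientation,
the resulting shape of the function is sketched below; the 'out' label
placement, the elided middle, the final return, and the stub ASSERT/spinlock
definitions are inferred from the 'goto out' added above, not quoted from
xen/common/page_alloc.c.]

#include <assert.h>

/* Stubs so the sketch stands alone; Xen's real ASSERT/spinlocks differ. */
#define ASSERT(x) assert(x)
struct spinlock { int held; };
static int spin_is_locked(struct spinlock *l) { return l->held; }
struct domain { struct spinlock page_alloc_lock; long tot_pages; };

/* Shape of domain_adjust_tot_pages() after the patch (middle elided). */
unsigned long domain_adjust_tot_pages(struct domain *d, long pages)
{
    if ( !pages )        /* nothing to adjust: skip the heap_lock entirely */
        goto out;

    ASSERT(spin_is_locked(&d->page_alloc_lock));
    d->tot_pages += pages;

    /* ... claim accounting under the global heap_lock, elided ... */

 out:
    return d->tot_pages;
}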