
memory: add early bail out from cpu_physical_memory_set_dirty_range

Message ID 1453827580-29159-1-git-send-email-pbonzini@redhat.com (mailing list archive)
State New, archived

Commit Message

Paolo Bonzini Jan. 26, 2016, 4:59 p.m. UTC
This condition is true in the common case, so we can cut out the body of
the function.  In addition, this makes it easier for the compiler to do
at least partial inlining, even if it decides that fully inlining the
function is unreasonable.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 include/exec/ram_addr.h | 4 ++++
 1 file changed, 4 insertions(+)
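
As a stand-alone illustration of the partial-inlining point above (a rough sketch, not QEMU code: the function name, page size and dirty array are invented), a cheap guard at the top of an otherwise heavy function lets GCC's partial inlining (-fpartial-inlining, enabled at -O2) inline just the test and branch at each call site while keeping the loop out of line, so the common "nothing to do" case pays for an inlined test instead of a full call:

#include <stdint.h>
#include <stdio.h>

#define PAGE_BITS 12
#define NPAGES    1024

static uint8_t dirty[NPAGES];

/* Hypothetical stand-in for cpu_physical_memory_set_dirty_range(). */
static inline void set_dirty_range(uint64_t start, uint64_t length,
                                   uint8_t mask)
{
    if (!mask) {
        /* Common case: no dirty-bitmap clients to update, skip the loop. */
        return;
    }

    /* Heavy tail: mark every page touched by [start, start + length). */
    uint64_t first = start >> PAGE_BITS;
    uint64_t last  = (start + length - 1) >> PAGE_BITS;
    for (uint64_t page = first; page <= last && page < NPAGES; page++) {
        dirty[page] |= mask;
    }
}

int main(void)
{
    set_dirty_range(0, 1 << PAGE_BITS, 0);   /* bails out, touches nothing */
    set_dirty_range(0, 2 << PAGE_BITS, 0x1); /* marks pages 0 and 1 */
    printf("%d %d %d\n", dirty[0], dirty[1], dirty[2]);
    return 0;
}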

Comments

Stefan Hajnoczi Jan. 28, 2016, 11:06 a.m. UTC | #1
On Tue, Jan 26, 2016 at 05:59:40PM +0100, Paolo Bonzini wrote:
> This condition is true in the common case, so we can cut out the body of
> the function.  In addition, this makes it easier for the compiler to do
> at least partial inlining, even if it decides that fully inlining the
> function is unreasonable.
> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  include/exec/ram_addr.h | 4 ++++
>  1 file changed, 4 insertions(+)

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>

Patch

diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index ef1489d..6e31fb5 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -161,6 +161,10 @@ static inline void cpu_physical_memory_set_dirty_range(ram_addr_t start,
     unsigned long end, page;
     unsigned long **d = ram_list.dirty_memory;
 
+    if (!mask && !xen_enabled()) {
+        return;
+    }
+
     end = TARGET_PAGE_ALIGN(start + length) >> TARGET_PAGE_BITS;
     page = start >> TARGET_PAGE_BITS;
     if (likely(mask & (1 << DIRTY_MEMORY_MIGRATION))) {
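
For context, here is a compilable but heavily simplified sketch of the function with the guard in place. Only the guard and the DIRTY_MEMORY_MIGRATION test are visible in the hunk above; the other clients, the stub helpers and the trailing Xen notification are assumptions, included to show why the guard tests !xen_enabled() as well as !mask: if Xen still needs to see the range even when no bitmap bits are set, bailing out on !mask alone would skip that work.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t ram_addr_t;

enum { DIRTY_MEMORY_VGA, DIRTY_MEMORY_CODE, DIRTY_MEMORY_MIGRATION };

/* Stubs standing in for QEMU internals so the sketch compiles on its own. */
static bool xen_enabled(void) { return false; }
static void xen_modified_memory(ram_addr_t start, ram_addr_t length)
{
    printf("xen notified: %llu+%llu\n",
           (unsigned long long)start, (unsigned long long)length);
}
static void set_bits_for_client(int client, ram_addr_t start, ram_addr_t length)
{
    printf("client %d dirtied: %llu+%llu\n", client,
           (unsigned long long)start, (unsigned long long)length);
}

static void set_dirty_range_sketch(ram_addr_t start, ram_addr_t length,
                                   uint8_t mask)
{
    if (!mask && !xen_enabled()) {
        return;                 /* the early bail-out added by the patch */
    }

    if (mask & (1 << DIRTY_MEMORY_MIGRATION)) {
        set_bits_for_client(DIRTY_MEMORY_MIGRATION, start, length);
    }
    if (mask & (1 << DIRTY_MEMORY_VGA)) {
        set_bits_for_client(DIRTY_MEMORY_VGA, start, length);
    }
    if (mask & (1 << DIRTY_MEMORY_CODE)) {
        set_bits_for_client(DIRTY_MEMORY_CODE, start, length);
    }

    /* Assumed: Xen still wants to hear about the range even when mask == 0,
     * which is why the guard cannot test !mask alone. */
    xen_modified_memory(start, length);
}

int main(void)
{
    set_dirty_range_sketch(0x1000, 0x2000, 0);  /* returns before doing anything */
    set_dirty_range_sketch(0x1000, 0x2000, 1 << DIRTY_MEMORY_MIGRATION);
    return 0;
}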