
kvm: x86: increase user memory slots to 509

Message ID 1415289167-24661-1-git-send-email-imammedo@redhat.com (mailing list archive)
State New, archived

Commit Message

Igor Mammedov Nov. 6, 2014, 3:52 p.m. UTC
With the 3 private slots, this gives us 512 slots total.
The motivation, in addition to assigned devices, is to
support more memory hotplug slots, where 1 slot is
used by each hotplugged memory DIMM.
This allows supporting up to 256 memory hotplug slots
while leaving 253 slots for assigned devices and other
users.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
---
the limit was previously increased to 125 slots for assigned devices
by commit 0f888f5acd
---
 arch/x86/include/asm/kvm_host.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Paolo Bonzini Nov. 6, 2014, 4:23 p.m. UTC | #1
On 06/11/2014 16:52, Igor Mammedov wrote:
> With the 3 private slots, this gives us 512 slots total.
> The motivation, in addition to assigned devices, is to
> support more memory hotplug slots, where 1 slot is
> used by each hotplugged memory DIMM.
> This allows supporting up to 256 memory hotplug slots
> while leaving 253 slots for assigned devices and other
> users.
> 
> Signed-off-by: Igor Mammedov <imammedo@redhat.com>

It would use more memory, and some loops are now becoming more
expensive.  In general adding a memory slot to a VM is not cheap, and I
question the wisdom of having 256 hotplug memory slots.  But the
slowdown mostly would only happen if you actually _use_ those memory
slots, so it is not a blocker for this patch.

We should probably change the kmemdup + heap sort in
__kvm_set_memory_region + update_memslots to copy the array and insert
the new item at the right place in the same pass.  Using a heap sort is
overkill: it unnecessarily turns what could be a single O(n)
copy-and-insert pass into an O(n log n) sort, with a bigger constant in
front as well.
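
A rough sketch of that approach, for illustration only (not the actual
kernel code; the struct, field and function names below are simplified
stand-ins for struct kvm_memory_slot and update_memslots(), and sorting
by slot size is just one plausible key):

#include <stddef.h>

struct slot {
	unsigned long long base_gfn;
	unsigned long npages;
	int id;
};

/*
 * Copy the old, already-sorted array into 'new' while dropping the
 * changed slot into its sorted position: one O(n) pass instead of a
 * full copy followed by an O(n log n) sort.  'new' must have room for
 * nslots + 1 entries in case 'changed' is a brand new slot.
 */
static void copy_and_insert(const struct slot *old, size_t nslots,
			    struct slot *new, const struct slot *changed)
{
	size_t i = 0, j = 0;

	/* slots that sort before the changed one; skip its stale copy */
	while (i < nslots && old[i].npages >= changed->npages) {
		if (old[i].id != changed->id)
			new[j++] = old[i];
		i++;
	}

	new[j++] = *changed;	/* the changed slot, at its sorted position */

	/* the remaining slots; again skip any stale copy */
	while (i < nslots) {
		if (old[i].id != changed->id)
			new[j++] = old[i];
		i++;
	}
}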

If you want to do it, I'd be grateful.  Otherwise I can look at it as
time permits.

Paolo
Igor Mammedov Nov. 14, 2014, 2:10 p.m. UTC | #2
On Thu, 06 Nov 2014 17:23:58 +0100
Paolo Bonzini <pbonzini@redhat.com> wrote:

> 
> 
> On 06/11/2014 16:52, Igor Mammedov wrote:
> > With the 3 private slots, this gives us 512 slots total.
> > The motivation, in addition to assigned devices, is to
> > support more memory hotplug slots, where 1 slot is
> > used by each hotplugged memory DIMM.
> > This allows supporting up to 256 memory hotplug slots
> > while leaving 253 slots for assigned devices and other
> > users.
> > 
> > Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> 
> It would use more memory, and some loops are now becoming more
> expensive.  In general adding a memory slot to a VM is not cheap, and
> I question the wisdom of having 256 hotplug memory slots.  But the
> slowdown mostly would only happen if you actually _use_ those memory
> slots, so it is not a blocker for this patch.
It might be useful to have a large number of slots for big guests:
although Linux works with a minimum section size of 128MB, Windows
memory hotplug works just fine even with page-sized slots, so once
unplug is implemented in QEMU it would become possible to drop the
ballooning driver, at least there.

And given that memslots can be allocated at runtime, when the guest
programs devices or maps ROMs (i.e. there is no fail path), I don't see
a way to fix it in QEMU (i.e. to avoid aborting when the limit is
reached).  Hence this attempt to bump the memslot limit to 512: the
current 125 stay reserved for initial memory mappings and passthrough
devices, 256 go to hotplug memory slots, and that leaves 128 free slots
for future expansion.
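
As a quick sanity check, that budget adds up as follows (illustrative
numbers only, not code from the patch):

enum {
	EXISTING_USES = 125,	/* today's KVM_USER_MEM_SLOTS: initial RAM, ROMs, passthrough */
	HOTPLUG_SLOTS = 256,	/* one per hotpluggable memory DIMM */
	SPARE_SLOTS   = 128,	/* headroom for future expansion */
	PRIVATE_SLOTS = 3,	/* KVM_PRIVATE_MEM_SLOTS, not exposed to userspace */
};

_Static_assert(EXISTING_USES + HOTPLUG_SLOTS + SPARE_SLOTS == 509,
	       "user-visible slots must match KVM_USER_MEM_SLOTS");
_Static_assert(EXISTING_USES + HOTPLUG_SLOTS + SPARE_SLOTS + PRIVATE_SLOTS == 512,
	       "total must match KVM_MEM_SLOTS_NUM");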

To see what would be affected by a large number of slots I played with
perf a bit, and the biggest hotspot with a large number of memslots was:

 gfn_to_memslot() -> ... -> search_memslots()

I'll try to make it faster for this case, so that 512 memslots wouldn't
affect guest performance.
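
For reference, that lookup boils down to a linear scan over the slot
array, which is why its cost grows with the number of slots in use.  A
simplified standalone version is shown below (not the kernel's actual
search_memslots(); caching the last matching slot is just one possible
way to keep the common case cheap):

#include <stddef.h>

struct memslot {
	unsigned long long base_gfn;
	unsigned long npages;
};

struct memslots {
	size_t used;
	size_t last_hit;	/* index of the most recently matched slot */
	struct memslot s[512];
};

static struct memslot *find_slot(struct memslots *ms, unsigned long long gfn)
{
	struct memslot *cached = &ms->s[ms->last_hit];
	size_t i;

	/* fast path: most lookups hit the same slot as the previous one */
	if (ms->used && gfn >= cached->base_gfn &&
	    gfn < cached->base_gfn + cached->npages)
		return cached;

	/* slow path: walk every populated slot */
	for (i = 0; i < ms->used; i++) {
		struct memslot *s = &ms->s[i];

		if (gfn >= s->base_gfn && gfn < s->base_gfn + s->npages) {
			ms->last_hit = i;
			return s;
		}
	}
	return NULL;	/* gfn not backed by any memslot */
}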

So please consider applying this patch.

> 
> Paolo

Paolo Bonzini Nov. 14, 2014, 2:53 p.m. UTC | #3
On 14/11/2014 15:10, Igor Mammedov wrote:
> On Thu, 06 Nov 2014 17:23:58 +0100 Paolo Bonzini <pbonzini@redhat.com> wrote:
>> It would use more memory, and some loops are now becoming more
>> expensive.  In general adding a memory slot to a VM is not cheap, and
>> I question the wisdom of having 256 hotplug memory slots.  But the
>> slowdown mostly would only happen if you actually _use_ those memory
>> slots, so it is not a blocker for this patch.
> It might be useful to have a large number of slots for big guests:
> although Linux works with a minimum section size of 128MB, Windows
> memory hotplug works just fine even with page-sized slots, so once
> unplug is implemented in QEMU it would become possible to drop the
> ballooning driver, at least there.

I think for a big (64G?) guest it doesn't make much sense anyway to
balloon at a granularity finer than 1G, if not more.  So I like the
idea of dropping ballooning in favor of memory hotplug for big guests.

> And given that memslots can be allocated at runtime, when the guest
> programs devices or maps ROMs (i.e. there is no fail path), I don't see
> a way to fix it in QEMU (i.e. to avoid aborting when the limit is
> reached).  Hence this attempt to bump the memslot limit to 512: the
> current 125 stay reserved for initial memory mappings and passthrough
> devices, 256 go to hotplug memory slots, and that leaves 128 free slots
> for future expansion.
> 
> To see what would be affected by a large number of slots I played with
> perf a bit, and the biggest hotspot with a large number of memslots was:
> 
>  gfn_to_memslot() -> ... -> search_memslots()
> 
> I'll try to make it faster for this case, so that 512 memslots wouldn't
> affect guest performance.
> 
> So please consider applying this patch.

Yes, sorry for the delay---I am definitely going to apply it.

Paolo

Patch

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 6ed0c30..cfd60e3 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -33,7 +33,7 @@ 
 
 #define KVM_MAX_VCPUS 255
 #define KVM_SOFT_MAX_VCPUS 160
-#define KVM_USER_MEM_SLOTS 125
+#define KVM_USER_MEM_SLOTS 509
 /* memory slots that are not exposed to userspace */
 #define KVM_PRIVATE_MEM_SLOTS 3
 #define KVM_MEM_SLOTS_NUM (KVM_USER_MEM_SLOTS + KVM_PRIVATE_MEM_SLOTS)