
[1/1] KVM: s390: fix cmma migration for multiple memory slots

Message ID 20171219081921.81670-2-borntraeger@de.ibm.com (mailing list archive)
State New, archived

Commit Message

Christian Borntraeger Dec. 19, 2017, 8:19 a.m. UTC
when multiple memory slots are present the cmma migration code
does not allocate enough memory for the bitmap. The memory slots
are sorted in reverse order, so we must use gfn and size of
slot[0] instead of the last one.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.vnet.ibm.com>
Cc: stable@vger.kernel.org # 4.13+
Fixes: 190df4a212a7 (KVM: s390: CMMA tracking, ESSA emulation, migration mode)
---
 arch/s390/kvm/kvm-s390.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)
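
For context, a small standalone sketch (illustration only, not kernel code; the slot layout below is made up) of the arithmetic: with memslots reverse sorted by base_gfn, slot[0] covers the highest guest frame numbers, so sizing the bitmap from the last slot under-allocates as soon as more than one slot exists.

#include <stdio.h>

#define BITS_PER_LONG 64
#define ROUNDUP(x, y)  ((((x) + (y) - 1) / (y)) * (y))

struct memslot {
	unsigned long base_gfn;
	unsigned long npages;
};

int main(void)
{
	/* two slots, reverse sorted by base_gfn (4K pages):
	 * slot[0] = 4G..5G, slot[1] = 0..2G */
	struct memslot slots[] = {
		{ .base_gfn = 0x100000, .npages = 0x40000 },
		{ .base_gfn = 0x0,      .npages = 0x80000 },
	};
	int used_slots = 2;

	unsigned long from_first = ROUNDUP(slots[0].base_gfn + slots[0].npages,
					   BITS_PER_LONG);
	unsigned long from_last  = ROUNDUP(slots[used_slots - 1].base_gfn +
					   slots[used_slots - 1].npages,
					   BITS_PER_LONG);

	/* one bit per guest page frame */
	printf("bits needed, sized from slot[0]     : %lu\n", from_first);
	printf("bits allocated, sized from last slot: %lu (too small)\n",
	       from_last);
	return 0;
}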

Comments

Cornelia Huck Dec. 19, 2017, 10:41 a.m. UTC | #1
On Tue, 19 Dec 2017 09:19:21 +0100
Christian Borntraeger <borntraeger@de.ibm.com> wrote:

> when multiple memory slots are present the cmma migration code

s/when/When/

> does not allocate enough memory for the bitmap. The memory slots
> are sorted in reverse order, so we must use gfn and size of
> slot[0] instead of the last one.

I've spent way too much time looking at the memslot code, but this
seems correct.

> 
> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
> Reviewed-by: Claudio Imbrenda <imbrenda@linux.vnet.ibm.com>
> Cc: stable@vger.kernel.org # 4.13+
> Fixes: 190df4a212a7 (KVM: s390: CMMA tracking, ESSA emulation, migration mode)
> ---
>  arch/s390/kvm/kvm-s390.c | 9 +++++----
>  1 file changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> index 966ea611210a..3373d8dff131 100644
> --- a/arch/s390/kvm/kvm-s390.c
> +++ b/arch/s390/kvm/kvm-s390.c
> @@ -792,11 +792,12 @@ static int kvm_s390_vm_start_migration(struct kvm *kvm)
>  
>  	if (kvm->arch.use_cmma) {
>  		/*
> -		 * Get the last slot. They should be sorted by base_gfn, so the
> -		 * last slot is also the one at the end of the address space.
> -		 * We have verified above that at least one slot is present.
> +		 * Get the first slot. They are reverse sorted by base_gfn, so
> +		 * the first slot is also the one at the end of the address
> +		 * space. We have verified above that at least one slot is
> +		 * present.
>  		 */
> -		ms = slots->memslots + slots->used_slots - 1;
> +		ms = slots->memslots;
>  		/* round up so we only use full longs */
>  		ram_pages = roundup(ms->base_gfn + ms->npages, BITS_PER_LONG);
>  		/* allocate enough bytes to store all the bits */

Reviewed-by: Cornelia Huck <cohuck@redhat.com>

As you wrote, this is good as a minimal fix.
Christian Borntraeger Dec. 19, 2017, 11:04 a.m. UTC | #2
On 12/19/2017 11:41 AM, Cornelia Huck wrote:
> On Tue, 19 Dec 2017 09:19:21 +0100
> Christian Borntraeger <borntraeger@de.ibm.com> wrote:
> 
>> when multiple memory slots are present the cmma migration code
> 
> s/when/When/
> 
>> does not allocate enough memory for the bitmap. The memory slots
>> are sorted in reverse order, so we must use gfn and size of
>> slot[0] instead of the last one.
> 
> I've spent way too much time looking at the memslot code, but this
> seems correct.
> 
>>
>> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
>> Reviewed-by: Claudio Imbrenda <imbrenda@linux.vnet.ibm.com>
>> Cc: stable@vger.kernel.org # 4.13+
>> Fixes: 190df4a212a7 (KVM: s390: CMMA tracking, ESSA emulation, migration mode)
>> ---
>>  arch/s390/kvm/kvm-s390.c | 9 +++++----
>>  1 file changed, 5 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
>> index 966ea611210a..3373d8dff131 100644
>> --- a/arch/s390/kvm/kvm-s390.c
>> +++ b/arch/s390/kvm/kvm-s390.c
>> @@ -792,11 +792,12 @@ static int kvm_s390_vm_start_migration(struct kvm *kvm)
>>  
>>  	if (kvm->arch.use_cmma) {
>>  		/*
>> -		 * Get the last slot. They should be sorted by base_gfn, so the
>> -		 * last slot is also the one at the end of the address space.
>> -		 * We have verified above that at least one slot is present.
>> +		 * Get the first slot. They are reverse sorted by base_gfn, so
>> +		 * the first slot is also the one at the end of the address
>> +		 * space. We have verified above that at least one slot is
>> +		 * present.
>>  		 */
>> -		ms = slots->memslots + slots->used_slots - 1;
>> +		ms = slots->memslots;
>>  		/* round up so we only use full longs */
>>  		ram_pages = roundup(ms->base_gfn + ms->npages, BITS_PER_LONG);
>>  		/* allocate enough bytes to store all the bits */
> 
> Reviewed-by: Cornelia Huck <cohuck@redhat.com>
> 
> As you wrote, this is good as a minimal fix.


Paolo, Radim,

do you want a respin and/or pull request, or can you take a fixed-up version
(adding the Reviewed-by and fixing "when" vs. "When") for kvm/master?
David Hildenbrand Dec. 19, 2017, 5:11 p.m. UTC | #3
On 19.12.2017 09:19, Christian Borntraeger wrote:
> when multiple memory slots are present the cmma migration code
> does not allocate enough memory for the bitmap. The memory slots
> are sorted in reverse order, so we must use gfn and size of
> slot[0] instead of the last one.
> 
> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
> Reviewed-by: Claudio Imbrenda <imbrenda@linux.vnet.ibm.com>
> Cc: stable@vger.kernel.org # 4.13+
> Fixes: 190df4a212a7 (KVM: s390: CMMA tracking, ESSA emulation, migration mode)
> ---
>  arch/s390/kvm/kvm-s390.c | 9 +++++----
>  1 file changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> index 966ea611210a..3373d8dff131 100644
> --- a/arch/s390/kvm/kvm-s390.c
> +++ b/arch/s390/kvm/kvm-s390.c
> @@ -792,11 +792,12 @@ static int kvm_s390_vm_start_migration(struct kvm *kvm)
>  
>  	if (kvm->arch.use_cmma) {
>  		/*
> -		 * Get the last slot. They should be sorted by base_gfn, so the
> -		 * last slot is also the one at the end of the address space.
> -		 * We have verified above that at least one slot is present.
> +		 * Get the first slot. They are reverse sorted by base_gfn, so
> +		 * the first slot is also the one at the end of the address
> +		 * space. We have verified above that at least one slot is
> +		 * present.
>  		 */
> -		ms = slots->memslots + slots->used_slots - 1;
> +		ms = slots->memslots;
>  		/* round up so we only use full longs */
>  		ram_pages = roundup(ms->base_gfn + ms->npages, BITS_PER_LONG);
>  		/* allocate enough bytes to store all the bits */
> 

If I am not wrong, can't user space:

a) Create a VM, boot Linux
b) Trigger VM START migration, initializing the bitmap
c) add a new memslot
d) make the guest execute an ESSA instruction on that new memslot

In do_essa(), gfn_to_hva() will now succeed and we will write into some
random memory as the bitmap is too short:

"test_and_set_bit(gfn, ms->pgste_bitmap)"


The same could be possible if the guest onlines memory (creating the
memslot via SCLP) during VM migration.
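
A minimal sketch of the kind of guard that would avoid the out-of-bounds bit
set described above; this is hypothetical, not code from the patch or the
kernel sources, and bitmap_pages stands in for however the allocated bitmap
length would be tracked.

#include <linux/bitops.h>

/*
 * Hypothetical guard (not in the patch): only account gfns that the
 * CMMA bitmap was sized for when migration started.  bitmap_pages is
 * an assumed parameter standing in for the allocated bit count.
 */
static inline void cmma_mark_dirty(unsigned long *pgste_bitmap,
				   unsigned long bitmap_pages,
				   unsigned long gfn)
{
	if (gfn < bitmap_pages)
		test_and_set_bit(gfn, pgste_bitmap);
	/*
	 * else: the memslot was added after the bitmap was allocated,
	 * so skip it instead of writing past the allocation.
	 */
}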

Patch

diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 966ea611210a..3373d8dff131 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -792,11 +792,12 @@  static int kvm_s390_vm_start_migration(struct kvm *kvm)
 
 	if (kvm->arch.use_cmma) {
 		/*
-		 * Get the last slot. They should be sorted by base_gfn, so the
-		 * last slot is also the one at the end of the address space.
-		 * We have verified above that at least one slot is present.
+		 * Get the first slot. They are reverse sorted by base_gfn, so
+		 * the first slot is also the one at the end of the address
+		 * space. We have verified above that at least one slot is
+		 * present.
 		 */
-		ms = slots->memslots + slots->used_slots - 1;
+		ms = slots->memslots;
 		/* round up so we only use full longs */
 		ram_pages = roundup(ms->base_gfn + ms->npages, BITS_PER_LONG);
 		/* allocate enough bytes to store all the bits */