
[RFC] arm/arm64: KVM: allow the use of THP on 2MB aligned memslots

Message ID 1386859881-13482-1-git-send-email-marc.zyngier@arm.com (mailing list archive)
State New, archived

Commit Message

Marc Zyngier Dec. 12, 2013, 2:51 p.m. UTC
The THP code in KVM/ARM is a bit restrictive in not allowing a THP
to be used if the VMA is not 2MB aligned. Actually, it is not so much
the VMA that matters, but the associated memslot:

A process can perfectly well mmap a region with no particular alignment
restriction, and then pass a 2MB aligned address to KVM. In this
case, KVM will only use this 2MB aligned region, and will ignore
the range between vma->vm_start and memslot->userspace_addr.

The fix is then to check the alignment of memslot->userspace_addr.

Cc: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm/kvm/mmu.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
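
To make the scenario above concrete, here is a minimal userspace sketch (not part of the patch): it mmap()s a buffer with no alignment guarantee, rounds the address up to the next 2MB boundary inside the VMA, and registers only that 2MB aligned range as a memslot. The vm_fd parameter, the guest physical address and the helper name are illustrative assumptions, and error handling is mostly omitted.

#include <linux/kvm.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

#define SZ_2M	(2UL * 1024 * 1024)

/* Illustrative only: vm_fd is assumed to come from KVM_CREATE_VM. */
static int register_guest_ram(int vm_fd, size_t size)
{
	/* mmap() only guarantees page alignment, not 2MB alignment. */
	void *buf = mmap(NULL, size + SZ_2M, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return -1;

	/* Round up inside the VMA so the memslot itself is 2MB aligned. */
	uint64_t hva = ((uintptr_t)buf + SZ_2M - 1) & ~(SZ_2M - 1);

	struct kvm_userspace_memory_region region = {
		.slot            = 0,
		.guest_phys_addr = 0x80000000,	/* 2MB aligned IPA, made up */
		.memory_size     = size,
		.userspace_addr  = hva,	/* 2MB aligned even if vma->vm_start is not */
	};

	/*
	 * KVM only maps [userspace_addr, userspace_addr + memory_size);
	 * the slack between vma->vm_start and userspace_addr is ignored.
	 */
	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}

With such a setup vma->vm_start can sit on any 4KB boundary while memslot->userspace_addr is 2MB aligned, which is exactly the case the patch wants to allow block mappings for.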

Comments

Christoffer Dall Dec. 13, 2013, 1:36 a.m. UTC | #1
On Thu, Dec 12, 2013 at 02:51:21PM +0000, Marc Zyngier wrote:
> The THP code in KVM/ARM is a bit restrictive in not allowing a THP
> to be used if the VMA is not 2MB aligned. Actually, it is not so much
> the VMA that matters, but the associated memslot:
> 
> A process can perfectly well mmap a region with no particular alignment
> restriction, and then pass a 2MB aligned address to KVM. In this
> case, KVM will only use this 2MB aligned region, and will ignore
> the range between vma->vm_start and memslot->userspace_addr.
> 
> The fix is then to check the alignment of memslot->userspace_addr.

That's more correct, but I'm wondering if it's enough.

What happens if the base_gfn is not aligned to a 2MB region? Will we not
be mapping something completely bogus here?


> 
> Cc: Christoffer Dall <christoffer.dall@linaro.org>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  arch/arm/kvm/mmu.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
> index 5809069..cec641a 100644
> --- a/arch/arm/kvm/mmu.c
> +++ b/arch/arm/kvm/mmu.c
> @@ -667,14 +667,14 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  		gfn = (fault_ipa & PMD_MASK) >> PAGE_SHIFT;
>  	} else {
>  		/*
> -		 * Pages belonging to VMAs not aligned to the PMD mapping
> +		 * Pages belonging to memslots not aligned to the PMD mapping
>  		 * granularity cannot be mapped using block descriptors even
>  		 * if the pages belong to a THP for the process, because the
>  		 * stage-2 block descriptor will cover more than a single THP
>  		 * and we loose atomicity for unmapping, updates, and splits
>  		 * of the THP or other pages in the stage-2 block range.
>  		 */
> -		if (vma->vm_start & ~PMD_MASK)
> +		if (memslot->userspace_addr & ~PMD_MASK)
>  			force_pte = true;
>  	}
>  	up_read(&current->mm->mmap_sem);
> -- 
> 1.8.2.3
> 
>
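
Christoffer's concern above can be illustrated with made-up numbers: if the memslot's hva is 2MB aligned but its base IPA is not, rounding the fault IPA down to a PMD boundary yields a host address that is not 2MB aligned, so a stage-2 block would straddle two candidate THPs. The following self-contained sketch only mimics the arithmetic; PMD_MASK is hard-coded for 2MB blocks and the memslot fields are plain variables, not the kernel structures.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define PMD_MASK	(~((1ULL << 21) - 1))	/* 2MB blocks, hard-coded */

int main(void)
{
	uint64_t userspace_addr = 0x7f0000200000ULL;	/* 2MB aligned hva  */
	uint64_t base_ipa       = 0x80100000ULL;	/* only 1MB aligned */
	uint64_t fault_ipa      = 0x80234567ULL;	/* some guest fault  */

	uint64_t base_gfn = base_ipa >> PAGE_SHIFT;
	uint64_t gfn      = (fault_ipa & PMD_MASK) >> PAGE_SHIFT;
	uint64_t hva      = userspace_addr + ((gfn - base_gfn) << PAGE_SHIFT);

	/*
	 * Prints 0x7f0000300000, which is only 1MB aligned: a 2MB stage-2
	 * block at IPA 0x80200000 would straddle two host THP candidates.
	 */
	printf("hva backing the block base = 0x%llx\n",
	       (unsigned long long)hva);
	return 0;
}
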
Marc Zyngier Dec. 13, 2013, 8:34 a.m. UTC | #2
On 2013-12-13 01:36, Christoffer Dall wrote:
> On Thu, Dec 12, 2013 at 02:51:21PM +0000, Marc Zyngier wrote:
>> The THP code in KVM/ARM is a bit restrictive in not allowing a THP
>> to be used if the VMA is not 2MB aligned. Actually, it is not so 
>> much
>> the VMA that matters, but the associated memslot:
>>
>> A process can perfectly well mmap a region with no particular alignment
>> restriction, and then pass a 2MB aligned address to KVM. In this
>> case, KVM will only use this 2MB aligned region, and will ignore
>> the range between vma->vm_start and memslot->userspace_addr.
>>
>> The fix is then to check the alignment of memslot->userspace_addr.
>
> That's more correct, but I'm wondering if it's enough.
>
> What happens if the base_gfn is not aligned to a 2MB region? Will we
> not be mapping something completely bogus here?

Indeed. So far, we haven't seen a stupid enough userspace, but I'm sure 
it will happen.

I'll update this patch to also check for the base IPA of the memslot.

Thanks,

         M.
>
>>
>> Cc: Christoffer Dall <christoffer.dall@linaro.org>
>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>> ---
>>  arch/arm/kvm/mmu.c | 4 ++--
>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
>> index 5809069..cec641a 100644
>> --- a/arch/arm/kvm/mmu.c
>> +++ b/arch/arm/kvm/mmu.c
>> @@ -667,14 +667,14 @@ static int user_mem_abort(struct kvm_vcpu 
>> *vcpu, phys_addr_t fault_ipa,
>>  		gfn = (fault_ipa & PMD_MASK) >> PAGE_SHIFT;
>>  	} else {
>>  		/*
>> -		 * Pages belonging to VMAs not aligned to the PMD mapping
>> +		 * Pages belonging to memslots not aligned to the PMD mapping
>>  		 * granularity cannot be mapped using block descriptors even
>>  		 * if the pages belong to a THP for the process, because the
>>  		 * stage-2 block descriptor will cover more than a single THP
>>  		 * and we loose atomicity for unmapping, updates, and splits
>>  		 * of the THP or other pages in the stage-2 block range.
>>  		 */
>> -		if (vma->vm_start & ~PMD_MASK)
>> +		if (memslot->userspace_addr & ~PMD_MASK)
>>  			force_pte = true;
>>  	}
>>  	up_read(&current->mm->mmap_sem);
>> --
>> 1.8.2.3
>>
>>
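
For reference, one way the extra check Marc mentions could look is to require the hva and the IPA of the memslot to have the same offset within a 2MB block; the sketch below is only an illustration of that idea under assumed names, not the follow-up patch itself.

#include <stdbool.h>
#include <stdint.h>

#define PAGE_SHIFT	12
#define PMD_MASK	(~((1ULL << 21) - 1))	/* 2MB blocks, hard-coded */

/*
 * Sketch only: allow THP-backed block mappings only when the hva and the
 * IPA of the memslot share the same offset inside a 2MB block, so that
 * every stage-2 block lines up with a single candidate THP.
 */
static bool memslot_supports_thp(uint64_t userspace_addr, uint64_t base_gfn)
{
	return (userspace_addr & ~PMD_MASK) ==
	       ((base_gfn << PAGE_SHIFT) & ~PMD_MASK);
}

In user_mem_abort() this would presumably translate to setting force_pte whenever such a helper returns false for the faulting memslot.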

Patch

diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 5809069..cec641a 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -667,14 +667,14 @@  static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		gfn = (fault_ipa & PMD_MASK) >> PAGE_SHIFT;
 	} else {
 		/*
-		 * Pages belonging to VMAs not aligned to the PMD mapping
+		 * Pages belonging to memslots not aligned to the PMD mapping
 		 * granularity cannot be mapped using block descriptors even
 		 * if the pages belong to a THP for the process, because the
 		 * stage-2 block descriptor will cover more than a single THP
 		 * and we loose atomicity for unmapping, updates, and splits
 		 * of the THP or other pages in the stage-2 block range.
 		 */
-		if (vma->vm_start & ~PMD_MASK)
+		if (memslot->userspace_addr & ~PMD_MASK)
 			force_pte = true;
 	}
 	up_read(&current->mm->mmap_sem);