[v6,30/43] arm64: rme: Prevent Device mappings for Realms

Message ID 20241212155610.76522-31-steven.price@arm.com (mailing list archive)
State New, archived
Series: arm64: Support for Arm CCA in KVM

Commit Message

Steven Price Dec. 12, 2024, 3:55 p.m. UTC
Physical device assignment is not yet supported by the RMM, so it
doesn't make much sense to allow device mappings within the realm.
Prevent them when the guest is a realm.

Signed-off-by: Steven Price <steven.price@arm.com>
---
Changes from v5:
 * Also prevent accesses in user_mem_abort()
---
 arch/arm64/kvm/mmu.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

Comments

Gavin Shan Feb. 2, 2025, 7:12 a.m. UTC | #1
On 12/13/24 1:55 AM, Steven Price wrote:
> Physical device assignment is not yet supported by the RMM, so it
> doesn't make much sense to allow device mappings within the realm.
> Prevent them when the guest is a realm.
> 
> Signed-off-by: Steven Price <steven.price@arm.com>
> ---
> Changes from v5:
>   * Also prevent accesses in user_mem_abort()
> ---
>   arch/arm64/kvm/mmu.c | 12 ++++++++++++
>   1 file changed, 12 insertions(+)
> 
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 9ede143ccef1..cef7c3dcbf99 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1149,6 +1149,10 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
>   	if (is_protected_kvm_enabled())
>   		return -EPERM;
>   
> +	/* We don't support mapping special pages into a Realm */
> +	if (kvm_is_realm(kvm))
> +		return -EINVAL;
> +

		return -EPERM;

>   	size += offset_in_page(guest_ipa);
>   	guest_ipa &= PAGE_MASK;
>   
> @@ -1725,6 +1729,14 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>   	if (exec_fault && device)
>   		return -ENOEXEC;
>   
> +	/*
> +	 * Don't allow device accesses to protected memory as we don't (yet)
> +	 * support protected devices.
> +	 */
> +	if (device && kvm_is_realm(kvm) &&
> +	    kvm_gpa_from_fault(kvm, fault_ipa) == fault_ipa)
> +		return -EINVAL;
> +

s/kvm_is_realm/vcpu_is_rec

I don't understand the check very well. What I understood is that
user_mem_abort() is called only when kvm_gpa_from_fault(kvm, fault_ipa) !=
fault_ipa, meaning only page faults in the shared address space are handled
by user_mem_abort(). So I guess we perhaps need something like below.

	if (vcpu_is_rec(vcpu) && device)
		return -EPERM;

kvm_handle_guest_abort
   kvm_slot_can_be_private
     private_memslot_fault	// page fault in the private space is handled here
   io_mem_abort			// MMIO emulation is handled here
   user_mem_abort                // page fault in the shared space is handled here

>   	/*
>   	 * Potentially reduce shadow S2 permissions to match the guest's own
>   	 * S2. For exec faults, we'd only reach this point if the guest

Thanks,
Gavin
Steven Price Feb. 7, 2025, 5:08 p.m. UTC | #2
On 02/02/2025 07:12, Gavin Shan wrote:
> On 12/13/24 1:55 AM, Steven Price wrote:
>> Physical device assignment is not yet supported by the RMM, so it
>> doesn't make much sense to allow device mappings within the realm.
>> Prevent them when the guest is a realm.
>>
>> Signed-off-by: Steven Price <steven.price@arm.com>
>> ---
>> Changes from v5:
>>   * Also prevent accesses in user_mem_abort()
>> ---
>>   arch/arm64/kvm/mmu.c | 12 ++++++++++++
>>   1 file changed, 12 insertions(+)
>>
>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>> index 9ede143ccef1..cef7c3dcbf99 100644
>> --- a/arch/arm64/kvm/mmu.c
>> +++ b/arch/arm64/kvm/mmu.c
>> @@ -1149,6 +1149,10 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
>>   	if (is_protected_kvm_enabled())
>>   		return -EPERM;
>>   
>> +	/* We don't support mapping special pages into a Realm */
>> +	if (kvm_is_realm(kvm))
>> +		return -EINVAL;
>> +
> 
>         return -EPERM;
> 
>>   	size += offset_in_page(guest_ipa);
>>   	guest_ipa &= PAGE_MASK;
>>   
>> @@ -1725,6 +1729,14 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>   	if (exec_fault && device)
>>   		return -ENOEXEC;
>>   
>> +	/*
>> +	 * Don't allow device accesses to protected memory as we don't (yet)
>> +	 * support protected devices.
>> +	 */
>> +	if (device && kvm_is_realm(kvm) &&
>> +	    kvm_gpa_from_fault(kvm, fault_ipa) == fault_ipa)
>> +		return -EINVAL;
>> +
> 
> s/kvm_is_realm/vcpu_is_rec
> 
> I don't understand the check very well. What I understood is that
> user_mem_abort() is called only when kvm_gpa_from_fault(kvm, fault_ipa)
> != fault_ipa, meaning only page faults in the shared address space are
> handled by user_mem_abort(). So I guess we perhaps need something like
> below.

private_memslot_fault() is only used when the memslot is backed by a
private guest_memfd(). So there's also the situation of a 'normal' memslot
which can still end up in user_mem_abort().

As things currently stand we can't really deal with this (the bottom
part of the IPA space is protected, and therefore must come from
guest_memfd()). But in the future we are expecting to be able to support
protected devices.

So I think the correct solution for now is to drop the "device" check
and update the comment to reflect the fact that private_memslot_fault()
should be handling all protected addresses until we get support for
assigning devices.

Thanks,
Steve

> 	if (vcpu_is_rec(vcpu) && device)
> 		return -EPERM;
> 
> kvm_handle_guest_abort
>    kvm_slot_can_be_private
>      private_memslot_fault	// page fault in the private space is handled here
>    io_mem_abort			// MMIO emulation is handled here
>    user_mem_abort		// page fault in the shared space is handled here
> 
>>   	/*
>>   	 * Potentially reduce shadow S2 permissions to match the guest's own
>>   	 * S2. For exec faults, we'd only reach this point if the guest
> 
> Thanks,
> Gavin
>
Patch

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 9ede143ccef1..cef7c3dcbf99 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1149,6 +1149,10 @@  int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
 	if (is_protected_kvm_enabled())
 		return -EPERM;
 
+	/* We don't support mapping special pages into a Realm */
+	if (kvm_is_realm(kvm))
+		return -EINVAL;
+
 	size += offset_in_page(guest_ipa);
 	guest_ipa &= PAGE_MASK;
 
@@ -1725,6 +1729,14 @@  static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	if (exec_fault && device)
 		return -ENOEXEC;
 
+	/*
+	 * Don't allow device accesses to protected memory as we don't (yet)
+	 * support protected devices.
+	 */
+	if (device && kvm_is_realm(kvm) &&
+	    kvm_gpa_from_fault(kvm, fault_ipa) == fault_ipa)
+		return -EINVAL;
+
 	/*
 	 * Potentially reduce shadow S2 permissions to match the guest's own
 	 * S2. For exec faults, we'd only reach this point if the guest