[v3,7/9] KVM: selftests: Add interface to manually flag protected/encrypted ranges

Message ID 20240905124107.6954-8-pratikrajesh.sampat@amd.com (mailing list archive)
State New
Series SEV Kernel Selftests

Commit Message

Pratik R. Sampat Sept. 5, 2024, 12:41 p.m. UTC
From: Michael Roth <michael.roth@amd.com>

For SEV and SNP, currently __vm_phy_pages_alloc() handles setting the
region->protected_phy_pages bitmap to mark that the region needs to be
encrypted/measured into the initial guest state prior to
finalizing/starting the guest. It also marks what GPAs need to be mapped
as encrypted in the initial guest page table.

This works when using the virtual/physical allocators to manage memory,
but if a test manages allocations/mappings itself, it needs an
alternative way to set region->protected_phy_pages. Add an interface to
handle that.
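
As a rough illustration, the manual path would use the new helper along
these lines (a hypothetical test fragment; the GPA, memslot, and page
count are made up for the example):

	vm_paddr_t gpa = 0x80000000;	/* illustrative GPA */
	uint32_t slot = 1;		/* illustrative memslot */
	size_t npages = 4;

	/*
	 * Flag the pages first so that virt_map() will install encrypted
	 * mappings for them in the initial guest page table.
	 */
	vm_mem_set_protected(vm, slot, gpa, npages);
	virt_map(vm, gpa, gpa, npages);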

Signed-off-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Pratik R. Sampat <pratikrajesh.sampat@amd.com>
Tested-by: Peter Gonda <pgonda@google.com>
Tested-by: Srikanth Aithal <sraithal@amd.com>
---
 .../testing/selftests/kvm/include/kvm_util.h  |  2 +
 tools/testing/selftests/kvm/lib/kvm_util.c    | 45 +++++++++++++++++--
 2 files changed, 43 insertions(+), 4 deletions(-)

Comments

Sean Christopherson Oct. 14, 2024, 10:58 p.m. UTC | #1
On Thu, Sep 05, 2024, Pratik R. Sampat wrote:
> From: Michael Roth <michael.roth@amd.com>
> 
> For SEV and SNP, currently __vm_phy_pages_alloc() handles setting the
> region->protected_phy_pages bitmap to mark that the region needs to be
> encrypted/measured into the initial guest state prior to

Nothing needs to be measured, no?  (because there's no attestation)

> finalizing/starting the guest. It also marks what GPAs need to be mapped
> as encrypted in the initial guest page table.

...

>  static inline void vm_mem_set_private(struct kvm_vm *vm, uint64_t gpa,
>  				      uint64_t size)
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index bbf90ad224da..d44a37aebcec 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -1991,6 +1991,43 @@ const char *exit_reason_str(unsigned int exit_reason)
>  	return "Unknown";
>  }
>  
> +/*
> + * Set which guest GFNs need to be encrypted prior to finalizing a CoCo VM.
> + *
> + * Input Args:
> + *   vm - Virtual Machine
> + *   memslot - Memory region containing the pages
> + *   paddr - Starting physical address of the range to mark as encrypted
> + *   num - Number of pages
> + *
> + * Output Args: None
> + *
> + * Return: None
> + *
> + * Generally __vm_phy_pages_alloc() will handle this automatically, but
> + * for cases where the test manages the physical allocation and mapping
> + * directly, this interface should be used to mark physical pages that
> + * are intended to be encrypted as part of the initial guest state.
> + * This will also affect whether virt_map()/virt_pg_map() will map the
> + * page as encrypted or not in the initial guest page table.
> + *
> + * If the initial guest state has already been finalized, then setting
> + * it as encrypted is essentially a no-op, since nothing more can be
> + * encrypted into the initial guest state at that point.
> + */
> +void vm_mem_set_protected(struct kvm_vm *vm, uint32_t memslot,
> +			  vm_paddr_t paddr, size_t num)
> +{
> +	struct userspace_mem_region *region;
> +	sparsebit_idx_t pg, base;
> +
> +	base = paddr >> vm->page_shift;
> +	region = memslot2region(vm, memslot);

Please no, doing a memslot lookup in a helper like this is only going to encourage
proliferation of bad code.  vm_mem_add() really should be able to mark the region
as protected.

E.g. practically speaking, the only code that will be able to use this helper is
code that is marking the entire memslot as protected.  And the ability to _clear_
the protected_phy_pages bit is conspicuously missing.
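
For completeness, a paired helper would presumably look something like
the sketch below (hypothetical, not part of this patch, and ignoring the
memslot2region() objection above for the moment):

	void vm_mem_clear_protected(struct kvm_vm *vm, uint32_t memslot,
				    vm_paddr_t paddr, size_t num)
	{
		struct userspace_mem_region *region;
		sparsebit_idx_t pg, base;

		base = paddr >> vm->page_shift;
		region = memslot2region(vm, memslot);

		/* Mirror vm_mem_set_protected(), but clear the bits. */
		for (pg = base; pg < base + num; ++pg)
			sparsebit_clear(region->protected_phy_pages, pg);
	}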

> +
> +	for (pg = base; pg < base + num; ++pg)
> +		sparsebit_set(region->protected_phy_pages, pg);
> +}
Pratik R. Sampat Oct. 21, 2024, 8:23 p.m. UTC | #2
Hi Sean,

On 10/14/2024 5:58 PM, Sean Christopherson wrote:
> On Thu, Sep 05, 2024, Pratik R. Sampat wrote:
>> From: Michael Roth <michael.roth@amd.com>
>>
>> For SEV and SNP, currently __vm_phy_pages_alloc() handles setting the
>> region->protected_phy_pages bitmap to mark that the region needs to be
>> encrypted/measured into the initial guest state prior to
> 
> Nothing needs to be measured, no?  (because there's no attestation)
> 

Right.

>> finalizing/starting the guest. It also marks what GPAs need to be mapped
>> as encrypted in the initial guest page table.
> 
> ...
> 
>>  static inline void vm_mem_set_private(struct kvm_vm *vm, uint64_t gpa,
>>  				      uint64_t size)
>> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
>> index bbf90ad224da..d44a37aebcec 100644
>> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
>> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
>> @@ -1991,6 +1991,43 @@ const char *exit_reason_str(unsigned int exit_reason)
>>  	return "Unknown";
>>  }
>>  
>> +/*
>> + * Set which guest GFNs need to be encrypted prior to finalizing a CoCo VM.
>> + *
>> + * Input Args:
>> + *   vm - Virtual Machine
>> + *   memslot - Memory region containing the pages
>> + *   paddr - Starting physical address of the range to mark as encrypted
>> + *   num - Number of pages
>> + *
>> + * Output Args: None
>> + *
>> + * Return: None
>> + *
>> + * Generally __vm_phy_pages_alloc() will handle this automatically, but
>> + * for cases where the test manages the physical allocation and mapping
>> + * directly, this interface should be used to mark physical pages that
>> + * are intended to be encrypted as part of the initial guest state.
>> + * This will also affect whether virt_map()/virt_pg_map() will map the
>> + * page as encrypted or not in the initial guest page table.
>> + *
>> + * If the initial guest state has already been finalized, then setting
>> + * it as encrypted is essentially a no-op, since nothing more can be
>> + * encrypted into the initial guest state at that point.
>> + */
>> +void vm_mem_set_protected(struct kvm_vm *vm, uint32_t memslot,
>> +			  vm_paddr_t paddr, size_t num)
>> +{
>> +	struct userspace_mem_region *region;
>> +	sparsebit_idx_t pg, base;
>> +
>> +	base = paddr >> vm->page_shift;
>> +	region = memslot2region(vm, memslot);
> 
> Please no, doing a memslot lookup in a helper like this is only going to encourage
> proliferation of bad code.  vm_mem_add() really should be able to mark the region
> as protected.
> 
> E.g. practically speaking, the only code that will be able to use this helper is
> code that is marking the entire memslot as protected.  And the ability to _clear_
> the protected_phy_pages bit is conspicuously missing.
> 

From my understanding, vm_mem_add() only allocates the protected
sparsebits but does not populate them. For that, maybe a better path
would be to go through vm_phy_pages_alloc() - something similar to what
set_memory_region_test.c does? I think we avoided doing that because
vm_phy_pages_alloc() takes a paddr_min rather than guaranteeing a
specific paddr, and even then we would still need the virt_map() logic
here. If this is a cleaner approach though, I'm happy to redo this code
that way.
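
For reference, that alternative would look roughly like the sketch below
(hypothetical; the assert is needed precisely because paddr_min is only
a lower bound, not a guaranteed address):

	vm_paddr_t gpa = vm_phy_pages_alloc(vm, npages, desired_gpa, slot);

	TEST_ASSERT(gpa == desired_gpa,
		    "Allocator returned 0x%lx, expected 0x%lx",
		    gpa, desired_gpa);
	virt_map(vm, gpa, gpa, npages);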

Thanks!
Pratik

>> +
>> +	for (pg = base; pg < base + num; ++pg)
>> +		sparsebit_set(region->protected_phy_pages, pg);
>> +}

Patch

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index ab213708b551..642740fe1c59 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -394,6 +394,8 @@ static inline void vm_set_memory_attributes(struct kvm_vm *vm, uint64_t gpa,
 	vm_ioctl(vm, KVM_SET_MEMORY_ATTRIBUTES, &attr);
 }
 
+void vm_mem_set_protected(struct kvm_vm *vm, uint32_t memslot,
+			  vm_paddr_t paddr, size_t num);
 
 static inline void vm_mem_set_private(struct kvm_vm *vm, uint64_t gpa,
 				      uint64_t size)
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index bbf90ad224da..d44a37aebcec 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1991,6 +1991,43 @@ const char *exit_reason_str(unsigned int exit_reason)
 	return "Unknown";
 }
 
+/*
+ * Set which guest GFNs need to be encrypted prior to finalizing a CoCo VM.
+ *
+ * Input Args:
+ *   vm - Virtual Machine
+ *   memslot - Memory region containing the pages
+ *   paddr - Starting physical address of the range to mark as encrypted
+ *   num - Number of pages
+ *
+ * Output Args: None
+ *
+ * Return: None
+ *
+ * Generally __vm_phy_pages_alloc() will handle this automatically, but
+ * for cases where the test manages the physical allocation and mapping
+ * directly, this interface should be used to mark physical pages that
+ * are intended to be encrypted as part of the initial guest state.
+ * This will also affect whether virt_map()/virt_pg_map() will map the
+ * page as encrypted or not in the initial guest page table.
+ *
+ * If the initial guest state has already been finalized, then setting
+ * it as encrypted is essentially a no-op, since nothing more can be
+ * encrypted into the initial guest state at that point.
+ */
+void vm_mem_set_protected(struct kvm_vm *vm, uint32_t memslot,
+			  vm_paddr_t paddr, size_t num)
+{
+	struct userspace_mem_region *region;
+	sparsebit_idx_t pg, base;
+
+	base = paddr >> vm->page_shift;
+	region = memslot2region(vm, memslot);
+
+	for (pg = base; pg < base + num; ++pg)
+		sparsebit_set(region->protected_phy_pages, pg);
+}
+
 /*
  * Physical Contiguous Page Allocator
  *
@@ -2048,11 +2085,11 @@ vm_paddr_t __vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
 		abort();
 	}
 
-	for (pg = base; pg < base + num; ++pg) {
+	for (pg = base; pg < base + num; ++pg)
 		sparsebit_clear(region->unused_phy_pages, pg);
-		if (protected)
-			sparsebit_set(region->protected_phy_pages, pg);
-	}
+
+	if (protected)
+		vm_mem_set_protected(vm, memslot, base << vm->page_shift, num);
 
 	return base * vm->page_size;
 }