[Part1,RFC,v2,11/20] x86/compressed: Add helper for validating pages in the decompression stage

Message ID 20210430121616.2295-12-brijesh.singh@amd.com (mailing list archive)
State: New, archived
Series: Add AMD Secure Nested Paging (SEV-SNP) Guest Support

Commit Message

Brijesh Singh April 30, 2021, 12:16 p.m. UTC
Many of the integrity guarantees of SEV-SNP are enforced through the
Reverse Map Table (RMP). Each RMP entry contains the GPA at which a
particular page of DRAM should be mapped. A VM can request the
hypervisor to add pages to the RMP table via the Page State Change
VMGEXIT defined in the GHCB specification. Each RMP entry contains a
Validated flag, which the CPU hardware automatically clears to 0 when
a new RMP entry is created for a guest; every VM page is therefore
either validated or invalidated, as indicated by this flag. A memory
access to a private page that is not validated generates a #VC
exception. A VM must use the PVALIDATE instruction to validate a
private page before using it.
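
For reference, a minimal sketch of a pvalidate() helper, modeled on the
kernel's wrapper for the instruction (the raw opcode bytes stand in for
the mnemonic, which older assemblers lack; the name used for the CF-set
return value is an assumption here):

	/* PVALIDATE: rAX = vaddr, rCX = page size, rDX = validate flag. */
	static int pvalidate(unsigned long vaddr, bool rmp_psize, bool validate)
	{
		bool no_rmpupdate;
		int rc;

		asm volatile(".byte 0xF2, 0x0F, 0x01, 0xFF\n\t"
			     "setc %[no_rmpupdate]\n\t"
			     : [no_rmpupdate] "=rm" (no_rmpupdate), "=a" (rc)
			     : "a" (vaddr), "c" (rmp_psize), "d" (validate)
			     : "memory", "cc");

		/* CF set: the RMP entry was already in the requested state. */
		if (no_rmpupdate)
			return PVALIDATE_FAIL_NOUPDATE;	/* assumed name */

		return rc;
	}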

To maintain the security guarantees of SEV-SNP guests, when
transitioning pages from private to shared, the guest must invalidate
the pages before asking the hypervisor to change the page state to
shared in the RMP table.

In the other direction, after the pages are mapped private in the page
table, the guest must issue a Page State Change VMGEXIT to make the
pages private in the RMP table and then validate them.
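
This ordering is exactly what snp_page_state_change() below implements;
in outline (a sketch using the patch's own helpers):

	if (op == SNP_PAGE_STATE_SHARED)		/* private -> shared */
		pvalidate(paddr, RMP_PG_SIZE_4K, 0);	/* invalidate first */

	/* ... Page State Change VMGEXIT updates the RMP entry ... */

	if (op == SNP_PAGE_STATE_PRIVATE)		/* shared -> private */
		pvalidate(paddr, RMP_PG_SIZE_4K, 1);	/* validate last */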

On boot, the BIOS should have validated the entire system memory.
During the kernel decompression stage, the boot #VC handler uses
set_page_decrypted() to make the GHCB page shared (i.e. clear its
encryption attribute), and on exit from decompression it calls
set_page_encrypted() to make the page private again.

Add the snp_set_page_{private,shared}() helpers, which are used by
set_page_{decrypted,encrypted}() to change the page state in the RMP
table.
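
For context, a sketch of how the decompression stage exercises these
paths (boot_ghcb_page is defined in this series; the exact call sites
are an assumption):

	/* Boot #VC handler setup: the GHCB page must be shared with the host. */
	set_page_decrypted((unsigned long)&boot_ghcb_page);

	/* ... the GHCB is used for VMGEXITs during decompression ... */

	/* On exit from the decompression stage, make the page private again. */
	set_page_encrypted((unsigned long)&boot_ghcb_page);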

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/boot/compressed/ident_map_64.c | 17 +++++++++
 arch/x86/boot/compressed/sev.c          | 50 +++++++++++++++++++++++++
 arch/x86/boot/compressed/sev.h          | 25 +++++++++++++
 3 files changed, 92 insertions(+)
 create mode 100644 arch/x86/boot/compressed/sev.h

Comments

Borislav Petkov May 20, 2021, 5:52 p.m. UTC | #1
On Fri, Apr 30, 2021 at 07:16:07AM -0500, Brijesh Singh wrote:
> @@ -278,12 +279,28 @@ static int set_clr_page_flags(struct x86_mapping_info *info,
>  	if ((set | clr) & _PAGE_ENC)
>  		clflush_page(address);
>  
> +	/*
> +	 * If the encryption attribute is being cleared, then change the page state to
> +	 * shared in the RMP entry. Change of the page state must be done before the
> +	 * PTE updates.
> +	 */
> +	if (clr & _PAGE_ENC)
> +		snp_set_page_shared(pte_pfn(*ptep) << PAGE_SHIFT);

From the last review:

The statement above already looks at clr - just merge the two together.

> @@ -136,6 +137,55 @@ static inline bool sev_snp_enabled(void)
>  	return sev_status_val & MSR_AMD64_SEV_SNP_ENABLED ? true : false;
>  }
>  
> +static void snp_page_state_change(unsigned long paddr, int op)

From the last review:

no need for too many prefixes on static functions - just call this one
__change_page_state() or so, so that the below one can be called...

> +{
> +	u64 val;
> +
> +	if (!sev_snp_enabled())
> +		return;
> +
> +	/*
> +	 * If the page is getting changed from private to shard then invalidate the page

							shared

And you can write this a lot shorter

	* If private -> shared, ...

> +	 * before requesting the state change in the RMP table.
> +	 */
> +	if ((op == SNP_PAGE_STATE_SHARED) && pvalidate(paddr, RMP_PG_SIZE_4K, 0))
> +		goto e_pvalidate;
> +
> +	/* Issue VMGEXIT to change the page state in RMP table. */
> +	sev_es_wr_ghcb_msr(GHCB_MSR_PSC_REQ_GFN(paddr >> PAGE_SHIFT, op));
> +	VMGEXIT();
> +
> +	/* Read the response of the VMGEXIT. */
> +	val = sev_es_rd_ghcb_msr();
> +	if ((GHCB_RESP_CODE(val) != GHCB_MSR_PSC_RESP) || GHCB_MSR_PSC_RESP_VAL(val))
> +		goto e_psc;

That label is used only once - just do the termination here directly and
remove it.
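
That is, something like (a sketch of the direct form, reusing the
patch's helpers):

	val = sev_es_rd_ghcb_msr();
	if ((GHCB_RESP_CODE(val) != GHCB_MSR_PSC_RESP) || GHCB_MSR_PSC_RESP_VAL(val))
		sev_es_terminate(1, GHCB_TERM_PSC);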

Thx.
Brijesh Singh May 20, 2021, 6:05 p.m. UTC | #2
On 5/20/21 12:52 PM, Borislav Petkov wrote:
> On Fri, Apr 30, 2021 at 07:16:07AM -0500, Brijesh Singh wrote:
>> @@ -278,12 +279,28 @@ static int set_clr_page_flags(struct x86_mapping_info *info,
>>  	if ((set | clr) & _PAGE_ENC)
>>  		clflush_page(address);
>>  
>> +	/*
>> +	 * If the encryption attribute is being cleared, then change the page state to
>> +	 * shared in the RMP entry. Change of the page state must be done before the
>> +	 * PTE updates.
>> +	 */
>> +	if (clr & _PAGE_ENC)
>> +		snp_set_page_shared(pte_pfn(*ptep) << PAGE_SHIFT);
> From the last review:
>
> The statement above already looks at clr - just merge the two together.

Maybe I am missing something: the statement above is executed for
either set or clr, but making the page shared needs to happen only
for clr. So, from a code-readability point of view, I kept it outside
of that if().

Otherwise we would have to do something like:

	if ((set | clr) & _PAGE_ENC) {
		if (clr)
			snp_set_page_shared(pte_pfn(*ptep) << PAGE_SHIFT);
	}

I am okay if the above is the preferred approach.

>
>> @@ -136,6 +137,55 @@ static inline bool sev_snp_enabled(void)
>>  	return sev_status_val & MSR_AMD64_SEV_SNP_ENABLED ? true : false;
>>  }
>>  
>> +static void snp_page_state_change(unsigned long paddr, int op)
> From the last review:
>
> no need for too many prefixes on static functions - just call this one
> __change_page_state() or so, so that the below one can be called...

I guess I kept the "snp" prefix because the VMGEXIT was named that
way. Based on your feedback, I am dropping the "SNP" prefix from the
VMGEXIT and will update this as well.


>> +{
>> +	u64 val;
>> +
>> +	if (!sev_snp_enabled())
>> +		return;
>> +
>> +	/*
>> +	 * If the page is getting changed from private to shard then invalidate the page
> 							shared
>
> And you can write this a lot shorter
>
> 	* If private -> shared, ...
>
>> +	 * before requesting the state change in the RMP table.
>> +	 */
>> +	if ((op == SNP_PAGE_STATE_SHARED) && pvalidate(paddr, RMP_PG_SIZE_4K, 0))
>> +		goto e_pvalidate;
>> +
>> +	/* Issue VMGEXIT to change the page state in RMP table. */
>> +	sev_es_wr_ghcb_msr(GHCB_MSR_PSC_REQ_GFN(paddr >> PAGE_SHIFT, op));
>> +	VMGEXIT();
>> +
>> +	/* Read the response of the VMGEXIT. */
>> +	val = sev_es_rd_ghcb_msr();
>> +	if ((GHCB_RESP_CODE(val) != GHCB_MSR_PSC_RESP) || GHCB_MSR_PSC_RESP_VAL(val))
>> +		goto e_psc;
> That label is used only once - just do the termination here directly and
> remove it.

Noted.


>
> Thx.
>
Borislav Petkov May 25, 2021, 10:18 a.m. UTC | #3
On Thu, May 20, 2021 at 01:05:15PM -0500, Brijesh Singh wrote:
> Maybe I am missing something: the statement above is executed for
> either set or clr, but making the page shared needs to happen only
> for clr. So, from a code-readability point of view, I kept it outside
> of that if().
> 
> Otherwise we would have to do something like:
> 
> 	if ((set | clr) & _PAGE_ENC) {
> 		if (clr)
> 			snp_set_page_shared(pte_pfn(*ptep) << PAGE_SHIFT);
> 	}
> 
> I am okay if the above is the preferred approach.

Yes pls, because it keeps the _PAGE_ENC handling together in one
statement.
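
That is, something like (a sketch; it keeps the cache flush and the
shared transition under the single _PAGE_ENC check):

	if ((set | clr) & _PAGE_ENC) {
		clflush_page(address);

		if (clr)
			snp_set_page_shared(pte_pfn(*ptep) << PAGE_SHIFT);
	}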

Thx.

Patch

diff --git a/arch/x86/boot/compressed/ident_map_64.c b/arch/x86/boot/compressed/ident_map_64.c
index f7213d0943b8..c2a1a5311b47 100644
--- a/arch/x86/boot/compressed/ident_map_64.c
+++ b/arch/x86/boot/compressed/ident_map_64.c
@@ -21,6 +21,7 @@ 
 
 #include "error.h"
 #include "misc.h"
+#include "sev.h"
 
 /* These actually do the work of building the kernel identity maps. */
 #include <linux/pgtable.h>
@@ -278,12 +279,28 @@  static int set_clr_page_flags(struct x86_mapping_info *info,
 	if ((set | clr) & _PAGE_ENC)
 		clflush_page(address);
 
+	/*
+	 * If the encryption attribute is being cleared, then change the page state to
+	 * shared in the RMP entry. Change of the page state must be done before the
+	 * PTE updates.
+	 */
+	if (clr & _PAGE_ENC)
+		snp_set_page_shared(pte_pfn(*ptep) << PAGE_SHIFT);
+
 	/* Update PTE */
 	pte = *ptep;
 	pte = pte_set_flags(pte, set);
 	pte = pte_clear_flags(pte, clr);
 	set_pte(ptep, pte);
 
+	/*
+	 * If the encryption attribute is being set, then change the page state to
+	 * private in the RMP entry. The page state must be done after the PTE
+	 * is updated.
+	 */
+	if (set & _PAGE_ENC)
+		snp_set_page_private(pte_pfn(*ptep) << PAGE_SHIFT);
+
 	/* Flush TLB after changing encryption attribute */
 	write_cr3(top_level_pgt);
 
diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
index 7badbeb6cb95..4f215d0c9f76 100644
--- a/arch/x86/boot/compressed/sev.c
+++ b/arch/x86/boot/compressed/sev.c
@@ -22,6 +22,7 @@ 
 #include <asm/svm.h>
 
 #include "error.h"
+#include "sev.h"
 
 struct ghcb boot_ghcb_page __aligned(PAGE_SIZE);
 struct ghcb *boot_ghcb;
@@ -136,6 +137,55 @@  static inline bool sev_snp_enabled(void)
 	return sev_status_val & MSR_AMD64_SEV_SNP_ENABLED ? true : false;
 }
 
+static void snp_page_state_change(unsigned long paddr, int op)
+{
+	u64 val;
+
+	if (!sev_snp_enabled())
+		return;
+
+	/*
+	 * If the page is getting changed from private to shard then invalidate the page
+	 * before requesting the state change in the RMP table.
+	 */
+	if ((op == SNP_PAGE_STATE_SHARED) && pvalidate(paddr, RMP_PG_SIZE_4K, 0))
+		goto e_pvalidate;
+
+	/* Issue VMGEXIT to change the page state in RMP table. */
+	sev_es_wr_ghcb_msr(GHCB_MSR_PSC_REQ_GFN(paddr >> PAGE_SHIFT, op));
+	VMGEXIT();
+
+	/* Read the response of the VMGEXIT. */
+	val = sev_es_rd_ghcb_msr();
+	if ((GHCB_RESP_CODE(val) != GHCB_MSR_PSC_RESP) || GHCB_MSR_PSC_RESP_VAL(val))
+		goto e_psc;
+
+	/*
+	 * Now that page is added in the RMP table, validate it so that it is consistent
+	 * with the RMP entry.
+	 */
+	if ((op == SNP_PAGE_STATE_PRIVATE) && pvalidate(paddr, RMP_PG_SIZE_4K, 1))
+		goto e_pvalidate;
+
+	return;
+
+e_psc:
+	sev_es_terminate(1, GHCB_TERM_PSC);
+
+e_pvalidate:
+	sev_es_terminate(1, GHCB_TERM_PVALIDATE);
+}
+
+void snp_set_page_private(unsigned long paddr)
+{
+	snp_page_state_change(paddr, SNP_PAGE_STATE_PRIVATE);
+}
+
+void snp_set_page_shared(unsigned long paddr)
+{
+	snp_page_state_change(paddr, SNP_PAGE_STATE_SHARED);
+}
+
 static bool early_setup_sev_es(void)
 {
 	if (!sev_es_negotiate_protocol())
diff --git a/arch/x86/boot/compressed/sev.h b/arch/x86/boot/compressed/sev.h
new file mode 100644
index 000000000000..a693eabc379b
--- /dev/null
+++ b/arch/x86/boot/compressed/sev.h
@@ -0,0 +1,25 @@ 
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * AMD Secure Encrypted Virtualization
+ *
+ * Copyright (C) 2021 Advanced Micro Devices, Inc.
+ *
+ * Author: Brijesh Singh <brijesh.singh@amd.com>
+ */
+
+#ifndef BOOT_COMPRESSED_SEV_H
+#define BOOT_COMPRESSED_SEV_H
+
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+
+void snp_set_page_private(unsigned long paddr);
+void snp_set_page_shared(unsigned long paddr);
+
+#else
+
+static inline void snp_set_page_private(unsigned long paddr) { }
+static inline void snp_set_page_shared(unsigned long paddr) { }
+
+#endif /* CONFIG_AMD_MEM_ENCRYPT */
+
+#endif /* BOOT_COMPRESSED_SEV_H */