
[v8,12/40] x86/sev: Add helper for validating pages in early enc attribute changes

Message ID: 20211210154332.11526-13-brijesh.singh@amd.com
State: New, archived
Headers show
Series: Add AMD Secure Nested Paging (SEV-SNP) Guest Support

Commit Message

Brijesh Singh Dec. 10, 2021, 3:43 p.m. UTC
The early_set_memory_{encrypt,decrypt}() are used for changing the
page from decrypted (shared) to encrypted (private) and vice versa.
When SEV-SNP is active, the page state transition needs to go through
additional steps.

If the page is transitioned from shared to private, then perform the
following after the encryption attribute is set in the page table:

1. Issue the page state change VMGEXIT to add the page as private
   in the RMP table.
2. Validate the page after it's successfully added in the RMP table.

To maintain the security guarantees, if the page is transitioned from
private to shared, then perform the following before clearing the
encryption attribute from the page table:

1. Invalidate the page.
2. Issue the page state change VMGEXIT to make the page shared in the
   RMP table.

The early_set_memory_{encrypt,decrypt} can be called before the GHCB
is set up, so use the SNP page state MSR protocol VMGEXIT defined in the GHCB
specification to request the page state change in the RMP table.
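
In sketch form, the resulting ordering around the page table update is
(using the helpers introduced by this patch; vaddr/paddr/npages stand
in for the exact arguments):

	if (!enc)	/* private -> shared: update the RMP state first */
		early_snp_set_memory_shared(vaddr, paddr, npages);

	set_pte_atomic(kpte, new_pte);	/* flip the encryption attribute */

	if (enc)	/* shared -> private: update the RMP state last */
		early_snp_set_memory_private(vaddr, paddr, npages);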

While at it, add a helper snp_prep_memory() that can be used outside
the sev specific files to change the page state for a specified memory
range.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/include/asm/sev.h |  10 ++++
 arch/x86/kernel/sev.c      | 102 +++++++++++++++++++++++++++++++++++++
 arch/x86/mm/mem_encrypt.c  |  51 +++++++++++++++++--
 3 files changed, 159 insertions(+), 4 deletions(-)

Comments

Borislav Petkov Dec. 23, 2021, 11:50 a.m. UTC | #1
On Fri, Dec 10, 2021 at 09:43:04AM -0600, Brijesh Singh wrote:
> The early_set_memory_{encrypt,decrypt}() are used for changing the
					^
					ed()


> page from decrypted (shared) to encrypted (private) and vice versa.
> When SEV-SNP is active, the page state transition needs to go through
> additional steps.
> 
> If the page is transitioned from shared to private, then perform the
> following after the encryption attribute is set in the page table:
> 
> 1. Issue the page state change VMGEXIT to add the page as private
>    in the RMP table.
> 2. Validate the page after it's successfully added in the RMP table.
> 
> To maintain the security guarantees, if the page is transitioned from
> private to shared, then perform the following before clearing the
> encryption attribute from the page table:
> 
> 1. Invalidate the page.
> 2. Issue the page state change VMGEXIT to make the page shared in the
>    RMP table.
> 
> The early_set_memory_{encrypt,decrypt} can be called before the GHCB

ditto.

> is set up, so use the SNP page state MSR protocol VMGEXIT defined in the GHCB
> specification to request the page state change in the RMP table.
> 
> While at it, add a helper snp_prep_memory() that can be used outside
> the sev specific files to change the page state for a specified memory

"outside of the sev specific"? What is that trying to say?

/me goes and looks at the whole patchset...

Right, so that is used only in probe_roms(). So that should say:

"Add a helper ... which will be used in probe_roms(), in a later patch."

> range.
> 
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> ---
>  arch/x86/include/asm/sev.h |  10 ++++
>  arch/x86/kernel/sev.c      | 102 +++++++++++++++++++++++++++++++++++++
>  arch/x86/mm/mem_encrypt.c  |  51 +++++++++++++++++--

Right, for the next revision, that file is called mem_encrypt_amd.c now.

...

> diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
> index 3ba801ff6afc..5d19aad06670 100644
> --- a/arch/x86/mm/mem_encrypt.c
> +++ b/arch/x86/mm/mem_encrypt.c
> @@ -31,6 +31,7 @@
>  #include <asm/processor-flags.h>
>  #include <asm/msr.h>
>  #include <asm/cmdline.h>
> +#include <asm/sev.h>
>  
>  #include "mm_internal.h"
>  
> @@ -49,6 +50,34 @@ EXPORT_SYMBOL_GPL(sev_enable_key);
>  /* Buffer used for early in-place encryption by BSP, no locking needed */
>  static char sme_early_buffer[PAGE_SIZE] __initdata __aligned(PAGE_SIZE);
>  
> +/*
> + * When SNP is active, change the page state from private to shared before
> + * copying the data from the source to destination and restore after the copy.
> + * This is required because the source address is mapped as decrypted by the
> + * caller of the routine.
> + */
> +static inline void __init snp_memcpy(void *dst, void *src, size_t sz,
> +				     unsigned long paddr, bool decrypt)
> +{
> +	unsigned long npages = PAGE_ALIGN(sz) >> PAGE_SHIFT;
> +
> +	if (!cc_platform_has(CC_ATTR_SEV_SNP) || !decrypt) {

Yeah, looking at this again, I don't really like this multiplexing.
Let's do this instead, diff on top:

---
diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c
index c14fd8254198..e3f7a84449bb 100644
--- a/arch/x86/mm/mem_encrypt_amd.c
+++ b/arch/x86/mm/mem_encrypt_amd.c
@@ -49,24 +49,18 @@ EXPORT_SYMBOL(sme_me_mask);
 static char sme_early_buffer[PAGE_SIZE] __initdata __aligned(PAGE_SIZE);
 
 /*
- * When SNP is active, change the page state from private to shared before
- * copying the data from the source to destination and restore after the copy.
- * This is required because the source address is mapped as decrypted by the
- * caller of the routine.
+ * SNP-specific routine which needs to additionally change the page state from
+ * private to shared before copying the data from the source to destination and
+ * restore after the copy.
  */
 static inline void __init snp_memcpy(void *dst, void *src, size_t sz,
 				     unsigned long paddr, bool decrypt)
 {
 	unsigned long npages = PAGE_ALIGN(sz) >> PAGE_SHIFT;
 
-	if (!cc_platform_has(CC_ATTR_SEV_SNP) || !decrypt) {
-		memcpy(dst, src, sz);
-		return;
-	}
-
 	/*
-	 * With SNP, the paddr needs to be accessed decrypted, mark the page
-	 * shared in the RMP table before copying it.
+	 * @paddr needs to be accessed decrypted, mark the page shared in the
+	 * RMP table before copying it.
 	 */
 	early_snp_set_memory_shared((unsigned long)__va(paddr), paddr, npages);
 
@@ -124,8 +118,13 @@ static void __init __sme_early_enc_dec(resource_size_t paddr,
 		 * Use a temporary buffer, of cache-line multiple size, to
 		 * avoid data corruption as documented in the APM.
 		 */
-		snp_memcpy(sme_early_buffer, src, len, paddr, enc);
-		snp_memcpy(dst, sme_early_buffer, len, paddr, !enc);
+		if (cc_platform_has(CC_ATTR_SEV_SNP)) {
+			snp_memcpy(sme_early_buffer, src, len, paddr, enc);
+			snp_memcpy(dst, sme_early_buffer, len, paddr, !enc);
+		} else {
+			memcpy(sme_early_buffer, src, len);
+			memcpy(dst, sme_early_buffer, len);
+		}
 
 		early_memunmap(dst, len);
 		early_memunmap(src, len);
Venu Busireddy Jan. 3, 2022, 11:28 p.m. UTC | #2
On 2021-12-10 09:43:04 -0600, Brijesh Singh wrote:
> The early_set_memory_{encrypt,decrypt}() are used for changing the

s/encrypt,decrypt/encrypted,decrypted/

> page from decrypted (shared) to encrypted (private) and vice versa.
> When SEV-SNP is active, the page state transition needs to go through
> additional steps.
> 
> If the page is transitioned from shared to private, then perform the
> following after the encryption attribute is set in the page table:
> 
> 1. Issue the page state change VMGEXIT to add the page as private
>    in the RMP table.
> 2. Validate the page after it's successfully added in the RMP table.
> 
> To maintain the security guarantees, if the page is transitioned from
> private to shared, then perform the following before clearing the
> encryption attribute from the page table:
> 
> 1. Invalidate the page.
> 2. Issue the page state change VMGEXIT to make the page shared in the
>    RMP table.
> 
> The early_set_memory_{encrypt,decrypt} can be called before the GHCB

s/encrypt,decrypt/encrypted,decrypted/

> is set up, so use the SNP page state MSR protocol VMGEXIT defined in the GHCB
> specification to request the page state change in the RMP table.
> 
> While at it, add a helper snp_prep_memory() that can be used outside
> the sev specific files to change the page state for a specified memory
> range.
> 
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>

And with a few other nits below:

Reviewed-by: Venu Busireddy <venu.busireddy@oracle.com>

> diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
> +
> +	 /*
> +	  * Ask the hypervisor to mark the memory pages as private in the RMP
> +	  * table.
> +	  */

Indentation is off. While at it, you may want to collapse it into a one
line comment.
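
I.e., something like:

	/* Ask the hypervisor to mark the memory pages as private in the RMP table. */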

> +	early_set_page_state(paddr, npages, SNP_PAGE_STATE_PRIVATE);
> +
> +	/* Validate the memory pages after they've been added in the RMP table. */
> +	pvalidate_pages(vaddr, npages, 1);
> +}
> +
> +void __init early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr,
> +					unsigned int npages)
> +{
> +	if (!cc_platform_has(CC_ATTR_SEV_SNP))
> +		return;
> +
> +	/*
> +	 * Invalidate the memory pages before they are marked shared in the
> +	 * RMP table.
> +	 */

Collapse into one line?

> +	pvalidate_pages(vaddr, npages, 0);
> +
> +	 /* Ask hypervisor to mark the memory pages shared in the RMP table. */

Indentation is off.

> +		/*
> +		 * On SNP, the page state change in the RMP table must happen
> +		 * before the page table updates.
> +		 */
> +		early_snp_set_memory_shared((unsigned long)__va(pa), pa, 1);

I know "1" implies "true", but to emphasize that the argument is
actually a boolean, could you please change the "1" to "true?"

> +	}
> +
>  	/* Change the page encryption mask. */
>  	new_pte = pfn_pte(pfn, new_prot);
>  	set_pte_atomic(kpte, new_pte);
> +
> +	/*
> +	 * If page is set encrypted in the page table, then update the RMP table to
> +	 * add this page as private.
> +	 */
> +	if (enc)
> +		early_snp_set_memory_private((unsigned long)__va(pa), pa, 1);

Here too, could you please change the "1" to "true?"

Venu
Brijesh Singh Jan. 4, 2022, 3:33 p.m. UTC | #3
On 12/23/21 5:50 AM, Borislav Petkov wrote:
>> While at it, add a helper snp_prep_memory() that can be used outside
>> the sev specific files to change the page state for a specified memory
> 
> "outside of the sev specific"? What is that trying to say?
> 
> /me goes and looks at the whole patchset...
> 
> Right, so that is used only in probe_roms(). So that should say:
> 
> "Add a helper ... which will be used in probe_roms(), in a later patch."
> 

Currently the helper is used only for probe_roms(), but it can be 
used by others in the future. I will go ahead and spell out that it 
is for probe_roms().
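
Something along these lines (a sketch only; the exact hunk is in the
later probe_roms() patch):

	if (cc_platform_has(CC_ATTR_SEV_SNP))
		snp_prep_memory(video_rom_resource.start,
				((system_rom_resource.end + 1) - video_rom_resource.start),
				SNP_PAGE_STATE_PRIVATE);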


> 
> Yeah, looking at this again, I don't really like this multiplexing.
> Let's do this instead, diff on top:
> 

thanks, I will apply your diff.
Brijesh Singh Jan. 11, 2022, 9:22 p.m. UTC | #4
Hi Venu,


On 1/3/22 5:28 PM, Venu Busireddy wrote:
...

>> +
>> +	 /*
>> +	  * Ask the hypervisor to mark the memory pages as private in the RMP
>> +	  * table.
>> +	  */
> 
> Indentation is off. While at it, you may want to collapse it into a one
> line comment.
> 

Based on previous review feedback I tried to keep the comment within 
the 80-character limit.

>> +	early_set_page_state(paddr, npages, SNP_PAGE_STATE_PRIVATE);
>> +
>> +	/* Validate the memory pages after they've been added in the RMP table. */
>> +	pvalidate_pages(vaddr, npages, 1);
>> +}
>> +
>> +void __init early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr,
>> +					unsigned int npages)
>> +{
>> +	if (!cc_platform_has(CC_ATTR_SEV_SNP))
>> +		return;
>> +
>> +	/*
>> +	 * Invalidate the memory pages before they are marked shared in the
>> +	 * RMP table.
>> +	 */
> 
> Collapse into one line?
> 

same as above.

...

>> +		/*
>> +		 * On SNP, the page state change in the RMP table must happen
>> +		 * before the page table updates.
>> +		 */
>> +		early_snp_set_memory_shared((unsigned long)__va(pa), pa, 1);
> 
> I know "1" implies "true", but to emphasize that the argument is
> actually a boolean, could you please change the "1" to "true?"
> 

I assume you mean the last argument to 
early_snp_set_memory_{private,shared}. Please note that it's a number 
of pages (unsigned int). The 'true' does not make sense to me.
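
For reference, the two signatures in question (as added by this patch):

	void __init early_snp_set_memory_private(unsigned long vaddr, unsigned long paddr,
						 unsigned int npages);

	static void pvalidate_pages(unsigned long vaddr, unsigned int npages, bool validate);

Only pvalidate_pages() takes a bool as its last argument.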

>> +	}
>> +
>>   	/* Change the page encryption mask. */
>>   	new_pte = pfn_pte(pfn, new_prot);
>>   	set_pte_atomic(kpte, new_pte);
>> +
>> +	/*
>> +	 * If page is set encrypted in the page table, then update the RMP table to
>> +	 * add this page as private.
>> +	 */
>> +	if (enc)
>> +		early_snp_set_memory_private((unsigned long)__va(pa), pa, 1);
> 
> Here too, could you please change the "1" to "true?"
> 

same as above.

thanks
Venu Busireddy Jan. 11, 2022, 9:51 p.m. UTC | #5
On 2022-01-11 15:22:01 -0600, Brijesh Singh wrote:
> Hi Venu,
> 
> 
> On 1/3/22 5:28 PM, Venu Busireddy wrote:
> ...
> 
> > > +
> > > +	 /*
> > > +	  * Ask the hypervisor to mark the memory pages as private in the RMP
> > > +	  * table.
> > > +	  */
> > 
> > Indentation is off. While at it, you may want to collapse it into a one
> > line comment.
> > 
> 
> Based on previous review feedback I tried to keep the comment within
> the 80-character limit.

Isn't the line length limit 100 now? Also, there are quite a few lines
that are longer than 80 characters in this file, and elsewhere.

But you can ignore my comment.

> > > +	early_set_page_state(paddr, npages, SNP_PAGE_STATE_PRIVATE);
> > > +
> > > +	/* Validate the memory pages after they've been added in the RMP table. */
> > > +	pvalidate_pages(vaddr, npages, 1);
> > > +}
> > > +
> > > +void __init early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr,
> > > +					unsigned int npages)
> > > +{
> > > +	if (!cc_platform_has(CC_ATTR_SEV_SNP))
> > > +		return;
> > > +
> > > +	/*
> > > +	 * Invalidate the memory pages before they are marked shared in the
> > > +	 * RMP table.
> > > +	 */
> > 
> > Collapse into one line?
> > 
> 
> same as above.

Same as above.

> 
> ...
> 
> > > +		/*
> > > +		 * On SNP, the page state change in the RMP table must happen
> > > +		 * before the page table updates.
> > > +		 */
> > > +		early_snp_set_memory_shared((unsigned long)__va(pa), pa, 1);
> > 
> > I know "1" implies "true", but to emphasize that the argument is
> > actually a boolean, could you please change the "1" to "true?"
> > 
> 
> I assume you mean the last argument to
> early_snp_set_memory_{private,shared}. Please note that it's a number
> of pages (unsigned int). The 'true' does not make sense to me.

Sorry. While reading the code, I was looking at the invocations
of pvalidate_pages(), where 0 and 1 are passed instead of "false"
and "true" for the third argument. But while replying to the thread,
I marked my comment at the wrong place. I meant to suggest changing
the third argument to pvalidate_pages().
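
I.e., the change I meant is:

-	pvalidate_pages(vaddr, npages, 1);
+	pvalidate_pages(vaddr, npages, true);

and likewise "0" to "false" for the other call.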


> > > +	}
> > > +
> > >   	/* Change the page encryption mask. */
> > >   	new_pte = pfn_pte(pfn, new_prot);
> > >   	set_pte_atomic(kpte, new_pte);
> > > +
> > > +	/*
> > > +	 * If page is set encrypted in the page table, then update the RMP table to
> > > +	 * add this page as private.
> > > +	 */
> > > +	if (enc)
> > > +		early_snp_set_memory_private((unsigned long)__va(pa), pa, 1);
> > 
> > Here too, could you please change the "1" to "true?"
> > 
> 
> same as above.
> 
> thanks
Brijesh Singh Jan. 11, 2022, 9:57 p.m. UTC | #6
On 1/11/22 3:51 PM, Venu Busireddy wrote:
> On 2022-01-11 15:22:01 -0600, Brijesh Singh wrote:
>> Hi Venu,
>>
>>
>> On 1/3/22 5:28 PM, Venu Busireddy wrote:
>> ...
>>
>>>> +
>>>> +	 /*
>>>> +	  * Ask the hypervisor to mark the memory pages as private in the RMP
>>>> +	  * table.
>>>> +	  */
>>>
>>> Indentation is off. While at it, you may want to collapse it into a one
>>> line comment.
>>>
>>
>> Based on previous review feedback I tried to keep the comment within
>> the 80-character limit.
> 
> Isn't the line length limit 100 now? Also, there are quite a few lines
> that are longer than 80 characters in this file, and elsewhere.
> 
> But you can ignore my comment.
> 

Yes, the actual line limit is 100, but I was asked to keep the comments 
to 80 cols [1] to keep it consistent with other comments in this file.

https://lore.kernel.org/lkml/f9a69ad8-54bb-70f1-d606-6497e5753bb0@amd.com/

thanks
Venu Busireddy Jan. 11, 2022, 10:42 p.m. UTC | #7
On 2022-01-11 15:57:13 -0600, Brijesh Singh wrote:
> 
> 
> On 1/11/22 3:51 PM, Venu Busireddy wrote:
> > On 2022-01-11 15:22:01 -0600, Brijesh Singh wrote:
> > > Hi Venu,
> > > 
> > > 
> > > On 1/3/22 5:28 PM, Venu Busireddy wrote:
> > > ...
> > > 
> > > > > +
> > > > > +	 /*
> > > > > +	  * Ask the hypervisor to mark the memory pages as private in the RMP
> > > > > +	  * table.
> > > > > +	  */
> > > > 
> > > > Indentation is off. While at it, you may want to collapse it into a one
> > > > line comment.
> > > > 
> > > 
> > > Based on previous review feedback I tried to keep the comment within
> > > the 80-character limit.
> > 
> > Isn't the line length limit 100 now? Also, there are quite a few lines
> > that are longer than 80 characters in this file, and elsewhere.
> > 
> > But you can ignore my comment.
> > 
> 
> Yes, the actual line limit is 100, but I was asked to keep the comments to
> 80 cols [1] to keep it consistent with other comments in this file.

Well, now that you mention it, the comment that immediately precedes this
one in the file is 91 characters long, and the comment that immediately
follows this one is 82 characters long! And both those lines are also
added as part of this patch.

Venu

> 
> https://lore.kernel.org/lkml/f9a69ad8-54bb-70f1-d606-6497e5753bb0@amd.com/
> 
> thanks

Patch

diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index 0df508374a35..eec2e1b9d557 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -123,6 +123,11 @@  static inline int pvalidate(unsigned long vaddr, bool rmp_psize, bool validate)
 	return rc;
 }
 void sev_snp_register_ghcb(void);
+void __init early_snp_set_memory_private(unsigned long vaddr, unsigned long paddr,
+					 unsigned int npages);
+void __init early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr,
+					unsigned int npages);
+void __init snp_prep_memory(unsigned long paddr, unsigned int sz, enum psc_op op);
 #else
 static inline void sev_es_ist_enter(struct pt_regs *regs) { }
 static inline void sev_es_ist_exit(void) { }
@@ -132,6 +137,11 @@  static inline int sev_es_efi_map_ghcbs(pgd_t *pgd) { return 0; }
 static inline int pvalidate(unsigned long vaddr, bool rmp_psize, bool validate) { return 0; }
 static inline int rmpadjust(unsigned long vaddr, bool rmp_psize, unsigned long attrs) { return 0; }
 static inline void sev_snp_register_ghcb(void) { }
+static inline void __init
+early_snp_set_memory_private(unsigned long vaddr, unsigned long paddr, unsigned int npages) { }
+static inline void __init
+early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr, unsigned int npages) { }
+static inline void __init snp_prep_memory(unsigned long paddr, unsigned int sz, enum psc_op op) { }
 #endif
 
 #endif
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index 17ad603f62da..2971aa280ce6 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -557,6 +557,108 @@  static u64 get_jump_table_addr(void)
 	return ret;
 }
 
+static void pvalidate_pages(unsigned long vaddr, unsigned int npages, bool validate)
+{
+	unsigned long vaddr_end;
+	int rc;
+
+	vaddr = vaddr & PAGE_MASK;
+	vaddr_end = vaddr + (npages << PAGE_SHIFT);
+
+	while (vaddr < vaddr_end) {
+		rc = pvalidate(vaddr, RMP_PG_SIZE_4K, validate);
+		if (WARN(rc, "Failed to validate address 0x%lx ret %d", vaddr, rc))
+			sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PVALIDATE);
+
+		vaddr = vaddr + PAGE_SIZE;
+	}
+}
+
+static void __init early_set_page_state(unsigned long paddr, unsigned int npages, enum psc_op op)
+{
+	unsigned long paddr_end;
+	u64 val;
+
+	paddr = paddr & PAGE_MASK;
+	paddr_end = paddr + (npages << PAGE_SHIFT);
+
+	while (paddr < paddr_end) {
+		/*
+		 * Use the MSR protocol because this function can be called before the GHCB
+		 * is established.
+		 */
+		sev_es_wr_ghcb_msr(GHCB_MSR_PSC_REQ_GFN(paddr >> PAGE_SHIFT, op));
+		VMGEXIT();
+
+		val = sev_es_rd_ghcb_msr();
+
+		if (WARN(GHCB_RESP_CODE(val) != GHCB_MSR_PSC_RESP,
+			 "Wrong PSC response code: 0x%x\n",
+			 (unsigned int)GHCB_RESP_CODE(val)))
+			goto e_term;
+
+		if (WARN(GHCB_MSR_PSC_RESP_VAL(val),
+			 "Failed to change page state to '%s' paddr 0x%lx error 0x%llx\n",
+			 op == SNP_PAGE_STATE_PRIVATE ? "private" : "shared",
+			 paddr, GHCB_MSR_PSC_RESP_VAL(val)))
+			goto e_term;
+
+		paddr = paddr + PAGE_SIZE;
+	}
+
+	return;
+
+e_term:
+	sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PSC);
+}
+
+void __init early_snp_set_memory_private(unsigned long vaddr, unsigned long paddr,
+					 unsigned int npages)
+{
+	if (!cc_platform_has(CC_ATTR_SEV_SNP))
+		return;
+
+	 /*
+	  * Ask the hypervisor to mark the memory pages as private in the RMP
+	  * table.
+	  */
+	early_set_page_state(paddr, npages, SNP_PAGE_STATE_PRIVATE);
+
+	/* Validate the memory pages after they've been added in the RMP table. */
+	pvalidate_pages(vaddr, npages, 1);
+}
+
+void __init early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr,
+					unsigned int npages)
+{
+	if (!cc_platform_has(CC_ATTR_SEV_SNP))
+		return;
+
+	/*
+	 * Invalidate the memory pages before they are marked shared in the
+	 * RMP table.
+	 */
+	pvalidate_pages(vaddr, npages, 0);
+
+	 /* Ask hypervisor to mark the memory pages shared in the RMP table. */
+	early_set_page_state(paddr, npages, SNP_PAGE_STATE_SHARED);
+}
+
+void __init snp_prep_memory(unsigned long paddr, unsigned int sz, enum psc_op op)
+{
+	unsigned long vaddr, npages;
+
+	vaddr = (unsigned long)__va(paddr);
+	npages = PAGE_ALIGN(sz) >> PAGE_SHIFT;
+
+	if (op == SNP_PAGE_STATE_PRIVATE)
+		early_snp_set_memory_private(vaddr, paddr, npages);
+	else if (op == SNP_PAGE_STATE_SHARED)
+		early_snp_set_memory_shared(vaddr, paddr, npages);
+	else
+		WARN(1, "invalid memory op %d\n", op);
+}
+
 int sev_es_setup_ap_jump_table(struct real_mode_header *rmh)
 {
 	u16 startup_cs, startup_ip;
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 3ba801ff6afc..5d19aad06670 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -31,6 +31,7 @@ 
 #include <asm/processor-flags.h>
 #include <asm/msr.h>
 #include <asm/cmdline.h>
+#include <asm/sev.h>
 
 #include "mm_internal.h"
 
@@ -49,6 +50,34 @@  EXPORT_SYMBOL_GPL(sev_enable_key);
 /* Buffer used for early in-place encryption by BSP, no locking needed */
 static char sme_early_buffer[PAGE_SIZE] __initdata __aligned(PAGE_SIZE);
 
+/*
+ * When SNP is active, change the page state from private to shared before
+ * copying the data from the source to destination and restore after the copy.
+ * This is required because the source address is mapped as decrypted by the
+ * caller of the routine.
+ */
+static inline void __init snp_memcpy(void *dst, void *src, size_t sz,
+				     unsigned long paddr, bool decrypt)
+{
+	unsigned long npages = PAGE_ALIGN(sz) >> PAGE_SHIFT;
+
+	if (!cc_platform_has(CC_ATTR_SEV_SNP) || !decrypt) {
+		memcpy(dst, src, sz);
+		return;
+	}
+
+	/*
+	 * With SNP, the paddr needs to be accessed decrypted, mark the page
+	 * shared in the RMP table before copying it.
+	 */
+	early_snp_set_memory_shared((unsigned long)__va(paddr), paddr, npages);
+
+	memcpy(dst, src, sz);
+
+	/* Restore the page state after the memcpy. */
+	early_snp_set_memory_private((unsigned long)__va(paddr), paddr, npages);
+}
+
 /*
  * This routine does not change the underlying encryption setting of the
  * page(s) that map this memory. It assumes that eventually the memory is
@@ -97,8 +126,8 @@  static void __init __sme_early_enc_dec(resource_size_t paddr,
 		 * Use a temporary buffer, of cache-line multiple size, to
 		 * avoid data corruption as documented in the APM.
 		 */
-		memcpy(sme_early_buffer, src, len);
-		memcpy(dst, sme_early_buffer, len);
+		snp_memcpy(sme_early_buffer, src, len, paddr, enc);
+		snp_memcpy(dst, sme_early_buffer, len, paddr, !enc);
 
 		early_memunmap(dst, len);
 		early_memunmap(src, len);
@@ -320,14 +349,28 @@  static void __init __set_clr_pte_enc(pte_t *kpte, int level, bool enc)
 	clflush_cache_range(__va(pa), size);
 
 	/* Encrypt/decrypt the contents in-place */
-	if (enc)
+	if (enc) {
 		sme_early_encrypt(pa, size);
-	else
+	} else {
 		sme_early_decrypt(pa, size);
 
+		/*
+		 * On SNP, the page state change in the RMP table must happen
+		 * before the page table updates.
+		 */
+		early_snp_set_memory_shared((unsigned long)__va(pa), pa, 1);
+	}
+
 	/* Change the page encryption mask. */
 	new_pte = pfn_pte(pfn, new_prot);
 	set_pte_atomic(kpte, new_pte);
+
+	/*
+	 * If page is set encrypted in the page table, then update the RMP table to
+	 * add this page as private.
+	 */
+	if (enc)
+		early_snp_set_memory_private((unsigned long)__va(pa), pa, 1);
 }
 
 static int __init early_set_memory_enc_dec(unsigned long vaddr,