
[v10,21/45] x86/mm: Add support to validate memory when changing C-bit

Message ID 20220209181039.1262882-22-brijesh.singh@amd.com (mailing list archive)
State New, archived
Series Add AMD Secure Nested Paging (SEV-SNP) Guest Support

Commit Message

Brijesh Singh Feb. 9, 2022, 6:10 p.m. UTC
The set_memory_{encrypted,decrypted}() functions are used for changing
pages from decrypted (shared) to encrypted (private) and vice versa.
When SEV-SNP is active, the page state transition needs to go through
additional steps performed by the guest.

If the page is transitioned from shared to private, then perform the
following after the encryption attribute is set in the page table:

1. Issue the page state change VMGEXIT to add the memory region to
   the RMP table.
2. Validate the memory region after the RMP entry is added.

To maintain the security guarantees, if the page is transitioned from
private to shared, then perform the following before the encryption
attribute is removed from the page table:

1. Invalidate the page.
2. Issue the page state change VMGEXIT to remove the page from the RMP table.

To change the page state in the RMP table, use the Page State Change
VMGEXIT defined in the GHCB specification.

The GHCB specification provides the flexibility to use either a 4K or 2MB
page size during the page state change (PSC) request. For now, use the
4K page size for all PSC requests until RMP page size tracking is supported
in the kernel.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/include/asm/sev-common.h |  22 ++++
 arch/x86/include/asm/sev.h        |   4 +
 arch/x86/include/uapi/asm/svm.h   |   2 +
 arch/x86/kernel/sev.c             | 168 ++++++++++++++++++++++++++++++
 arch/x86/mm/pat/set_memory.c      |  15 +++
 5 files changed, 211 insertions(+)
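
A minimal usage sketch (not part of the patch, for illustration only): with
this change, an existing caller of set_memory_{decrypted,encrypted}() picks
up the SNP page state transitions transparently:

#include <linux/set_memory.h>

/*
 * Sketch only: sharing a guest buffer with the hypervisor. Under SEV-SNP,
 * set_memory_decrypted() now also invalidates the pages and issues the Page
 * State Change VMGEXIT before the C-bit is cleared; set_memory_encrypted()
 * issues the PSC and re-validates the pages after the C-bit is set again.
 */
static int share_buffer_with_hv(void *buf, unsigned int npages)
{
        return set_memory_decrypted((unsigned long)buf, npages);
}

static int unshare_buffer_from_hv(void *buf, unsigned int npages)
{
        return set_memory_encrypted((unsigned long)buf, npages);
}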

Comments

Borislav Petkov Feb. 10, 2022, 4:48 p.m. UTC | #1
On Wed, Feb 09, 2022 at 12:10:15PM -0600, Brijesh Singh wrote:
> The set_memory_{encrypted,decrypted}() are used for changing the pages
> from decrypted (shared) to encrypted (private) and vice versa.
> When SEV-SNP is active, the page state transition needs to go through
> additional steps done by the guest.
> 
> If the page is transitioned from shared to private, then perform the
> following after the encryption attribute is set in the page table:
> 
> 1. Issue the page state change VMGEXIT to add the memory region in
>    the RMP table.
> 2. Validate the memory region after the RMP entry is added.
> 
> To maintain the security guarantees, if the page is transitioned from
> private to shared, then perform the following before encryption attribute
> is removed from the page table:
> 
> 1. Invalidate the page.
> 2. Issue the page state change VMGEXIT to remove the page from RMP table.
> 
> To change the page state in the RMP table, use the Page State Change
> VMGEXIT defined in the GHCB specification.
> 
> The GHCB specification provides the flexibility to use either 4K or 2MB
> page size in during the page state change (PSC) request. For now use the
> 4K page size for all the PSC until RMP page size tracking is supported
> in the kernel.

This commit message sounds familiar because I've read it before - patch
18 - and it looks copied. So I've turned it into a simple one which says it
all:

    x86/mm: Validate memory when changing the C-bit

    Add the needed functionality to change pages state from shared
    to private and vice-versa using the Page State Change VMGEXIT as
    documented in the GHCB spec.

Thx.
Borislav Petkov Feb. 11, 2022, 2:55 p.m. UTC | #2
+ Kirill.

On Wed, Feb 09, 2022 at 12:10:15PM -0600, Brijesh Singh wrote:
> @@ -2012,8 +2013,22 @@ static int __set_memory_enc_pgtable(unsigned long addr, int numpages, bool enc)
>  	 */
>  	cpa_flush(&cpa, !this_cpu_has(X86_FEATURE_SME_COHERENT));
>  
> +	/*
> +	 * To maintain the security guarantees of SEV-SNP guests, make sure
> +	 * to invalidate the memory before clearing the encryption attribute.
> +	 */
> +	if (!enc)
> +		snp_set_memory_shared(addr, numpages);
> +
>  	ret = __change_page_attr_set_clr(&cpa, 1);
>  
> +	/*
> +	 * Now that memory is mapped encrypted in the page table, validate it
> +	 * so that it is consistent with the above page state.
> +	 */
> +	if (!ret && enc)
> +		snp_set_memory_private(addr, numpages);
> +
>  	/*
>  	 * After changing the encryption attribute, we need to flush TLBs again
>  	 * in case any speculative TLB caching occurred (but no need to flush
> -- 

Right, as tglx rightfully points out here:

https://lore.kernel.org/r/875ypyvz07.ffs@tglx

this piece of code needs careful coordinated design so that it is clean
for both SEV and TDX.

First, as we've said here:

https://lore.kernel.org/r/1d77e91c-e151-7846-6cd4-6264236ca5ae@intel.com

we'd need generic functions which turn a pgprot into an encrypted or
decrypted pgprot on both SEV and TDX so we could do:

cc_pgprot_enc()
cc_pgprot_dec()

which do the required conversion on each guest type.

Also, I think adding required functions to x86_platform.guest. is a very
nice way to solve the ugly if (guest_type) querying all over the place.

Also, I was thinking of sme_me_mask and the corresponding
tdx_shared_mask I threw into the mix here:

https://lore.kernel.org/r/YgFIaJ8ijgQQ04Nv@zn.tnic

and we should simply add those without ifdeffery but unconditionally.

Simply have them always present. They will have !0 values on the
respective guest types and 0 otherwise. This should simplify a lot of
code and another unconditionally present u64 won't be the end of the
world.

Any other aspect I'm missing?
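
A minimal sketch of what such helpers could look like, assuming both masks
are always present and 0 when the respective guest type is not active
(hypothetical code, not from this series; tdx_shared_mask is the name used
in the thread above):

#include <linux/types.h>
#include <asm/pgtable_types.h>

extern u64 sme_me_mask;         /* !0 only when SME/SEV is active */
extern u64 tdx_shared_mask;     /* !0 only when TDX is active */

static inline pgprot_t cc_pgprot_enc(pgprot_t prot)
{
        /* Private: set the C-bit on SEV, clear the SHARED bit on TDX. */
        return __pgprot((pgprot_val(prot) | sme_me_mask) & ~tdx_shared_mask);
}

static inline pgprot_t cc_pgprot_dec(pgprot_t prot)
{
        /* Shared: clear the C-bit on SEV, set the SHARED bit on TDX. */
        return __pgprot((pgprot_val(prot) & ~sme_me_mask) | tdx_shared_mask);
}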
Brijesh Singh Feb. 11, 2022, 5:27 p.m. UTC | #3
On 2/11/22 8:55 AM, Borislav Petkov wrote:
>
> Simply have them always present. They will have !0 values on the
> respective guest types and 0 otherwise. This should simplify a lot of
> code and another unconditionally present u64 won't be the end of the
> world.
>
> Any other aspect I'm missing?

I think that's mostly it. IIUC, the recommendation is to define a new
callback in x86_platform_op. The callback will be invoked unconditionally;
the default implementation for this callback is a NOP, and TDX and SEV will
override it with their platform-specific implementations. I think we may be
able to handle everything in one callback hook, but having pre and post
callbacks would be more desirable. Here is why I am thinking so:

* On SNP, the page must be invalidated before clearing the _PAGE_ENC
page table attribute.

* On SNP, the page must be validated after setting the _PAGE_ENC page
table attribute.

~Brijesh
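
A minimal sketch of the pre/post split described above (hypothetical callback
names, not code from this series):

#include <asm/sev.h>

struct x86_guest {
        /* Called before the encryption attribute is changed in the page table. */
        void (*enc_status_change_prepare)(unsigned long vaddr, int npages, bool enc);
        /* Called after the encryption attribute has been changed. */
        void (*enc_status_change_finish)(unsigned long vaddr, int npages, bool enc);
};

/* What the SNP side could plug into those hooks: */
static void snp_enc_status_change_prepare(unsigned long vaddr, int npages, bool enc)
{
        /* Private -> shared: invalidate before _PAGE_ENC is cleared. */
        if (!enc)
                snp_set_memory_shared(vaddr, npages);
}

static void snp_enc_status_change_finish(unsigned long vaddr, int npages, bool enc)
{
        /* Shared -> private: validate after _PAGE_ENC is set. */
        if (enc)
                snp_set_memory_private(vaddr, npages);
}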
Borislav Petkov Feb. 13, 2022, 12:15 p.m. UTC | #4
On Fri, Feb 11, 2022 at 11:27:54AM -0600, Brijesh Singh wrote:
> > Simply have them always present. They will have !0 values on the
> > respective guest types and 0 otherwise. This should simplify a lot of
> > code and another unconditionally present u64 won't be the end of the
> > world.
> >
> > Any other aspect I'm missing?
> 
> I think that's mostly about it. IIUC, the recommendation is to define a
> new callback in x86_platform_op. The callback will be invoked
> unconditionally; The default implementation for this callback is NOP;
> The TDX and SEV will override with the platform specific implementation.
> I think we may able to handle everything in one callback hook but having
> pre and post will be a more desirable. Here is why I am thinking so:
> 
> * On SNP, the page must be invalidated before clearing the _PAGE_ENC
> from the page table attribute
> 
> * On SNP, the page must be validated after setting the _PAGE_ENC in the
> page table attribute.

Right, we could have a pre- and post-callback, if that would make
things simpler/clearer.

Also, in thinking further about the encryption mask, we could make it a
*single*, *global* variable called cc_mask which each guest type sets
as it wants to.

Then, it would be used in the vendor-specific encrypt/decrypt helpers
accordingly and that would simplify a lot of code. And we can get rid of
all the ifdeffery around it too.

So I think the way to go should be: we do the common functionality, I
queue it on the common tip:x86/cc branch, and then SNP and TDX will
both be based on top of it.

Thoughts?
Tom Lendacky Feb. 13, 2022, 2:50 p.m. UTC | #5
On 2/13/22 06:15, Borislav Petkov wrote:
> On Fri, Feb 11, 2022 at 11:27:54AM -0600, Brijesh Singh wrote:
>>> Simply have them always present. They will have !0 values on the
>>> respective guest types and 0 otherwise. This should simplify a lot of
>>> code and another unconditionally present u64 won't be the end of the
>>> world.
>>>
>>> Any other aspect I'm missing?
>>
>> I think that's mostly about it. IIUC, the recommendation is to define a
>> new callback in x86_platform_op. The callback will be invoked
>> unconditionally; The default implementation for this callback is NOP;
>> The TDX and SEV will override with the platform specific implementation.
>> I think we may able to handle everything in one callback hook but having
>> pre and post will be a more desirable. Here is why I am thinking so:
>>
>> * On SNP, the page must be invalidated before clearing the _PAGE_ENC
>> from the page table attribute
>>
>> * On SNP, the page must be validated after setting the _PAGE_ENC in the
>> page table attribute.
> 
> Right, we could have a pre- and post- callback, if that would make
> things simpler/clearer.
> 
> Also, in thinking further about the encryption mask, we could make it a
> *single*, *global* variable called cc_mask which each guest type sets it
> as it wants to.
> 
> Then, it would use it in the vendor-specific encrypt/decrypt helpers
> accordingly and that would simplify a lot of code. And we can get rid of
> all the ifdeffery around it too.
> 
> So I think the way to go should be we do the common functionality, I
> queue it on the common tip:x86/cc branch and then SNP and TDX will be
> both based ontop of it.
> 
> Thoughts?

I think there were a lot of assumptions that only SME/SEV would set 
sme_me_mask and that is used, for example, in the cc_platform_has() 
routine to figure out whether we're AMD or Intel. If you go the cc_mask 
route, I think we'll need to add a cc_vendor variable that would then be 
checked in cc_platform_has(). All other uses of sme_me_mask would need to 
be audited to see whether cc_vendor would need to be checked, too.

Thanks,
Tom

Borislav Petkov Feb. 13, 2022, 5:21 p.m. UTC | #6
On Sun, Feb 13, 2022 at 08:50:48AM -0600, Tom Lendacky wrote:
> I think there were a lot of assumptions that only SME/SEV would set
> sme_me_mask and that is used, for example, in the cc_platform_has() routine
> to figure out whether we're AMD or Intel. If you go the cc_mask route, I
> think we'll need to add a cc_vendor variable that would then be checked in
> cc_platform_has().

Right, or cc_platform_type or whatever. It would probably be a good
idea to have a variable explicitly state what the active coco flavor is
anyway, as we had some ambiguity questions in the past along the lines
of, what does cc_platform_has() need to return when running as a guest
on the respective platform.

If you have it explicitly, then it would be unambiguous and simple. And
then we can get rid of CC_ATTR_GUEST_SEV_SNP or CC_ATTR_GUEST_TDX, which
is clumsy.

Thx.
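
A minimal sketch of that idea, assuming per-vendor helpers along the lines of
the existing amd_cc_platform_has() and an Intel counterpart (hypothetical
code, not from this series):

#include <linux/cache.h>
#include <linux/cc_platform.h>

enum cc_vendor {
        CC_VENDOR_NONE,
        CC_VENDOR_AMD,
        CC_VENDOR_INTEL,
};

static enum cc_vendor cc_vendor __ro_after_init;

bool cc_platform_has(enum cc_attr attr)
{
        switch (cc_vendor) {
        case CC_VENDOR_AMD:
                return amd_cc_platform_has(attr);
        case CC_VENDOR_INTEL:
                return intel_cc_platform_has(attr);
        default:
                return false;
        }
}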
Kirill A . Shutemov Feb. 15, 2022, 12:43 p.m. UTC | #7
On Sun, Feb 13, 2022 at 01:15:23PM +0100, Borislav Petkov wrote:
> On Fri, Feb 11, 2022 at 11:27:54AM -0600, Brijesh Singh wrote:
> > > Simply have them always present. They will have !0 values on the
> > > respective guest types and 0 otherwise. This should simplify a lot of
> > > code and another unconditionally present u64 won't be the end of the
> > > world.
> > >
> > > Any other aspect I'm missing?
> > 
> > I think that's mostly about it. IIUC, the recommendation is to define a
> > new callback in x86_platform_op. The callback will be invoked
> > unconditionally; The default implementation for this callback is NOP;
> > The TDX and SEV will override with the platform specific implementation.
> > I think we may able to handle everything in one callback hook but having
> > pre and post will be a more desirable. Here is why I am thinking so:
> > 
> > * On SNP, the page must be invalidated before clearing the _PAGE_ENC
> > from the page table attribute
> > 
> > * On SNP, the page must be validated after setting the _PAGE_ENC in the
> > page table attribute.
> 
> Right, we could have a pre- and post- callback, if that would make
> things simpler/clearer.
> 
> Also, in thinking further about the encryption mask, we could make it a
> *single*, *global* variable called cc_mask which each guest type sets it
> as it wants to.

I don't think it works. TDX and SME/SEV have opposite polarity of the mask.
SME/SEV has to clear the mask to share the page. TDX has to set it.

Making a single global mask only increases confusion.
Borislav Petkov Feb. 15, 2022, 12:54 p.m. UTC | #8
On Tue, Feb 15, 2022 at 03:43:31PM +0300, Kirill A. Shutemov wrote:
> I don't think it works. TDX and SME/SEV has opposite polarity of the mask.
> SME/SEV has to clear the mask to share the page. TDX has to set it.
> 
> Making a single global mask only increases confusion.

Didn't you read the rest of the thread with Tom's suggestion? I think
there's merit in having a cc_vendor or so which explicitly states what
type of HV the kernel runs on...
Kirill A . Shutemov Feb. 15, 2022, 1:15 p.m. UTC | #9
On Tue, Feb 15, 2022 at 01:54:48PM +0100, Borislav Petkov wrote:
> On Tue, Feb 15, 2022 at 03:43:31PM +0300, Kirill A. Shutemov wrote:
> > I don't think it works. TDX and SME/SEV has opposite polarity of the mask.
> > SME/SEV has to clear the mask to share the page. TDX has to set it.
> > 
> > Making a single global mask only increases confusion.
> 
> Didn't you read the rest of the thread with Tom's suggestion? I think
> there's a merit in having a cc_vendor or so which explicitly states what
> type of HV the kernel runs on...

I have no problem with the cc_vendor idea. It looks good.

Regarding the masks, if we want to have common ground here we can add two
masks: cc_enc_mask and cc_dec_mask. And then

pgprotval_t cc_enc(pgprotval_t protval)
{
	protval |= cc_enc_mask;
	protval &= ~cc_dec_mask;
	return protval;
}

pgprotval_t cc_dec(pgprotval_t protval)
{
	protval |= cc_dec_mask;
	protval &= ~cc_enc_mask;
	return protval;
}

It assumes (cc_enc_mask & cc_dec_mask) == 0.

Any opinions?
Borislav Petkov Feb. 15, 2022, 2:41 p.m. UTC | #10
On Tue, Feb 15, 2022 at 04:15:22PM +0300, Kirill A. Shutemov wrote:
> I have no problem with cc_vendor idea. It looks good.

Good.

> Regarding the masks, if we want to have common ground here we can add two
> mask: cc_enc_mask and cc_dec_mask. And then

If we do two masks, then we can just as well leave the SME and TDX
masks. The point of the whole exercise is to have simpler code and less
ifdeffery.

If you "hide" how the mask works on each vendor in the respective
functions - and yes, cc_pgprot_dec/enc() reads better - then it doesn't
matter how the mask is defined.

Because you don't need two masks to encrypt/decrypt pages - you need a
single mask but apply it differently.

Thx.
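
Restated as a minimal sketch: the same helpers, but built around one mask
whose meaning depends on the vendor (hypothetical code, not from this series;
cc_vendor as in the sketch further up):

#include <asm/pgtable_types.h>

extern u64 cc_mask;     /* C-bit on SEV, SHARED bit on TDX, 0 otherwise */

pgprotval_t cc_pgprot_enc(pgprotval_t protval)
{
        /* Private: set the C-bit on SEV, clear the SHARED bit on TDX. */
        if (cc_vendor == CC_VENDOR_AMD)
                return protval | cc_mask;
        if (cc_vendor == CC_VENDOR_INTEL)
                return protval & ~cc_mask;
        return protval;
}

pgprotval_t cc_pgprot_dec(pgprotval_t protval)
{
        /* Shared: clear the C-bit on SEV, set the SHARED bit on TDX. */
        if (cc_vendor == CC_VENDOR_AMD)
                return protval & ~cc_mask;
        if (cc_vendor == CC_VENDOR_INTEL)
                return protval | cc_mask;
        return protval;
}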
Borislav Petkov Feb. 16, 2022, 1:32 p.m. UTC | #11
On Fri, Feb 11, 2022 at 03:55:23PM +0100, Borislav Petkov wrote:
> Also, I think adding required functions to x86_platform.guest. is a very
> nice way to solve the ugly if (guest_type) querying all over the place.

So I guess something like below. It builds here...

---
 arch/x86/include/asm/set_memory.h |  1 -
 arch/x86/include/asm/sev.h        |  2 ++
 arch/x86/include/asm/x86_init.h   | 12 ++++++++++++
 arch/x86/kernel/sev.c             |  2 ++
 arch/x86/mm/mem_encrypt_amd.c     |  6 +++---
 arch/x86/mm/pat/set_memory.c      |  2 +-
 6 files changed, 20 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
index ff0f2d90338a..ce8dd215f5b3 100644
--- a/arch/x86/include/asm/set_memory.h
+++ b/arch/x86/include/asm/set_memory.h
@@ -84,7 +84,6 @@ int set_pages_rw(struct page *page, int numpages);
 int set_direct_map_invalid_noflush(struct page *page);
 int set_direct_map_default_noflush(struct page *page);
 bool kernel_page_present(struct page *page);
-void notify_range_enc_status_changed(unsigned long vaddr, int npages, bool enc);
 
 extern int kernel_set_to_readonly;
 
diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index ec060c433589..2435b0ca6cfc 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -95,4 +95,6 @@ static inline void sev_es_nmi_complete(void) { }
 static inline int sev_es_efi_map_ghcbs(pgd_t *pgd) { return 0; }
 #endif
 
+void amd_notify_range_enc_status_changed(unsigned long vaddr, int npages, bool enc);
+
 #endif
diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h
index 22b7412c08f6..226663e2d769 100644
--- a/arch/x86/include/asm/x86_init.h
+++ b/arch/x86/include/asm/x86_init.h
@@ -141,6 +141,17 @@ struct x86_init_acpi {
 	void (*reduced_hw_early_init)(void);
 };
 
+/**
+ * struct x86_guest - Functions used by misc guest incarnations like SEV, TDX,
+ * etc.
+ *
+ * @enc_status_change		Notify HV about change of encryption status of a
+ *				range of pages
+ */
+struct x86_guest {
+	void (*enc_status_change)(unsigned long vaddr, int npages, bool enc);
+};
+
 /**
  * struct x86_init_ops - functions for platform specific setup
  *
@@ -287,6 +298,7 @@ struct x86_platform_ops {
 	struct x86_legacy_features legacy;
 	void (*set_legacy_features)(void);
 	struct x86_hyper_runtime hyper;
+	struct x86_guest guest;
 };
 
 struct x86_apic_ops {
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index e6d316a01fdd..e645e868a49b 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -766,6 +766,8 @@ void __init sev_es_init_vc_handling(void)
 	if (!sev_es_check_cpu_features())
 		panic("SEV-ES CPU Features missing");
 
+	x86_platform.guest.enc_status_change = amd_notify_range_enc_status_changed;
+
 	/* Enable SEV-ES special handling */
 	static_branch_enable(&sev_es_enable_key);
 
diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c
index 2b2d018ea345..7038a9f7ae55 100644
--- a/arch/x86/mm/mem_encrypt_amd.c
+++ b/arch/x86/mm/mem_encrypt_amd.c
@@ -256,7 +256,7 @@ static unsigned long pg_level_to_pfn(int level, pte_t *kpte, pgprot_t *ret_prot)
 	return pfn;
 }
 
-void notify_range_enc_status_changed(unsigned long vaddr, int npages, bool enc)
+void amd_notify_range_enc_status_changed(unsigned long vaddr, int npages, bool enc)
 {
 #ifdef CONFIG_PARAVIRT
 	unsigned long sz = npages << PAGE_SHIFT;
@@ -392,7 +392,7 @@ static int __init early_set_memory_enc_dec(unsigned long vaddr,
 
 	ret = 0;
 
-	notify_range_enc_status_changed(start, PAGE_ALIGN(size) >> PAGE_SHIFT, enc);
+	amd_notify_range_enc_status_changed(start, PAGE_ALIGN(size) >> PAGE_SHIFT, enc);
 out:
 	__flush_tlb_all();
 	return ret;
@@ -410,7 +410,7 @@ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size)
 
 void __init early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages, bool enc)
 {
-	notify_range_enc_status_changed(vaddr, npages, enc);
+	amd_notify_range_enc_status_changed(vaddr, npages, enc);
 }
 
 void __init mem_encrypt_free_decrypted_mem(void)
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index b4072115c8ef..0acc52a3a5b7 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -2027,7 +2027,7 @@ static int __set_memory_enc_pgtable(unsigned long addr, int numpages, bool enc)
 	 * Notify hypervisor that a given memory range is mapped encrypted
 	 * or decrypted.
 	 */
-	notify_range_enc_status_changed(addr, numpages, enc);
+	x86_platform.guest.enc_status_change(addr, numpages, enc);
 
 	return ret;
 }
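
For completeness, a hypothetical sketch (not part of this diff) of how another
guest flavor would use the same hook:

#include <linux/init.h>
#include <asm/x86_init.h>

/* Hypothetical TDX-side counterpart; only the assignment pattern matters. */
static void tdx_notify_enc_status_changed(unsigned long vaddr, int npages, bool enc)
{
        /* Issue the TDX page conversion hypercalls for the range here. */
}

void __init tdx_guest_init_sketch(void)
{
        x86_platform.guest.enc_status_change = tdx_notify_enc_status_changed;
}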

Patch

diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h
index f077a6c95e67..1aa72b5c2490 100644
--- a/arch/x86/include/asm/sev-common.h
+++ b/arch/x86/include/asm/sev-common.h
@@ -105,6 +105,28 @@  enum psc_op {
 
 #define GHCB_HV_FT_SNP			BIT_ULL(0)
 
+/* SNP Page State Change NAE event */
+#define VMGEXIT_PSC_MAX_ENTRY		253
+
+struct psc_hdr {
+	u16 cur_entry;
+	u16 end_entry;
+	u32 reserved;
+} __packed;
+
+struct psc_entry {
+	u64	cur_page	: 12,
+		gfn		: 40,
+		operation	: 4,
+		pagesize	: 1,
+		reserved	: 7;
+} __packed;
+
+struct snp_psc_desc {
+	struct psc_hdr hdr;
+	struct psc_entry entries[VMGEXIT_PSC_MAX_ENTRY];
+} __packed;
+
 #define GHCB_MSR_TERM_REQ		0x100
 #define GHCB_MSR_TERM_REASON_SET_POS	12
 #define GHCB_MSR_TERM_REASON_SET_MASK	0xf
diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index f65d257e3d4a..feeb93e6ec97 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -128,6 +128,8 @@  void __init early_snp_set_memory_private(unsigned long vaddr, unsigned long padd
 void __init early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr,
 					unsigned int npages);
 void __init snp_prep_memory(unsigned long paddr, unsigned int sz, enum psc_op op);
+void snp_set_memory_shared(unsigned long vaddr, unsigned int npages);
+void snp_set_memory_private(unsigned long vaddr, unsigned int npages);
 #else
 static inline void sev_es_ist_enter(struct pt_regs *regs) { }
 static inline void sev_es_ist_exit(void) { }
@@ -142,6 +144,8 @@  early_snp_set_memory_private(unsigned long vaddr, unsigned long paddr, unsigned
 static inline void __init
 early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr, unsigned int npages) { }
 static inline void __init snp_prep_memory(unsigned long paddr, unsigned int sz, enum psc_op op) { }
+static inline void snp_set_memory_shared(unsigned long vaddr, unsigned int npages) { }
+static inline void snp_set_memory_private(unsigned long vaddr, unsigned int npages) { }
 #endif
 
 #endif
diff --git a/arch/x86/include/uapi/asm/svm.h b/arch/x86/include/uapi/asm/svm.h
index b0ad00f4c1e1..0dcdb6e0c913 100644
--- a/arch/x86/include/uapi/asm/svm.h
+++ b/arch/x86/include/uapi/asm/svm.h
@@ -108,6 +108,7 @@ 
 #define SVM_VMGEXIT_AP_JUMP_TABLE		0x80000005
 #define SVM_VMGEXIT_SET_AP_JUMP_TABLE		0
 #define SVM_VMGEXIT_GET_AP_JUMP_TABLE		1
+#define SVM_VMGEXIT_PSC				0x80000010
 #define SVM_VMGEXIT_HV_FEATURES			0x8000fffd
 #define SVM_VMGEXIT_UNSUPPORTED_EVENT		0x8000ffff
 
@@ -219,6 +220,7 @@ 
 	{ SVM_VMGEXIT_NMI_COMPLETE,	"vmgexit_nmi_complete" }, \
 	{ SVM_VMGEXIT_AP_HLT_LOOP,	"vmgexit_ap_hlt_loop" }, \
 	{ SVM_VMGEXIT_AP_JUMP_TABLE,	"vmgexit_ap_jump_table" }, \
+	{ SVM_VMGEXIT_PSC,	"vmgexit_page_state_change" }, \
 	{ SVM_VMGEXIT_HV_FEATURES,	"vmgexit_hypervisor_feature" }, \
 	{ SVM_EXIT_ERR,         "invalid_guest_state" }
 
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index 1e8dc71e7ba6..4315be1602d1 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -655,6 +655,174 @@  void __init snp_prep_memory(unsigned long paddr, unsigned int sz, enum psc_op op
 		WARN(1, "invalid memory op %d\n", op);
 }
 
+static int vmgexit_psc(struct snp_psc_desc *desc)
+{
+	int cur_entry, end_entry, ret = 0;
+	struct snp_psc_desc *data;
+	struct ghcb_state state;
+	struct es_em_ctxt ctxt;
+	unsigned long flags;
+	struct ghcb *ghcb;
+
+	/*
+	 * __sev_get_ghcb() needs to run with IRQs disabled because it is using
+	 * a per-CPU GHCB.
+	 */
+	local_irq_save(flags);
+
+	ghcb = __sev_get_ghcb(&state);
+	if (!ghcb) {
+		ret = 1;
+		goto out_unlock;
+	}
+
+	/* Copy the input desc into GHCB shared buffer */
+	data = (struct snp_psc_desc *)ghcb->shared_buffer;
+	memcpy(ghcb->shared_buffer, desc, min_t(int, GHCB_SHARED_BUF_SIZE, sizeof(*desc)));
+
+	/*
+	 * As per the GHCB specification, the hypervisor can resume the guest
+	 * before processing all the entries. Check whether all the entries
+	 * are processed. If not, then keep retrying. Note, the hypervisor
+	 * will update the data memory directly to indicate the status, so
+	 * reference the data->hdr everywhere.
+	 *
+	 * The strategy here is to wait for the hypervisor to change the page
+	 * state in the RMP table before guest accesses the memory pages. If the
+	 * page state change was not successful, then later memory access will
+	 * result in a crash.
+	 */
+	cur_entry = data->hdr.cur_entry;
+	end_entry = data->hdr.end_entry;
+
+	while (data->hdr.cur_entry <= data->hdr.end_entry) {
+		ghcb_set_sw_scratch(ghcb, (u64)__pa(data));
+
+		/* This will advance the shared buffer data points to. */
+		ret = sev_es_ghcb_hv_call(ghcb, true, &ctxt, SVM_VMGEXIT_PSC, 0, 0);
+
+		/*
+		 * Page State Change VMGEXIT can pass error code through
+		 * exit_info_2.
+		 */
+		if (WARN(ret || ghcb->save.sw_exit_info_2,
+			 "SNP: PSC failed ret=%d exit_info_2=%llx\n",
+			 ret, ghcb->save.sw_exit_info_2)) {
+			ret = 1;
+			goto out;
+		}
+
+		/* Verify that reserved bit is not set */
+		if (WARN(data->hdr.reserved, "Reserved bit is set in the PSC header\n")) {
+			ret = 1;
+			goto out;
+		}
+
+		/*
+		 * Sanity check that entry processing is not going backwards.
+		 * This will happen only if hypervisor is tricking us.
+		 */
+		if (WARN(data->hdr.end_entry > end_entry || cur_entry > data->hdr.cur_entry,
+"SNP: PSC processing going backward, end_entry %d (got %d) cur_entry %d (got %d)\n",
+			 end_entry, data->hdr.end_entry, cur_entry, data->hdr.cur_entry)) {
+			ret = 1;
+			goto out;
+		}
+	}
+
+out:
+	__sev_put_ghcb(&state);
+
+out_unlock:
+	local_irq_restore(flags);
+
+	return ret;
+}
+
+static void __set_pages_state(struct snp_psc_desc *data, unsigned long vaddr,
+			      unsigned long vaddr_end, int op)
+{
+	struct psc_hdr *hdr;
+	struct psc_entry *e;
+	unsigned long pfn;
+	int i;
+
+	hdr = &data->hdr;
+	e = data->entries;
+
+	memset(data, 0, sizeof(*data));
+	i = 0;
+
+	while (vaddr < vaddr_end) {
+		if (is_vmalloc_addr((void *)vaddr))
+			pfn = vmalloc_to_pfn((void *)vaddr);
+		else
+			pfn = __pa(vaddr) >> PAGE_SHIFT;
+
+		e->gfn = pfn;
+		e->operation = op;
+		hdr->end_entry = i;
+
+		/*
+		 * Current SNP implementation doesn't keep track of the RMP page
+		 * size so use 4K for simplicity.
+		 */
+		e->pagesize = RMP_PG_SIZE_4K;
+
+		vaddr = vaddr + PAGE_SIZE;
+		e++;
+		i++;
+	}
+
+	if (vmgexit_psc(data))
+		sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PSC);
+}
+
+static void set_pages_state(unsigned long vaddr, unsigned int npages, int op)
+{
+	unsigned long vaddr_end, next_vaddr;
+	struct snp_psc_desc *desc;
+
+	desc = kmalloc(sizeof(*desc), GFP_KERNEL_ACCOUNT);
+	if (!desc)
+		panic("SNP: failed to allocate memory for PSC descriptor\n");
+
+	vaddr = vaddr & PAGE_MASK;
+	vaddr_end = vaddr + (npages << PAGE_SHIFT);
+
+	while (vaddr < vaddr_end) {
+		/* Calculate the last vaddr that fits in one struct snp_psc_desc. */
+		next_vaddr = min_t(unsigned long, vaddr_end,
+				   (VMGEXIT_PSC_MAX_ENTRY * PAGE_SIZE) + vaddr);
+
+		__set_pages_state(desc, vaddr, next_vaddr, op);
+
+		vaddr = next_vaddr;
+	}
+
+	kfree(desc);
+}
+
+void snp_set_memory_shared(unsigned long vaddr, unsigned int npages)
+{
+	if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
+		return;
+
+	pvalidate_pages(vaddr, npages, false);
+
+	set_pages_state(vaddr, npages, SNP_PAGE_STATE_SHARED);
+}
+
+void snp_set_memory_private(unsigned long vaddr, unsigned int npages)
+{
+	if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
+		return;
+
+	set_pages_state(vaddr, npages, SNP_PAGE_STATE_PRIVATE);
+
+	pvalidate_pages(vaddr, npages, true);
+}
+
 int sev_es_setup_ap_jump_table(struct real_mode_header *rmh)
 {
 	u16 startup_cs, startup_ip;
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index b4072115c8ef..e58d57b038ee 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -32,6 +32,7 @@ 
 #include <asm/set_memory.h>
 #include <asm/hyperv-tlfs.h>
 #include <asm/mshyperv.h>
+#include <asm/sev.h>
 
 #include "../mm_internal.h"
 
@@ -2012,8 +2013,22 @@  static int __set_memory_enc_pgtable(unsigned long addr, int numpages, bool enc)
 	 */
 	cpa_flush(&cpa, !this_cpu_has(X86_FEATURE_SME_COHERENT));
 
+	/*
+	 * To maintain the security guarantees of SEV-SNP guests, make sure
+	 * to invalidate the memory before clearing the encryption attribute.
+	 */
+	if (!enc)
+		snp_set_memory_shared(addr, numpages);
+
 	ret = __change_page_attr_set_clr(&cpa, 1);
 
+	/*
+	 * Now that memory is mapped encrypted in the page table, validate it
+	 * so that it is consistent with the above page state.
+	 */
+	if (!ret && enc)
+		snp_set_memory_private(addr, numpages);
+
 	/*
 	 * After changing the encryption attribute, we need to flush TLBs again
 	 * in case any speculative TLB caching occurred (but no need to flush