[v7,8/9] mmu: spp: Handle SPP protected pages when VM memory changes

Message ID 20191119084949.15471-9-weijiang.yang@intel.com (mailing list archive)
State New, archived
Series Enable Sub-Page Write Protection Support

Commit Message

Yang, Weijiang Nov. 19, 2019, 8:49 a.m. UTC
Host page swapping/migration may change the translation in an
EPT leaf entry. If the target page is SPP protected, re-enable
SPP protection for the page in the MMU notifier. When an SPPT
shadow page is reclaimed, its level-1 entries have no rmap to
clear.

Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
---
 arch/x86/kvm/mmu.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)
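
For context, the MMU-notifier path referred to in the commit message
is the change_pte chain. A minimal sketch of how kvm_set_pte_rmapp()
is reached, based on the v5.4-era code this series is written against
(simplified; details may differ):

	/*
	 * When the host changes a PTE backing guest memory (e.g. during
	 * page migration), the change_pte MMU notifier calls into KVM;
	 * KVM walks the rmaps for the affected hva and invokes
	 * kvm_set_pte_rmapp() on each spte, the function patched below.
	 */
	int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
	{
		return kvm_handle_hva(kvm, hva, (unsigned long)&pte,
				      kvm_set_pte_rmapp);
	}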

Comments

Paolo Bonzini Nov. 21, 2019, 10:32 a.m. UTC | #1
On 19/11/19 09:49, Yang Weijiang wrote:
> +			/*
> +			 * if it's EPT leaf entry and the physical page is
> +			 * SPP protected, then re-enable SPP protection for
> +			 * the page.
> +			 */
> +			if (kvm->arch.spp_active &&
> +			    level == PT_PAGE_TABLE_LEVEL) {
> +				struct kvm_subpage spp_info = {0};
> +				int i;
> +
> +				spp_info.base_gfn = gfn;
> +				spp_info.npages = 1;
> +				i = kvm_spp_get_permission(kvm, &spp_info);
> +				if (i == 1 &&
> +				    spp_info.access_map[0] != FULL_SPP_ACCESS)
> +					new_spte |= PT_SPP_MASK;
> +			}

This can use gfn_to_subpage_wp_info directly (or is_spp_protected if you
prefer).

Paolo
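
For reference, a minimal sketch of the hunk rewritten along the lines
suggested above. It assumes gfn_to_subpage_wp_info() takes the memslot
and gfn and returns a pointer to the 32-bit access map entry, or NULL
if the gfn has none, as in this series; exact signatures may differ.
slot here is the memslot argument of kvm_set_pte_rmapp():

	/*
	 * If it's an EPT leaf entry and the physical page is
	 * SPP protected, re-enable SPP protection for the page.
	 */
	if (kvm->arch.spp_active &&
	    level == PT_PAGE_TABLE_LEVEL) {
		u32 *access = gfn_to_subpage_wp_info(slot, gfn);

		if (access && *access != FULL_SPP_ACCESS)
			new_spte |= PT_SPP_MASK;
	}

(is_spp_protected() would wrap the same lookup and comparison.)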
Yang, Weijiang Nov. 21, 2019, 3:01 p.m. UTC | #2
On Thu, Nov 21, 2019 at 11:32:15AM +0100, Paolo Bonzini wrote:
> On 19/11/19 09:49, Yang Weijiang wrote:
> > +			/*
> > +			 * if it's EPT leaf entry and the physical page is
> > +			 * SPP protected, then re-enable SPP protection for
> > +			 * the page.
> > +			 */
> > +			if (kvm->arch.spp_active &&
> > +			    level == PT_PAGE_TABLE_LEVEL) {
> > +				struct kvm_subpage spp_info = {0};
> > +				int i;
> > +
> > +				spp_info.base_gfn = gfn;
> > +				spp_info.npages = 1;
> > +				i = kvm_spp_get_permission(kvm, &spp_info);
> > +				if (i == 1 &&
> > +				    spp_info.access_map[0] != FULL_SPP_ACCESS)
> > +					new_spte |= PT_SPP_MASK;
> > +			}
> 
> This can use gfn_to_subpage_wp_info directly (or is_spp_protected if you
> prefer).
>
Sure, will change it, thank you!
> Paolo

Patch

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 9c5be402a0b2..7e9959a4a12b 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1828,6 +1828,24 @@  static int kvm_set_pte_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 			new_spte &= ~PT_WRITABLE_MASK;
 			new_spte &= ~SPTE_HOST_WRITEABLE;
 
+			/*
+			 * If it's an EPT leaf entry and the physical page is
+			 * SPP protected, then re-enable SPP protection for
+			 * the page.
+			 */
+			if (kvm->arch.spp_active &&
+			    level == PT_PAGE_TABLE_LEVEL) {
+				struct kvm_subpage spp_info = {0};
+				int i;
+
+				spp_info.base_gfn = gfn;
+				spp_info.npages = 1;
+				i = kvm_spp_get_permission(kvm, &spp_info);
+				if (i == 1 &&
+				    spp_info.access_map[0] != FULL_SPP_ACCESS)
+					new_spte |= PT_SPP_MASK;
+			}
+
 			new_spte = mark_spte_for_access_track(new_spte);
 
 			mmu_spte_clear_track_bits(sptep);
@@ -2677,6 +2695,10 @@  static bool mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp,
 	pte = *spte;
 	if (is_shadow_present_pte(pte)) {
 		if (is_last_spte(pte, sp->role.level)) {
+			/* SPPT leaf entries don't have rmaps */
+			if (sp->role.level == PT_PAGE_TABLE_LEVEL &&
+			    is_spp_spte(sp))
+				return true;
 			drop_spte(kvm, spte);
 			if (is_large_pte(pte))
 				--kvm->stat.lpages;