From patchwork Tue Nov 19 08:49:48 2019
From: Yang Weijiang <weijiang.yang@intel.com>
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, pbonzini@redhat.com,
	jmattson@google.com, sean.j.christopherson@intel.com
Cc: yu.c.zhang@linux.intel.com, alazar@bitdefender.com, edwin.zhai@intel.com,
	Yang Weijiang <weijiang.yang@intel.com>
Subject: [PATCH v7 8/9] mmu: spp: Handle SPP protected pages when VM memory changes
Date: Tue, 19 Nov 2019 16:49:48 +0800
Message-Id: <20191119084949.15471-9-weijiang.yang@intel.com>
In-Reply-To: <20191119084949.15471-1-weijiang.yang@intel.com>
References: <20191119084949.15471-1-weijiang.yang@intel.com>

Host page swapping/migration may change the translation in an EPT
leaf entry. If the target page is SPP protected, re-enable SPP
protection for it when the MMU notifier updates the entry. If an
SPPT shadow page is reclaimed, its level-1 entries have no rmaps
to clear.

Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
---
 arch/x86/kvm/mmu.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)
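
Note (illustration only, not part of the commit): the
kvm_set_pte_rmapp() hunk below leans on the kvm_spp_get_permission()
contract introduced earlier in this series: the caller fills
base_gfn/npages, the helper returns the number of pages whose
access_map entries it filled in, and FULL_SPP_ACCESS means the page
carries no sub-page restriction. A minimal sketch of that assumed
contract, using a hypothetical spp_protected() helper that does not
exist in the series:

	/*
	 * Sketch of the decision the hunk below makes inline: does
	 * this gfn still need PT_SPP_MASK after the SPTE is rebuilt?
	 */
	static bool spp_protected(struct kvm *kvm, gfn_t gfn)
	{
		struct kvm_subpage spp_info = {0};

		spp_info.base_gfn = gfn;
		spp_info.npages = 1;

		/* One page queried; any other count means the lookup failed. */
		if (kvm_spp_get_permission(kvm, &spp_info) != 1)
			return false;

		/* Any sub-page write restriction re-arms SPP for the page. */
		return spp_info.access_map[0] != FULL_SPP_ACCESS;
	}
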
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 9c5be402a0b2..7e9959a4a12b 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1828,6 +1828,24 @@ static int kvm_set_pte_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 			new_spte &= ~PT_WRITABLE_MASK;
 			new_spte &= ~SPTE_HOST_WRITEABLE;
 
+			/*
+			 * If it's an EPT leaf entry and the physical page is
+			 * SPP protected, re-enable SPP protection for
+			 * the page.
+			 */
+			if (kvm->arch.spp_active &&
+			    level == PT_PAGE_TABLE_LEVEL) {
+				struct kvm_subpage spp_info = {0};
+				int i;
+
+				spp_info.base_gfn = gfn;
+				spp_info.npages = 1;
+				i = kvm_spp_get_permission(kvm, &spp_info);
+				if (i == 1 &&
+				    spp_info.access_map[0] != FULL_SPP_ACCESS)
+					new_spte |= PT_SPP_MASK;
+			}
+
 			new_spte = mark_spte_for_access_track(new_spte);
 
 			mmu_spte_clear_track_bits(sptep);
@@ -2677,6 +2695,10 @@ static bool mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp,
 	pte = *spte;
 	if (is_shadow_present_pte(pte)) {
 		if (is_last_spte(pte, sp->role.level)) {
+			/* SPPT leaf entries don't have rmaps */
+			if (sp->role.level == PT_PAGE_TABLE_LEVEL &&
+			    is_spp_spte(sp))
+				return true;
 			drop_spte(kvm, spte);
 			if (is_large_pte(pte))
 				--kvm->stat.lpages;
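
Note (illustration only): the mmu_page_zap_pte() hunk assumes SPPT
shadow pages can be told apart from regular EPT shadow pages.
is_spp_spte() is defined in an earlier patch of this series; a
sketch of the assumed semantics, where the spp role bit is an
assumption of this note rather than something this patch shows:

	/*
	 * Sketch only: assumes an earlier patch added an spp bit to
	 * kvm_mmu_page_role to mark pages of the SPPT hierarchy, so
	 * their level-1 entries are skipped by the rmap teardown.
	 */
	static bool is_spp_spte(struct kvm_mmu_page *sp)
	{
		return sp->role.spp;
	}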