From patchwork Tue Nov 19 08:49:49 2019
X-Patchwork-Submitter: Yang Weijiang
X-Patchwork-Id: 11251559
From: Yang Weijiang
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, pbonzini@redhat.com,
 jmattson@google.com, sean.j.christopherson@intel.com
Cc: yu.c.zhang@linux.intel.com, alazar@bitdefender.com, edwin.zhai@intel.com,
 Yang Weijiang
Subject: [PATCH v7 9/9] x86: spp: Add SPP protection check in emulation.
Date: Tue, 19 Nov 2019 16:49:49 +0800
Message-Id: <20191119084949.15471-10-weijiang.yang@intel.com>
In-Reply-To: <20191119084949.15471-1-weijiang.yang@intel.com>
References: <20191119084949.15471-1-weijiang.yang@intel.com>
List-ID: X-Mailing-List: kvm@vger.kernel.org

In instruction/mmio emulation cases, if the target write memory is
SPP protected, exit to user space and handle it as if it were an
SPP-induced EPT violation triggered by a guest write.

Signed-off-by: Yang Weijiang
---
 arch/x86/kvm/x86.c | 36 ++++++++++++++++++++++++++++++++++++
 1 file changed, 36 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index fa114b5db672..71f5a8ae76cf 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5525,6 +5525,36 @@ static const struct read_write_emulator_ops write_emultor = {
 	.write = true,
 };
 
+static bool is_emulator_spp_protected(struct kvm_vcpu *vcpu,
+				      gpa_t gpa,
+				      unsigned int bytes)
+{
+	gfn_t gfn, start_gfn, end_gfn;
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_memory_slot *slot;
+	u32 access;
+
+	if (!kvm->arch.spp_active)
+		return false;
+
+	start_gfn = gpa >> PAGE_SHIFT;
+	end_gfn = (gpa + bytes) >> PAGE_SHIFT;
+	for (gfn = start_gfn; gfn <= end_gfn; gfn++) {
+		slot = gfn_to_memslot(kvm, gfn);
+		if (slot) {
+			access = *gfn_to_subpage_wp_info(slot, gfn);
+			if (access != FULL_SPP_ACCESS) {
+				vcpu->run->exit_reason = KVM_EXIT_SPP;
+				vcpu->run->spp.addr = gfn;
+				kvm_skip_emulated_instruction(vcpu);
+				return true;
+			}
+		}
+	}
+
+	return false;
+}
+
 static int emulator_read_write_onepage(unsigned long addr, void *val,
 				       unsigned int bytes,
 				       struct x86_exception *exception,
@@ -5555,6 +5585,9 @@ static int emulator_read_write_onepage(unsigned long addr, void *val,
 		return X86EMUL_PROPAGATE_FAULT;
 	}
 
+	if (write && is_emulator_spp_protected(vcpu, gpa, bytes))
+		return X86EMUL_UNHANDLEABLE;
+
 	if (!ret && ops->read_write_emulate(vcpu, gpa, val, bytes))
 		return
		X86EMUL_CONTINUE;
@@ -6616,6 +6649,9 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu,
 		return EMULATE_DONE;
 
 	if (r == EMULATION_FAILED) {
+		if (vcpu->run->exit_reason == KVM_EXIT_SPP)
+			return EMULATE_USER_EXIT;
+
 		if (reexecute_instruction(vcpu, cr2, write_fault_to_spt,
 					  emulation_type))
 			return EMULATE_DONE;