From patchwork Fri Jul 14 01:30:41 2017
X-Patchwork-Submitter: Wanpeng Li
X-Patchwork-Id: 9839775
From: Wanpeng Li
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Radim Krčmář, Wanpeng Li
Subject: [PATCH v8 3/4] KVM: async_pf: Force a nested vmexit if the injected #PF is async_pf
Date: Thu, 13 Jul 2017 18:30:41 -0700
Message-Id: <1499995842-92976-4-git-send-email-wanpeng.li@hotmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1499995842-92976-1-git-send-email-wanpeng.li@hotmail.com>
References: <1499995842-92976-1-git-send-email-wanpeng.li@hotmail.com>

From: Wanpeng Li

Add a nested_apf field to vcpu->arch.exception to identify an async page
fault, and construct the expected vm-exit information fields. Force a
nested VM exit from nested_vmx_check_exception() if the injected #PF is
an async page fault.

Cc: Paolo Bonzini
Cc: Radim Krčmář
Signed-off-by: Wanpeng Li
---
 arch/x86/include/asm/kvm_emulate.h |  1 +
 arch/x86/include/asm/kvm_host.h    |  2 ++
 arch/x86/kvm/svm.c                 | 16 ++++++++++------
 arch/x86/kvm/vmx.c                 | 17 ++++++++++++++---
 arch/x86/kvm/x86.c                 |  9 ++++++++-
 5 files changed, 35 insertions(+), 10 deletions(-)

diff --git a/arch/x86/include/asm/kvm_emulate.h b/arch/x86/include/asm/kvm_emulate.h
index 722d0e5..fde36f1 100644
--- a/arch/x86/include/asm/kvm_emulate.h
+++ b/arch/x86/include/asm/kvm_emulate.h
@@ -23,6 +23,7 @@ struct x86_exception {
 	u16 error_code;
 	bool nested_page_fault;
 	u64 address; /* cr2 or nested page fault gpa */
+	u8 async_page_fault;
 };
 
 /*
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4f20ee6..5e9ac50 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -550,6 +550,7 @@ struct kvm_vcpu_arch {
 		bool reinject;
 		u8 nr;
 		u32 error_code;
+		u8 nested_apf;
 	} exception;
 
 	struct kvm_queued_interrupt {
@@ -651,6 +652,7 @@ struct kvm_vcpu_arch {
 		u32 id;
 		bool send_user_only;
 		u32 host_apf_reason;
+		unsigned long nested_apf_token;
 	} apf;
 
 	/* OSVW MSRs (AMD only) */
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index fb23497..4d8141e 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -2423,15 +2423,19 @@ static int nested_svm_check_exception(struct vcpu_svm *svm, unsigned nr,
 	if (!is_guest_mode(&svm->vcpu))
 		return 0;
 
+	vmexit = nested_svm_intercept(svm);
+	if (vmexit != NESTED_EXIT_DONE)
+		return 0;
+
 	svm->vmcb->control.exit_code = SVM_EXIT_EXCP_BASE + nr;
 	svm->vmcb->control.exit_code_hi = 0;
 	svm->vmcb->control.exit_info_1 = error_code;
-	svm->vmcb->control.exit_info_2 = svm->vcpu.arch.cr2;
-
-	vmexit = nested_svm_intercept(svm);
-	if (vmexit == NESTED_EXIT_DONE)
-		svm->nested.exit_required = true;
+	if (svm->vcpu.arch.exception.nested_apf)
+		svm->vmcb->control.exit_info_2 = svm->vcpu.arch.apf.nested_apf_token;
+	else
+		svm->vmcb->control.exit_info_2 = svm->vcpu.arch.cr2;
 
+	svm->nested.exit_required = true;
 	return vmexit;
 }
 
@@ -2653,7 +2657,7 @@ static int nested_svm_intercept(struct vcpu_svm *svm)
 		}
 		/* async page fault always cause vmexit */
 		else if ((exit_code == SVM_EXIT_EXCP_BASE + PF_VECTOR) &&
-			 svm->vcpu.arch.apf.host_apf_reason != 0)
+			 svm->vcpu.arch.exception.nested_apf != 0)
 			vmexit = NESTED_EXIT_DONE;
 		break;
 	}
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index c9c46e6..5a3bb1a 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -2422,13 +2422,24 @@ static void skip_emulated_instruction(struct kvm_vcpu *vcpu)
  * KVM wants to inject page-faults which it got to the guest. This function
  * checks whether in a nested guest, we need to inject them to L1 or L2.
  */
-static int nested_vmx_check_exception(struct kvm_vcpu *vcpu, unsigned nr)
+static int nested_vmx_check_exception(struct kvm_vcpu *vcpu)
 {
 	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
+	unsigned int nr = vcpu->arch.exception.nr;
 
-	if (!(vmcs12->exception_bitmap & (1u << nr)))
+	if (!((vmcs12->exception_bitmap & (1u << nr)) ||
+		(nr == PF_VECTOR && vcpu->arch.exception.nested_apf)))
 		return 0;
 
+	if (vcpu->arch.exception.nested_apf) {
+		vmcs_write32(VM_EXIT_INTR_ERROR_CODE, vcpu->arch.exception.error_code);
+		nested_vmx_vmexit(vcpu, EXIT_REASON_EXCEPTION_NMI,
+			PF_VECTOR | INTR_TYPE_HARD_EXCEPTION |
+			INTR_INFO_DELIVER_CODE_MASK | INTR_INFO_VALID_MASK,
+			vcpu->arch.apf.nested_apf_token);
+		return 1;
+	}
+
 	nested_vmx_vmexit(vcpu, EXIT_REASON_EXCEPTION_NMI,
 			  vmcs_read32(VM_EXIT_INTR_INFO),
 			  vmcs_readl(EXIT_QUALIFICATION));
@@ -2445,7 +2456,7 @@ static void vmx_queue_exception(struct kvm_vcpu *vcpu)
 	u32 intr_info = nr | INTR_INFO_VALID_MASK;
 
 	if (!reinject && is_guest_mode(vcpu) &&
-	    nested_vmx_check_exception(vcpu, nr))
+	    nested_vmx_check_exception(vcpu))
 		return;
 
 	if (has_error_code) {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index e149c92..f3f1015 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -450,7 +450,12 @@ EXPORT_SYMBOL_GPL(kvm_complete_insn_gp);
 void kvm_inject_page_fault(struct kvm_vcpu *vcpu, struct x86_exception *fault)
 {
 	++vcpu->stat.pf_guest;
-	vcpu->arch.cr2 = fault->address;
+	vcpu->arch.exception.nested_apf =
+		is_guest_mode(vcpu) && fault->async_page_fault;
+	if (vcpu->arch.exception.nested_apf)
+		vcpu->arch.apf.nested_apf_token = fault->address;
+	else
+		vcpu->arch.cr2 = fault->address;
 	kvm_queue_exception_e(vcpu, PF_VECTOR, fault->error_code);
 }
 EXPORT_SYMBOL_GPL(kvm_inject_page_fault);
@@ -8582,6 +8587,7 @@ void kvm_arch_async_page_not_present(struct kvm_vcpu *vcpu,
 		fault.error_code = 0;
 		fault.nested_page_fault = false;
 		fault.address = work->arch.token;
+		fault.async_page_fault = true;
 		kvm_inject_page_fault(vcpu, &fault);
 	}
 }
@@ -8604,6 +8610,7 @@ void kvm_arch_async_page_present(struct kvm_vcpu *vcpu,
 		fault.error_code = 0;
 		fault.nested_page_fault = false;
 		fault.address = work->arch.token;
+		fault.async_page_fault = true;
 		kvm_inject_page_fault(vcpu, &fault);
 	}
 	vcpu->arch.apf.halted = false;
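
(Not part of the patch: for readers tracing the flow, below is a minimal userspace sketch of the decision the patch implements: an async #PF injected while L2 is running is surfaced to L1 as a nested vmexit carrying the async-PF token instead of a real fault address. struct fake_vcpu, model_inject_page_fault(), model_check_nested_exception() and main() are illustrative stand-ins, not KVM code.)

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Stripped-down stand-ins for vcpu->arch.exception / vcpu->arch.apf;
 * only the fields the patch touches are modeled.
 */
struct fake_vcpu {
	bool guest_mode;                   /* L2 is running (is_guest_mode()) */
	bool l1_intercepts_pf;             /* #PF set in L1's exception bitmap */
	struct {
		bool nested_apf;           /* injected #PF is an async page fault */
	} exception;
	struct {
		uint64_t nested_apf_token; /* token handed to L1 */
	} apf;
	uint64_t cr2;
};

/* Mirrors the kvm_inject_page_fault() change: record a token instead of CR2. */
static void model_inject_page_fault(struct fake_vcpu *v, uint64_t addr,
				    bool async_page_fault)
{
	v->exception.nested_apf = v->guest_mode && async_page_fault;
	if (v->exception.nested_apf)
		v->apf.nested_apf_token = addr;   /* not a real fault address */
	else
		v->cr2 = addr;                    /* ordinary #PF writes CR2 */
}

/* Mirrors nested_vmx_check_exception(): nonzero = force a vmexit to L1. */
static int model_check_nested_exception(const struct fake_vcpu *v)
{
	if (!v->l1_intercepts_pf && !v->exception.nested_apf)
		return 0;                         /* deliver the #PF to L2 */
	return 1;                                 /* async #PF always goes to L1 */
}

int main(void)
{
	struct fake_vcpu v = { .guest_mode = true, .l1_intercepts_pf = false };

	model_inject_page_fault(&v, 0x1234, true); /* async #PF while L2 runs */
	printf("vmexit to L1: %d, token for L1: %#llx\n",
	       model_check_nested_exception(&v),
	       (unsigned long long)v.apf.nested_apf_token);
	return 0;
}

The point of the sketch is only the design choice: for an async page fault L1 sees the token (in exit_info_2 / the exit qualification), while CR2 is left untouched, whereas an ordinary #PF still goes through CR2 and the existing exception-bitmap check.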