From patchwork Wed Feb 21 17:47:18 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: KarimAllah Ahmed <karahmed@amazon.de>
X-Patchwork-Id: 10233739
From: KarimAllah Ahmed <karahmed@amazon.de>
To: x86@kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: hpa@zytor.com, jmattson@google.com, mingo@redhat.com,
    pbonzini@redhat.com, rkrcmar@redhat.com, tglx@linutronix.de,
    KarimAllah Ahmed <karahmed@amazon.de>
Subject: [PATCH 07/10] KVM/nVMX: Use kvm_vcpu_map when mapping the posted
 interrupt descriptor table
Date: Wed, 21 Feb 2018 18:47:18 +0100
Message-Id: <1519235241-6500-8-git-send-email-karahmed@amazon.de>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1519235241-6500-1-git-send-email-karahmed@amazon.de>
References: <1519235241-6500-1-git-send-email-karahmed@amazon.de>
X-Mailing-List: kvm@vger.kernel.org

... since using kvm_vcpu_gpa_to_page() and kmap() will only work for
guest memory that has a "struct page".

The life-cycle of the mapping also changes to avoid doing map and unmap
on every single exit (which becomes very expensive once we use
memremap). Now the memory is mapped once and only unmapped when a new
VMCS12 is loaded into the vCPU (or when the vCPU is freed!).

Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>
---
 arch/x86/kvm/vmx.c | 45 +++++++++++++--------------------------------
 1 file changed, 13 insertions(+), 32 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index a700338..7b29419 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -461,7 +461,7 @@ struct nested_vmx {
 	 */
 	struct page *apic_access_page;
 	struct kvm_host_map virtual_apic_map;
-	struct page *pi_desc_page;
+	struct kvm_host_map pi_desc_map;
 	struct kvm_host_map msr_bitmap_map;
 
 	struct pi_desc *pi_desc;
@@ -7666,6 +7666,7 @@ static inline void nested_release_vmcs12(struct vcpu_vmx *vmx)
 		       vmx->nested.cached_vmcs12, 0, VMCS12_SIZE);
 
 	kvm_vcpu_unmap(&vmx->nested.virtual_apic_map);
+	kvm_vcpu_unmap(&vmx->nested.pi_desc_map);
 	kvm_vcpu_unmap(&vmx->nested.msr_bitmap_map);
 
 	vmx->nested.current_vmptr = -1ull;
@@ -7698,14 +7699,9 @@ static void free_nested(struct vcpu_vmx *vmx)
 		vmx->nested.apic_access_page = NULL;
 	}
 	kvm_vcpu_unmap(&vmx->nested.virtual_apic_map);
-	if (vmx->nested.pi_desc_page) {
-		kunmap(vmx->nested.pi_desc_page);
-		kvm_release_page_dirty(vmx->nested.pi_desc_page);
-		vmx->nested.pi_desc_page = NULL;
-		vmx->nested.pi_desc = NULL;
-	}
-
+	kvm_vcpu_unmap(&vmx->nested.pi_desc_map);
 	kvm_vcpu_unmap(&vmx->nested.msr_bitmap_map);
+	vmx->nested.pi_desc = NULL;
 
 	free_loaded_vmcs(&vmx->nested.vmcs02);
 }
@@ -10278,24 +10274,16 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu,
 	}
 
 	if (nested_cpu_has_posted_intr(vmcs12)) {
-		if (vmx->nested.pi_desc_page) { /* shouldn't happen */
-			kunmap(vmx->nested.pi_desc_page);
-			kvm_release_page_dirty(vmx->nested.pi_desc_page);
-			vmx->nested.pi_desc_page = NULL;
+		map = &vmx->nested.pi_desc_map;
+
+		if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->posted_intr_desc_addr), map)) {
+			vmx->nested.pi_desc =
+				(struct pi_desc *)(((void *)map->kaddr) +
+				offset_in_page(vmcs12->posted_intr_desc_addr));
+			vmcs_write64(POSTED_INTR_DESC_ADDR, pfn_to_hpa(map->pfn) +
+				offset_in_page(vmcs12->posted_intr_desc_addr));
 		}
-		page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->posted_intr_desc_addr);
-		if (is_error_page(page))
-			return;
-		vmx->nested.pi_desc_page = page;
-		vmx->nested.pi_desc = kmap(vmx->nested.pi_desc_page);
-		vmx->nested.pi_desc =
-			(struct pi_desc *)((void *)vmx->nested.pi_desc +
-			(unsigned long)(vmcs12->posted_intr_desc_addr &
-			(PAGE_SIZE - 1)));
-		vmcs_write64(POSTED_INTR_DESC_ADDR,
-			page_to_phys(vmx->nested.pi_desc_page) +
-			(unsigned long)(vmcs12->posted_intr_desc_addr &
-			(PAGE_SIZE - 1)));
+
 	}
 	if (nested_vmx_prepare_msr_bitmap(vcpu, vmcs12))
 		vmcs_set_bits(CPU_BASED_VM_EXEC_CONTROL,
@@ -11893,13 +11881,6 @@ static void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
 		kvm_release_page_dirty(vmx->nested.apic_access_page);
 		vmx->nested.apic_access_page = NULL;
 	}
-	if (vmx->nested.pi_desc_page) {
-		kunmap(vmx->nested.pi_desc_page);
-		kvm_release_page_dirty(vmx->nested.pi_desc_page);
-		vmx->nested.pi_desc_page = NULL;
-		vmx->nested.pi_desc = NULL;
-	}
-
 	/*
 	 * We are now running in L2, mmu_notifier will force to reload the
 	 * page's hpa for L2 vmcs. Need to reload it for L1 before entering L1.
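
For reference, the life-cycle this patch adopts boils down to the sketch
below. It is illustrative only and not part of the patch: it assumes the
kvm_vcpu_map()/kvm_vcpu_unmap() semantics and the kaddr/pfn fields of
struct kvm_host_map as introduced earlier in this series, and the helper
names nested_map_pi_desc()/nested_unmap_pi_desc() are hypothetical.

/*
 * Illustrative sketch (not part of the patch): the map-once,
 * unmap-on-release life-cycle. Assumes the kvm_vcpu_map() API from
 * this series, where a truthy return means success; helper names
 * are hypothetical.
 */

/* Called from nested_get_vmcs12_pages(): map once per VMCS12 load. */
static void nested_map_pi_desc(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx,
			       struct vmcs12 *vmcs12)
{
	struct kvm_host_map *map = &vmx->nested.pi_desc_map;
	gpa_t gpa = vmcs12->posted_intr_desc_addr;

	/*
	 * kvm_vcpu_map() works for both ordinary guest memory and memory
	 * without a "struct page" (e.g. memremap()'d regions), unlike the
	 * old kvm_vcpu_gpa_to_page() + kmap() pair.
	 */
	if (kvm_vcpu_map(vcpu, gpa_to_gfn(gpa), map)) {
		/* Host virtual address of the descriptor for KVM's use ... */
		vmx->nested.pi_desc = (struct pi_desc *)((char *)map->kaddr +
							 offset_in_page(gpa));
		/* ... and its host physical address for the VMCS field. */
		vmcs_write64(POSTED_INTR_DESC_ADDR,
			     pfn_to_hpa(map->pfn) + offset_in_page(gpa));
	}
}

/* Called when a new VMCS12 is loaded or the vCPU is freed: unmap once. */
static void nested_unmap_pi_desc(struct vcpu_vmx *vmx)
{
	kvm_vcpu_unmap(&vmx->nested.pi_desc_map);
	vmx->nested.pi_desc = NULL;
}

The point of the sketch is the asymmetry with the old code: nothing is
unmapped on nested_vmx_vmexit() anymore, so the (potentially expensive)
map/unmap cycle is paid once per VMCS12 load rather than on every
emulated VM-exit.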