From patchwork Thu Aug 3 15:36:02 2017
From: Radim Krčmář <rkrcmar@redhat.com>
To: David Hildenbrand
Cc: kvm@vger.kernel.org, Paolo Bonzini
Subject: Re: [PATCH v1 1/2] KVM: nVMX: get rid of nested_get_page()
Date: Thu, 3 Aug 2017 17:36:02 +0200
Message-ID: <20170803153601.GH32403@flask>
References: <20170803140907.23681-1-david@redhat.com>
 <20170803140907.23681-2-david@redhat.com>
In-Reply-To: <20170803140907.23681-2-david@redhat.com>
X-Mailing-List: kvm@vger.kernel.org

2017-08-03 16:09+0200, David Hildenbrand:
> nested_get_page() just sounds confusing. All we want is a page from G1.
> This is even unrelated to nested.
>
> Let's introduce kvm_vcpu_gpa_to_page() so we don't get too lengthy
> lines.
>
> Signed-off-by: David Hildenbrand
> ---

I like the cleanup, but there is a subtle change in behavior that makes
me wary:

> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> @@ -9535,15 +9528,15 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu,
>  		 */
>  		if (vmx->nested.apic_access_page) /* shouldn't happen */
>  			nested_release_page(vmx->nested.apic_access_page);
> -		vmx->nested.apic_access_page =
> -			nested_get_page(vcpu, vmcs12->apic_access_addr);
> +		page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->apic_access_addr);

If what "shouldn't happen" happened and kvm_vcpu_gpa_to_page() then
failed, we'd be calling put_page() twice on the same page.  I think the
situation currently can happen if VM entry fails after this point.
Assigning 'vmx->nested.apic_access_page = NULL' when releasing the page
sounds safer.

Unless I'm reading something wrong, the "shouldn't happen" really
shouldn't happen if we did something like this:

> 		/*
> 		 * If translation failed, no matter: This feature asks
> 		 * to exit when accessing the given address, and if it
> 		 * can never be accessed, this feature won't do
> 		 * anything anyway.
> 		 */
> -		if (vmx->nested.apic_access_page) {
> +		if (!is_error_page(page)) {
> +			vmx->nested.apic_access_page = page;
>  			hpa = page_to_phys(vmx->nested.apic_access_page);
>  			vmcs_write64(APIC_ACCESS_ADDR, hpa);
>  		} else {
> @@ -9560,8 +9553,7 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu,
>  	if (nested_cpu_has(vmcs12, CPU_BASED_TPR_SHADOW)) {
>  		if (vmx->nested.virtual_apic_page) /* shouldn't happen */
>  			nested_release_page(vmx->nested.virtual_apic_page);

Ditto, thanks.

> -		vmx->nested.virtual_apic_page =
> -			nested_get_page(vcpu, vmcs12->virtual_apic_page_addr);
> +		page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->virtual_apic_page_addr);
>
>  		/*
>  		 * If translation failed, VM entry will fail because

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index e34373838b31..d26e6693f748 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -10462,8 +10462,6 @@ static int enter_vmx_non_root_mode(struct kvm_vcpu *vcpu, bool from_vmentry)
 		return 1;
 	}
 
-	nested_get_vmcs12_pages(vcpu, vmcs12);
-
 	msr_entry_idx = nested_vmx_load_msr(vcpu,
 					    vmcs12->vm_entry_msr_load_addr,
 					    vmcs12->vm_entry_msr_load_count);
@@ -10475,6 +10473,8 @@ static int enter_vmx_non_root_mode(struct kvm_vcpu *vcpu, bool from_vmentry)
 		return 1;
 	}
 
+	nested_get_vmcs12_pages(vcpu, vmcs12);
+
 	/*
 	 * Note no nested_vmx_succeed or nested_vmx_fail here. At this point
 	 * we are no longer running L1, and VMLAUNCH/VMRESUME has not yet