From patchwork Wed Apr 15 21:44:14 2020
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 11492063
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Rick Edgecombe
Subject: [PATCH 2/2] KVM: x86/mmu: Avoid an extra memslot lookup in try_async_pf() for L2
Date: Wed, 15 Apr 2020 14:44:14 -0700
Message-Id: <20200415214414.10194-3-sean.j.christopherson@intel.com>
X-Mailer: git-send-email 2.26.0
In-Reply-To: <20200415214414.10194-1-sean.j.christopherson@intel.com>
References: <20200415214414.10194-1-sean.j.christopherson@intel.com>
X-Mailing-List: kvm@vger.kernel.org

Tweak the L2 vs. private memslot handling in try_async_pf() to avoid an
extra memslot lookup and to more precisely single out private memslots,
i.e. defer to the common code to handle nonexistent or invalid memslots,
which makes it clear that L2 doesn't require special handling for those
cases.

Opportunistically squish a multi-line comment into a single-line comment.

Note, the end result, KVM_PFN_NOSLOT, is unchanged.

Cc: Jim Mattson
Cc: Rick Edgecombe
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 6d6cb9416179..06d0150ce53b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4082,19 +4082,16 @@ static bool try_async_pf(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn,
 			 gpa_t cr2_or_gpa, kvm_pfn_t *pfn, bool write,
 			 bool *writable)
 {
-	struct kvm_memory_slot *slot;
+	struct kvm_memory_slot *slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
 	bool async;
 
-	/*
-	 * Don't expose private memslots to L2.
-	 */
-	if (is_guest_mode(vcpu) && !kvm_is_visible_gfn(vcpu->kvm, gfn)) {
+	/* Don't expose private memslots to L2. */
+	if (is_guest_mode(vcpu) && slot && slot->id >= KVM_USER_MEM_SLOTS) {
 		*pfn = KVM_PFN_NOSLOT;
 		*writable = false;
 		return false;
 	}
 
-	slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
 	async = false;
 	*pfn = __gfn_to_pfn_memslot(slot, gfn, false, &async, write, writable);
 	if (!async)
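[Editor's note, not part of the patch: the sketch below shows how
try_async_pf() would read with this hunk applied.  It is reconstructed from
the context and '+' lines above; the tail of the function (the async page
fault handling) is not touched by this patch and is elided here, and the
explanatory comments are additions, not the patch's own.]

static bool try_async_pf(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn,
			 gpa_t cr2_or_gpa, kvm_pfn_t *pfn, bool write,
			 bool *writable)
{
	/*
	 * Do the memslot lookup once, up front; the same slot is used for
	 * both the L2 private-memslot check and the pfn translation below.
	 */
	struct kvm_memory_slot *slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
	bool async;

	/*
	 * Don't expose private memslots to L2.  Private (KVM-internal)
	 * memslots on x86 are assigned IDs at or above KVM_USER_MEM_SLOTS,
	 * so checking the slot ID singles them out without another lookup.
	 * A NULL or invalid slot is deliberately not handled here; it falls
	 * through to the common __gfn_to_pfn_memslot() path, which yields
	 * KVM_PFN_NOSLOT for those cases, the same end result as before.
	 */
	if (is_guest_mode(vcpu) && slot && slot->id >= KVM_USER_MEM_SLOTS) {
		*pfn = KVM_PFN_NOSLOT;
		*writable = false;
		return false;
	}

	async = false;
	*pfn = __gfn_to_pfn_memslot(slot, gfn, false, &async, write, writable);
	if (!async)
		return false;

	/* ... async page fault handling, unchanged by this patch ... */
}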