From patchwork Fri Dec 1 10:45:35 2023
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 13475641
From: Paul Durrant
To: David Woodhouse, Paul Durrant, Sean Christopherson, Paolo Bonzini,
    Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
    x86@kernel.org, "H. Peter Anvin", kvm@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH 1/2] KVM: xen: separate initialization of shared_info cache and content
Date: Fri, 1 Dec 2023 10:45:35 +0000
Message-Id: <20231201104536.947-2-paul@xen.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20231201104536.947-1-paul@xen.org>
References: <20231201104536.947-1-paul@xen.org>
X-Mailing-List: kvm@vger.kernel.org

From: Paul Durrant

The shared_info cache should only need to be set up once by the VMM, but
the content of the shared_info page may need to be changed if the mode of
the guest changes from 32-bit to 64-bit or vice versa. This
re-initialization of the content will be handled in a subsequent patch.

Signed-off-by: Paul Durrant
---
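As an illustrative aside (not part of the patch): the shared_info cache
referred to above is the one a VMM activates with the KVM_XEN_HVM_SET_ATTR
ioctl. A minimal userspace sketch, assuming vm_fd is an already-created KVM
VM file descriptor with the Xen HVM config enabled:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Sketch only: point the shared_info pfn cache at a guest frame. */
static int set_shared_info_gfn(int vm_fd, __u64 gfn)
{
	struct kvm_xen_hvm_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.type = KVM_XEN_ATTR_TYPE_SHARED_INFO;
	attr.u.shared_info.gfn = gfn;	/* KVM_XEN_INVALID_GFN deactivates */

	return ioctl(vm_fd, KVM_XEN_HVM_SET_ATTR, &attr);
}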
 arch/x86/kvm/xen.c | 70 ++++++++++++++++++++++++++-------------------
 1 file changed, 40 insertions(+), 30 deletions(-)

diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index cfd5051e0800..7bead3f65e55 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -34,7 +34,7 @@ static bool kvm_xen_hcall_evtchn_send(struct kvm_vcpu *vcpu, u64 param, u64 *r);
 
 DEFINE_STATIC_KEY_DEFERRED_FALSE(kvm_xen_enabled, HZ);
 
-static int kvm_xen_shared_info_init(struct kvm *kvm, u64 addr, bool addr_is_gfn)
+static int kvm_xen_shared_info_init(struct kvm *kvm)
 {
 	struct gfn_to_pfn_cache *gpc = &kvm->arch.xen.shinfo_cache;
 	struct pvclock_wall_clock *wc;
@@ -44,34 +44,22 @@ static int kvm_xen_shared_info_init(struct kvm *kvm, u64 addr, bool addr_is_gfn)
 	int ret = 0;
 	int idx = srcu_read_lock(&kvm->srcu);
 
-	if ((addr_is_gfn && addr == KVM_XEN_INVALID_GFN) ||
-	    (!addr_is_gfn && addr == 0)) {
-		kvm_gpc_deactivate(gpc);
-		goto out;
-	}
+	read_lock_irq(&gpc->lock);
+	while (!kvm_gpc_check(gpc, PAGE_SIZE)) {
+		read_unlock_irq(&gpc->lock);
 
-	do {
-		if (addr_is_gfn)
-			ret = kvm_gpc_activate(gpc, gfn_to_gpa(addr), PAGE_SIZE);
-		else
-			ret = kvm_gpc_activate_hva(gpc, addr, PAGE_SIZE);
+		ret = kvm_gpc_refresh(gpc, PAGE_SIZE);
 		if (ret)
 			goto out;
 
-		/*
-		 * This code mirrors kvm_write_wall_clock() except that it writes
-		 * directly through the pfn cache and doesn't mark the page dirty.
-		 */
-		wall_nsec = kvm_get_wall_clock_epoch(kvm);
-
-		/* It could be invalid again already, so we need to check */
 		read_lock_irq(&gpc->lock);
+	}
 
-		if (gpc->valid)
-			break;
-
-		read_unlock_irq(&gpc->lock);
-	} while (1);
+	/*
+	 * This code mirrors kvm_write_wall_clock() except that it writes
+	 * directly through the pfn cache and doesn't mark the page dirty.
+	 */
+	wall_nsec = ktime_get_real_ns() - get_kvmclock_ns(kvm);
 
 	/* Paranoia checks on the 32-bit struct layout */
 	BUILD_BUG_ON(offsetof(struct compat_shared_info, wc) != 0x900);
@@ -642,17 +630,39 @@ int kvm_xen_hvm_set_attr(struct kvm *kvm, struct kvm_xen_hvm_attr *data)
 		break;
 
 	case KVM_XEN_ATTR_TYPE_SHARED_INFO:
-		mutex_lock(&kvm->arch.xen.xen_lock);
-		r = kvm_xen_shared_info_init(kvm, data->u.shared_info.gfn, true);
-		mutex_unlock(&kvm->arch.xen.xen_lock);
-		break;
+	case KVM_XEN_ATTR_TYPE_SHARED_INFO_HVA: {
+		int idx;
 
-	case KVM_XEN_ATTR_TYPE_SHARED_INFO_HVA:
 		mutex_lock(&kvm->arch.xen.xen_lock);
-		r = kvm_xen_shared_info_init(kvm, data->u.shared_info.hva, false);
+
+		idx = srcu_read_lock(&kvm->srcu);
+		if (data->type == KVM_XEN_ATTR_TYPE_SHARED_INFO) {
+			if (data->u.shared_info.gfn == KVM_XEN_INVALID_GFN) {
+				kvm_gpc_deactivate(&kvm->arch.xen.shinfo_cache);
+				r = 0;
+			} else {
+				r = kvm_gpc_activate(&kvm->arch.xen.shinfo_cache,
+						     gfn_to_gpa(data->u.shared_info.gfn),
+						     PAGE_SIZE);
+			}
+		} else {
+			if (data->u.shared_info.hva == 0) {
+				kvm_gpc_deactivate(&kvm->arch.xen.shinfo_cache);
+				r = 0;
+			} else {
+				r = kvm_gpc_activate_hva(&kvm->arch.xen.shinfo_cache,
+							 data->u.shared_info.hva,
+							 PAGE_SIZE);
+			}
+		}
+		srcu_read_unlock(&kvm->srcu, idx);
+
+		if (!r && kvm->arch.xen.shinfo_cache.active)
+			r = kvm_xen_shared_info_init(kvm);
+
 		mutex_unlock(&kvm->arch.xen.xen_lock);
 		break;
-
+	}
 	case KVM_XEN_ATTR_TYPE_UPCALL_VECTOR:
 		if (data->u.vector && data->u.vector < 0x10)
 			r = -EINVAL;
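A companion sketch, equally illustrative: the KVM_XEN_ATTR_TYPE_SHARED_INFO_HVA
attribute handled above takes a host virtual address rather than a GFN, and an
hva of 0 deactivates the cache just as KVM_XEN_INVALID_GFN does for the
GFN-based attribute (set_shared_info_hva is a hypothetical helper):

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Sketch only: map the shared_info page by host virtual address. */
static int set_shared_info_hva(int vm_fd, void *page)
{
	struct kvm_xen_hvm_attr attr = {
		.type = KVM_XEN_ATTR_TYPE_SHARED_INFO_HVA,
		/* an hva of 0 deactivates, mirroring KVM_XEN_INVALID_GFN */
		.u.shared_info.hva = (__u64)(uintptr_t)page,
	};

	return ioctl(vm_fd, KVM_XEN_HVM_SET_ATTR, &attr);
}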
From patchwork Fri Dec 1 10:45:36 2023
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 13475640
From: Paul Durrant
To: David Woodhouse, Paul Durrant, Sean Christopherson, Paolo Bonzini,
    Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
    x86@kernel.org, "H. Peter Anvin", kvm@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH 2/2] KVM: xen: (re-)initialize shared_info if guest (32/64-bit) mode is set
Date: Fri, 1 Dec 2023 10:45:36 +0000
Message-Id: <20231201104536.947-3-paul@xen.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20231201104536.947-1-paul@xen.org>
References: <20231201104536.947-1-paul@xen.org>
X-Mailing-List: kvm@vger.kernel.org

From: Paul Durrant

If the shared_info PFN cache has already been initialized then the content
of the shared_info page needs to be (re-)initialized whenever the guest
mode is set; this is no longer done when the PFN cache is activated.
Setting the guest mode is either done explicitly by the VMM via the
KVM_XEN_ATTR_TYPE_LONG_MODE attribute, or implicitly when the guest writes
the MSR to set up the hypercall page.

Signed-off-by: Paul Durrant
---
 arch/x86/kvm/xen.c | 20 +++++++++++++++-----
 1 file changed, 15 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index 7bead3f65e55..bfc8f6698cbc 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -624,8 +624,15 @@ int kvm_xen_hvm_set_attr(struct kvm *kvm, struct kvm_xen_hvm_attr *data)
 		} else {
 			mutex_lock(&kvm->arch.xen.xen_lock);
 			kvm->arch.xen.long_mode = !!data->u.long_mode;
+
+			/*
+			 * If shared_info has already been initialized
+			 * then re-initialize it with the new width.
+			 */
+			r = kvm->arch.xen.shinfo_cache.active ?
+				kvm_xen_shared_info_init(kvm) : 0;
+
 			mutex_unlock(&kvm->arch.xen.xen_lock);
-			r = 0;
 		}
 		break;
 
@@ -657,9 +664,6 @@ int kvm_xen_hvm_set_attr(struct kvm *kvm, struct kvm_xen_hvm_attr *data)
 		}
 		srcu_read_unlock(&kvm->srcu, idx);
 
-		if (!r && kvm->arch.xen.shinfo_cache.active)
-			r = kvm_xen_shared_info_init(kvm);
-
 		mutex_unlock(&kvm->arch.xen.xen_lock);
 		break;
 	}
@@ -1144,7 +1148,13 @@ int kvm_xen_write_hypercall_page(struct kvm_vcpu *vcpu, u64 data)
 	bool lm = is_long_mode(vcpu);
 
 	/* Latch long_mode for shared_info pages etc. */
-	vcpu->kvm->arch.xen.long_mode = lm;
+	kvm->arch.xen.long_mode = lm;
+
+	if (kvm->arch.xen.shinfo_cache.active &&
+	    kvm_xen_shared_info_init(kvm)) {
+		mutex_unlock(&kvm->arch.xen.xen_lock);
+		return 1;
+	}
 
 	/*
 	 * If Xen hypercall intercept is enabled, fill the hypercall
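To make the explicit path in the commit message concrete, one more hedged
userspace sketch: after this patch, flipping the long-mode attribute on a VM
whose shared_info cache is already active also rewrites the wall clock fields
at the offsets for the new guest width (set_xen_long_mode is a hypothetical
helper):

#include <stdbool.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Sketch only: latch the guest ABI width explicitly from the VMM. */
static int set_xen_long_mode(int vm_fd, bool lm)
{
	struct kvm_xen_hvm_attr attr = {
		.type = KVM_XEN_ATTR_TYPE_LONG_MODE,
		/* re-initializes shared_info content if the cache is active */
		.u.long_mode = lm,
	};

	return ioctl(vm_fd, KVM_XEN_HVM_SET_ATTR, &attr);
}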