From patchwork Thu Sep 14 08:49:39 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paul Durrant X-Patchwork-Id: 13384879 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 70A43EDE99C for ; Thu, 14 Sep 2023 09:07:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236908AbjINJHX (ORCPT ); Thu, 14 Sep 2023 05:07:23 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39814 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236840AbjINJHP (ORCPT ); Thu, 14 Sep 2023 05:07:15 -0400 X-Greylist: delayed 1013 seconds by postgrey-1.37 at lindbergh.monkeyblade.net; Thu, 14 Sep 2023 02:07:10 PDT Received: from mail.xenproject.org (mail.xenproject.org [104.130.215.37]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 05C7A1FEC; Thu, 14 Sep 2023 02:07:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References: In-Reply-To:Message-Id:Date:Subject:Cc:To:From; bh=Vg4nybFPifYv4mCdEydLFQh8wTvl7sWSN8mffLcAAaU=; b=Q/8QIj5be1SceDQi1cPfTcBAyx 3H4yVzw6rXOO6n4wDzHHnH+RRR7RzZYd7hXqLqssTdhW6JwDEv9T0pqHP6Y4znpK7OpohiOW/41Db L+30vbJxYAJkyusxE4GvnXaNEQhbbYZNEcxWd2WejYZermQUKYIlbVLVzol+EhF0Rx9I=; Received: from xenbits.xenproject.org ([104.239.192.120]) by mail.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qgi3F-0001iU-DL; Thu, 14 Sep 2023 08:50:17 +0000 Received: from ec2-63-33-11-17.eu-west-1.compute.amazonaws.com ([63.33.11.17] helo=REM-PW02S00X.ant.amazon.com) by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92) (envelope-from ) id 1qgi3F-0002T9-58; Thu, 14 Sep 2023 08:50:17 +0000 From: Paul Durrant To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org Cc: Paul Durrant , Paolo Bonzini , David Woodhouse Subject: [PATCH 1/8] KVM: pfncache: add a map helper function Date: Thu, 14 Sep 2023 08:49:39 +0000 Message-Id: <20230914084946.200043-2-paul@xen.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230914084946.200043-1-paul@xen.org> References: <20230914084946.200043-1-paul@xen.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Paul Durrant We have an unmap helper but mapping is open-coded. Arguably this is fine because mapping is done in only one place, hva_to_pfn_retry(), but adding the helper does make that function more readable. Signed-off-by: Paul Durrant Reviewed-by: David Woodhouse --- Cc: Paolo Bonzini Cc: David Woodhouse --- virt/kvm/pfncache.c | 43 +++++++++++++++++++++++++------------------ 1 file changed, 25 insertions(+), 18 deletions(-) diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c index 2d6aba677830..0f36acdf577f 100644 --- a/virt/kvm/pfncache.c +++ b/virt/kvm/pfncache.c @@ -96,17 +96,28 @@ bool kvm_gpc_check(struct gfn_to_pfn_cache *gpc, unsigned long len) } EXPORT_SYMBOL_GPL(kvm_gpc_check); -static void gpc_unmap_khva(kvm_pfn_t pfn, void *khva) +static void *gpc_map(kvm_pfn_t pfn) +{ + if (pfn_valid(pfn)) + return kmap(pfn_to_page(pfn)); +#ifdef CONFIG_HAS_IOMEM + else + return memremap(pfn_to_hpa(pfn), PAGE_SIZE, MEMREMAP_WB); +#endif +} + +static void gpc_unmap(kvm_pfn_t pfn, void *khva) { /* Unmap the old pfn/page if it was mapped before. 
*/ - if (!is_error_noslot_pfn(pfn) && khva) { - if (pfn_valid(pfn)) - kunmap(pfn_to_page(pfn)); + if (is_error_noslot_pfn(pfn) || !khva) + return; + + if (pfn_valid(pfn)) + kunmap(pfn_to_page(pfn)); #ifdef CONFIG_HAS_IOMEM - else - memunmap(khva); + else + memunmap(khva); #endif - } } static inline bool mmu_notifier_retry_cache(struct kvm *kvm, unsigned long mmu_seq) @@ -175,7 +186,7 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc) * the existing mapping and didn't create a new one. */ if (new_khva != old_khva) - gpc_unmap_khva(new_pfn, new_khva); + gpc_unmap(new_pfn, new_khva); kvm_release_pfn_clean(new_pfn); @@ -193,15 +204,11 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc) * too must be done outside of gpc->lock! */ if (gpc->usage & KVM_HOST_USES_PFN) { - if (new_pfn == gpc->pfn) { + if (new_pfn == gpc->pfn) new_khva = old_khva; - } else if (pfn_valid(new_pfn)) { - new_khva = kmap(pfn_to_page(new_pfn)); -#ifdef CONFIG_HAS_IOMEM - } else { - new_khva = memremap(pfn_to_hpa(new_pfn), PAGE_SIZE, MEMREMAP_WB); -#endif - } + else + new_khva = gpc_map(new_pfn); + if (!new_khva) { kvm_release_pfn_clean(new_pfn); goto out_error; @@ -326,7 +333,7 @@ static int __kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa, mutex_unlock(&gpc->refresh_lock); if (unmap_old) - gpc_unmap_khva(old_pfn, old_khva); + gpc_unmap(old_pfn, old_khva); return ret; } @@ -412,7 +419,7 @@ void kvm_gpc_deactivate(struct gfn_to_pfn_cache *gpc) list_del(&gpc->list); spin_unlock(&kvm->gpc_lock); - gpc_unmap_khva(old_pfn, old_khva); + gpc_unmap(old_pfn, old_khva); } } EXPORT_SYMBOL_GPL(kvm_gpc_deactivate); From patchwork Thu Sep 14 08:49:40 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paul Durrant X-Patchwork-Id: 13384887 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 76098EDE99D for ; Thu, 14 Sep 2023 09:08:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237039AbjINJIB (ORCPT ); Thu, 14 Sep 2023 05:08:01 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34906 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237286AbjINJHi (ORCPT ); Thu, 14 Sep 2023 05:07:38 -0400 Received: from mail.xenproject.org (mail.xenproject.org [104.130.215.37]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 491EE2103; Thu, 14 Sep 2023 02:07:32 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References: In-Reply-To:Message-Id:Date:Subject:Cc:To:From; bh=9j0AmZ/7qm/2BMWnwfaTspyGXATj4NrYu8AP7nkD548=; b=RPnnteFvJXgcU4Lg7jAK3PqeGu tQ1BrctcBMnaNUoiZVl53xU5GWt9IFi3FTxGuEMTsnr5y2vt/OH1crR0nQEeonNCaVRvBnD1ALw0F pthdP1+Q1o1DAPlPIH/juuIZ5X9F0y5PW6o40YQuwqN6qXBhUPA4Fb+UYDGXwVGrQnj4=; Received: from xenbits.xenproject.org ([104.239.192.120]) by mail.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qgi3H-0001iY-00; Thu, 14 Sep 2023 08:50:19 +0000 Received: from ec2-63-33-11-17.eu-west-1.compute.amazonaws.com ([63.33.11.17] helo=REM-PW02S00X.ant.amazon.com) by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92) (envelope-from ) id 1qgi3G-0002T9-MT; Thu, 14 Sep 2023 08:50:18 +0000 From: Paul Durrant To: kvm@vger.kernel.org, 
linux-kernel@vger.kernel.org Cc: Paul Durrant , Sean Christopherson , Paolo Bonzini , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , "H. Peter Anvin" , David Woodhouse , x86@kernel.org Subject: [PATCH 2/8] KVM: pfncache: add a mark-dirty helper Date: Thu, 14 Sep 2023 08:49:40 +0000 Message-Id: <20230914084946.200043-3-paul@xen.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230914084946.200043-1-paul@xen.org> References: <20230914084946.200043-1-paul@xen.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Paul Durrant At the moment pages are marked dirty by open-coded calls to mark_page_dirty_in_slot(), directly dereferencing the gpa and memslot from the cache. After a subsequent patch these may not always be set, so add a helper now so that callers are protected from needing to know about this detail. Signed-off-by: Paul Durrant Reviewed-by: David Woodhouse --- Cc: Sean Christopherson Cc: Paolo Bonzini Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Borislav Petkov Cc: Dave Hansen Cc: "H. Peter Anvin" Cc: David Woodhouse Cc: x86@kernel.org --- arch/x86/kvm/x86.c | 2 +- arch/x86/kvm/xen.c | 13 ++++++------- include/linux/kvm_host.h | 7 +++++++ virt/kvm/pfncache.c | 6 ++++++ 4 files changed, 20 insertions(+), 8 deletions(-) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 6c9c81e82e65..d669a8801265 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -3137,7 +3137,7 @@ static void kvm_setup_guest_pvclock(struct kvm_vcpu *v, guest_hv_clock->version = ++vcpu->hv_clock.version; - mark_page_dirty_in_slot(v->kvm, gpc->memslot, gpc->gpa >> PAGE_SHIFT); + kvm_gpc_mark_dirty(gpc); read_unlock_irqrestore(&gpc->lock, flags); trace_kvm_pvclock_update(v->vcpu_id, &vcpu->hv_clock); diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c index 40edf4d1974c..33fddd29824b 100644 --- a/arch/x86/kvm/xen.c +++ b/arch/x86/kvm/xen.c @@ -430,14 +430,13 @@ static void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, bool atomic) smp_wmb(); } - if (user_len2) + if (user_len2) { + kvm_gpc_mark_dirty(gpc2); read_unlock(&gpc2->lock); + } + kvm_gpc_mark_dirty(gpc1); read_unlock_irqrestore(&gpc1->lock, flags); - - mark_page_dirty_in_slot(v->kvm, gpc1->memslot, gpc1->gpa >> PAGE_SHIFT); - if (user_len2) - mark_page_dirty_in_slot(v->kvm, gpc2->memslot, gpc2->gpa >> PAGE_SHIFT); } void kvm_xen_update_runstate(struct kvm_vcpu *v, int state) @@ -543,13 +542,13 @@ void kvm_xen_inject_pending_events(struct kvm_vcpu *v) : "0" (evtchn_pending_sel32)); WRITE_ONCE(vi->evtchn_upcall_pending, 1); } + + kvm_gpc_mark_dirty(gpc); read_unlock_irqrestore(&gpc->lock, flags); /* For the per-vCPU lapic vector, deliver it as MSI. */ if (v->arch.xen.upcall_vector) kvm_xen_inject_vcpu_vector(v); - - mark_page_dirty_in_slot(v->kvm, gpc->memslot, gpc->gpa >> PAGE_SHIFT); } int __kvm_xen_has_interrupt(struct kvm_vcpu *v) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index fb6c6109fdca..c71e8fbccaaf 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -1367,6 +1367,13 @@ int kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, unsigned long len); */ void kvm_gpc_deactivate(struct gfn_to_pfn_cache *gpc); +/** + * kvm_gpc_mark_dirty - mark a cached page as dirty. + * + * @gpc: struct gfn_to_pfn_cache object.
+ */ +void kvm_gpc_mark_dirty(struct gfn_to_pfn_cache *gpc); + void kvm_sigset_activate(struct kvm_vcpu *vcpu); void kvm_sigset_deactivate(struct kvm_vcpu *vcpu); diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c index 0f36acdf577f..b68ed7fa56a2 100644 --- a/virt/kvm/pfncache.c +++ b/virt/kvm/pfncache.c @@ -386,6 +386,12 @@ int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa, unsigned long len) } EXPORT_SYMBOL_GPL(kvm_gpc_activate); +void kvm_gpc_mark_dirty(struct gfn_to_pfn_cache *gpc) +{ + mark_page_dirty_in_slot(gpc->kvm, gpc->memslot, gpc->gpa >> PAGE_SHIFT); +} +EXPORT_SYMBOL_GPL(kvm_gpc_mark_dirty); + void kvm_gpc_deactivate(struct gfn_to_pfn_cache *gpc) { struct kvm *kvm = gpc->kvm; From patchwork Thu Sep 14 08:49:41 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paul Durrant X-Patchwork-Id: 13384886 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B74C2EDE99C for ; Thu, 14 Sep 2023 09:07:49 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236647AbjINJHw (ORCPT ); Thu, 14 Sep 2023 05:07:52 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54140 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237238AbjINJHe (ORCPT ); Thu, 14 Sep 2023 05:07:34 -0400 Received: from mail.xenproject.org (mail.xenproject.org [104.130.215.37]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B46551FEC; Thu, 14 Sep 2023 02:07:30 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References: In-Reply-To:Message-Id:Date:Subject:Cc:To:From; bh=Ho3cj7zg6ZIpkOjl4ECt/WO9up0/ZCgal7wWGoWnZjk=; b=47u13g3Z4yS/4L5iSdSA50vS5K 7JLS93qQz0fUled8RoXqc3w38/kvoqTsOVllL9NcUN8nA/TzOmpZVgT5GofQM8cBdAcAlxIgyVYbO XV70HtNHsPwYhCpj6O70BAF3PnQc6XPjEpl27KWI64P8d4kCnQmNaJwRd0Zsb9tqjmFQ=; Received: from xenbits.xenproject.org ([104.239.192.120]) by mail.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qgi3I-0001id-F6; Thu, 14 Sep 2023 08:50:20 +0000 Received: from ec2-63-33-11-17.eu-west-1.compute.amazonaws.com ([63.33.11.17] helo=REM-PW02S00X.ant.amazon.com) by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92) (envelope-from ) id 1qgi3I-0002T9-7a; Thu, 14 Sep 2023 08:50:20 +0000 From: Paul Durrant To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org Cc: Paul Durrant , David Woodhouse , Sean Christopherson , Paolo Bonzini , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , "H. Peter Anvin" , x86@kernel.org Subject: [PATCH 3/8] KVM: pfncache: add a helper to get the gpa Date: Thu, 14 Sep 2023 08:49:41 +0000 Message-Id: <20230914084946.200043-4-paul@xen.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230914084946.200043-1-paul@xen.org> References: <20230914084946.200043-1-paul@xen.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Paul Durrant A subsequent patch will rename this field since it will become overloaded. To avoid churn in places that currently retrieve the gpa, add a helper for that purpose now. 
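To illustrate the conversion (a minimal sketch; both forms are taken from the hunks below), call sites move from direct field access to the helper:

	/* Before: the call site reaches into the cache directly. */
	data->u.gpa = vcpu->arch.xen.vcpu_info_cache.gpa;

	/* After: the helper hides the field, so it can be renamed later. */
	data->u.gpa = kvm_gpc_gpa(&vcpu->arch.xen.vcpu_info_cache);
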
Signed-off-by: Paul Durrant Reviewed-by: David Woodhouse --- Cc: David Woodhouse Cc: Sean Christopherson Cc: Paolo Bonzini Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Borislav Petkov Cc: Dave Hansen Cc: "H. Peter Anvin" Cc: x86@kernel.org --- arch/x86/kvm/xen.c | 15 ++++++++------- include/linux/kvm_host.h | 7 +++++++ virt/kvm/pfncache.c | 6 ++++++ 3 files changed, 21 insertions(+), 7 deletions(-) diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c index 33fddd29824b..8e6fdcd7bb6e 100644 --- a/arch/x86/kvm/xen.c +++ b/arch/x86/kvm/xen.c @@ -261,8 +261,8 @@ static void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, bool atomic) * alignment (and the 32-bit ABI doesn't align the 64-bit integers * anyway, even if the overall struct had been 64-bit aligned). */ - if ((gpc1->gpa & ~PAGE_MASK) + user_len >= PAGE_SIZE) { - user_len1 = PAGE_SIZE - (gpc1->gpa & ~PAGE_MASK); + if ((kvm_gpc_gpa(gpc1) & ~PAGE_MASK) + user_len >= PAGE_SIZE) { + user_len1 = PAGE_SIZE - (kvm_gpc_gpa(gpc1) & ~PAGE_MASK); user_len2 = user_len - user_len1; } else { user_len1 = user_len; @@ -343,7 +343,7 @@ static void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, bool atomic) * to the second page now because the guest changed to * 64-bit mode, the second GPC won't have been set up. */ - if (kvm_gpc_activate(gpc2, gpc1->gpa + user_len1, + if (kvm_gpc_activate(gpc2, kvm_gpc_gpa(gpc1) + user_len1, user_len2)) return; @@ -677,7 +677,8 @@ int kvm_xen_hvm_get_attr(struct kvm *kvm, struct kvm_xen_hvm_attr *data) case KVM_XEN_ATTR_TYPE_SHARED_INFO: if (kvm->arch.xen.shinfo_cache.active) - data->u.shared_info.gfn = gpa_to_gfn(kvm->arch.xen.shinfo_cache.gpa); + data->u.shared_info.gfn = + gpa_to_gfn(kvm_gpc_gpa(&kvm->arch.xen.shinfo_cache)); else data->u.shared_info.gfn = KVM_XEN_INVALID_GFN; r = 0; @@ -955,7 +956,7 @@ int kvm_xen_vcpu_get_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data) switch (data->type) { case KVM_XEN_VCPU_ATTR_TYPE_VCPU_INFO: if (vcpu->arch.xen.vcpu_info_cache.active) - data->u.gpa = vcpu->arch.xen.vcpu_info_cache.gpa; + data->u.gpa = kvm_gpc_gpa(&vcpu->arch.xen.vcpu_info_cache); else data->u.gpa = KVM_XEN_INVALID_GPA; r = 0; @@ -963,7 +964,7 @@ int kvm_xen_vcpu_get_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data) case KVM_XEN_VCPU_ATTR_TYPE_VCPU_TIME_INFO: if (vcpu->arch.xen.vcpu_time_info_cache.active) - data->u.gpa = vcpu->arch.xen.vcpu_time_info_cache.gpa; + data->u.gpa = kvm_gpc_gpa(&vcpu->arch.xen.vcpu_time_info_cache); else data->u.gpa = KVM_XEN_INVALID_GPA; r = 0; @@ -975,7 +976,7 @@ int kvm_xen_vcpu_get_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data) break; } if (vcpu->arch.xen.runstate_cache.active) { - data->u.gpa = vcpu->arch.xen.runstate_cache.gpa; + data->u.gpa = kvm_gpc_gpa(&vcpu->arch.xen.runstate_cache); r = 0; } break; diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index c71e8fbccaaf..4d8027fe9928 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -1374,6 +1374,13 @@ void kvm_gpc_deactivate(struct gfn_to_pfn_cache *gpc); */ void kvm_gpc_mark_dirty(struct gfn_to_pfn_cache *gpc); +/** + * kvm_gpc_gpa - retrieve the guest physical address of a cached mapping + * + * @gpc: struct gfn_to_pfn_cache object. 
+ */ +gpa_t kvm_gpc_gpa(struct gfn_to_pfn_cache *gpc); + void kvm_sigset_activate(struct kvm_vcpu *vcpu); void kvm_sigset_deactivate(struct kvm_vcpu *vcpu); diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c index b68ed7fa56a2..17afbb464a70 100644 --- a/virt/kvm/pfncache.c +++ b/virt/kvm/pfncache.c @@ -386,6 +386,12 @@ int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa, unsigned long len) } EXPORT_SYMBOL_GPL(kvm_gpc_activate); +gpa_t kvm_gpc_gpa(struct gfn_to_pfn_cache *gpc) +{ + return gpc->gpa; +} +EXPORT_SYMBOL_GPL(kvm_gpc_gpa); + void kvm_gpc_mark_dirty(struct gfn_to_pfn_cache *gpc) { mark_page_dirty_in_slot(gpc->kvm, gpc->memslot, gpc->gpa >> PAGE_SHIFT); From patchwork Thu Sep 14 08:49:42 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paul Durrant X-Patchwork-Id: 13384881 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CE73BEDE99C for ; Thu, 14 Sep 2023 09:07:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237107AbjINJH0 (ORCPT ); Thu, 14 Sep 2023 05:07:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51154 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236831AbjINJHW (ORCPT ); Thu, 14 Sep 2023 05:07:22 -0400 Received: from mail.xenproject.org (mail.xenproject.org [104.130.215.37]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4EB4A1BF8; Thu, 14 Sep 2023 02:07:18 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References: In-Reply-To:Message-Id:Date:Subject:Cc:To:From; bh=9p3EjojVRcDnVHSKvpUes+XfyD1e4JyIwNQ5PTtGcrY=; b=iaGoPaMU1xJDSwLQtlFxK98q7w 9gB7UIN7MNlXhzpjT49l8Og3mntPxeCxNaqhwfldq7mC5LrcXs4mOSsHUnYJpMNHzv9WG9wkfjsDi 9OAL2QpTuseE7UF7Xf8cvniJ+r6aXUbKe8DPtPFmot/SwED8zjXa6EVZTpQcWQ0lTAhI=; Received: from xenbits.xenproject.org ([104.239.192.120]) by mail.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qgi3J-0001ij-Ab; Thu, 14 Sep 2023 08:50:21 +0000 Received: from ec2-63-33-11-17.eu-west-1.compute.amazonaws.com ([63.33.11.17] helo=REM-PW02S00X.ant.amazon.com) by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92) (envelope-from ) id 1qgi3J-0002T9-3L; Thu, 14 Sep 2023 08:50:21 +0000 From: Paul Durrant To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org Cc: Paul Durrant , Paolo Bonzini , David Woodhouse Subject: [PATCH 4/8] KVM: pfncache: base offset check on khva rather than gpa Date: Thu, 14 Sep 2023 08:49:42 +0000 Message-Id: <20230914084946.200043-5-paul@xen.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230914084946.200043-1-paul@xen.org> References: <20230914084946.200043-1-paul@xen.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Paul Durrant After a subsequent patch, the gpa may not always be set whereas khva will (as long as the cache valid flag is also set). 
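For context, this validity test backs the usual lock/check/refresh pattern at call sites; a simplified sketch of that pattern, condensed from existing callers in arch/x86/kvm/xen.c with error handling trimmed:

	unsigned long flags;
	struct vcpu_info *vi;

	read_lock_irqsave(&gpc->lock, flags);
	while (!kvm_gpc_check(gpc, sizeof(*vi))) {
		read_unlock_irqrestore(&gpc->lock, flags);

		/* Re-validate/re-map the cache outside the lock. */
		if (kvm_gpc_refresh(gpc, sizeof(*vi)))
			return;

		read_lock_irqsave(&gpc->lock, flags);
	}

	vi = gpc->khva;	/* now safe to dereference for the checked length */
	read_unlock_irqrestore(&gpc->lock, flags);
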
Signed-off-by: Paul Durrant Reviewed-by: David Woodhouse --- Cc: Paolo Bonzini Cc: David Woodhouse --- virt/kvm/pfncache.c | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c index 17afbb464a70..37bcb4399780 100644 --- a/virt/kvm/pfncache.c +++ b/virt/kvm/pfncache.c @@ -83,15 +83,18 @@ bool kvm_gpc_check(struct gfn_to_pfn_cache *gpc, unsigned long len) if (!gpc->active) return false; - if ((gpc->gpa & ~PAGE_MASK) + len > PAGE_SIZE) + if (gpc->generation != slots->generation) return false; - if (gpc->generation != slots->generation || kvm_is_error_hva(gpc->uhva)) + if (kvm_is_error_hva(gpc->uhva)) return false; if (!gpc->valid) return false; + if (offset_in_page(gpc->khva) + len > PAGE_SIZE) + return false; + return true; } EXPORT_SYMBOL_GPL(kvm_gpc_check); From patchwork Thu Sep 14 08:49:43 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paul Durrant X-Patchwork-Id: 13384885 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8DADFEDE99E for ; Thu, 14 Sep 2023 09:07:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237397AbjINJHt (ORCPT ); Thu, 14 Sep 2023 05:07:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35124 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236725AbjINJHd (ORCPT ); Thu, 14 Sep 2023 05:07:33 -0400 Received: from mail.xenproject.org (mail.xenproject.org [104.130.215.37]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D9BED1FDF; Thu, 14 Sep 2023 02:07:28 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References: In-Reply-To:Message-Id:Date:Subject:Cc:To:From; bh=W+k90iySvRP86+2mqUltmg12TuGJWTg4zt6ggERYGVQ=; b=Pkgm43z+B9mnMcQotWvjK0wZC+ Uwc8AmlSUh0Y/IoU1POvF0k018ZLMBhPaAMY9X/jM2ZNbJ4yUzfmLpMTQIElbm/7161cKJa5LfqRQ nF+XYv/xlFeNKxvBnzStaSn0sR/njwS+FuCvH1xTKKYYFV+K2nbFIBofAg1gdZyfdYo8=; Received: from xenbits.xenproject.org ([104.239.192.120]) by mail.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qgi3K-0001jZ-6d; Thu, 14 Sep 2023 08:50:22 +0000 Received: from ec2-63-33-11-17.eu-west-1.compute.amazonaws.com ([63.33.11.17] helo=REM-PW02S00X.ant.amazon.com) by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92) (envelope-from ) id 1qgi3J-0002T9-VF; Thu, 14 Sep 2023 08:50:22 +0000 From: Paul Durrant To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org Cc: Paul Durrant , Paolo Bonzini , David Woodhouse Subject: [PATCH 5/8] KVM: pfncache: allow a cache to be activated with a fixed (userspace) HVA Date: Thu, 14 Sep 2023 08:49:43 +0000 Message-Id: <20230914084946.200043-6-paul@xen.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230914084946.200043-1-paul@xen.org> References: <20230914084946.200043-1-paul@xen.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Paul Durrant Some cached pages may actually be overlays on guest memory that have a fixed HVA within the VMM. It's pointless to invalidate such cached mappings if the overlay is moved so allow a cache to be activated directly with the HVA to cater for such cases. A subsequent patch will make use of this facility. 
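The resulting activation API then looks like this (a sketch; the real call sites arrive in a later patch of this series):

	/* Guest-physical case: translated via the memslots, so the cache
	 * is invalidated when the slots change. */
	ret = kvm_gpc_activate(gpc, gpa, PAGE_SIZE);

	/* Fixed-HVA case: no GPA translation, so moving the overlay GFN
	 * does not invalidate the mapping. */
	ret = kvm_gpc_activate_hva(gpc, hva, PAGE_SIZE);
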
Signed-off-by: Paul Durrant --- Cc: Paolo Bonzini Cc: David Woodhouse --- include/linux/kvm_host.h | 29 ++++++++++++++++ include/linux/kvm_types.h | 3 +- virt/kvm/pfncache.c | 73 ++++++++++++++++++++++++++++----------- 3 files changed, 84 insertions(+), 21 deletions(-) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 4d8027fe9928..6823bae5c66c 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -1321,6 +1321,22 @@ void kvm_gpc_init(struct gfn_to_pfn_cache *gpc, struct kvm *kvm, */ int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa, unsigned long len); +/** + * kvm_gpc_activate_hva - prepare a cached kernel mapping and HPA for a given HVA. + * + * @gpc: struct gfn_to_pfn_cache object. + * @hva: userspace virtual address to map. + * @len: sanity check; the range being accessed must fit a single page. + * + * @return: 0 for success. + * -EINVAL for a mapping which would cross a page boundary. + * -EFAULT for an invalid host virtual address. + * + * The semantics of this function are the same as those of kvm_gpc_activate(). It + * merely bypasses a layer of address translation. + */ +int kvm_gpc_activate_hva(struct gfn_to_pfn_cache *gpc, unsigned long hva, unsigned long len); + /** * kvm_gpc_check - check validity of a gfn_to_pfn_cache. * @@ -1378,9 +1394,22 @@ void kvm_gpc_mark_dirty(struct gfn_to_pfn_cache *gpc); * kvm_gpc_gpa - retrieve the guest physical address of a cached mapping * * @gpc: struct gfn_to_pfn_cache object. + * + * @return: If the cache was activated with a fixed HVA then INVALID_GPA + * will be returned. */ gpa_t kvm_gpc_gpa(struct gfn_to_pfn_cache *gpc); +/** + * kvm_gpc_hva - retrieve the fixed host virtual address of a cached mapping + * + * @gpc: struct gfn_to_pfn_cache object. + * + * @return: If the cache was activated with a guest physical address then + * 0 will be returned. + */ +unsigned long kvm_gpc_hva(struct gfn_to_pfn_cache *gpc); + void kvm_sigset_activate(struct kvm_vcpu *vcpu); void kvm_sigset_deactivate(struct kvm_vcpu *vcpu); diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h index 6f4737d5046a..ae97fb19c0d5 100644 --- a/include/linux/kvm_types.h +++ b/include/linux/kvm_types.h @@ -64,7 +64,8 @@ struct gfn_to_hva_cache { struct gfn_to_pfn_cache { u64 generation; - gpa_t gpa; + unsigned long addr; + bool addr_is_gpa; unsigned long uhva; struct kvm_memory_slot *memslot; struct kvm *kvm; diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c index 37bcb4399780..a86dc6cf49c6 100644 --- a/virt/kvm/pfncache.c +++ b/virt/kvm/pfncache.c @@ -83,7 +83,7 @@ bool kvm_gpc_check(struct gfn_to_pfn_cache *gpc, unsigned long len) if (!gpc->active) return false; - if (gpc->generation != slots->generation) + if (gpc->addr_is_gpa && gpc->generation != slots->generation) return false; if (kvm_is_error_hva(gpc->uhva)) @@ -229,7 +229,7 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc) gpc->valid = true; gpc->pfn = new_pfn; - gpc->khva = new_khva + (gpc->gpa & ~PAGE_MASK); + gpc->khva = new_khva + (gpc->addr & ~PAGE_MASK); /* * Put the reference to the _new_ pfn.
The pfn is now tracked by the @@ -246,11 +246,11 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc) return -EFAULT; } -static int __kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa, - unsigned long len) +static int __kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, u64 addr, + unsigned long len, bool addr_is_gpa) { struct kvm_memslots *slots = kvm_memslots(gpc->kvm); - unsigned long page_offset = gpa & ~PAGE_MASK; + unsigned long page_offset = addr & ~PAGE_MASK; bool unmap_old = false; unsigned long old_uhva; kvm_pfn_t old_pfn; @@ -282,22 +282,34 @@ static int __kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa, old_khva = gpc->khva - offset_in_page(gpc->khva); old_uhva = gpc->uhva; - /* If the userspace HVA is invalid, refresh that first */ - if (gpc->gpa != gpa || gpc->generation != slots->generation || + /* + * If the address has changed, switched from guest to host (or vice + * versa), or it's a guest address and the memory slots have been + * updated, we need to refresh the userspace HVA. + */ + if (gpc->addr != addr || + gpc->addr_is_gpa != addr_is_gpa || + (addr_is_gpa && gpc->generation != slots->generation) || kvm_is_error_hva(gpc->uhva)) { - gfn_t gfn = gpa_to_gfn(gpa); + gpc->addr = addr; + gpc->addr_is_gpa = addr_is_gpa; - gpc->gpa = gpa; - gpc->generation = slots->generation; - gpc->memslot = __gfn_to_memslot(slots, gfn); - gpc->uhva = gfn_to_hva_memslot(gpc->memslot, gfn); + if (addr_is_gpa) { + gfn_t gfn = gpa_to_gfn(addr); - if (kvm_is_error_hva(gpc->uhva)) { - ret = -EFAULT; - goto out; + gpc->generation = slots->generation; + gpc->memslot = __gfn_to_memslot(slots, gfn); + gpc->uhva = gfn_to_hva_memslot(gpc->memslot, gfn); + } else { + gpc->uhva = addr & PAGE_MASK; } } + if (kvm_is_error_hva(gpc->uhva)) { + ret = -EFAULT; + goto out; + } + /* * If the userspace HVA changed or the PFN was already invalid, * drop the lock and do the HVA to PFN lookup again. @@ -343,7 +355,7 @@ static int __kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa, int kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, unsigned long len) { - return __kvm_gpc_refresh(gpc, gpc->gpa, len); + return __kvm_gpc_refresh(gpc, gpc->addr, len, gpc->addr_is_gpa); } EXPORT_SYMBOL_GPL(kvm_gpc_refresh); @@ -364,7 +376,8 @@ void kvm_gpc_init(struct gfn_to_pfn_cache *gpc, struct kvm *kvm, } EXPORT_SYMBOL_GPL(kvm_gpc_init); -int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa, unsigned long len) +static int __kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, unsigned long addr, unsigned long len, + bool addr_is_gpa) { struct kvm *kvm = gpc->kvm; @@ -385,19 +398,39 @@ int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa, unsigned long len) gpc->active = true; write_unlock_irq(&gpc->lock); } - return __kvm_gpc_refresh(gpc, gpa, len); + return __kvm_gpc_refresh(gpc, addr, len, addr_is_gpa); +} + +int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa, unsigned long len) +{ + return __kvm_gpc_activate(gpc, gpa, len, true); } EXPORT_SYMBOL_GPL(kvm_gpc_activate); gpa_t kvm_gpc_gpa(struct gfn_to_pfn_cache *gpc) { - return gpc->gpa; + return gpc->addr_is_gpa ? gpc->addr : INVALID_GPA; } EXPORT_SYMBOL_GPL(kvm_gpc_gpa); +int kvm_gpc_activate_hva(struct gfn_to_pfn_cache *gpc, unsigned long hva, unsigned long len) +{ + return __kvm_gpc_activate(gpc, hva, len, false); +} +EXPORT_SYMBOL_GPL(kvm_gpc_activate_hva); + +unsigned long kvm_gpc_hva(struct gfn_to_pfn_cache *gpc) +{ + return !gpc->addr_is_gpa ? 
gpc->addr : 0; +} +EXPORT_SYMBOL_GPL(kvm_gpc_hva); + void kvm_gpc_mark_dirty(struct gfn_to_pfn_cache *gpc) { - mark_page_dirty_in_slot(gpc->kvm, gpc->memslot, gpc->gpa >> PAGE_SHIFT); + if (!gpc->addr_is_gpa) + return; + + mark_page_dirty_in_slot(gpc->kvm, gpc->memslot, gpc->addr >> PAGE_SHIFT); } EXPORT_SYMBOL_GPL(kvm_gpc_mark_dirty); From patchwork Thu Sep 14 08:49:44 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paul Durrant X-Patchwork-Id: 13384884 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 06675EDE99D for ; Thu, 14 Sep 2023 09:07:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237148AbjINJHs (ORCPT ); Thu, 14 Sep 2023 05:07:48 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41552 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237147AbjINJHb (ORCPT ); Thu, 14 Sep 2023 05:07:31 -0400 Received: from mail.xenproject.org (mail.xenproject.org [104.130.215.37]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 48DB61FD8; Thu, 14 Sep 2023 02:07:27 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References: In-Reply-To:Message-Id:Date:Subject:Cc:To:From; bh=oynQFx0otpLWpBva3wvVisOGWIw7kSbKH068QhWuZHg=; b=0SyaZmIYZEWLIMJhI4fFFRNZwP thgb8VZt7BuPHIFH+3CZSAd/SRB1oDDjz+lD1pYbFLhBUzm+MwXm7XtRa1QG9KvvFtLqJfzDkk02q Jrsi1U5rfTL+D5URpzu5ZxGQlt4eNYmzI0o0jeqnRwxR4Izz0gDQXZqNui0ieFf8qZPg=; Received: from xenbits.xenproject.org ([104.239.192.120]) by mail.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qgi3L-0001kP-Ns; Thu, 14 Sep 2023 08:50:23 +0000 Received: from ec2-63-33-11-17.eu-west-1.compute.amazonaws.com ([63.33.11.17] helo=REM-PW02S00X.ant.amazon.com) by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92) (envelope-from ) id 1qgi3L-0002T9-GN; Thu, 14 Sep 2023 08:50:23 +0000 From: Paul Durrant To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org Cc: Paul Durrant , Sean Christopherson , Paolo Bonzini , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , "H. Peter Anvin" , David Woodhouse , x86@kernel.org Subject: [PATCH 6/8] KVM: xen: allow shared_info to be mapped by fixed HVA Date: Thu, 14 Sep 2023 08:49:44 +0000 Message-Id: <20230914084946.200043-7-paul@xen.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230914084946.200043-1-paul@xen.org> References: <20230914084946.200043-1-paul@xen.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Paul Durrant The shared_info page is not guest memory as such. It is a dedicated page allocated by the VMM and overlaid onto guest memory in a GFN chosen by the guest. The guest may even request that shared_info be moved from one GFN to another, but the HVA is never going to change. Thus it makes much more sense to map the shared_info page in kernel once using this fixed HVA. Hence add a new KVM_XEN_ATTR_TYPE_SHARED_INFO_HVA attribute type for this purpose and a KVM_XEN_HVM_CONFIG_SHARED_INFO_HVA flag to advertize its availability. 
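A sketch of the VMM-side usage (the helper and variable names here are illustrative, not part of the patch):

	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* 'vm_fd' is the VM file descriptor; 'shinfo' is the VMM's
	 * page-aligned shared_info allocation. */
	static int set_shinfo_hva(int vm_fd, void *shinfo)
	{
		struct kvm_xen_hvm_attr attr = {
			.type = KVM_XEN_ATTR_TYPE_SHARED_INFO_HVA,
			.u.shared_info.hva = (uintptr_t)shinfo,
		};

		return ioctl(vm_fd, KVM_XEN_HVM_SET_ATTR, &attr);
	}

A real VMM would first confirm KVM_XEN_HVM_CONFIG_SHARED_INFO_HVA in the flags returned by KVM_CHECK_EXTENSION(KVM_CAP_XEN_HVM).
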
Signed-off-by: Paul Durrant Reviewed-by: David Woodhouse --- Cc: Sean Christopherson Cc: Paolo Bonzini Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Borislav Petkov Cc: Dave Hansen Cc: "H. Peter Anvin" Cc: David Woodhouse Cc: x86@kernel.org --- arch/x86/kvm/x86.c | 3 ++- arch/x86/kvm/xen.c | 28 ++++++++++++++++++++++------ include/uapi/linux/kvm.h | 6 +++++- 3 files changed, 29 insertions(+), 8 deletions(-) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index d669a8801265..0df06f47801c 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -4531,7 +4531,8 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext) KVM_XEN_HVM_CONFIG_INTERCEPT_HCALL | KVM_XEN_HVM_CONFIG_SHARED_INFO | KVM_XEN_HVM_CONFIG_EVTCHN_2LEVEL | - KVM_XEN_HVM_CONFIG_EVTCHN_SEND; + KVM_XEN_HVM_CONFIG_EVTCHN_SEND | + KVM_XEN_HVM_CONFIG_SHARED_INFO_HVA; if (sched_info_on()) r |= KVM_XEN_HVM_CONFIG_RUNSTATE | KVM_XEN_HVM_CONFIG_RUNSTATE_UPDATE_FLAG; diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c index 8e6fdcd7bb6e..1abb4547642a 100644 --- a/arch/x86/kvm/xen.c +++ b/arch/x86/kvm/xen.c @@ -34,24 +34,27 @@ static bool kvm_xen_hcall_evtchn_send(struct kvm_vcpu *vcpu, u64 param, u64 *r); DEFINE_STATIC_KEY_DEFERRED_FALSE(kvm_xen_enabled, HZ); -static int kvm_xen_shared_info_init(struct kvm *kvm, gfn_t gfn) +static int kvm_xen_shared_info_init(struct kvm *kvm, u64 addr, bool addr_is_gfn) { struct gfn_to_pfn_cache *gpc = &kvm->arch.xen.shinfo_cache; struct pvclock_wall_clock *wc; - gpa_t gpa = gfn_to_gpa(gfn); u32 *wc_sec_hi; u32 wc_version; u64 wall_nsec; int ret = 0; int idx = srcu_read_lock(&kvm->srcu); - if (gfn == KVM_XEN_INVALID_GFN) { + if ((addr_is_gfn && addr == KVM_XEN_INVALID_GFN) || + (!addr_is_gfn && addr == 0)) { kvm_gpc_deactivate(gpc); goto out; } do { - ret = kvm_gpc_activate(gpc, gpa, PAGE_SIZE); + if (addr_is_gfn) + ret = kvm_gpc_activate(gpc, gfn_to_gpa(addr), PAGE_SIZE); + else + ret = kvm_gpc_activate_hva(gpc, addr, PAGE_SIZE); if (ret) goto out; @@ -604,7 +607,6 @@ int kvm_xen_hvm_set_attr(struct kvm *kvm, struct kvm_xen_hvm_attr *data) { int r = -ENOENT; - switch (data->type) { case KVM_XEN_ATTR_TYPE_LONG_MODE: if (!IS_ENABLED(CONFIG_64BIT) && data->u.long_mode) { @@ -619,7 +621,13 @@ int kvm_xen_hvm_set_attr(struct kvm *kvm, struct kvm_xen_hvm_attr *data) case KVM_XEN_ATTR_TYPE_SHARED_INFO: mutex_lock(&kvm->arch.xen.xen_lock); - r = kvm_xen_shared_info_init(kvm, data->u.shared_info.gfn); + r = kvm_xen_shared_info_init(kvm, data->u.shared_info.gfn, true); + mutex_unlock(&kvm->arch.xen.xen_lock); + break; + + case KVM_XEN_ATTR_TYPE_SHARED_INFO_HVA: + mutex_lock(&kvm->arch.xen.xen_lock); + r = kvm_xen_shared_info_init(kvm, data->u.shared_info.hva, false); mutex_unlock(&kvm->arch.xen.xen_lock); break; @@ -684,6 +692,14 @@ int kvm_xen_hvm_get_attr(struct kvm *kvm, struct kvm_xen_hvm_attr *data) r = 0; break; + case KVM_XEN_ATTR_TYPE_SHARED_INFO_HVA: + if (kvm->arch.xen.shinfo_cache.active) + data->u.shared_info.hva = kvm_gpc_hva(&kvm->arch.xen.shinfo_cache); + else + data->u.shared_info.hva = 0; + r = 0; + break; + case KVM_XEN_ATTR_TYPE_UPCALL_VECTOR: data->u.vector = kvm->arch.xen.upcall_vector; r = 0; diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index f089ab290978..d3f371bafad8 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -1282,6 +1282,7 @@ struct kvm_x86_mce { #define KVM_XEN_HVM_CONFIG_EVTCHN_2LEVEL (1 << 4) #define KVM_XEN_HVM_CONFIG_EVTCHN_SEND (1 << 5) #define KVM_XEN_HVM_CONFIG_RUNSTATE_UPDATE_FLAG (1 << 6) +#define 
KVM_XEN_HVM_CONFIG_SHARED_INFO_HVA (1 << 7) struct kvm_xen_hvm_config { __u32 flags; @@ -1786,9 +1787,10 @@ struct kvm_xen_hvm_attr { __u8 long_mode; __u8 vector; __u8 runstate_update_flag; - struct { + union { __u64 gfn; #define KVM_XEN_INVALID_GFN ((__u64)-1) + __u64 hva; } shared_info; struct { __u32 send_port; @@ -1830,6 +1832,8 @@ struct kvm_xen_hvm_attr { #define KVM_XEN_ATTR_TYPE_XEN_VERSION 0x4 /* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_RUNSTATE_UPDATE_FLAG */ #define KVM_XEN_ATTR_TYPE_RUNSTATE_UPDATE_FLAG 0x5 +/* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_SHARED_INFO_HVA */ +#define KVM_XEN_ATTR_TYPE_SHARED_INFO_HVA 0x6 /* Per-vCPU Xen attributes */ #define KVM_XEN_VCPU_GET_ATTR _IOWR(KVMIO, 0xca, struct kvm_xen_vcpu_attr) From patchwork Thu Sep 14 08:49:45 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paul Durrant X-Patchwork-Id: 13384882 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id EFCABEDE99D for ; Thu, 14 Sep 2023 09:07:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236961AbjINJH3 (ORCPT ); Thu, 14 Sep 2023 05:07:29 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51224 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236997AbjINJHX (ORCPT ); Thu, 14 Sep 2023 05:07:23 -0400 Received: from mail.xenproject.org (mail.xenproject.org [104.130.215.37]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7DE971FC3; Thu, 14 Sep 2023 02:07:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References: In-Reply-To:Message-Id:Date:Subject:Cc:To:From; bh=pI5xVhH9buQW8CRMyJWS/eji0TR5A7l+TQ+o/QY3zBI=; b=mUgXBbw0amssQmoY0P2n1Dad3y wAPmcztRCFYbDZysZnHZKJ5IBDjTOz4rPHJNW+PrzKjW3zjI502rxSNwK6kUcjwLwLzfkc97CEGJL rBC3hzY2XivUC2A8tQvmPYp1Eg6QyDGOXkiHYfpQgeL61aQCiGzZ7vVFnjaLKStKyjiQ=; Received: from xenbits.xenproject.org ([104.239.192.120]) by mail.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qgi3N-0001kZ-8u; Thu, 14 Sep 2023 08:50:25 +0000 Received: from ec2-63-33-11-17.eu-west-1.compute.amazonaws.com ([63.33.11.17] helo=REM-PW02S00X.ant.amazon.com) by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92) (envelope-from ) id 1qgi3N-0002T9-1R; Thu, 14 Sep 2023 08:50:25 +0000 From: Paul Durrant To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org Cc: Paul Durrant , Sean Christopherson , Paolo Bonzini , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , "H. Peter Anvin" , David Woodhouse , x86@kernel.org Subject: [PATCH 7/8] KVM: xen: prepare for using 'default' vcpu_info Date: Thu, 14 Sep 2023 08:49:45 +0000 Message-Id: <20230914084946.200043-8-paul@xen.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230914084946.200043-1-paul@xen.org> References: <20230914084946.200043-1-paul@xen.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Paul Durrant The shared_info page contains an array of 32 vcpu_info structures which may be used by guests (with less than 32 vCPUs) if they don't explicitly register vcpu_info structures in their own memory, using a VCPUOP_register_vcpu_info hypercall. 
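For reference, a simplified view of the guest-visible layout from the Xen public headers (types condensed for brevity):

	#define MAX_VIRT_CPUS 32

	struct shared_info {
		struct vcpu_info vcpu_info[MAX_VIRT_CPUS];
		unsigned long evtchn_pending[sizeof(unsigned long) * 8];
		unsigned long evtchn_mask[sizeof(unsigned long) * 8];
		/* wallclock time and arch-specific fields follow */
	};
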
Currently we rely on the VMM always registering vcpu_info structures, even if the guest doesn't make that hypercall, which is somewhat bogus since, as stated in the comment of a previous commit, the shared_info page is not guest memory. Prepare to automatically use the vcpu_info embedded in shared_info by default, by adding a get_vcpu_info_cache() helper function. This function also passes back an offset to be added to the cached khva. This is currently always zero since we're still relying on the current VMM behaviour. A subsequent patch will make proper use of it. NOTE: To avoid leaking detail of the vcpu_info duality into the main x86 code, a kvm_xen_guest_time_update() helper has also been added; its use requires that kvm_setup_guest_pvclock() cease to be a static function. Signed-off-by: Paul Durrant Reviewed-by: David Woodhouse --- Cc: Sean Christopherson Cc: Paolo Bonzini Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Borislav Petkov Cc: Dave Hansen Cc: "H. Peter Anvin" Cc: David Woodhouse Cc: x86@kernel.org --- arch/x86/include/asm/kvm_host.h | 4 +++ arch/x86/kvm/x86.c | 12 +++----- arch/x86/kvm/xen.c | 50 ++++++++++++++++++++++++++------- arch/x86/kvm/xen.h | 6 +++- 4 files changed, 53 insertions(+), 19 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 1a4def36d5bb..6d896f9161c2 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -2238,4 +2238,8 @@ int memslot_rmap_alloc(struct kvm_memory_slot *slot, unsigned long npages); */ #define KVM_EXIT_HYPERCALL_MBZ GENMASK_ULL(31, 1) +void kvm_setup_guest_pvclock(struct kvm_vcpu *v, + struct gfn_to_pfn_cache *gpc, + unsigned int offset); + #endif /* _ASM_X86_KVM_HOST_H */ diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 0df06f47801c..4cd577d01bc4 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -3094,9 +3094,9 @@ u64 get_kvmclock_ns(struct kvm *kvm) return data.clock; } -static void kvm_setup_guest_pvclock(struct kvm_vcpu *v, - struct gfn_to_pfn_cache *gpc, - unsigned int offset) +void kvm_setup_guest_pvclock(struct kvm_vcpu *v, + struct gfn_to_pfn_cache *gpc, + unsigned int offset) { struct kvm_vcpu_arch *vcpu = &v->arch; struct pvclock_vcpu_time_info *guest_hv_clock; @@ -3232,11 +3232,7 @@ static int kvm_guest_time_update(struct kvm_vcpu *v) if (vcpu->pv_time.active) kvm_setup_guest_pvclock(v, &vcpu->pv_time, 0); - if (vcpu->xen.vcpu_info_cache.active) - kvm_setup_guest_pvclock(v, &vcpu->xen.vcpu_info_cache, - offsetof(struct compat_vcpu_info, time)); - if (vcpu->xen.vcpu_time_info_cache.active) - kvm_setup_guest_pvclock(v, &vcpu->xen.vcpu_time_info_cache, 0); + kvm_xen_guest_time_update(v); kvm_hv_setup_tsc_page(v->kvm, &vcpu->hv_clock); return 0; } diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c index 1abb4547642a..892563fea40f 100644 --- a/arch/x86/kvm/xen.c +++ b/arch/x86/kvm/xen.c @@ -489,6 +489,29 @@ static void kvm_xen_inject_vcpu_vector(struct kvm_vcpu *v) WARN_ON_ONCE(!kvm_irq_delivery_to_apic_fast(v->kvm, NULL, &irq, &r, NULL)); } +struct gfn_to_pfn_cache *get_vcpu_info_cache(struct kvm_vcpu *v, unsigned long *offset) +{ + if (offset) + *offset = 0; + + return &v->arch.xen.vcpu_info_cache; +} + +void kvm_xen_guest_time_update(struct kvm_vcpu *v) +{ + unsigned long offset; + struct gfn_to_pfn_cache *gpc = get_vcpu_info_cache(v, &offset); + + BUILD_BUG_ON(offsetof(struct vcpu_info, time) != + offsetof(struct compat_vcpu_info, time)); + + if (gpc->active) + kvm_setup_guest_pvclock(v, gpc, offset + offsetof(struct
compat_vcpu_info, time)); + + if (v->arch.xen.vcpu_time_info_cache.active) + kvm_setup_guest_pvclock(v, &v->arch.xen.vcpu_time_info_cache, 0); +} + /* * On event channel delivery, the vcpu_info may not have been accessible. * In that case, there are bits in vcpu->arch.xen.evtchn_pending_sel which @@ -499,7 +522,8 @@ static void kvm_xen_inject_vcpu_vector(struct kvm_vcpu *v) void kvm_xen_inject_pending_events(struct kvm_vcpu *v) { unsigned long evtchn_pending_sel = READ_ONCE(v->arch.xen.evtchn_pending_sel); - struct gfn_to_pfn_cache *gpc = &v->arch.xen.vcpu_info_cache; + unsigned long offset; + struct gfn_to_pfn_cache *gpc = get_vcpu_info_cache(v, &offset); unsigned long flags; if (!evtchn_pending_sel) @@ -522,7 +546,7 @@ void kvm_xen_inject_pending_events(struct kvm_vcpu *v) /* Now gpc->khva is a valid kernel address for the vcpu_info */ if (IS_ENABLED(CONFIG_64BIT) && v->kvm->arch.xen.long_mode) { - struct vcpu_info *vi = gpc->khva; + struct vcpu_info *vi = gpc->khva + offset; asm volatile(LOCK_PREFIX "orq %0, %1\n" "notq %0\n" @@ -534,7 +558,7 @@ void kvm_xen_inject_pending_events(struct kvm_vcpu *v) WRITE_ONCE(vi->evtchn_upcall_pending, 1); } else { u32 evtchn_pending_sel32 = evtchn_pending_sel; - struct compat_vcpu_info *vi = gpc->khva; + struct compat_vcpu_info *vi = gpc->khva + offset; asm volatile(LOCK_PREFIX "orl %0, %1\n" "notl %0\n" @@ -556,7 +580,8 @@ void kvm_xen_inject_pending_events(struct kvm_vcpu *v) int __kvm_xen_has_interrupt(struct kvm_vcpu *v) { - struct gfn_to_pfn_cache *gpc = &v->arch.xen.vcpu_info_cache; + unsigned long offset; + struct gfn_to_pfn_cache *gpc = get_vcpu_info_cache(v, &offset); unsigned long flags; u8 rc = 0; @@ -598,7 +623,7 @@ int __kvm_xen_has_interrupt(struct kvm_vcpu *v) read_lock_irqsave(&gpc->lock, flags); } - rc = ((struct vcpu_info *)gpc->khva)->evtchn_upcall_pending; + rc = ((struct vcpu_info *)(gpc->khva + offset))->evtchn_upcall_pending; read_unlock_irqrestore(&gpc->lock, flags); return rc; } @@ -1567,7 +1592,7 @@ static void kvm_xen_check_poller(struct kvm_vcpu *vcpu, int port) */ int kvm_xen_set_evtchn_fast(struct kvm_xen_evtchn *xe, struct kvm *kvm) { - struct gfn_to_pfn_cache *gpc = &kvm->arch.xen.shinfo_cache; + struct gfn_to_pfn_cache *gpc; struct kvm_vcpu *vcpu; unsigned long *pending_bits, *mask_bits; unsigned long flags; @@ -1585,7 +1610,8 @@ int kvm_xen_set_evtchn_fast(struct kvm_xen_evtchn *xe, struct kvm *kvm) WRITE_ONCE(xe->vcpu_idx, vcpu->vcpu_idx); } - if (!vcpu->arch.xen.vcpu_info_cache.active) + gpc = get_vcpu_info_cache(vcpu, NULL); + if (!gpc->active) return -EINVAL; if (xe->port >= max_evtchn_port(kvm)) @@ -1594,6 +1620,7 @@ int kvm_xen_set_evtchn_fast(struct kvm_xen_evtchn *xe, struct kvm *kvm) rc = -EWOULDBLOCK; idx = srcu_read_lock(&kvm->srcu); + gpc = &kvm->arch.xen.shinfo_cache; read_lock_irqsave(&gpc->lock, flags); if (!kvm_gpc_check(gpc, PAGE_SIZE)) @@ -1624,10 +1651,13 @@ int kvm_xen_set_evtchn_fast(struct kvm_xen_evtchn *xe, struct kvm *kvm) rc = -ENOTCONN; /* Masked */ kvm_xen_check_poller(vcpu, xe->port); } else { + unsigned long offset; + rc = 1; /* Delivered to the bitmap in shared_info. 
*/ + /* Now switch to the vCPU's vcpu_info to set the index and pending_sel */ read_unlock_irqrestore(&gpc->lock, flags); - gpc = &vcpu->arch.xen.vcpu_info_cache; + gpc = get_vcpu_info_cache(vcpu, &offset); read_lock_irqsave(&gpc->lock, flags); if (!kvm_gpc_check(gpc, sizeof(struct vcpu_info))) { @@ -1641,13 +1671,13 @@ int kvm_xen_set_evtchn_fast(struct kvm_xen_evtchn *xe, struct kvm *kvm) } if (IS_ENABLED(CONFIG_64BIT) && kvm->arch.xen.long_mode) { - struct vcpu_info *vcpu_info = gpc->khva; + struct vcpu_info *vcpu_info = gpc->khva + offset; if (!test_and_set_bit(port_word_bit, &vcpu_info->evtchn_pending_sel)) { WRITE_ONCE(vcpu_info->evtchn_upcall_pending, 1); kick_vcpu = true; } } else { - struct compat_vcpu_info *vcpu_info = gpc->khva; + struct compat_vcpu_info *vcpu_info = gpc->khva + offset; if (!test_and_set_bit(port_word_bit, (unsigned long *)&vcpu_info->evtchn_pending_sel)) { WRITE_ONCE(vcpu_info->evtchn_upcall_pending, 1); diff --git a/arch/x86/kvm/xen.h b/arch/x86/kvm/xen.h index f8f1fe22d090..c4d29ccbc3ab 100644 --- a/arch/x86/kvm/xen.h +++ b/arch/x86/kvm/xen.h @@ -16,6 +16,7 @@ extern struct static_key_false_deferred kvm_xen_enabled; +void kvm_xen_guest_time_update(struct kvm_vcpu *vcpu); int __kvm_xen_has_interrupt(struct kvm_vcpu *vcpu); void kvm_xen_inject_pending_events(struct kvm_vcpu *vcpu); int kvm_xen_vcpu_set_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data); @@ -52,7 +53,6 @@ static inline bool kvm_xen_hypercall_enabled(struct kvm *kvm) static inline int kvm_xen_has_interrupt(struct kvm_vcpu *vcpu) { if (static_branch_unlikely(&kvm_xen_enabled.key) && - vcpu->arch.xen.vcpu_info_cache.active && vcpu->kvm->arch.xen.upcall_vector) return __kvm_xen_has_interrupt(vcpu); @@ -80,6 +80,10 @@ static inline int kvm_xen_has_pending_timer(struct kvm_vcpu *vcpu) void kvm_xen_inject_timer_irqs(struct kvm_vcpu *vcpu); #else +static inline void kvm_xen_guest_time_update(struct kvm_vcpu *vcpu) +{ +} + static inline int kvm_xen_write_hypercall_page(struct kvm_vcpu *vcpu, u64 data) { return 1; From patchwork Thu Sep 14 08:49:46 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paul Durrant X-Patchwork-Id: 13384880 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 50B25EDE99D for ; Thu, 14 Sep 2023 09:07:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237070AbjINJHY (ORCPT ); Thu, 14 Sep 2023 05:07:24 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51214 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237134AbjINJHQ (ORCPT ); Thu, 14 Sep 2023 05:07:16 -0400 Received: from mail.xenproject.org (mail.xenproject.org [104.130.215.37]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8DFF11BF9; Thu, 14 Sep 2023 02:07:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References: In-Reply-To:Message-Id:Date:Subject:Cc:To:From; bh=b1ozqveOAFRuO/wOLzb/wpotEM9GW9TMSEMYXUlq+eo=; b=seWWyEITPDYjae4Nj30M50y4M3 HgTVvcdf04i37rSRTFWMkg+p4yMK7DRpKb4KtFf9Rn+ngHcUlgmr2q7D7K6DcHk2poBFQ5LmL/5eX hBBLHHZBCV+EQ/Hfu4hn0bVQYXXl/tGGqcQQi17tMebimnVGwgqnAXFMzPf1E7p/QWqg=; Received: from xenbits.xenproject.org ([104.239.192.120]) by mail.xenproject.org with esmtp 
(Exim 4.92) (envelope-from ) id 1qgi3O-0001kl-Rt; Thu, 14 Sep 2023 08:50:26 +0000 Received: from ec2-63-33-11-17.eu-west-1.compute.amazonaws.com ([63.33.11.17] helo=REM-PW02S00X.ant.amazon.com) by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92) (envelope-from ) id 1qgi3O-0002T9-Ij; Thu, 14 Sep 2023 08:50:26 +0000 From: Paul Durrant To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org Cc: Paul Durrant , Sean Christopherson , Paolo Bonzini , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , "H. Peter Anvin" , David Woodhouse , x86@kernel.org Subject: [PATCH 8/8] KVM: xen: automatically use the vcpu_info embedded in shared_info Date: Thu, 14 Sep 2023 08:49:46 +0000 Message-Id: <20230914084946.200043-9-paul@xen.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230914084946.200043-1-paul@xen.org> References: <20230914084946.200043-1-paul@xen.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Paul Durrant Add a KVM_XEN_HVM_CONFIG_DEFAULT_VCPU_INFO flag so that the VMM knows it need only set the KVM_XEN_VCPU_ATTR_TYPE_VCPU_INFO attribute in response to a VCPUOP_register_vcpu_info hypercall, and modify get_vcpu_info_cache() to pass back a pointer to the shared info pfn cache (and appropriate offset) for any of the first 32 vCPUs if the attribute has not been set. Signed-off-by: Paul Durrant --- Cc: Sean Christopherson Cc: Paolo Bonzini Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Borislav Petkov Cc: Dave Hansen Cc: "H. Peter Anvin" Cc: David Woodhouse Cc: x86@kernel.org --- arch/x86/kvm/x86.c | 3 ++- arch/x86/kvm/xen.c | 15 +++++++++++++++ include/uapi/linux/kvm.h | 1 + 3 files changed, 18 insertions(+), 1 deletion(-) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 4cd577d01bc4..cdce8c463a1b 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -4528,7 +4528,8 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext) KVM_XEN_HVM_CONFIG_SHARED_INFO | KVM_XEN_HVM_CONFIG_EVTCHN_2LEVEL | KVM_XEN_HVM_CONFIG_EVTCHN_SEND | - KVM_XEN_HVM_CONFIG_SHARED_INFO_HVA; + KVM_XEN_HVM_CONFIG_SHARED_INFO_HVA | + KVM_XEN_HVM_CONFIG_DEFAULT_VCPU_INFO; if (sched_info_on()) r |= KVM_XEN_HVM_CONFIG_RUNSTATE | KVM_XEN_HVM_CONFIG_RUNSTATE_UPDATE_FLAG; diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c index 892563fea40f..66050a96ba25 100644 --- a/arch/x86/kvm/xen.c +++ b/arch/x86/kvm/xen.c @@ -491,6 +491,21 @@ static void kvm_xen_inject_vcpu_vector(struct kvm_vcpu *v) struct gfn_to_pfn_cache *get_vcpu_info_cache(struct kvm_vcpu *v, unsigned long *offset) { + if (!v->arch.xen.vcpu_info_cache.active && v->arch.xen.vcpu_id < MAX_VIRT_CPUS) { + struct kvm *kvm = v->kvm; + + if (offset) { + if (IS_ENABLED(CONFIG_64BIT) && kvm->arch.xen.long_mode) + *offset = offsetof(struct shared_info, + vcpu_info[v->arch.xen.vcpu_id]); + else + *offset = offsetof(struct compat_shared_info, + vcpu_info[v->arch.xen.vcpu_id]); + } + + return &kvm->arch.xen.shinfo_cache; + } + if (offset) *offset = 0; diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index d3f371bafad8..480612f73fc2 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -1283,6 +1283,7 @@ struct kvm_x86_mce { #define KVM_XEN_HVM_CONFIG_EVTCHN_SEND (1 << 5) #define KVM_XEN_HVM_CONFIG_RUNSTATE_UPDATE_FLAG (1 << 6) #define KVM_XEN_HVM_CONFIG_SHARED_INFO_HVA (1 << 7) +#define KVM_XEN_HVM_CONFIG_DEFAULT_VCPU_INFO (1 << 8) struct kvm_xen_hvm_config { __u32 flags;