From patchwork Thu Oct 13 21:12:19 2022
Subject: [PATCH v2 01/16] KVM: Initialize gfn_to_pfn_cache locks in dedicated helper
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michal Luczaj, David Woodhouse
Date: Thu, 13 Oct 2022 21:12:19 +0000
Message-ID: <20221013211234.1318131-2-seanjc@google.com>
In-Reply-To: <20221013211234.1318131-1-seanjc@google.com>
References: <20221013211234.1318131-1-seanjc@google.com>

From: Michal Luczaj
Move the gfn_to_pfn_cache lock initialization to a dedicated helper and
call the new helper during VM/vCPU creation. Race conditions are possible
because kvm_gfn_to_pfn_cache_init() can re-initialize the cache's locks.
For example, a race between ioctl(KVM_XEN_HVM_EVTCHN_SEND) and
kvm_gfn_to_pfn_cache_init() leads to a corrupted shinfo gpc lock:

              (thread 1)                  |          (thread 2)
                                          |
 kvm_xen_set_evtchn_fast                  |
  read_lock_irqsave(&gpc->lock, ...)      |
                                          | kvm_gfn_to_pfn_cache_init
                                          |  rwlock_init(&gpc->lock)
  read_unlock_irqrestore(&gpc->lock, ...) |

Rename "cache_init" and "cache_destroy" to activate+deactivate to avoid
implying that the cache really is destroyed/freed.

Note, there are more races in the newly named kvm_gpc_activate() that
will be addressed separately.

Fixes: 982ed0de4753 ("KVM: Reinstate gfn_to_pfn_cache with invalidation support")
Cc: stable@vger.kernel.org
Suggested-by: Sean Christopherson
Signed-off-by: Michal Luczaj
[sean: call out that this is a bug fix]
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/x86.c       | 12 +++++----
 arch/x86/kvm/xen.c       | 57 +++++++++++++++++++++-------------------
 include/linux/kvm_host.h | 24 ++++++++++++-----
 virt/kvm/pfncache.c      | 21 ++++++++-------
 4 files changed, 66 insertions(+), 48 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 4bd5f8a751de..943f039564e7 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2315,11 +2315,11 @@ static void kvm_write_system_time(struct kvm_vcpu *vcpu, gpa_t system_time,

	/* we verify if the enable bit is set... */
	if (system_time & 1) {
-		kvm_gfn_to_pfn_cache_init(vcpu->kvm, &vcpu->arch.pv_time, vcpu,
-					  KVM_HOST_USES_PFN, system_time & ~1ULL,
-					  sizeof(struct pvclock_vcpu_time_info));
+		kvm_gpc_activate(vcpu->kvm, &vcpu->arch.pv_time, vcpu,
+				 KVM_HOST_USES_PFN, system_time & ~1ULL,
+				 sizeof(struct pvclock_vcpu_time_info));
	} else {
-		kvm_gfn_to_pfn_cache_destroy(vcpu->kvm, &vcpu->arch.pv_time);
+		kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.pv_time);
	}

	return;
@@ -3388,7 +3388,7 @@ static int kvm_pv_enable_async_pf_int(struct kvm_vcpu *vcpu, u64 data)

 static void kvmclock_reset(struct kvm_vcpu *vcpu)
 {
-	kvm_gfn_to_pfn_cache_destroy(vcpu->kvm, &vcpu->arch.pv_time);
+	kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.pv_time);
	vcpu->arch.time = 0;
 }

@@ -11757,6 +11757,8 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
	vcpu->arch.regs_avail = ~0;
	vcpu->arch.regs_dirty = ~0;

+	kvm_gpc_init(&vcpu->arch.pv_time);
+
	if (!irqchip_in_kernel(vcpu->kvm) || kvm_vcpu_is_reset_bsp(vcpu))
		vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
	else
diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index 93c628d3e3a9..b2be60c6efa4 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -42,13 +42,13 @@ static int kvm_xen_shared_info_init(struct kvm *kvm, gfn_t gfn)
	int idx = srcu_read_lock(&kvm->srcu);

	if (gfn == GPA_INVALID) {
-		kvm_gfn_to_pfn_cache_destroy(kvm, gpc);
+		kvm_gpc_deactivate(kvm, gpc);
		goto out;
	}

	do {
-		ret = kvm_gfn_to_pfn_cache_init(kvm, gpc, NULL, KVM_HOST_USES_PFN,
-						gpa, PAGE_SIZE);
+		ret = kvm_gpc_activate(kvm, gpc, NULL, KVM_HOST_USES_PFN, gpa,
+				       PAGE_SIZE);
		if (ret)
			goto out;

@@ -554,15 +554,15 @@ int kvm_xen_vcpu_set_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data)
			     offsetof(struct compat_vcpu_info, time));

		if (data->u.gpa == GPA_INVALID) {
-			kvm_gfn_to_pfn_cache_destroy(vcpu->kvm, &vcpu->arch.xen.vcpu_info_cache);
+			kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.xen.vcpu_info_cache);
			r = 0;
			break;
		}

-		r = kvm_gfn_to_pfn_cache_init(vcpu->kvm,
-					      &vcpu->arch.xen.vcpu_info_cache,
-					      NULL,
KVM_HOST_USES_PFN, data->u.gpa, - sizeof(struct vcpu_info)); + r = kvm_gpc_activate(vcpu->kvm, + &vcpu->arch.xen.vcpu_info_cache, NULL, + KVM_HOST_USES_PFN, data->u.gpa, + sizeof(struct vcpu_info)); if (!r) kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu); @@ -570,16 +570,16 @@ int kvm_xen_vcpu_set_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data) case KVM_XEN_VCPU_ATTR_TYPE_VCPU_TIME_INFO: if (data->u.gpa == GPA_INVALID) { - kvm_gfn_to_pfn_cache_destroy(vcpu->kvm, - &vcpu->arch.xen.vcpu_time_info_cache); + kvm_gpc_deactivate(vcpu->kvm, + &vcpu->arch.xen.vcpu_time_info_cache); r = 0; break; } - r = kvm_gfn_to_pfn_cache_init(vcpu->kvm, - &vcpu->arch.xen.vcpu_time_info_cache, - NULL, KVM_HOST_USES_PFN, data->u.gpa, - sizeof(struct pvclock_vcpu_time_info)); + r = kvm_gpc_activate(vcpu->kvm, + &vcpu->arch.xen.vcpu_time_info_cache, + NULL, KVM_HOST_USES_PFN, data->u.gpa, + sizeof(struct pvclock_vcpu_time_info)); if (!r) kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu); break; @@ -590,16 +590,15 @@ int kvm_xen_vcpu_set_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data) break; } if (data->u.gpa == GPA_INVALID) { - kvm_gfn_to_pfn_cache_destroy(vcpu->kvm, - &vcpu->arch.xen.runstate_cache); + kvm_gpc_deactivate(vcpu->kvm, + &vcpu->arch.xen.runstate_cache); r = 0; break; } - r = kvm_gfn_to_pfn_cache_init(vcpu->kvm, - &vcpu->arch.xen.runstate_cache, - NULL, KVM_HOST_USES_PFN, data->u.gpa, - sizeof(struct vcpu_runstate_info)); + r = kvm_gpc_activate(vcpu->kvm, &vcpu->arch.xen.runstate_cache, + NULL, KVM_HOST_USES_PFN, data->u.gpa, + sizeof(struct vcpu_runstate_info)); break; case KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_CURRENT: @@ -1816,7 +1815,12 @@ void kvm_xen_init_vcpu(struct kvm_vcpu *vcpu) { vcpu->arch.xen.vcpu_id = vcpu->vcpu_idx; vcpu->arch.xen.poll_evtchn = 0; + timer_setup(&vcpu->arch.xen.poll_timer, cancel_evtchn_poll, 0); + + kvm_gpc_init(&vcpu->arch.xen.runstate_cache); + kvm_gpc_init(&vcpu->arch.xen.vcpu_info_cache); + kvm_gpc_init(&vcpu->arch.xen.vcpu_time_info_cache); } void kvm_xen_destroy_vcpu(struct kvm_vcpu *vcpu) @@ -1824,18 +1828,17 @@ void kvm_xen_destroy_vcpu(struct kvm_vcpu *vcpu) if (kvm_xen_timer_enabled(vcpu)) kvm_xen_stop_timer(vcpu); - kvm_gfn_to_pfn_cache_destroy(vcpu->kvm, - &vcpu->arch.xen.runstate_cache); - kvm_gfn_to_pfn_cache_destroy(vcpu->kvm, - &vcpu->arch.xen.vcpu_info_cache); - kvm_gfn_to_pfn_cache_destroy(vcpu->kvm, - &vcpu->arch.xen.vcpu_time_info_cache); + kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.xen.runstate_cache); + kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.xen.vcpu_info_cache); + kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.xen.vcpu_time_info_cache); + del_timer_sync(&vcpu->arch.xen.poll_timer); } void kvm_xen_init_vm(struct kvm *kvm) { idr_init(&kvm->arch.xen.evtchn_ports); + kvm_gpc_init(&kvm->arch.xen.shinfo_cache); } void kvm_xen_destroy_vm(struct kvm *kvm) @@ -1843,7 +1846,7 @@ void kvm_xen_destroy_vm(struct kvm *kvm) struct evtchnfd *evtchnfd; int i; - kvm_gfn_to_pfn_cache_destroy(kvm, &kvm->arch.xen.shinfo_cache); + kvm_gpc_deactivate(kvm, &kvm->arch.xen.shinfo_cache); idr_for_each_entry(&kvm->arch.xen.evtchn_ports, evtchnfd, i) { if (!evtchnfd->deliver.port.port) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 32f259fa5801..694c4cb6caf4 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -1240,8 +1240,18 @@ int kvm_vcpu_write_guest(struct kvm_vcpu *vcpu, gpa_t gpa, const void *data, void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn); /** - * kvm_gfn_to_pfn_cache_init - prepare a cached kernel 
mapping and HPA for a - * given guest physical address. + * kvm_gpc_init - initialize gfn_to_pfn_cache. + * + * @gpc: struct gfn_to_pfn_cache object. + * + * This sets up a gfn_to_pfn_cache by initializing locks. Note, the cache must + * be zero-allocated (or zeroed by the caller before init). + */ +void kvm_gpc_init(struct gfn_to_pfn_cache *gpc); + +/** + * kvm_gpc_activate - prepare a cached kernel mapping and HPA for a given guest + * physical address. * * @kvm: pointer to kvm instance. * @gpc: struct gfn_to_pfn_cache object. @@ -1265,9 +1275,9 @@ void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn); * kvm_gfn_to_pfn_cache_check() to ensure that the cache is valid before * accessing the target page. */ -int kvm_gfn_to_pfn_cache_init(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, - struct kvm_vcpu *vcpu, enum pfn_cache_usage usage, - gpa_t gpa, unsigned long len); +int kvm_gpc_activate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, + struct kvm_vcpu *vcpu, enum pfn_cache_usage usage, + gpa_t gpa, unsigned long len); /** * kvm_gfn_to_pfn_cache_check - check validity of a gfn_to_pfn_cache. @@ -1324,7 +1334,7 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, void kvm_gfn_to_pfn_cache_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc); /** - * kvm_gfn_to_pfn_cache_destroy - destroy and unlink a gfn_to_pfn_cache. + * kvm_gpc_deactivate - deactivate and unlink a gfn_to_pfn_cache. * * @kvm: pointer to kvm instance. * @gpc: struct gfn_to_pfn_cache object. @@ -1332,7 +1342,7 @@ void kvm_gfn_to_pfn_cache_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc); * This removes a cache from the @kvm's list to be processed on MMU notifier * invocation. */ -void kvm_gfn_to_pfn_cache_destroy(struct kvm *kvm, struct gfn_to_pfn_cache *gpc); +void kvm_gpc_deactivate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc); void kvm_sigset_activate(struct kvm_vcpu *vcpu); void kvm_sigset_deactivate(struct kvm_vcpu *vcpu); diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c index 68ff41d39545..08f97cf97264 100644 --- a/virt/kvm/pfncache.c +++ b/virt/kvm/pfncache.c @@ -346,17 +346,20 @@ void kvm_gfn_to_pfn_cache_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc) } EXPORT_SYMBOL_GPL(kvm_gfn_to_pfn_cache_unmap); +void kvm_gpc_init(struct gfn_to_pfn_cache *gpc) +{ + rwlock_init(&gpc->lock); + mutex_init(&gpc->refresh_lock); +} +EXPORT_SYMBOL_GPL(kvm_gpc_init); -int kvm_gfn_to_pfn_cache_init(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, - struct kvm_vcpu *vcpu, enum pfn_cache_usage usage, - gpa_t gpa, unsigned long len) +int kvm_gpc_activate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, + struct kvm_vcpu *vcpu, enum pfn_cache_usage usage, + gpa_t gpa, unsigned long len) { WARN_ON_ONCE(!usage || (usage & KVM_GUEST_AND_HOST_USE_PFN) != usage); if (!gpc->active) { - rwlock_init(&gpc->lock); - mutex_init(&gpc->refresh_lock); - gpc->khva = NULL; gpc->pfn = KVM_PFN_ERR_FAULT; gpc->uhva = KVM_HVA_ERR_BAD; @@ -371,9 +374,9 @@ int kvm_gfn_to_pfn_cache_init(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, } return kvm_gfn_to_pfn_cache_refresh(kvm, gpc, gpa, len); } -EXPORT_SYMBOL_GPL(kvm_gfn_to_pfn_cache_init); +EXPORT_SYMBOL_GPL(kvm_gpc_activate); -void kvm_gfn_to_pfn_cache_destroy(struct kvm *kvm, struct gfn_to_pfn_cache *gpc) +void kvm_gpc_deactivate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc) { if (gpc->active) { spin_lock(&kvm->gpc_lock); @@ -384,4 +387,4 @@ void kvm_gfn_to_pfn_cache_destroy(struct kvm *kvm, struct gfn_to_pfn_cache *gpc) gpc->active = false; } } 
-EXPORT_SYMBOL_GPL(kvm_gfn_to_pfn_cache_destroy);
+EXPORT_SYMBOL_GPL(kvm_gpc_deactivate);
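To make the init/activate split concrete, here is a minimal sketch of the
pattern the patch above establishes; it is illustrative only, using a
hypothetical stand-in struct rather than KVM's real gfn_to_pfn_cache:

	/* Hypothetical stand-in, not KVM code: locks are set up exactly
	 * once, before the object is reachable by concurrent users.
	 */
	struct demo_cache {
		rwlock_t lock;	/* initialized once, never re-initialized */
		bool active;
		void *khva;
	};

	/* One-time setup at VM/vCPU creation. Re-running rwlock_init()
	 * later would corrupt the lock state seen by a reader that is
	 * already inside read_lock_irqsave(), which is exactly the race
	 * in the commit message.
	 */
	static void demo_cache_init(struct demo_cache *c)
	{
		rwlock_init(&c->lock);
		c->active = false;
	}

	/* Safe to call repeatedly: mutates data under the stable lock,
	 * never the lock itself.
	 */
	static void demo_cache_activate(struct demo_cache *c, void *khva)
	{
		write_lock(&c->lock);
		c->khva = khva;
		c->active = true;
		write_unlock(&c->lock);
	}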
From patchwork Thu Oct 13 21:12:20 2022
Subject: [PATCH v2 02/16] KVM: Reject attempts to consume or refresh inactive gfn_to_pfn_cache
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michal Luczaj, David Woodhouse
Date: Thu, 13 Oct 2022 21:12:20 +0000
Message-ID: <20221013211234.1318131-3-seanjc@google.com>
In-Reply-To: <20221013211234.1318131-1-seanjc@google.com>
References: <20221013211234.1318131-1-seanjc@google.com>

Reject kvm_gpc_check() and kvm_gpc_refresh() if the cache is inactive.
Not checking the active flag during refresh is particularly egregious, as
KVM can end up with a valid, inactive cache, which can lead to a variety
of use-after-free bugs, e.g. consuming a NULL kernel pointer or missing
an mmu_notifier invalidation due to the cache not being on the list of
gfns to invalidate.

Note, "active" needs to be set if and only if the cache is on the list
of caches, i.e. is reachable via mmu_notifier events. If a relevant
mmu_notifier event occurs while the cache is "active" but not on the
list, KVM will not acquire the cache's lock and so will not serialize
the mmu_notifier event with active users and/or kvm_gpc_refresh().

A race between KVM_XEN_ATTR_TYPE_SHARED_INFO and KVM_XEN_HVM_EVTCHN_SEND
can be exploited to trigger the bug.

1. Deactivate shinfo cache:

   kvm_xen_hvm_set_attr
   case KVM_XEN_ATTR_TYPE_SHARED_INFO
    kvm_gpc_deactivate
     kvm_gpc_unmap
      gpc->valid = false
      gpc->khva = NULL
     gpc->active = false

   Result: active = false, valid = false

2. Cause cache refresh:

   kvm_arch_vm_ioctl
   case KVM_XEN_HVM_EVTCHN_SEND
    kvm_xen_hvm_evtchn_send
     kvm_xen_set_evtchn
      kvm_xen_set_evtchn_fast
       kvm_gpc_check
        return -EWOULDBLOCK because !gpc->valid
       kvm_xen_set_evtchn_fast
        return -EWOULDBLOCK
      kvm_gpc_refresh
       hva_to_pfn_retry
        gpc->valid = true
        gpc->khva = not NULL

   Result: active = false, valid = true

3. Race ioctl KVM_XEN_HVM_EVTCHN_SEND against ioctl
   KVM_XEN_ATTR_TYPE_SHARED_INFO:

   kvm_arch_vm_ioctl
   case KVM_XEN_HVM_EVTCHN_SEND
    kvm_xen_hvm_evtchn_send
     kvm_xen_set_evtchn
      kvm_xen_set_evtchn_fast
       read_lock gpc->lock
                                      kvm_xen_hvm_set_attr case
                                      KVM_XEN_ATTR_TYPE_SHARED_INFO
                                       mutex_lock kvm->lock
                                       kvm_xen_shared_info_init
                                        kvm_gpc_activate
                                         gpc->khva = NULL
       kvm_gpc_check
        [ Check passes because gpc->valid is still true, even though
          gpc->khva is already NULL. ]
       shinfo = gpc->khva
       pending_bits = shinfo->evtchn_pending
       CRASH: test_and_set_bit(..., pending_bits)

Fixes: 982ed0de4753 ("KVM: Reinstate gfn_to_pfn_cache with invalidation support")
Cc: stable@vger.kernel.org
Reported-by: Michal Luczaj
Signed-off-by: Sean Christopherson
---
 virt/kvm/pfncache.c | 41 ++++++++++++++++++++++++++++++++++-------
 1 file changed, 34 insertions(+), 7 deletions(-)

diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index 08f97cf97264..346e47f15572 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -81,6 +81,9 @@ bool kvm_gfn_to_pfn_cache_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
 {
	struct kvm_memslots *slots = kvm_memslots(kvm);

+	if (!gpc->active)
+		return false;
+
	if ((gpa & ~PAGE_MASK) + len > PAGE_SIZE)
		return false;

@@ -240,10 +243,11 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
 {
	struct kvm_memslots *slots = kvm_memslots(kvm);
	unsigned long page_offset = gpa & ~PAGE_MASK;
-	kvm_pfn_t old_pfn, new_pfn;
+	bool unmap_old = false;
	unsigned long old_uhva;
+	kvm_pfn_t old_pfn;
	void *old_khva;
-	int ret = 0;
+	int ret;

	/*
	 * If must fit within a single page.
The 'len' argument is @@ -261,6 +265,11 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, write_lock_irq(&gpc->lock); + if (!gpc->active) { + ret = -EINVAL; + goto out_unlock; + } + old_pfn = gpc->pfn; old_khva = gpc->khva - offset_in_page(gpc->khva); old_uhva = gpc->uhva; @@ -291,6 +300,7 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, /* If the HVA→PFN mapping was already valid, don't unmap it. */ old_pfn = KVM_PFN_ERR_FAULT; old_khva = NULL; + ret = 0; } out: @@ -305,14 +315,15 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpc->khva = NULL; } - /* Snapshot the new pfn before dropping the lock! */ - new_pfn = gpc->pfn; + /* Detect a pfn change before dropping the lock! */ + unmap_old = (old_pfn != gpc->pfn); +out_unlock: write_unlock_irq(&gpc->lock); mutex_unlock(&gpc->refresh_lock); - if (old_pfn != new_pfn) + if (unmap_old) gpc_unmap_khva(kvm, old_pfn, old_khva); return ret; @@ -366,11 +377,19 @@ int kvm_gpc_activate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpc->vcpu = vcpu; gpc->usage = usage; gpc->valid = false; - gpc->active = true; spin_lock(&kvm->gpc_lock); list_add(&gpc->list, &kvm->gpc_list); spin_unlock(&kvm->gpc_lock); + + /* + * Activate the cache after adding it to the list, a concurrent + * refresh must not establish a mapping until the cache is + * reachable by mmu_notifier events. + */ + write_lock_irq(&gpc->lock); + gpc->active = true; + write_unlock_irq(&gpc->lock); } return kvm_gfn_to_pfn_cache_refresh(kvm, gpc, gpa, len); } @@ -379,12 +398,20 @@ EXPORT_SYMBOL_GPL(kvm_gpc_activate); void kvm_gpc_deactivate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc) { if (gpc->active) { + /* + * Deactivate the cache before removing it from the list, KVM + * must stall mmu_notifier events until all users go away, i.e. + * until gpc->lock is dropped and refresh is guaranteed to fail. 
+	 */
+	write_lock_irq(&gpc->lock);
+	gpc->active = false;
+	write_unlock_irq(&gpc->lock);
+
		spin_lock(&kvm->gpc_lock);
		list_del(&gpc->list);
		spin_unlock(&kvm->gpc_lock);

		kvm_gfn_to_pfn_cache_unmap(kvm, gpc);
-		gpc->active = false;
	}
 }
 EXPORT_SYMBOL_GPL(kvm_gpc_deactivate);
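The consumer-side contract that the patch above enforces, condensed into an
illustrative (non-KVM) usage sketch: validity may only be trusted while
gpc->lock is held, and check() now fails on an inactive cache.

	/* Illustrative pattern, loosely mirroring kvm_xen_set_evtchn_fast();
	 * error handling trimmed. After this patch, the check also returns
	 * false for an inactive cache, closing the active-vs-valid race.
	 */
	unsigned long flags;

	read_lock_irqsave(&gpc->lock, flags);
	if (!kvm_gfn_to_pfn_cache_check(kvm, gpc, gpc->gpa, PAGE_SIZE)) {
		read_unlock_irqrestore(&gpc->lock, flags);
		return -EWOULDBLOCK;	/* caller must refresh and retry */
	}
	/* gpc->khva is stable here: deactivation takes gpc->lock for write. */
	shinfo = gpc->khva;
	read_unlock_irqrestore(&gpc->lock, flags);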
From patchwork Thu Oct 13 21:12:21 2022
Subject: [PATCH v2 03/16] KVM: x86: Always use non-compat vcpu_runstate_info size for gfn=>pfn cache
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michal Luczaj, David Woodhouse
Date: Thu, 13 Oct 2022 21:12:21 +0000
Message-ID: <20221013211234.1318131-4-seanjc@google.com>
In-Reply-To: <20221013211234.1318131-1-seanjc@google.com>
References: <20221013211234.1318131-1-seanjc@google.com>

Always use the size of Xen's non-compat vcpu_runstate_info struct when
checking that the GPA+size doesn't cross a page boundary. Conceptually,
using the current mode is more correct, but KVM isn't consistent with
itself, as kvm_xen_vcpu_set_attr() unconditionally uses the "full" size
when activating the cache. More importantly, prior to the introduction
of the gfn_to_pfn_cache, KVM _always_ used the full size, i.e. allowing
the guest (userspace?) to use a poorly aligned GPA in 32-bit mode but
not 64-bit mode is more of a bug than a feature, and fixing the bug
doesn't break KVM's historical ABI.

Always using the non-compat size will allow for future gfn_to_pfn_cache
cleanups, as this is (was) the only case where KVM uses a different size
at check()+refresh() than at activate(). E.g. the length/size of the
cache can be made immutable and dropped from check()+refresh(), which
yields a cleaner set of APIs and avoids potential bugs that could occur
if check() were invoked with a different size than refresh().

Fixes: a795cd43c5b5 ("KVM: x86/xen: Use gfn_to_pfn_cache for runstate area")
Cc: David Woodhouse
Signed-off-by: Sean Christopherson
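As a concrete illustration of the boundary check in question (hypothetical
numbers; the test mirrors the "(gpa & ~PAGE_MASK) + len > PAGE_SIZE" check
in virt/kvm/pfncache.c):

	/* With 4KiB pages, a 64-byte struct at GPA 0x1000fe0 spans two
	 * pages (0xfe0 + 64 > 0x1000) and must be rejected, regardless of
	 * whether the vCPU is in compat or 64-bit mode.
	 */
	static bool demo_crosses_page(gpa_t gpa, unsigned long len)
	{
		return (gpa & ~PAGE_MASK) + len > PAGE_SIZE;
	}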
---
 arch/x86/kvm/xen.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index b2be60c6efa4..9e79ef2cca99 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -212,10 +212,7 @@ void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, int state)
	if (!vx->runstate_cache.active)
		return;

-	if (IS_ENABLED(CONFIG_64BIT) && v->kvm->arch.xen.long_mode)
-		user_len = sizeof(struct vcpu_runstate_info);
-	else
-		user_len = sizeof(struct compat_vcpu_runstate_info);
+	user_len = sizeof(struct vcpu_runstate_info);

	read_lock_irqsave(&gpc->lock, flags);
	while (!kvm_gfn_to_pfn_cache_check(v->kvm, gpc, gpc->gpa,

From patchwork Thu Oct 13 21:12:22 2022
Subject: [PATCH v2 04/16] KVM: Shorten gfn_to_pfn_cache function names
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michal Luczaj, David Woodhouse
Date: Thu, 13 Oct 2022 21:12:22 +0000
Message-ID: <20221013211234.1318131-5-seanjc@google.com>
In-Reply-To: <20221013211234.1318131-1-seanjc@google.com>
References: <20221013211234.1318131-1-seanjc@google.com>

From: Michal Luczaj

Formalize "gpc" as the acronym and use it in function names.

No functional change intended.
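For reference, the renamed API in the check-then-refresh retry loop used by
the callers in the diff below (a condensed, illustrative excerpt of the
pattern, not a complete function):

	read_lock_irqsave(&gpc->lock, flags);
	while (!kvm_gpc_check(kvm, gpc, gpc->gpa, len)) {
		read_unlock_irqrestore(&gpc->lock, flags);

		if (kvm_gpc_refresh(kvm, gpc, gpc->gpa, len))
			return;	/* mapping could not be re-established */

		read_lock_irqsave(&gpc->lock, flags);
	}
	/* ... access gpc->khva under gpc->lock ... */
	read_unlock_irqrestore(&gpc->lock, flags);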
Suggested-by: Sean Christopherson Signed-off-by: Michal Luczaj Signed-off-by: Sean Christopherson --- arch/x86/kvm/x86.c | 8 ++++---- arch/x86/kvm/xen.c | 29 ++++++++++++++--------------- include/linux/kvm_host.h | 21 ++++++++++----------- virt/kvm/pfncache.c | 20 ++++++++++---------- 4 files changed, 38 insertions(+), 40 deletions(-) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 943f039564e7..fd00e6a33203 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -3034,12 +3034,12 @@ static void kvm_setup_guest_pvclock(struct kvm_vcpu *v, unsigned long flags; read_lock_irqsave(&gpc->lock, flags); - while (!kvm_gfn_to_pfn_cache_check(v->kvm, gpc, gpc->gpa, - offset + sizeof(*guest_hv_clock))) { + while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa, + offset + sizeof(*guest_hv_clock))) { read_unlock_irqrestore(&gpc->lock, flags); - if (kvm_gfn_to_pfn_cache_refresh(v->kvm, gpc, gpc->gpa, - offset + sizeof(*guest_hv_clock))) + if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa, + offset + sizeof(*guest_hv_clock))) return; read_lock_irqsave(&gpc->lock, flags); diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c index 9e79ef2cca99..74d9f4985f93 100644 --- a/arch/x86/kvm/xen.c +++ b/arch/x86/kvm/xen.c @@ -215,15 +215,14 @@ void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, int state) user_len = sizeof(struct vcpu_runstate_info); read_lock_irqsave(&gpc->lock, flags); - while (!kvm_gfn_to_pfn_cache_check(v->kvm, gpc, gpc->gpa, - user_len)) { + while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa, user_len)) { read_unlock_irqrestore(&gpc->lock, flags); /* When invoked from kvm_sched_out() we cannot sleep */ if (state == RUNSTATE_runnable) return; - if (kvm_gfn_to_pfn_cache_refresh(v->kvm, gpc, gpc->gpa, user_len)) + if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa, user_len)) return; read_lock_irqsave(&gpc->lock, flags); @@ -349,12 +348,12 @@ void kvm_xen_inject_pending_events(struct kvm_vcpu *v) * little more honest about it. */ read_lock_irqsave(&gpc->lock, flags); - while (!kvm_gfn_to_pfn_cache_check(v->kvm, gpc, gpc->gpa, - sizeof(struct vcpu_info))) { + while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa, + sizeof(struct vcpu_info))) { read_unlock_irqrestore(&gpc->lock, flags); - if (kvm_gfn_to_pfn_cache_refresh(v->kvm, gpc, gpc->gpa, - sizeof(struct vcpu_info))) + if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa, + sizeof(struct vcpu_info))) return; read_lock_irqsave(&gpc->lock, flags); @@ -414,8 +413,8 @@ int __kvm_xen_has_interrupt(struct kvm_vcpu *v) sizeof_field(struct compat_vcpu_info, evtchn_upcall_pending)); read_lock_irqsave(&gpc->lock, flags); - while (!kvm_gfn_to_pfn_cache_check(v->kvm, gpc, gpc->gpa, - sizeof(struct vcpu_info))) { + while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa, + sizeof(struct vcpu_info))) { read_unlock_irqrestore(&gpc->lock, flags); /* @@ -429,8 +428,8 @@ int __kvm_xen_has_interrupt(struct kvm_vcpu *v) if (in_atomic() || !task_is_running(current)) return 1; - if (kvm_gfn_to_pfn_cache_refresh(v->kvm, gpc, gpc->gpa, - sizeof(struct vcpu_info))) { + if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa, + sizeof(struct vcpu_info))) { /* * If this failed, userspace has screwed up the * vcpu_info mapping. No interrupts for you. 
@@ -963,7 +962,7 @@ static bool wait_pending_event(struct kvm_vcpu *vcpu, int nr_ports, read_lock_irqsave(&gpc->lock, flags); idx = srcu_read_lock(&kvm->srcu); - if (!kvm_gfn_to_pfn_cache_check(kvm, gpc, gpc->gpa, PAGE_SIZE)) + if (!kvm_gpc_check(kvm, gpc, gpc->gpa, PAGE_SIZE)) goto out_rcu; ret = false; @@ -1354,7 +1353,7 @@ int kvm_xen_set_evtchn_fast(struct kvm_xen_evtchn *xe, struct kvm *kvm) idx = srcu_read_lock(&kvm->srcu); read_lock_irqsave(&gpc->lock, flags); - if (!kvm_gfn_to_pfn_cache_check(kvm, gpc, gpc->gpa, PAGE_SIZE)) + if (!kvm_gpc_check(kvm, gpc, gpc->gpa, PAGE_SIZE)) goto out_rcu; if (IS_ENABLED(CONFIG_64BIT) && kvm->arch.xen.long_mode) { @@ -1388,7 +1387,7 @@ int kvm_xen_set_evtchn_fast(struct kvm_xen_evtchn *xe, struct kvm *kvm) gpc = &vcpu->arch.xen.vcpu_info_cache; read_lock_irqsave(&gpc->lock, flags); - if (!kvm_gfn_to_pfn_cache_check(kvm, gpc, gpc->gpa, sizeof(struct vcpu_info))) { + if (!kvm_gpc_check(kvm, gpc, gpc->gpa, sizeof(struct vcpu_info))) { /* * Could not access the vcpu_info. Set the bit in-kernel * and prod the vCPU to deliver it for itself. @@ -1486,7 +1485,7 @@ static int kvm_xen_set_evtchn(struct kvm_xen_evtchn *xe, struct kvm *kvm) break; idx = srcu_read_lock(&kvm->srcu); - rc = kvm_gfn_to_pfn_cache_refresh(kvm, gpc, gpc->gpa, PAGE_SIZE); + rc = kvm_gpc_refresh(kvm, gpc, gpc->gpa, PAGE_SIZE); srcu_read_unlock(&kvm->srcu, idx); } while(!rc); diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 694c4cb6caf4..bb020ee3b2fe 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -1271,16 +1271,15 @@ void kvm_gpc_init(struct gfn_to_pfn_cache *gpc); * -EFAULT for an untranslatable guest physical address. * * This primes a gfn_to_pfn_cache and links it into the @kvm's list for - * invalidations to be processed. Callers are required to use - * kvm_gfn_to_pfn_cache_check() to ensure that the cache is valid before - * accessing the target page. + * invalidations to be processed. Callers are required to use kvm_gpc_check() + * to ensure that the cache is valid before accessing the target page. */ int kvm_gpc_activate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, struct kvm_vcpu *vcpu, enum pfn_cache_usage usage, gpa_t gpa, unsigned long len); /** - * kvm_gfn_to_pfn_cache_check - check validity of a gfn_to_pfn_cache. + * kvm_gpc_check - check validity of a gfn_to_pfn_cache. * * @kvm: pointer to kvm instance. * @gpc: struct gfn_to_pfn_cache object. @@ -1297,11 +1296,11 @@ int kvm_gpc_activate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, * Callers in IN_GUEST_MODE may do so without locking, although they should * still hold a read lock on kvm->scru for the memslot checks. */ -bool kvm_gfn_to_pfn_cache_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, - gpa_t gpa, unsigned long len); +bool kvm_gpc_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa, + unsigned long len); /** - * kvm_gfn_to_pfn_cache_refresh - update a previously initialized cache. + * kvm_gpc_refresh - update a previously initialized cache. * * @kvm: pointer to kvm instance. * @gpc: struct gfn_to_pfn_cache object. @@ -1318,11 +1317,11 @@ bool kvm_gfn_to_pfn_cache_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, * still lock and check the cache status, as this function does not return * with the lock still held to permit access. 
 */
-int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
-				 gpa_t gpa, unsigned long len);
+int kvm_gpc_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa,
+		    unsigned long len);

 /**
- * kvm_gfn_to_pfn_cache_unmap - temporarily unmap a gfn_to_pfn_cache.
+ * kvm_gpc_unmap - temporarily unmap a gfn_to_pfn_cache.
 *
 * @kvm:	   pointer to kvm instance.
 * @gpc:	   struct gfn_to_pfn_cache object.
@@ -1331,7 +1330,7 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
 * but at least the mapping from GPA to userspace HVA will remain cached
 * and can be reused on a subsequent refresh.
 */
-void kvm_gfn_to_pfn_cache_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc);
+void kvm_gpc_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc);

 /**
  * kvm_gpc_deactivate - deactivate and unlink a gfn_to_pfn_cache.
diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index 346e47f15572..23180f1d9c1c 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -76,8 +76,8 @@ void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm, unsigned long start,
	}
 }

-bool kvm_gfn_to_pfn_cache_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
-				gpa_t gpa, unsigned long len)
+bool kvm_gpc_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa,
+		   unsigned long len)
 {
	struct kvm_memslots *slots = kvm_memslots(kvm);

@@ -96,7 +96,7 @@ bool kvm_gfn_to_pfn_cache_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,

	return true;
 }
-EXPORT_SYMBOL_GPL(kvm_gfn_to_pfn_cache_check);
+EXPORT_SYMBOL_GPL(kvm_gpc_check);

 static void gpc_unmap_khva(struct kvm *kvm, kvm_pfn_t pfn, void *khva)
 {
@@ -238,8 +238,8 @@ static kvm_pfn_t hva_to_pfn_retry(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
	return -EFAULT;
 }

-int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
-				 gpa_t gpa, unsigned long len)
+int kvm_gpc_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa,
+		    unsigned long len)
 {
	struct kvm_memslots *slots = kvm_memslots(kvm);
	unsigned long page_offset = gpa & ~PAGE_MASK;
@@ -328,9 +328,9 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,

	return ret;
 }
-EXPORT_SYMBOL_GPL(kvm_gfn_to_pfn_cache_refresh);
+EXPORT_SYMBOL_GPL(kvm_gpc_refresh);

-void kvm_gfn_to_pfn_cache_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
+void kvm_gpc_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
 {
	void *old_khva;
	kvm_pfn_t old_pfn;
@@ -355,7 +355,7 @@ void kvm_gfn_to_pfn_cache_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)

	gpc_unmap_khva(kvm, old_pfn, old_khva);
 }
-EXPORT_SYMBOL_GPL(kvm_gfn_to_pfn_cache_unmap);
+EXPORT_SYMBOL_GPL(kvm_gpc_unmap);

 void kvm_gpc_init(struct gfn_to_pfn_cache *gpc)
 {
@@ -391,7 +391,7 @@ int kvm_gpc_activate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
		gpc->active = true;
		write_unlock_irq(&gpc->lock);
	}
-	return kvm_gfn_to_pfn_cache_refresh(kvm, gpc, gpa, len);
+	return kvm_gpc_refresh(kvm, gpc, gpa, len);
 }
 EXPORT_SYMBOL_GPL(kvm_gpc_activate);

@@ -411,7 +411,7 @@ void kvm_gpc_deactivate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
		list_del(&gpc->list);
		spin_unlock(&kvm->gpc_lock);

-		kvm_gfn_to_pfn_cache_unmap(kvm, gpc);
+		kvm_gpc_unmap(kvm, gpc);
	}
 }
 EXPORT_SYMBOL_GPL(kvm_gpc_deactivate);
From patchwork Thu Oct 13 21:12:23 2022
Subject: [PATCH v2 05/16] KVM: x86: Remove unused argument in gpc_unmap_khva()
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michal Luczaj, David Woodhouse
Date: Thu, 13 Oct 2022 21:12:23 +0000
Message-ID: <20221013211234.1318131-6-seanjc@google.com>
In-Reply-To: <20221013211234.1318131-1-seanjc@google.com>
References: <20221013211234.1318131-1-seanjc@google.com>

From: Michal Luczaj

Remove the unused @kvm argument from gpc_unmap_khva().
Signed-off-by: Michal Luczaj
Signed-off-by: Sean Christopherson
---
 virt/kvm/pfncache.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index 23180f1d9c1c..32ccf168361b 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -98,7 +98,7 @@ bool kvm_gpc_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa,
 }
 EXPORT_SYMBOL_GPL(kvm_gpc_check);

-static void gpc_unmap_khva(struct kvm *kvm, kvm_pfn_t pfn, void *khva)
+static void gpc_unmap_khva(kvm_pfn_t pfn, void *khva)
 {
	/* Unmap the old pfn/page if it was mapped before. */
	if (!is_error_noslot_pfn(pfn) && khva) {
@@ -177,7 +177,7 @@ static kvm_pfn_t hva_to_pfn_retry(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
	 * the existing mapping and didn't create a new one.
	 */
	if (new_khva != old_khva)
-		gpc_unmap_khva(kvm, new_pfn, new_khva);
+		gpc_unmap_khva(new_pfn, new_khva);

	kvm_release_pfn_clean(new_pfn);

@@ -324,7 +324,7 @@ int kvm_gpc_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa,
	mutex_unlock(&gpc->refresh_lock);

	if (unmap_old)
-		gpc_unmap_khva(kvm, old_pfn, old_khva);
+		gpc_unmap_khva(old_pfn, old_khva);

	return ret;
 }
@@ -353,7 +353,7 @@ void kvm_gpc_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
	write_unlock_irq(&gpc->lock);
	mutex_unlock(&gpc->refresh_lock);

-	gpc_unmap_khva(kvm, old_pfn, old_khva);
+	gpc_unmap_khva(old_pfn, old_khva);
 }
 EXPORT_SYMBOL_GPL(kvm_gpc_unmap);
From patchwork Thu Oct 13 21:12:24 2022
Subject: [PATCH v2 06/16] KVM: Store immutable gfn_to_pfn_cache properties
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michal Luczaj, David Woodhouse
Date: Thu, 13 Oct 2022 21:12:24 +0000
Message-ID: <20221013211234.1318131-7-seanjc@google.com>
In-Reply-To: <20221013211234.1318131-1-seanjc@google.com>
References: <20221013211234.1318131-1-seanjc@google.com>

From: Michal Luczaj

Move the assignment of the immutable properties @kvm, @vcpu, and @usage
to the initializer, and make _activate() and _deactivate() use the
stored values.

Note, @len is also effectively immutable, but less obviously so. Leave
@len as is for now; it will be addressed in a future patch.

Suggested-by: Sean Christopherson
Signed-off-by: Michal Luczaj
[sean: handle @len in a separate patch]
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/x86.c        | 14 +++++-------
 arch/x86/kvm/xen.c        | 47 ++++++++++++++++++---------------------
 include/linux/kvm_host.h  | 37 +++++++++++++++----------------
 include/linux/kvm_types.h |  1 +
 virt/kvm/pfncache.c       | 22 +++++++++++-------
 5 files changed, 61 insertions(+), 60 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index fd00e6a33203..9c68050672de 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2314,13 +2314,11 @@ static void kvm_write_system_time(struct kvm_vcpu *vcpu, gpa_t system_time,
	kvm_make_request(KVM_REQ_GLOBAL_CLOCK_UPDATE, vcpu);

	/* we verify if the enable bit is set...
*/ - if (system_time & 1) { - kvm_gpc_activate(vcpu->kvm, &vcpu->arch.pv_time, vcpu, - KVM_HOST_USES_PFN, system_time & ~1ULL, + if (system_time & 1) + kvm_gpc_activate(&vcpu->arch.pv_time, system_time & ~1ULL, sizeof(struct pvclock_vcpu_time_info)); - } else { - kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.pv_time); - } + else + kvm_gpc_deactivate(&vcpu->arch.pv_time); return; } @@ -3388,7 +3386,7 @@ static int kvm_pv_enable_async_pf_int(struct kvm_vcpu *vcpu, u64 data) static void kvmclock_reset(struct kvm_vcpu *vcpu) { - kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.pv_time); + kvm_gpc_deactivate(&vcpu->arch.pv_time); vcpu->arch.time = 0; } @@ -11757,7 +11755,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) vcpu->arch.regs_avail = ~0; vcpu->arch.regs_dirty = ~0; - kvm_gpc_init(&vcpu->arch.pv_time); + kvm_gpc_init(&vcpu->arch.pv_time, vcpu->kvm, vcpu, KVM_HOST_USES_PFN); if (!irqchip_in_kernel(vcpu->kvm) || kvm_vcpu_is_reset_bsp(vcpu)) vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE; diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c index 74d9f4985f93..55b7195d69d6 100644 --- a/arch/x86/kvm/xen.c +++ b/arch/x86/kvm/xen.c @@ -42,13 +42,12 @@ static int kvm_xen_shared_info_init(struct kvm *kvm, gfn_t gfn) int idx = srcu_read_lock(&kvm->srcu); if (gfn == GPA_INVALID) { - kvm_gpc_deactivate(kvm, gpc); + kvm_gpc_deactivate(gpc); goto out; } do { - ret = kvm_gpc_activate(kvm, gpc, NULL, KVM_HOST_USES_PFN, gpa, - PAGE_SIZE); + ret = kvm_gpc_activate(gpc, gpa, PAGE_SIZE); if (ret) goto out; @@ -550,15 +549,13 @@ int kvm_xen_vcpu_set_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data) offsetof(struct compat_vcpu_info, time)); if (data->u.gpa == GPA_INVALID) { - kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.xen.vcpu_info_cache); + kvm_gpc_deactivate(&vcpu->arch.xen.vcpu_info_cache); r = 0; break; } - r = kvm_gpc_activate(vcpu->kvm, - &vcpu->arch.xen.vcpu_info_cache, NULL, - KVM_HOST_USES_PFN, data->u.gpa, - sizeof(struct vcpu_info)); + r = kvm_gpc_activate(&vcpu->arch.xen.vcpu_info_cache, + data->u.gpa, sizeof(struct vcpu_info)); if (!r) kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu); @@ -566,15 +563,13 @@ int kvm_xen_vcpu_set_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data) case KVM_XEN_VCPU_ATTR_TYPE_VCPU_TIME_INFO: if (data->u.gpa == GPA_INVALID) { - kvm_gpc_deactivate(vcpu->kvm, - &vcpu->arch.xen.vcpu_time_info_cache); + kvm_gpc_deactivate(&vcpu->arch.xen.vcpu_time_info_cache); r = 0; break; } - r = kvm_gpc_activate(vcpu->kvm, - &vcpu->arch.xen.vcpu_time_info_cache, - NULL, KVM_HOST_USES_PFN, data->u.gpa, + r = kvm_gpc_activate(&vcpu->arch.xen.vcpu_time_info_cache, + data->u.gpa, sizeof(struct pvclock_vcpu_time_info)); if (!r) kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu); @@ -586,14 +581,13 @@ int kvm_xen_vcpu_set_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data) break; } if (data->u.gpa == GPA_INVALID) { - kvm_gpc_deactivate(vcpu->kvm, - &vcpu->arch.xen.runstate_cache); + kvm_gpc_deactivate(&vcpu->arch.xen.runstate_cache); r = 0; break; } - r = kvm_gpc_activate(vcpu->kvm, &vcpu->arch.xen.runstate_cache, - NULL, KVM_HOST_USES_PFN, data->u.gpa, + r = kvm_gpc_activate(&vcpu->arch.xen.runstate_cache, + data->u.gpa, sizeof(struct vcpu_runstate_info)); break; @@ -1814,9 +1808,12 @@ void kvm_xen_init_vcpu(struct kvm_vcpu *vcpu) timer_setup(&vcpu->arch.xen.poll_timer, cancel_evtchn_poll, 0); - kvm_gpc_init(&vcpu->arch.xen.runstate_cache); - kvm_gpc_init(&vcpu->arch.xen.vcpu_info_cache); - kvm_gpc_init(&vcpu->arch.xen.vcpu_time_info_cache); + 
kvm_gpc_init(&vcpu->arch.xen.runstate_cache, vcpu->kvm, NULL, + KVM_HOST_USES_PFN); + kvm_gpc_init(&vcpu->arch.xen.vcpu_info_cache, vcpu->kvm, NULL, + KVM_HOST_USES_PFN); + kvm_gpc_init(&vcpu->arch.xen.vcpu_time_info_cache, vcpu->kvm, NULL, + KVM_HOST_USES_PFN); } void kvm_xen_destroy_vcpu(struct kvm_vcpu *vcpu) @@ -1824,9 +1821,9 @@ void kvm_xen_destroy_vcpu(struct kvm_vcpu *vcpu) if (kvm_xen_timer_enabled(vcpu)) kvm_xen_stop_timer(vcpu); - kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.xen.runstate_cache); - kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.xen.vcpu_info_cache); - kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.xen.vcpu_time_info_cache); + kvm_gpc_deactivate(&vcpu->arch.xen.runstate_cache); + kvm_gpc_deactivate(&vcpu->arch.xen.vcpu_info_cache); + kvm_gpc_deactivate(&vcpu->arch.xen.vcpu_time_info_cache); del_timer_sync(&vcpu->arch.xen.poll_timer); } @@ -1834,7 +1831,7 @@ void kvm_xen_destroy_vcpu(struct kvm_vcpu *vcpu) void kvm_xen_init_vm(struct kvm *kvm) { idr_init(&kvm->arch.xen.evtchn_ports); - kvm_gpc_init(&kvm->arch.xen.shinfo_cache); + kvm_gpc_init(&kvm->arch.xen.shinfo_cache, kvm, NULL, KVM_HOST_USES_PFN); } void kvm_xen_destroy_vm(struct kvm *kvm) @@ -1842,7 +1839,7 @@ void kvm_xen_destroy_vm(struct kvm *kvm) struct evtchnfd *evtchnfd; int i; - kvm_gpc_deactivate(kvm, &kvm->arch.xen.shinfo_cache); + kvm_gpc_deactivate(&kvm->arch.xen.shinfo_cache); idr_for_each_entry(&kvm->arch.xen.evtchn_ports, evtchnfd, i) { if (!evtchnfd->deliver.port.port) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index bb020ee3b2fe..e5e70607a5ef 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -1243,18 +1243,7 @@ void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn); * kvm_gpc_init - initialize gfn_to_pfn_cache. * * @gpc: struct gfn_to_pfn_cache object. - * - * This sets up a gfn_to_pfn_cache by initializing locks. Note, the cache must - * be zero-allocated (or zeroed by the caller before init). - */ -void kvm_gpc_init(struct gfn_to_pfn_cache *gpc); - -/** - * kvm_gpc_activate - prepare a cached kernel mapping and HPA for a given guest - * physical address. - * * @kvm: pointer to kvm instance. - * @gpc: struct gfn_to_pfn_cache object. * @vcpu: vCPU to be used for marking pages dirty and to be woken on * invalidation. * @usage: indicates if the resulting host physical PFN is used while @@ -1263,20 +1252,31 @@ void kvm_gpc_init(struct gfn_to_pfn_cache *gpc); * changes!---will also force @vcpu to exit the guest and * refresh the cache); and/or if the PFN used directly * by KVM (and thus needs a kernel virtual mapping). + * + * This sets up a gfn_to_pfn_cache by initializing locks and assigning the + * immutable attributes. Note, the cache must be zero-allocated (or zeroed by + * the caller before init). + */ +void kvm_gpc_init(struct gfn_to_pfn_cache *gpc, struct kvm *kvm, + struct kvm_vcpu *vcpu, enum pfn_cache_usage usage); + +/** + * kvm_gpc_activate - prepare a cached kernel mapping and HPA for a given guest + * physical address. + * + * @gpc: struct gfn_to_pfn_cache object. * @gpa: guest physical address to map. * @len: sanity check; the range being access must fit a single page. * * @return: 0 for success. * -EINVAL for a mapping which would cross a page boundary. - * -EFAULT for an untranslatable guest physical address. + * -EFAULT for an untranslatable guest physical address. 
* - This primes a gfn_to_pfn_cache and links it into the @kvm's list for + * This primes a gfn_to_pfn_cache and links it into the @gpc->kvm's list for * invalidations to be processed. Callers are required to use kvm_gpc_check() * to ensure that the cache is valid before accessing the target page. */ -int kvm_gpc_activate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, - struct kvm_vcpu *vcpu, enum pfn_cache_usage usage, - gpa_t gpa, unsigned long len); +int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa, unsigned long len); /** * kvm_gpc_check - check validity of a gfn_to_pfn_cache. @@ -1335,13 +1335,12 @@ void kvm_gpc_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc); /** * kvm_gpc_deactivate - deactivate and unlink a gfn_to_pfn_cache. * - * @kvm: pointer to kvm instance. * @gpc: struct gfn_to_pfn_cache object. * - * This removes a cache from the @kvm's list to be processed on MMU notifier + * This removes a cache from the VM's list to be processed on MMU notifier * invocation. */ -void kvm_gpc_deactivate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc); +void kvm_gpc_deactivate(struct gfn_to_pfn_cache *gpc); void kvm_sigset_activate(struct kvm_vcpu *vcpu); void kvm_sigset_deactivate(struct kvm_vcpu *vcpu); diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h index 3ca3db020e0e..76de36e56cdf 100644 --- a/include/linux/kvm_types.h +++ b/include/linux/kvm_types.h @@ -67,6 +67,7 @@ struct gfn_to_pfn_cache { gpa_t gpa; unsigned long uhva; struct kvm_memory_slot *memslot; + struct kvm *kvm; struct kvm_vcpu *vcpu; struct list_head list; rwlock_t lock; diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c index 32ccf168361b..6756dfa60d5a 100644 --- a/virt/kvm/pfncache.c +++ b/virt/kvm/pfncache.c @@ -357,25 +357,29 @@ void kvm_gpc_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc) } EXPORT_SYMBOL_GPL(kvm_gpc_unmap); -void kvm_gpc_init(struct gfn_to_pfn_cache *gpc) +void kvm_gpc_init(struct gfn_to_pfn_cache *gpc, struct kvm *kvm, + struct kvm_vcpu *vcpu, enum pfn_cache_usage usage) { + WARN_ON_ONCE(!usage || (usage & KVM_GUEST_AND_HOST_USE_PFN) != usage); + WARN_ON_ONCE((usage & KVM_GUEST_USES_PFN) && !vcpu); + rwlock_init(&gpc->lock); mutex_init(&gpc->refresh_lock); + + gpc->kvm = kvm; + gpc->vcpu = vcpu; + gpc->usage = usage; } EXPORT_SYMBOL_GPL(kvm_gpc_init); -int kvm_gpc_activate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, - struct kvm_vcpu *vcpu, enum pfn_cache_usage usage, - gpa_t gpa, unsigned long len) +int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa, unsigned long len) { - WARN_ON_ONCE(!usage || (usage & KVM_GUEST_AND_HOST_USE_PFN) != usage); + struct kvm *kvm = gpc->kvm; if (!gpc->active) { gpc->khva = NULL; gpc->pfn = KVM_PFN_ERR_FAULT; gpc->uhva = KVM_HVA_ERR_BAD; gpc->valid = false; spin_lock(&kvm->gpc_lock); @@ -395,8 +399,10 @@ int kvm_gpc_activate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, } EXPORT_SYMBOL_GPL(kvm_gpc_activate); -void kvm_gpc_deactivate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc) +void kvm_gpc_deactivate(struct gfn_to_pfn_cache *gpc) { + struct kvm *kvm = gpc->kvm; + if (gpc->active) { /* * Deactivate the cache before removing it from the list, KVM
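At this point in the series the API is two-phase: kvm_gpc_init() fixes the immutable properties (kvm, vcpu, usage) once at VM/vCPU creation, while kvm_gpc_activate()/kvm_gpc_deactivate() bind and unbind a guest physical address at runtime. A minimal usage sketch (illustrative only, not code from the series; gpa, len and flags are assumed locals, and check() still takes explicit @kvm/@gpa/@len until the later patches below):

	struct gfn_to_pfn_cache *gpc = &vcpu->arch.pv_time;
	unsigned long flags;

	/* VM/vCPU creation: initialize locks and immutable properties once. */
	kvm_gpc_init(gpc, vcpu->kvm, vcpu, KVM_HOST_USES_PFN);

	/* Runtime: map a specific guest physical address. */
	if (!kvm_gpc_activate(gpc, gpa, len)) {
		read_lock_irqsave(&gpc->lock, flags);
		if (kvm_gpc_check(vcpu->kvm, gpc, gpa, len)) {
			/* ... access gpc->khva while holding gpc->lock ... */
		}
		read_unlock_irqrestore(&gpc->lock, flags);
	}

	/* Teardown (or guest disable): unlink from the VM and unmap. */
	kvm_gpc_deactivate(gpc);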
From patchwork Thu Oct 13 21:12:25 2022 Reply-To: Sean Christopherson Date: Thu, 13 Oct 2022 21:12:25 +0000 In-Reply-To: <20221013211234.1318131-1-seanjc@google.com> References: <20221013211234.1318131-1-seanjc@google.com> Message-ID: <20221013211234.1318131-8-seanjc@google.com> Subject: [PATCH v2 07/16] KVM: Store gfn_to_pfn_cache length as an immutable property From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michal Luczaj , David Woodhouse From: Michal Luczaj Make the length of a gfn=>pfn cache an immutable property of the cache to clean up the APIs and avoid potential bugs, e.g. calling check() with a larger size than refresh() could put KVM into an infinite loop.
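To make the hazard concrete, a sketch with the pre-patch per-call sizes (illustrative only; SMALL_LEN and BIG_LEN are hypothetical lengths such that the page offset plus SMALL_LEN fits in one page while the page offset plus BIG_LEN does not):

	read_lock_irqsave(&gpc->lock, flags);
	while (!kvm_gpc_check(kvm, gpc, gpc->gpa, BIG_LEN)) {
		/* check() fails its page-crossing test every iteration... */
		read_unlock_irqrestore(&gpc->lock, flags);

		/* ...while refresh() with the smaller size happily succeeds, */
		if (kvm_gpc_refresh(kvm, gpc, gpc->gpa, SMALL_LEN))
			return;

		/* so the loop never terminates. */
		read_lock_irqsave(&gpc->lock, flags);
	}

Storing the length at init time means check() and refresh() judge the same range by construction.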
All current (and anticipated future) users access the cache with a predetermined size, which isn't a coincidence as using a dedicated cache really only makes sense when the access pattern is "fixed". Add a WARN in kvm_setup_guest_pvclock() to assert that the offset+size matches the length of the cache, both to make it more obvious that the length really is immutable in that case, and to detect future bugs. No functional change intended. Signed-off-by: Michal Luczaj Co-developed-by: Sean Christopherson Signed-off-by: Sean Christopherson --- arch/x86/kvm/x86.c | 14 ++++++------ arch/x86/kvm/xen.c | 46 ++++++++++++++++----------------- include/linux/kvm_host.h | 14 +++++------- include/linux/kvm_types.h | 1 + virt/kvm/pfncache.c | 18 +++++++-------- 5 files changed, 42 insertions(+), 51 deletions(-) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 9c68050672de..0b4fa3455f3a 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -2315,8 +2315,7 @@ static void kvm_write_system_time(struct kvm_vcpu *vcpu, gpa_t system_time, /* we verify if the enable bit is set... */ if (system_time & 1) - kvm_gpc_activate(&vcpu->arch.pv_time, system_time & ~1ULL, - sizeof(struct pvclock_vcpu_time_info)); + kvm_gpc_activate(&vcpu->arch.pv_time, system_time & ~1ULL); else kvm_gpc_deactivate(&vcpu->arch.pv_time); @@ -3031,13 +3030,13 @@ static void kvm_setup_guest_pvclock(struct kvm_vcpu *v, struct pvclock_vcpu_time_info *guest_hv_clock; unsigned long flags; + WARN_ON_ONCE(gpc->len != offset + sizeof(*guest_hv_clock)); + read_lock_irqsave(&gpc->lock, flags); - while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa, - offset + sizeof(*guest_hv_clock))) { + while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa)) { read_unlock_irqrestore(&gpc->lock, flags); - if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa, - offset + sizeof(*guest_hv_clock))) + if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa)) return; read_lock_irqsave(&gpc->lock, flags); @@ -11755,7 +11754,8 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) vcpu->arch.regs_avail = ~0; vcpu->arch.regs_dirty = ~0; - kvm_gpc_init(&vcpu->arch.pv_time, vcpu->kvm, vcpu, KVM_HOST_USES_PFN); + kvm_gpc_init(&vcpu->arch.pv_time, vcpu->kvm, vcpu, KVM_HOST_USES_PFN, + sizeof(struct pvclock_vcpu_time_info)); if (!irqchip_in_kernel(vcpu->kvm) || kvm_vcpu_is_reset_bsp(vcpu)) vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE; diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c index 55b7195d69d6..6f5a5507392e 100644 --- a/arch/x86/kvm/xen.c +++ b/arch/x86/kvm/xen.c @@ -47,7 +47,7 @@ static int kvm_xen_shared_info_init(struct kvm *kvm, gfn_t gfn) } do { - ret = kvm_gpc_activate(gpc, gpa, PAGE_SIZE); + ret = kvm_gpc_activate(gpc, gpa); if (ret) goto out; @@ -203,7 +203,6 @@ void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, int state) struct gfn_to_pfn_cache *gpc = &vx->runstate_cache; uint64_t *user_times; unsigned long flags; - size_t user_len; int *user_state; kvm_xen_update_runstate(v, state); @@ -211,17 +210,15 @@ void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, int state) if (!vx->runstate_cache.active) return; - user_len = sizeof(struct vcpu_runstate_info); - read_lock_irqsave(&gpc->lock, flags); - while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa, user_len)) { + while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa)) { read_unlock_irqrestore(&gpc->lock, flags); /* When invoked from kvm_sched_out() we cannot sleep */ if (state == RUNSTATE_runnable) return; - if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa, user_len)) + if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa)) return; read_lock_irqsave(&gpc->lock,
flags); @@ -347,12 +344,10 @@ void kvm_xen_inject_pending_events(struct kvm_vcpu *v) * little more honest about it. */ read_lock_irqsave(&gpc->lock, flags); - while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa, - sizeof(struct vcpu_info))) { + while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa)) { read_unlock_irqrestore(&gpc->lock, flags); - if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa, - sizeof(struct vcpu_info))) + if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa)) return; read_lock_irqsave(&gpc->lock, flags); @@ -412,8 +407,7 @@ int __kvm_xen_has_interrupt(struct kvm_vcpu *v) sizeof_field(struct compat_vcpu_info, evtchn_upcall_pending)); read_lock_irqsave(&gpc->lock, flags); - while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa, - sizeof(struct vcpu_info))) { + while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa)) { read_unlock_irqrestore(&gpc->lock, flags); /* @@ -427,8 +421,7 @@ int __kvm_xen_has_interrupt(struct kvm_vcpu *v) if (in_atomic() || !task_is_running(current)) return 1; - if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa, - sizeof(struct vcpu_info))) { + if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa)) { /* * If this failed, userspace has screwed up the * vcpu_info mapping. No interrupts for you. @@ -555,7 +548,7 @@ int kvm_xen_vcpu_set_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data) } r = kvm_gpc_activate(&vcpu->arch.xen.vcpu_info_cache, - data->u.gpa, sizeof(struct vcpu_info)); + data->u.gpa); if (!r) kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu); @@ -569,8 +562,7 @@ int kvm_xen_vcpu_set_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data) } r = kvm_gpc_activate(&vcpu->arch.xen.vcpu_time_info_cache, - data->u.gpa, - sizeof(struct pvclock_vcpu_time_info)); + data->u.gpa); if (!r) kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu); break; @@ -587,8 +579,7 @@ int kvm_xen_vcpu_set_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data) } r = kvm_gpc_activate(&vcpu->arch.xen.runstate_cache, - data->u.gpa, - sizeof(struct vcpu_runstate_info)); + data->u.gpa); break; case KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_CURRENT: @@ -956,7 +947,7 @@ static bool wait_pending_event(struct kvm_vcpu *vcpu, int nr_ports, read_lock_irqsave(&gpc->lock, flags); idx = srcu_read_lock(&kvm->srcu); - if (!kvm_gpc_check(kvm, gpc, gpc->gpa, PAGE_SIZE)) + if (!kvm_gpc_check(kvm, gpc, gpc->gpa)) goto out_rcu; ret = false; @@ -1347,7 +1338,7 @@ int kvm_xen_set_evtchn_fast(struct kvm_xen_evtchn *xe, struct kvm *kvm) idx = srcu_read_lock(&kvm->srcu); read_lock_irqsave(&gpc->lock, flags); - if (!kvm_gpc_check(kvm, gpc, gpc->gpa, PAGE_SIZE)) + if (!kvm_gpc_check(kvm, gpc, gpc->gpa)) goto out_rcu; if (IS_ENABLED(CONFIG_64BIT) && kvm->arch.xen.long_mode) { @@ -1381,7 +1372,7 @@ int kvm_xen_set_evtchn_fast(struct kvm_xen_evtchn *xe, struct kvm *kvm) gpc = &vcpu->arch.xen.vcpu_info_cache; read_lock_irqsave(&gpc->lock, flags); - if (!kvm_gpc_check(kvm, gpc, gpc->gpa, sizeof(struct vcpu_info))) { + if (!kvm_gpc_check(kvm, gpc, gpc->gpa)) { /* * Could not access the vcpu_info. Set the bit in-kernel * and prod the vCPU to deliver it for itself. 
@@ -1479,7 +1470,7 @@ static int kvm_xen_set_evtchn(struct kvm_xen_evtchn *xe, struct kvm *kvm) break; idx = srcu_read_lock(&kvm->srcu); - rc = kvm_gpc_refresh(kvm, gpc, gpc->gpa, PAGE_SIZE); + rc = kvm_gpc_refresh(kvm, gpc, gpc->gpa); srcu_read_unlock(&kvm->srcu, idx); } while(!rc); @@ -1809,11 +1800,11 @@ void kvm_xen_init_vcpu(struct kvm_vcpu *vcpu) timer_setup(&vcpu->arch.xen.poll_timer, cancel_evtchn_poll, 0); kvm_gpc_init(&vcpu->arch.xen.runstate_cache, vcpu->kvm, NULL, - KVM_HOST_USES_PFN); + KVM_HOST_USES_PFN, sizeof(struct vcpu_runstate_info)); kvm_gpc_init(&vcpu->arch.xen.vcpu_info_cache, vcpu->kvm, NULL, - KVM_HOST_USES_PFN); + KVM_HOST_USES_PFN, sizeof(struct vcpu_info)); kvm_gpc_init(&vcpu->arch.xen.vcpu_time_info_cache, vcpu->kvm, NULL, - KVM_HOST_USES_PFN); + KVM_HOST_USES_PFN, sizeof(struct pvclock_vcpu_time_info)); } void kvm_xen_destroy_vcpu(struct kvm_vcpu *vcpu) @@ -1831,7 +1822,8 @@ void kvm_xen_destroy_vcpu(struct kvm_vcpu *vcpu) void kvm_xen_init_vm(struct kvm *kvm) { idr_init(&kvm->arch.xen.evtchn_ports); - kvm_gpc_init(&kvm->arch.xen.shinfo_cache, kvm, NULL, KVM_HOST_USES_PFN); + kvm_gpc_init(&kvm->arch.xen.shinfo_cache, kvm, NULL, KVM_HOST_USES_PFN, + PAGE_SIZE); } void kvm_xen_destroy_vm(struct kvm *kvm) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index e5e70607a5ef..c79f2e122ac8 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -1252,13 +1252,15 @@ void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn); * changes!---will also force @vcpu to exit the guest and * refresh the cache); and/or if the PFN used directly * by KVM (and thus needs a kernel virtual mapping). + * @len: sanity check; the range being access must fit a single page. * * This sets up a gfn_to_pfn_cache by initializing locks and assigning the * immutable attributes. Note, the cache must be zero-allocated (or zeroed by * the caller before init). */ void kvm_gpc_init(struct gfn_to_pfn_cache *gpc, struct kvm *kvm, - struct kvm_vcpu *vcpu, enum pfn_cache_usage usage); + struct kvm_vcpu *vcpu, enum pfn_cache_usage usage, + unsigned long len); /** * kvm_gpc_activate - prepare a cached kernel mapping and HPA for a given guest @@ -1266,7 +1268,6 @@ void kvm_gpc_init(struct gfn_to_pfn_cache *gpc, struct kvm *kvm, * * @gpc: struct gfn_to_pfn_cache object. * @gpa: guest physical address to map. - * @len: sanity check; the range being access must fit a single page. * * @return: 0 for success. * -EINVAL for a mapping which would cross a page boundary. @@ -1276,7 +1277,7 @@ void kvm_gpc_init(struct gfn_to_pfn_cache *gpc, struct kvm *kvm, * invalidations to be processed. Callers are required to use kvm_gpc_check() * to ensure that the cache is valid before accessing the target page. */ -int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa, unsigned long len); +int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa); /** * kvm_gpc_check - check validity of a gfn_to_pfn_cache. @@ -1284,7 +1285,6 @@ int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa, unsigned long len) * @kvm: pointer to kvm instance. * @gpc: struct gfn_to_pfn_cache object. * @gpa: current guest physical address to map. - * @len: sanity check; the range being access must fit a single page. * * @return: %true if the cache is still valid and the address matches. * %false if the cache is not valid. 
@@ -1296,8 +1296,7 @@ int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa, unsigned long len) * Callers in IN_GUEST_MODE may do so without locking, although they should * still hold a read lock on kvm->scru for the memslot checks. */ -bool kvm_gpc_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa, - unsigned long len); +bool kvm_gpc_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa); /** * kvm_gpc_refresh - update a previously initialized cache. @@ -1317,8 +1316,7 @@ bool kvm_gpc_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa, * still lock and check the cache status, as this function does not return * with the lock still held to permit access. */ -int kvm_gpc_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa, - unsigned long len); +int kvm_gpc_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa); /** * kvm_gpc_unmap - temporarily unmap a gfn_to_pfn_cache. diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h index 76de36e56cdf..d66b276d29e0 100644 --- a/include/linux/kvm_types.h +++ b/include/linux/kvm_types.h @@ -74,6 +74,7 @@ struct gfn_to_pfn_cache { struct mutex refresh_lock; void *khva; kvm_pfn_t pfn; + unsigned long len; enum pfn_cache_usage usage; bool active; bool valid; diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c index 6756dfa60d5a..34883ad12536 100644 --- a/virt/kvm/pfncache.c +++ b/virt/kvm/pfncache.c @@ -76,15 +76,14 @@ void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm, unsigned long start, } } -bool kvm_gpc_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa, - unsigned long len) +bool kvm_gpc_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa) { struct kvm_memslots *slots = kvm_memslots(kvm); if (!gpc->active) return false; - if ((gpa & ~PAGE_MASK) + len > PAGE_SIZE) + if ((gpa & ~PAGE_MASK) + gpc->len > PAGE_SIZE) return false; if (gpc->gpa != gpa || gpc->generation != slots->generation || @@ -238,8 +237,7 @@ static kvm_pfn_t hva_to_pfn_retry(struct kvm *kvm, struct gfn_to_pfn_cache *gpc) return -EFAULT; } -int kvm_gpc_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa, - unsigned long len) +int kvm_gpc_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa) { struct kvm_memslots *slots = kvm_memslots(kvm); unsigned long page_offset = gpa & ~PAGE_MASK; @@ -253,7 +251,7 @@ int kvm_gpc_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa, * If must fit within a single page. The 'len' argument is * only to enforce that. 
*/ - if (page_offset + len > PAGE_SIZE) + if (page_offset + gpc->len > PAGE_SIZE) return -EINVAL; /* @@ -358,7 +356,8 @@ void kvm_gpc_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc) EXPORT_SYMBOL_GPL(kvm_gpc_unmap); void kvm_gpc_init(struct gfn_to_pfn_cache *gpc, struct kvm *kvm, - struct kvm_vcpu *vcpu, enum pfn_cache_usage usage) + struct kvm_vcpu *vcpu, enum pfn_cache_usage usage, + unsigned long len) { WARN_ON_ONCE(!usage || (usage & KVM_GUEST_AND_HOST_USE_PFN) != usage); WARN_ON_ONCE((usage & KVM_GUEST_USES_PFN) && !vcpu); @@ -369,10 +368,11 @@ void kvm_gpc_init(struct gfn_to_pfn_cache *gpc, struct kvm *kvm, gpc->kvm = kvm; gpc->vcpu = vcpu; gpc->usage = usage; + gpc->len = len; } EXPORT_SYMBOL_GPL(kvm_gpc_init); -int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa, unsigned long len) +int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa) { struct kvm *kvm = gpc->kvm; @@ -395,7 +395,7 @@ int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa, unsigned long len) gpc->active = true; write_unlock_irq(&gpc->lock); } - return kvm_gpc_refresh(kvm, gpc, gpa, len); + return kvm_gpc_refresh(kvm, gpc, gpa); } EXPORT_SYMBOL_GPL(kvm_gpc_activate); From patchwork Thu Oct 13 21:12:26 2022
Reply-To: Sean Christopherson Date: Thu, 13 Oct 2022 21:12:26 +0000 In-Reply-To: <20221013211234.1318131-1-seanjc@google.com> References: <20221013211234.1318131-1-seanjc@google.com> Message-ID: <20221013211234.1318131-9-seanjc@google.com> Subject: [PATCH v2 08/16] KVM: Use gfn_to_pfn_cache's immutable "kvm" in kvm_gpc_check() From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michal Luczaj , David Woodhouse From: Michal Luczaj Make kvm_gpc_check() use the kvm instance cached in gfn_to_pfn_cache. Suggested-by: Sean Christopherson Signed-off-by: Michal Luczaj Signed-off-by: Sean Christopherson --- arch/x86/kvm/x86.c | 2 +- arch/x86/kvm/xen.c | 12 ++++++------ include/linux/kvm_host.h | 3 +-- virt/kvm/pfncache.c | 4 ++-- 4 files changed, 10 insertions(+), 11 deletions(-) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 0b4fa3455f3a..b357a84f8c49 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -3033,7 +3033,7 @@ static void kvm_setup_guest_pvclock(struct kvm_vcpu *v, WARN_ON_ONCE(gpc->len != offset + sizeof(*guest_hv_clock)); read_lock_irqsave(&gpc->lock, flags); - while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa)) { + while (!kvm_gpc_check(gpc, gpc->gpa)) { read_unlock_irqrestore(&gpc->lock, flags); if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa)) diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c index 6f5a5507392e..c7304f37c438 100644 --- a/arch/x86/kvm/xen.c +++ b/arch/x86/kvm/xen.c @@ -211,7 +211,7 @@ void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, int state) return; read_lock_irqsave(&gpc->lock, flags); - while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa)) { + while (!kvm_gpc_check(gpc, gpc->gpa)) { read_unlock_irqrestore(&gpc->lock, flags); /* When invoked from kvm_sched_out() we cannot sleep */ @@ -344,7 +344,7 @@ void kvm_xen_inject_pending_events(struct kvm_vcpu *v) * little more honest about it.
*/ read_lock_irqsave(&gpc->lock, flags); - while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa)) { + while (!kvm_gpc_check(gpc, gpc->gpa)) { read_unlock_irqrestore(&gpc->lock, flags); if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa)) @@ -407,7 +407,7 @@ int __kvm_xen_has_interrupt(struct kvm_vcpu *v) sizeof_field(struct compat_vcpu_info, evtchn_upcall_pending)); read_lock_irqsave(&gpc->lock, flags); - while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa)) { + while (!kvm_gpc_check(gpc, gpc->gpa)) { read_unlock_irqrestore(&gpc->lock, flags); /* @@ -947,7 +947,7 @@ static bool wait_pending_event(struct kvm_vcpu *vcpu, int nr_ports, read_lock_irqsave(&gpc->lock, flags); idx = srcu_read_lock(&kvm->srcu); - if (!kvm_gpc_check(kvm, gpc, gpc->gpa)) + if (!kvm_gpc_check(gpc, gpc->gpa)) goto out_rcu; ret = false; @@ -1338,7 +1338,7 @@ int kvm_xen_set_evtchn_fast(struct kvm_xen_evtchn *xe, struct kvm *kvm) idx = srcu_read_lock(&kvm->srcu); read_lock_irqsave(&gpc->lock, flags); - if (!kvm_gpc_check(kvm, gpc, gpc->gpa)) + if (!kvm_gpc_check(gpc, gpc->gpa)) goto out_rcu; if (IS_ENABLED(CONFIG_64BIT) && kvm->arch.xen.long_mode) { @@ -1372,7 +1372,7 @@ int kvm_xen_set_evtchn_fast(struct kvm_xen_evtchn *xe, struct kvm *kvm) gpc = &vcpu->arch.xen.vcpu_info_cache; read_lock_irqsave(&gpc->lock, flags); - if (!kvm_gpc_check(kvm, gpc, gpc->gpa)) { + if (!kvm_gpc_check(gpc, gpc->gpa)) { /* * Could not access the vcpu_info. Set the bit in-kernel * and prod the vCPU to deliver it for itself. diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index c79f2e122ac8..ad8ef7f2d705 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -1282,7 +1282,6 @@ int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa); /** * kvm_gpc_check - check validity of a gfn_to_pfn_cache. * - * @kvm: pointer to kvm instance. * @gpc: struct gfn_to_pfn_cache object. * @gpa: current guest physical address to map. * @@ -1296,7 +1295,7 @@ int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa); * Callers in IN_GUEST_MODE may do so without locking, although they should * still hold a read lock on kvm->scru for the memslot checks. */ -bool kvm_gpc_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa); +bool kvm_gpc_check(struct gfn_to_pfn_cache *gpc, gpa_t gpa); /** * kvm_gpc_refresh - update a previously initialized cache. 
diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c index 34883ad12536..6fe76fb4d228 100644 --- a/virt/kvm/pfncache.c +++ b/virt/kvm/pfncache.c @@ -76,9 +76,9 @@ void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm, unsigned long start, } } -bool kvm_gpc_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa) +bool kvm_gpc_check(struct gfn_to_pfn_cache *gpc, gpa_t gpa) { - struct kvm_memslots *slots = kvm_memslots(kvm); + struct kvm_memslots *slots = kvm_memslots(gpc->kvm); if (!gpc->active) return false; From patchwork Thu Oct 13 21:12:27 2022 Reply-To: Sean Christopherson Date: Thu, 13 Oct 2022 21:12:27 +0000
In-Reply-To: <20221013211234.1318131-1-seanjc@google.com> References: <20221013211234.1318131-1-seanjc@google.com> Message-ID: <20221013211234.1318131-10-seanjc@google.com> Subject: [PATCH v2 09/16] KVM: Clean up hva_to_pfn_retry() From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michal Luczaj , David Woodhouse From: Michal Luczaj Make hva_to_pfn_retry() use the kvm instance cached in gfn_to_pfn_cache. Suggested-by: Sean Christopherson Signed-off-by: Michal Luczaj Signed-off-by: Sean Christopherson --- virt/kvm/pfncache.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c index 6fe76fb4d228..ef7ac1666847 100644 --- a/virt/kvm/pfncache.c +++ b/virt/kvm/pfncache.c @@ -138,7 +138,7 @@ static inline bool mmu_notifier_retry_cache(struct kvm *kvm, unsigned long mmu_s return kvm->mmu_invalidate_seq != mmu_seq; } -static kvm_pfn_t hva_to_pfn_retry(struct kvm *kvm, struct gfn_to_pfn_cache *gpc) +static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc) { /* Note, the new page offset may be different than the old! */ void *old_khva = gpc->khva - offset_in_page(gpc->khva); @@ -158,7 +158,7 @@ static kvm_pfn_t hva_to_pfn_retry(struct kvm *kvm, struct gfn_to_pfn_cache *gpc) gpc->valid = false; do { - mmu_seq = kvm->mmu_invalidate_seq; + mmu_seq = gpc->kvm->mmu_invalidate_seq; smp_rmb(); write_unlock_irq(&gpc->lock); @@ -216,7 +216,7 @@ static kvm_pfn_t hva_to_pfn_retry(struct kvm *kvm, struct gfn_to_pfn_cache *gpc) * attempting to refresh. */ WARN_ON_ONCE(gpc->valid); - } while (mmu_notifier_retry_cache(kvm, mmu_seq)); + } while (mmu_notifier_retry_cache(gpc->kvm, mmu_seq)); gpc->valid = true; gpc->pfn = new_pfn; @@ -293,7 +293,7 @@ int kvm_gpc_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa) * drop the lock and do the HVA to PFN lookup again. */ if (!gpc->valid || old_uhva != gpc->uhva) { - ret = hva_to_pfn_retry(kvm, gpc); + ret = hva_to_pfn_retry(gpc); } else { /* If the HVA→PFN mapping was already valid, don't unmap it.
*/ old_pfn = KVM_PFN_ERR_FAULT; From patchwork Thu Oct 13 21:12:28 2022 Reply-To: Sean Christopherson Date: Thu, 13 Oct 2022 21:12:28 +0000 In-Reply-To: <20221013211234.1318131-1-seanjc@google.com> References: <20221013211234.1318131-1-seanjc@google.com> Message-ID: <20221013211234.1318131-11-seanjc@google.com> Subject: [PATCH v2 10/16] KVM: Use gfn_to_pfn_cache's immutable "kvm" in kvm_gpc_refresh() From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michal Luczaj , David Woodhouse
From: Michal Luczaj Make kvm_gpc_refresh() use the kvm instance cached in gfn_to_pfn_cache. No functional change intended. Suggested-by: Sean Christopherson Signed-off-by: Michal Luczaj [sean: leave kvm_gpc_unmap() as-is] Signed-off-by: Sean Christopherson --- arch/x86/kvm/x86.c | 2 +- arch/x86/kvm/xen.c | 8 ++++---- include/linux/kvm_host.h | 8 +++----- virt/kvm/pfncache.c | 6 +++--- 4 files changed, 11 insertions(+), 13 deletions(-) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index b357a84f8c49..d370d06bb07a 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -3036,7 +3036,7 @@ static void kvm_setup_guest_pvclock(struct kvm_vcpu *v, while (!kvm_gpc_check(gpc, gpc->gpa)) { read_unlock_irqrestore(&gpc->lock, flags); - if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa)) + if (kvm_gpc_refresh(gpc, gpc->gpa)) return; read_lock_irqsave(&gpc->lock, flags); diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c index c7304f37c438..920ba5ca3016 100644 --- a/arch/x86/kvm/xen.c +++ b/arch/x86/kvm/xen.c @@ -218,7 +218,7 @@ void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, int state) if (state == RUNSTATE_runnable) return; - if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa)) + if (kvm_gpc_refresh(gpc, gpc->gpa)) return; read_lock_irqsave(&gpc->lock, flags); @@ -347,7 +347,7 @@ void kvm_xen_inject_pending_events(struct kvm_vcpu *v) while (!kvm_gpc_check(gpc, gpc->gpa)) { read_unlock_irqrestore(&gpc->lock, flags); - if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa)) + if (kvm_gpc_refresh(gpc, gpc->gpa)) return; read_lock_irqsave(&gpc->lock, flags); @@ -421,7 +421,7 @@ int __kvm_xen_has_interrupt(struct kvm_vcpu *v) if (in_atomic() || !task_is_running(current)) return 1; - if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa)) { + if (kvm_gpc_refresh(gpc, gpc->gpa)) { /* * If this failed, userspace has screwed up the * vcpu_info mapping. No interrupts for you. @@ -1470,7 +1470,7 @@ static int kvm_xen_set_evtchn(struct kvm_xen_evtchn *xe, struct kvm *kvm) break; idx = srcu_read_lock(&kvm->srcu); - rc = kvm_gpc_refresh(kvm, gpc, gpc->gpa); + rc = kvm_gpc_refresh(gpc, gpc->gpa); srcu_read_unlock(&kvm->srcu, idx); } while(!rc); diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index ad8ef7f2d705..b63d2abbef56 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -1300,22 +1300,20 @@ bool kvm_gpc_check(struct gfn_to_pfn_cache *gpc, gpa_t gpa); /** * kvm_gpc_refresh - update a previously initialized cache. * - * @kvm: pointer to kvm instance. * @gpc: struct gfn_to_pfn_cache object. * @gpa: updated guest physical address to map. - * @len: sanity check; the range being access must fit a single page. * * @return: 0 for success. * -EINVAL for a mapping which would cross a page boundary. - * -EFAULT for an untranslatable guest physical address. + * -EFAULT for an untranslatable guest physical address. * * This will attempt to refresh a gfn_to_pfn_cache. Note that a successful - * returm from this function does not mean the page can be immediately + * return from this function does not mean the page can be immediately * accessed because it may have raced with an invalidation. Callers must * still lock and check the cache status, as this function does not return * with the lock still held to permit access. */ -int kvm_gpc_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa); +int kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa); /** * kvm_gpc_unmap - temporarily unmap a gfn_to_pfn_cache.
diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c index ef7ac1666847..432b150bd9f1 100644 --- a/virt/kvm/pfncache.c +++ b/virt/kvm/pfncache.c @@ -237,9 +237,9 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc) return -EFAULT; } -int kvm_gpc_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa) +int kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa) { - struct kvm_memslots *slots = kvm_memslots(kvm); + struct kvm_memslots *slots = kvm_memslots(gpc->kvm); unsigned long page_offset = gpa & ~PAGE_MASK; bool unmap_old = false; unsigned long old_uhva; @@ -395,7 +395,7 @@ int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa) gpc->active = true; write_unlock_irq(&gpc->lock); } - return kvm_gpc_refresh(kvm, gpc, gpa); + return kvm_gpc_refresh(gpc, gpa); } EXPORT_SYMBOL_GPL(kvm_gpc_activate);
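Neither cleanup changes hva_to_pfn_retry()'s invalidation handshake, which, distilled to its core (an illustrative reduction of the patch 09 hunk above, not a drop-in), snapshots the VM's invalidation sequence, resolves the pfn outside gpc->lock, and retries if an MMU notifier ran in the meantime:

	do {
		mmu_seq = gpc->kvm->mmu_invalidate_seq;
		smp_rmb();

		/* ... drop gpc->lock, do the hva=>pfn lookup and kernel
		 * mapping, then re-take gpc->lock ... */
	} while (mmu_notifier_retry_cache(gpc->kvm, mmu_seq));

	gpc->valid = true;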
From patchwork Thu Oct 13 21:12:29 2022 Reply-To: Sean Christopherson Date: Thu, 13 Oct 2022 21:12:29 +0000 In-Reply-To: <20221013211234.1318131-1-seanjc@google.com> References: <20221013211234.1318131-1-seanjc@google.com> Message-ID: <20221013211234.1318131-12-seanjc@google.com> Subject: [PATCH v2 11/16] KVM: Drop KVM's API to allow temporarily unmapping gfn=>pfn cache From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michal Luczaj , David Woodhouse Drop kvm_gpc_unmap() as it has no users and unclear requirements. The API was added as part of the original gfn_to_pfn_cache support, but its sole usage[*] was never merged. Fold the guts of kvm_gpc_unmap() into the deactivate path and drop the API. Omit acquiring refresh_lock as concurrent calls to kvm_gpc_deactivate() are not allowed (this is not enforced, e.g. via lockdep, due to it being called during vCPU destruction). If/when temporary unmapping makes a comeback, the desirable behavior is likely to restrict temporary unmapping to vCPU-exclusive mappings and require the vcpu->mutex be held to serialize unmap. Use of the refresh_lock to protect unmapping was somewhat speculatively added by commit 93984f19e7bc ("KVM: Fully serialize gfn=>pfn cache refresh via mutex") to guard against concurrent unmaps, but the primary use case of the temporary unmap, nested virtualization[*], doesn't actually need or want concurrent unmaps. [*] https://lore.kernel.org/all/20211210163625.2886-7-dwmw2@infradead.org
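To illustrate that suggested future direction (a purely hypothetical sketch; neither this helper nor the assertion exists in this series):

	/* Hypothetical: a resurrected temporary unmap would be restricted to
	 * vCPU-exclusive caches and serialized on the vCPU mutex. */
	static void kvm_gpc_unmap_vcpu(struct kvm_vcpu *vcpu,
				       struct gfn_to_pfn_cache *gpc)
	{
		lockdep_assert_held(&vcpu->mutex);

		/* ... invalidate and unmap, as the deactivate path below does ... */
	}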
Signed-off-by: Sean Christopherson --- include/linux/kvm_host.h | 12 ----------- virt/kvm/pfncache.c | 44 +++++++++++++++------------------------- 2 files changed, 16 insertions(+), 40 deletions(-) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index b63d2abbef56..22cf43389954 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -1315,18 +1315,6 @@ bool kvm_gpc_check(struct gfn_to_pfn_cache *gpc, gpa_t gpa); */ int kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa); -/** - * kvm_gpc_unmap - temporarily unmap a gfn_to_pfn_cache. - * - * @kvm: pointer to kvm instance. - * @gpc: struct gfn_to_pfn_cache object. - * - * This unmaps the referenced page. The cache is left in the invalid state - * but at least the mapping from GPA to userspace HVA will remain cached - * and can be reused on a subsequent refresh. - */ -void kvm_gpc_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc); - /** * kvm_gpc_deactivate - deactivate and unlink a gfn_to_pfn_cache. * diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c index 432b150bd9f1..62b47feed36c 100644 --- a/virt/kvm/pfncache.c +++ b/virt/kvm/pfncache.c @@ -328,33 +328,6 @@ int kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa) } EXPORT_SYMBOL_GPL(kvm_gpc_refresh); -void kvm_gpc_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc) -{ - void *old_khva; - kvm_pfn_t old_pfn; - - mutex_lock(&gpc->refresh_lock); - write_lock_irq(&gpc->lock); - - gpc->valid = false; - - old_khva = gpc->khva - offset_in_page(gpc->khva); - old_pfn = gpc->pfn; - - /* - * We can leave the GPA → uHVA map cache intact but the PFN - * lookup will need to be redone even for the same page. - */ - gpc->khva = NULL; - gpc->pfn = KVM_PFN_ERR_FAULT; - - write_unlock_irq(&gpc->lock); - mutex_unlock(&gpc->refresh_lock); - - gpc_unmap_khva(old_pfn, old_khva); -} -EXPORT_SYMBOL_GPL(kvm_gpc_unmap); - void kvm_gpc_init(struct gfn_to_pfn_cache *gpc, struct kvm *kvm, struct kvm_vcpu *vcpu, enum pfn_cache_usage usage, unsigned long len) @@ -402,6 +375,8 @@ EXPORT_SYMBOL_GPL(kvm_gpc_activate); void kvm_gpc_deactivate(struct gfn_to_pfn_cache *gpc) { struct kvm *kvm = gpc->kvm; + kvm_pfn_t old_pfn; + void *old_khva; if (gpc->active) { /* @@ -411,13 +386,26 @@ void kvm_gpc_deactivate(struct gfn_to_pfn_cache *gpc) */ write_lock_irq(&gpc->lock); gpc->active = false; + gpc->valid = false; + + /* + * Leave the GPA => uHVA cache intact, it's protected by the + * memslot generation. The PFN lookup needs to be redone every + * time as mmu_notifier protection is lost when the cache is + * removed from the VM's gpc_list. + */ + old_khva = gpc->khva - offset_in_page(gpc->khva); + gpc->khva = NULL; + + old_pfn = gpc->pfn; + gpc->pfn = KVM_PFN_ERR_FAULT; write_unlock_irq(&gpc->lock); spin_lock(&kvm->gpc_lock); list_del(&gpc->list); spin_unlock(&kvm->gpc_lock); - kvm_gpc_unmap(kvm, gpc); + gpc_unmap_khva(old_pfn, old_khva); } } EXPORT_SYMBOL_GPL(kvm_gpc_deactivate); From patchwork Thu Oct 13 21:12:30 2022
Reply-To: Sean Christopherson Date: Thu, 13 Oct 2022 21:12:30 +0000 In-Reply-To: <20221013211234.1318131-1-seanjc@google.com> References: <20221013211234.1318131-1-seanjc@google.com> Message-ID: <20221013211234.1318131-13-seanjc@google.com> Subject: [PATCH v2 12/16] KVM: Do not partially reinitialize gfn=>pfn cache during activation From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michal Luczaj , David Woodhouse Don't partially reinitialize a gfn=>pfn cache when activating the cache, and instead assert that the cache is not valid during activation. Bug the VM if the assertion fails, as use-after-free and/or data corruption is all but guaranteed if KVM ends up with a valid-but-inactive cache.
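To spell out the failure mode, an illustrative interleaving (assuming a bug elsewhere left gpc->valid set on an inactive cache; an inactive cache is off kvm->gpc_list, so MMU notifier invalidations can no longer reach it):

(thread 1)                            | (thread 2)
kvm_gpc_activate()                    |
  list_add(&gpc->list, ...)           |
  gpc->active = true                  |
  /* stale gpc->valid == true */      |
                                      | kvm_gpc_check() returns true
                                      | /* consumes gpc->khva/gpc->pfn,
                                      |    which may point at a freed page */
  kvm_gpc_refresh()                   |

The KVM_BUG_ON() added below trades the silent use-after-free for a WARN plus a bugged (dead) VM.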
Signed-off-by: Sean Christopherson --- virt/kvm/pfncache.c | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c index 62b47feed36c..2d5b417e50ac 100644 --- a/virt/kvm/pfncache.c +++ b/virt/kvm/pfncache.c @@ -342,6 +342,9 @@ void kvm_gpc_init(struct gfn_to_pfn_cache *gpc, struct kvm *kvm, gpc->vcpu = vcpu; gpc->usage = usage; gpc->len = len; + gpc->pfn = KVM_PFN_ERR_FAULT; + gpc->uhva = KVM_HVA_ERR_BAD; + } EXPORT_SYMBOL_GPL(kvm_gpc_init); @@ -350,10 +353,8 @@ int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa) struct kvm *kvm = gpc->kvm; if (!gpc->active) { - gpc->khva = NULL; - gpc->pfn = KVM_PFN_ERR_FAULT; - gpc->uhva = KVM_HVA_ERR_BAD; - gpc->valid = false; + if (KVM_BUG_ON(gpc->valid, kvm)) + return -EIO; spin_lock(&kvm->gpc_lock); list_add(&gpc->list, &kvm->gpc_list); From patchwork Thu Oct 13 21:12:31 2022
Reply-To: Sean Christopherson Date: Thu, 13 Oct 2022 21:12:31 +0000 In-Reply-To: <20221013211234.1318131-1-seanjc@google.com> References: <20221013211234.1318131-1-seanjc@google.com> Message-ID: <20221013211234.1318131-14-seanjc@google.com> Subject: [PATCH v2 13/16] KVM: Drop @gpa from exported gfn=>pfn cache check() and refresh() helpers From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michal Luczaj , David Woodhouse Drop the @gpa param from the exported check()+refresh() helpers and limit changing the cache's GPA to the activate path. All external users just feed in gpc->gpa, i.e. this is a fancy nop. Allowing users to change the GPA at check()+refresh() is dangerous as those helpers explicitly allow concurrent calls, e.g. KVM could get into a livelock scenario (sketched after the diff below). It's also unclear what the expected behavior should be if multiple tasks attempt to refresh with different GPAs. Signed-off-by: Sean Christopherson --- arch/x86/kvm/x86.c | 4 ++-- arch/x86/kvm/xen.c | 20 ++++++++++---------- include/linux/kvm_host.h | 6 ++---- virt/kvm/pfncache.c | 16 +++++++++------- 4 files changed, 23 insertions(+), 23 deletions(-) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index d370d06bb07a..2db8515d38dd 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -3033,10 +3033,10 @@ static void kvm_setup_guest_pvclock(struct kvm_vcpu *v, WARN_ON_ONCE(gpc->len != offset + sizeof(*guest_hv_clock)); read_lock_irqsave(&gpc->lock, flags); - while (!kvm_gpc_check(gpc, gpc->gpa)) { + while (!kvm_gpc_check(gpc)) { read_unlock_irqrestore(&gpc->lock, flags); - if (kvm_gpc_refresh(gpc, gpc->gpa)) + if (kvm_gpc_refresh(gpc)) return; read_lock_irqsave(&gpc->lock, flags); diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c index 920ba5ca3016..529d3f4c1b9d 100644 --- a/arch/x86/kvm/xen.c +++ b/arch/x86/kvm/xen.c @@ -211,14 +211,14 @@ void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, int state) return; read_lock_irqsave(&gpc->lock, flags); - while (!kvm_gpc_check(gpc, gpc->gpa)) { + while (!kvm_gpc_check(gpc)) { read_unlock_irqrestore(&gpc->lock, flags); /* When invoked from kvm_sched_out() we cannot sleep */ if (state == RUNSTATE_runnable) return; - if (kvm_gpc_refresh(gpc, gpc->gpa)) + if (kvm_gpc_refresh(gpc)) return; read_lock_irqsave(&gpc->lock, flags); @@ -344,10 +344,10 @@ void kvm_xen_inject_pending_events(struct kvm_vcpu *v) * little more honest about it.
 	 */
 	read_lock_irqsave(&gpc->lock, flags);
-	while (!kvm_gpc_check(gpc, gpc->gpa)) {
+	while (!kvm_gpc_check(gpc)) {
 		read_unlock_irqrestore(&gpc->lock, flags);

-		if (kvm_gpc_refresh(gpc, gpc->gpa))
+		if (kvm_gpc_refresh(gpc))
 			return;

 		read_lock_irqsave(&gpc->lock, flags);
@@ -407,7 +407,7 @@ int __kvm_xen_has_interrupt(struct kvm_vcpu *v)
 		     sizeof_field(struct compat_vcpu_info, evtchn_upcall_pending));

 	read_lock_irqsave(&gpc->lock, flags);
-	while (!kvm_gpc_check(gpc, gpc->gpa)) {
+	while (!kvm_gpc_check(gpc)) {
 		read_unlock_irqrestore(&gpc->lock, flags);

 		/*
@@ -421,7 +421,7 @@ int __kvm_xen_has_interrupt(struct kvm_vcpu *v)
 		if (in_atomic() || !task_is_running(current))
 			return 1;

-		if (kvm_gpc_refresh(gpc, gpc->gpa)) {
+		if (kvm_gpc_refresh(gpc)) {
 			/*
 			 * If this failed, userspace has screwed up the
 			 * vcpu_info mapping. No interrupts for you.
@@ -947,7 +947,7 @@ static bool wait_pending_event(struct kvm_vcpu *vcpu, int nr_ports,
 	read_lock_irqsave(&gpc->lock, flags);
 	idx = srcu_read_lock(&kvm->srcu);
-	if (!kvm_gpc_check(gpc, gpc->gpa))
+	if (!kvm_gpc_check(gpc))
 		goto out_rcu;

 	ret = false;
@@ -1338,7 +1338,7 @@ int kvm_xen_set_evtchn_fast(struct kvm_xen_evtchn *xe, struct kvm *kvm)
 	idx = srcu_read_lock(&kvm->srcu);
 	read_lock_irqsave(&gpc->lock, flags);

-	if (!kvm_gpc_check(gpc, gpc->gpa))
+	if (!kvm_gpc_check(gpc))
 		goto out_rcu;

 	if (IS_ENABLED(CONFIG_64BIT) && kvm->arch.xen.long_mode) {
@@ -1372,7 +1372,7 @@ int kvm_xen_set_evtchn_fast(struct kvm_xen_evtchn *xe, struct kvm *kvm)
 		gpc = &vcpu->arch.xen.vcpu_info_cache;

 		read_lock_irqsave(&gpc->lock, flags);
-		if (!kvm_gpc_check(gpc, gpc->gpa)) {
+		if (!kvm_gpc_check(gpc)) {
 			/*
 			 * Could not access the vcpu_info. Set the bit in-kernel
 			 * and prod the vCPU to deliver it for itself.
@@ -1470,7 +1470,7 @@ static int kvm_xen_set_evtchn(struct kvm_xen_evtchn *xe, struct kvm *kvm)
 			break;

 		idx = srcu_read_lock(&kvm->srcu);
-		rc = kvm_gpc_refresh(gpc, gpc->gpa);
+		rc = kvm_gpc_refresh(gpc);
 		srcu_read_unlock(&kvm->srcu, idx);
 	} while(!rc);

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 22cf43389954..92cf0be21974 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1283,7 +1283,6 @@ int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa);
  * kvm_gpc_check - check validity of a gfn_to_pfn_cache.
  *
  * @gpc:	   struct gfn_to_pfn_cache object.
- * @gpa:	   current guest physical address to map.
  *
  * @return:	   %true if the cache is still valid and the address matches.
  *		   %false if the cache is not valid.
@@ -1295,13 +1294,12 @@ int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa);
  * Callers in IN_GUEST_MODE may do so without locking, although they should
  * still hold a read lock on kvm->scru for the memslot checks.
  */
-bool kvm_gpc_check(struct gfn_to_pfn_cache *gpc, gpa_t gpa);
+bool kvm_gpc_check(struct gfn_to_pfn_cache *gpc);

 /**
  * kvm_gpc_refresh - update a previously initialized cache.
  *
  * @gpc:	   struct gfn_to_pfn_cache object.
- * @gpa:	   updated guest physical address to map.
  *
  * @return:	   0 for success.
  *		   -EINVAL for a mapping which would cross a page boundary.
@@ -1313,7 +1311,7 @@ bool kvm_gpc_check(struct gfn_to_pfn_cache *gpc, gpa_t gpa);
  * still lock and check the cache status, as this function does not return
  * with the lock still held to permit access.
  */
-int kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa);
+int kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc);

 /**
  * kvm_gpc_deactivate - deactivate and unlink a gfn_to_pfn_cache.
diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index 2d5b417e50ac..f211c878788b 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -76,17 +76,14 @@ void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm, unsigned long start,
 	}
 }

-bool kvm_gpc_check(struct gfn_to_pfn_cache *gpc, gpa_t gpa)
+bool kvm_gpc_check(struct gfn_to_pfn_cache *gpc)
 {
 	struct kvm_memslots *slots = kvm_memslots(gpc->kvm);

 	if (!gpc->active)
 		return false;

-	if ((gpa & ~PAGE_MASK) + gpc->len > PAGE_SIZE)
-		return false;
-
-	if (gpc->gpa != gpa || gpc->generation != slots->generation ||
+	if (gpc->generation != slots->generation ||
 	    kvm_is_error_hva(gpc->uhva))
 		return false;

@@ -237,7 +234,7 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc)
 	return -EFAULT;
 }

-int kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa)
+static int __kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa)
 {
 	struct kvm_memslots *slots = kvm_memslots(gpc->kvm);
 	unsigned long page_offset = gpa & ~PAGE_MASK;
@@ -326,6 +323,11 @@ int kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa)

 	return ret;
 }
+
+int kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc)
+{
+	return __kvm_gpc_refresh(gpc, gpc->gpa);
+}
 EXPORT_SYMBOL_GPL(kvm_gpc_refresh);

 void kvm_gpc_init(struct gfn_to_pfn_cache *gpc, struct kvm *kvm,
@@ -369,7 +371,7 @@ int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa)
 		gpc->active = true;
 		write_unlock_irq(&gpc->lock);
 	}
-	return kvm_gpc_refresh(gpc, gpa);
+	return __kvm_gpc_refresh(gpc, gpa);
 }
 EXPORT_SYMBOL_GPL(kvm_gpc_activate);
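The shape of this change is a common one: keep a flexible static helper for
the one caller that legitimately needs the extra parameter, and pin the
exported wrapper to the cached value. A minimal stand-alone sketch of the
pattern follows; cache_refresh, __refresh, and friends are hypothetical
names for illustration, not the KVM API.

/* Exported entry point fixes the target to the cached value; only the
 * activate path may move the cache, via the static helper. */
#include <stdio.h>

struct cache {
	unsigned long gpa;
};

static int __refresh(struct cache *c, unsigned long gpa)
{
	c->gpa = gpa;			/* may change the target */
	return 0;
}

int cache_refresh(struct cache *c)	/* "exported": target is fixed */
{
	return __refresh(c, c->gpa);
}

int cache_activate(struct cache *c, unsigned long gpa)
{
	return __refresh(c, gpa);	/* only activate may move the cache */
}

int main(void)
{
	struct cache c = { .gpa = 0xc0000000UL };

	cache_activate(&c, 0xc0000000UL);
	cache_refresh(&c);
	printf("gpa=%#lx\n", c.gpa);
	return 0;
}

Concurrent refreshers can no longer race to install different GPAs, because
the exported signature simply has nothing to disagree about.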
From patchwork Thu Oct 13 21:12:32 2022
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13006428
Subject: [PATCH v2 14/16] KVM: Skip unnecessary "unmap" if gpc is already
 valid during refresh
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michal Luczaj,
 David Woodhouse
Date: Thu, 13 Oct 2022 21:12:32 +0000
Message-ID: <20221013211234.1318131-15-seanjc@google.com>
In-Reply-To: <20221013211234.1318131-1-seanjc@google.com>
References: <20221013211234.1318131-1-seanjc@google.com>

When refreshing a gfn=>pfn cache, skip straight to unlocking if the cache
is already valid, instead of stuffing the "old" variables to turn the
unmapping outro into a nop.

Signed-off-by: Sean Christopherson
---
 virt/kvm/pfncache.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index f211c878788b..57d47f06637d 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -293,9 +293,8 @@ static int __kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa)
 		ret = hva_to_pfn_retry(gpc);
 	} else {
 		/* If the HVA→PFN mapping was already valid, don't unmap it. */
-		old_pfn = KVM_PFN_ERR_FAULT;
-		old_khva = NULL;
 		ret = 0;
+		goto out_unlock;
 	}

 out:
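The two styles of reaching a shared cleanup path can be seen side by side in
a stand-alone sketch; the names, values, and control flow below are
illustrative only, not the kernel function.

/* Instead of stuffing sentinel values so the unmap outro becomes a
 * nop, jump straight past it with an early goto. */
#include <stdbool.h>
#include <stdio.h>

static void unmap(unsigned long pfn, void *khva)
{
	printf("unmap pfn=%#lx khva=%p\n", pfn, khva);
}

static int refresh(bool already_valid)
{
	unsigned long old_pfn = 0x1234;
	void *old_khva = (void *)0xdead0000;
	int ret = 0;

	if (already_valid)
		goto out_unlock;	/* skip the unmap outro entirely */

	/* ...remap path would go here... */
	unmap(old_pfn, old_khva);

out_unlock:
	return ret;
}

int main(void)
{
	refresh(true);		/* nothing unmapped */
	refresh(false);		/* stale mapping unmapped */
	return 0;
}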
From patchwork Thu Oct 13 21:12:33 2022
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13006427
Subject: [PATCH v2 15/16] KVM: selftests: Add tests in xen_shinfo_test to
 detect lock races
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michal Luczaj,
 David Woodhouse
Date: Thu, 13 Oct 2022 21:12:33 +0000
Message-ID: <20221013211234.1318131-16-seanjc@google.com>
In-Reply-To: <20221013211234.1318131-1-seanjc@google.com>
References: <20221013211234.1318131-1-seanjc@google.com>

From: Michal Luczaj

Add tests for races between shinfo_cache (de)activation and
hypercall+ioctl() processing.  KVM has had bugs where activating the
shared info cache multiple times and/or with concurrent users results in
lock corruption, NULL pointer dereferences, and other fun.

For the timer injection testcase (#22), re-arm the timer until the IRQ
is successfully injected.  If the timer expires while the shared info
is deactivated (invalid), KVM will drop the event.

Signed-off-by: Michal Luczaj
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
---
 .../selftests/kvm/x86_64/xen_shinfo_test.c    | 140 ++++++++++++++++++
 1 file changed, 140 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c b/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
index 8a5cb800f50e..caa3f5ab9e10 100644
--- a/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
+++ b/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
@@ -15,9 +15,13 @@
 #include <time.h>
 #include <sched.h>
 #include <signal.h>
+#include <pthread.h>

 #include <sys/eventfd.h>

+/* Defined in include/linux/kvm_types.h */
+#define GPA_INVALID		(~(ulong)0)
+
 #define SHINFO_REGION_GVA	0xc0000000ULL
 #define SHINFO_REGION_GPA	0xc0000000ULL
 #define SHINFO_REGION_SLOT	10
@@ -44,6 +48,8 @@

 #define MIN_STEAL_TIME		50000

+#define SHINFO_RACE_TIMEOUT	2	/* seconds */
+
 #define __HYPERVISOR_set_timer_op	15
 #define __HYPERVISOR_sched_op		29
 #define __HYPERVISOR_event_channel_op	32
@@ -148,6 +154,7 @@ static void guest_wait_for_irq(void)
 static void guest_code(void)
 {
 	struct vcpu_runstate_info *rs = (void *)RUNSTATE_VADDR;
+	int i;

 	__asm__ __volatile__(
 		"sti\n"
@@ -325,6 +332,49 @@ static void guest_code(void)

 	guest_wait_for_irq();

 	GUEST_SYNC(21);
+	/* Racing host ioctls */
+
+	guest_wait_for_irq();
+
+	GUEST_SYNC(22);
+	/* Racing vmcall against host ioctl */
+
+	ports[0] = 0;
+
+	p = (struct sched_poll) {
+		.ports = ports,
+		.nr_ports = 1,
+		.timeout = 0
+	};
+
+wait_for_timer:
+	/*
+	 * Poll for a timer wake event while the worker thread is mucking with
+	 * the shared info.  KVM XEN drops timer IRQs if the shared info is
+	 * invalid when the timer expires.  Arbitrarily poll 100 times before
+	 * giving up and asking the VMM to re-arm the timer.  100 polls should
+	 * consume enough time to beat on KVM without taking too long if the
+	 * timer IRQ is dropped due to an invalid event channel.
+	 */
+	for (i = 0; i < 100 && !guest_saw_irq; i++)
+		asm volatile("vmcall"
+			     : "=a" (rax)
+			     : "a" (__HYPERVISOR_sched_op),
+			       "D" (SCHEDOP_poll),
+			       "S" (&p)
+			     : "memory");
+
+	/*
+	 * Re-send the timer IRQ if it was (likely) dropped due to the timer
+	 * expiring while the event channel was invalid.
+	 */
+	if (!guest_saw_irq) {
+		GUEST_SYNC(23);
+		goto wait_for_timer;
+	}
+	guest_saw_irq = false;
+
+	GUEST_SYNC(24);
 }

 static int cmp_timespec(struct timespec *a, struct timespec *b)
@@ -352,11 +402,36 @@ static void handle_alrm(int sig)
 	TEST_FAIL("IRQ delivery timed out");
 }

+static void *juggle_shinfo_state(void *arg)
+{
+	struct kvm_vm *vm = (struct kvm_vm *)arg;
+
+	struct kvm_xen_hvm_attr cache_init = {
+		.type = KVM_XEN_ATTR_TYPE_SHARED_INFO,
+		.u.shared_info.gfn = SHINFO_REGION_GPA / PAGE_SIZE
+	};
+
+	struct kvm_xen_hvm_attr cache_destroy = {
+		.type = KVM_XEN_ATTR_TYPE_SHARED_INFO,
+		.u.shared_info.gfn = GPA_INVALID
+	};
+
+	for (;;) {
+		__vm_ioctl(vm, KVM_XEN_HVM_SET_ATTR, &cache_init);
+		__vm_ioctl(vm, KVM_XEN_HVM_SET_ATTR, &cache_destroy);
+		pthread_testcancel();
+	};
+
+	return NULL;
+}
+
 int main(int argc, char *argv[])
 {
 	struct timespec min_ts, max_ts, vm_ts;
 	struct kvm_vm *vm;
+	pthread_t thread;
 	bool verbose;
+	int ret;

 	verbose = argc > 1 && (!strncmp(argv[1], "-v", 3) ||
 			       !strncmp(argv[1], "--verbose", 10));
@@ -785,6 +860,71 @@ int main(int argc, char *argv[])
 			case 21:
 				TEST_ASSERT(!evtchn_irq_expected,
 					    "Expected event channel IRQ but it didn't happen");
+				alarm(0);
+
+				if (verbose)
+					printf("Testing shinfo lock corruption (KVM_XEN_HVM_EVTCHN_SEND)\n");
+
+				ret = pthread_create(&thread, NULL, &juggle_shinfo_state, (void *)vm);
+				TEST_ASSERT(ret == 0, "pthread_create() failed: %s", strerror(ret));
+
+				struct kvm_irq_routing_xen_evtchn uxe = {
+					.port = 1,
+					.vcpu = vcpu->id,
+					.priority = KVM_IRQ_ROUTING_XEN_EVTCHN_PRIO_2LEVEL
+				};
+
+				evtchn_irq_expected = true;
+				for (time_t t = time(NULL) + SHINFO_RACE_TIMEOUT; time(NULL) < t;)
+					__vm_ioctl(vm, KVM_XEN_HVM_EVTCHN_SEND, &uxe);
+				break;
+
+			case 22:
+				TEST_ASSERT(!evtchn_irq_expected,
+					    "Expected event channel IRQ but it didn't happen");
+
+				if (verbose)
+					printf("Testing shinfo lock corruption (SCHEDOP_poll)\n");
+
+				shinfo->evtchn_pending[0] = 1;
+
+				evtchn_irq_expected = true;
+				tmr.u.timer.expires_ns = rs->state_entry_time +
+							 SHINFO_RACE_TIMEOUT * 1000000000ULL;
+				vcpu_ioctl(vcpu, KVM_XEN_VCPU_SET_ATTR, &tmr);
+				break;
+
+			case 23:
+				/*
+				 * Optional and possibly repeated sync point.
+				 * Injecting the timer IRQ may fail if the
+				 * shinfo is invalid when the timer expires.
+				 * If the timer has expired but the IRQ hasn't
+				 * been delivered, rearm the timer and retry.
+				 */
+				vcpu_ioctl(vcpu, KVM_XEN_VCPU_GET_ATTR, &tmr);

+				/* Resume the guest if the timer is still pending. */
+				if (tmr.u.timer.expires_ns)
+					break;
+
+				/* All done if the IRQ was delivered. */
+				if (!evtchn_irq_expected)
+					break;
+
+				tmr.u.timer.expires_ns = rs->state_entry_time +
+							 SHINFO_RACE_TIMEOUT * 1000000000ULL;
+				vcpu_ioctl(vcpu, KVM_XEN_VCPU_SET_ATTR, &tmr);
+				break;
+			case 24:
+				TEST_ASSERT(!evtchn_irq_expected,
+					    "Expected event channel IRQ but it didn't happen");
+
+				ret = pthread_cancel(thread);
+				TEST_ASSERT(ret == 0, "pthread_cancel() failed: %s", strerror(ret));
+
+				ret = pthread_join(thread, 0);
+				TEST_ASSERT(ret == 0, "pthread_join() failed: %s", strerror(ret));
 				goto done;

 			case 0x20:
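The "juggler" pattern in the test, a worker thread flipping shared state in
a tight loop with an explicit cancellation point, and the main thread
cancelling and joining it once the timed race window closes, is reusable
outside KVM. A stand-alone sketch under the same structure (no KVM
involved, all names illustrative; compile with -pthread):

/* Worker flips shared state as fast as possible; pthread_testcancel()
 * bounds the latency of the eventual pthread_cancel(). */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

static volatile int state;

static void *juggle(void *arg)
{
	(void)arg;
	for (;;) {
		state = 1;		/* "activate" */
		state = 0;		/* "deactivate" */
		pthread_testcancel();	/* explicit cancellation point */
	}
	return NULL;
}

int main(void)
{
	pthread_t thread;
	time_t deadline = time(NULL) + 2;	/* race window, in seconds */

	pthread_create(&thread, NULL, juggle, NULL);

	/* Main thread would hammer ioctls here for the duration. */
	while (time(NULL) < deadline)
		;

	pthread_cancel(thread);
	pthread_join(thread, NULL);
	printf("done, state=%d\n", state);
	return 0;
}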
From patchwork Thu Oct 13 21:12:34 2022
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13006429
Subject: [PATCH v2 16/16] KVM: selftests: Mark "guest_saw_irq" as volatile
 in xen_shinfo_test
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michal Luczaj,
 David Woodhouse
Date: Thu, 13 Oct 2022 21:12:34 +0000
Message-ID: <20221013211234.1318131-17-seanjc@google.com>
In-Reply-To: <20221013211234.1318131-1-seanjc@google.com>
References: <20221013211234.1318131-1-seanjc@google.com>

Tag "guest_saw_irq" as "volatile" to ensure that the compiler will never
optimize away lookups.  Relying on the compiler thinking that the flag
is global and thus might change also works, but it's subtle, less robust,
and looks like a bug at first glance, e.g. risks being "fixed" and
breaking the test.

Make the flag "static" as well since convincing the compiler it's global
is no longer necessary.

Alternatively, the flag could be accessed with {READ,WRITE}_ONCE(), but
literally every access would need the wrappers, and eking out performance
isn't exactly top priority for selftests.

Signed-off-by: Sean Christopherson
---
 tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c b/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
index caa3f5ab9e10..2a5727188c8d 100644
--- a/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
+++ b/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
@@ -132,7 +132,7 @@ struct {
 	struct kvm_irq_routing_entry entries[2];
 } irq_routes;

-bool guest_saw_irq;
+static volatile bool guest_saw_irq;

 static void evtchn_handler(struct ex_regs *regs)
 {
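For readers unfamiliar with the failure mode the commit message describes,
here is a stand-alone sketch of why the volatile qualifier matters for a
flag that is polled in a loop and written asynchronously. The asynchronous
writer here is a signal handler rather than a guest vCPU, purely for
illustration.

/* Without volatile, an optimizing compiler may prove that nothing in
 * the loop body writes saw_irq, hoist the load, and spin forever.
 * With volatile, the flag is re-loaded on every iteration. */
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t saw_irq;	/* analogous to guest_saw_irq */

static void handler(int sig)
{
	(void)sig;
	saw_irq = 1;	/* asynchronous write, like the IRQ handler */
}

int main(void)
{
	signal(SIGALRM, handler);
	alarm(1);	/* deliver SIGALRM in one second */

	while (!saw_irq)
		;	/* polling loop, like the guest's wait */

	printf("flag observed\n");
	return 0;
}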