From patchwork Thu Aug 11 21:05:57 2022
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 12941734
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: seanjc@google.com, mlevitsk@redhat.com, vkuznets@redhat.com
Subject: [PATCH v2 1/9] KVM: x86: check validity of argument to KVM_SET_MP_STATE
Date: Thu, 11 Aug 2022 17:05:57 -0400
Message-Id: <20220811210605.402337-2-pbonzini@redhat.com>
In-Reply-To: <20220811210605.402337-1-pbonzini@redhat.com>
References: <20220811210605.402337-1-pbonzini@redhat.com>

An invalid argument to KVM_SET_MP_STATE has no effect other than making
the vCPU fail to run at the next KVM_RUN.  Since it is extremely unlikely
that any userspace is relying on it, fail with -EINVAL just like for
other architectures.

Signed-off-by: Paolo Bonzini
Reviewed-by: Maxim Levitsky
---
 arch/x86/kvm/x86.c | 20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 132d662d9713..c44348bb6ef2 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10653,7 +10653,8 @@ static inline int vcpu_block(struct kvm_vcpu *vcpu)
 	case KVM_MP_STATE_INIT_RECEIVED:
 		break;
 	default:
-		return -EINTR;
+		WARN_ON(1);
+		break;
 	}
 	return 1;
 }
@@ -11094,9 +11095,22 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,

 	vcpu_load(vcpu);

-	if (!lapic_in_kernel(vcpu) &&
-	    mp_state->mp_state != KVM_MP_STATE_RUNNABLE)
+	switch (mp_state->mp_state) {
+	case KVM_MP_STATE_UNINITIALIZED:
+	case KVM_MP_STATE_HALTED:
+	case KVM_MP_STATE_AP_RESET_HOLD:
+	case KVM_MP_STATE_INIT_RECEIVED:
+	case KVM_MP_STATE_SIPI_RECEIVED:
+		if (!lapic_in_kernel(vcpu))
+			goto out;
+		break;
+
+	case KVM_MP_STATE_RUNNABLE:
+		break;
+
+	default:
 		goto out;
+	}

 	/*
 	 * KVM_MP_STATE_INIT_RECEIVED means the processor is in
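For reference, KVM_SET_MP_STATE is the vCPU ioctl this patch tightens.  The
snippet below is an illustrative userspace sketch, not KVM selftest code: it
assumes a vCPU file descriptor already obtained via KVM_CREATE_VM and
KVM_CREATE_VCPU, and only shows how a bogus mp_state value is now rejected
with EINVAL at set time instead of silently breaking the next KVM_RUN.

/* Illustrative only: drive KVM_GET/SET_MP_STATE from userspace. */
#include <errno.h>
#include <linux/kvm.h>
#include <stdio.h>
#include <sys/ioctl.h>

/* 'vcpu_fd' is assumed to be a vCPU fd obtained via KVM_CREATE_VCPU. */
static int set_mp_state(int vcpu_fd, __u32 state)
{
	struct kvm_mp_state mp = { .mp_state = state };

	if (ioctl(vcpu_fd, KVM_SET_MP_STATE, &mp) < 0) {
		perror("KVM_SET_MP_STATE"); /* EINVAL for bogus values after this patch */
		return -errno;
	}
	return 0;
}

static void demo(int vcpu_fd)
{
	set_mp_state(vcpu_fd, KVM_MP_STATE_RUNNABLE); /* valid on all configurations */
	set_mp_state(vcpu_fd, 0xdeadbeef);            /* invalid: now rejected up front */
}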
From patchwork Thu Aug 11 21:05:58 2022
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 12941739
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: seanjc@google.com, mlevitsk@redhat.com, vkuznets@redhat.com
Subject: [PATCH v2 2/9] KVM: x86: remove return value of kvm_vcpu_block
Date: Thu, 11 Aug 2022 17:05:58 -0400
Message-Id: <20220811210605.402337-3-pbonzini@redhat.com>
In-Reply-To: <20220811210605.402337-1-pbonzini@redhat.com>
References: <20220811210605.402337-1-pbonzini@redhat.com>

The return value of kvm_vcpu_block will be repurposed soon to return
the state of KVM_REQ_UNHALT.  In preparation for that, get rid of the
current return value.  It is only used by kvm_vcpu_halt to decide whether
the call resulted in a wait, but the same effect can be obtained with
a single round of polling.

No functional change intended, apart from practically indistinguishable
changes to the polling behavior.

Signed-off-by: Paolo Bonzini
---
 include/linux/kvm_host.h |  2 +-
 virt/kvm/kvm_main.c      | 45 +++++++++++++++++-----------------------
 2 files changed, 20 insertions(+), 27 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 1c480b1821e1..e7bd48d15db8 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1339,7 +1339,7 @@ void kvm_sigset_activate(struct kvm_vcpu *vcpu);
 void kvm_sigset_deactivate(struct kvm_vcpu *vcpu);

 void kvm_vcpu_halt(struct kvm_vcpu *vcpu);
-bool kvm_vcpu_block(struct kvm_vcpu *vcpu);
+void kvm_vcpu_block(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu);
 bool kvm_vcpu_wake_up(struct kvm_vcpu *vcpu);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 515dfe9d3bcf..1f049c1d01b4 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3429,10 +3429,9 @@ static int kvm_vcpu_check_block(struct kvm_vcpu *vcpu)
  * pending.  This is mostly used when halting a vCPU, but may also be used
  * directly for other vCPU non-runnable states, e.g. x86's Wait-For-SIPI.
  */
-bool kvm_vcpu_block(struct kvm_vcpu *vcpu)
+void kvm_vcpu_block(struct kvm_vcpu *vcpu)
 {
 	struct rcuwait *wait = kvm_arch_vcpu_get_wait(vcpu);
-	bool waited = false;

 	vcpu->stat.generic.blocking = 1;

@@ -3447,7 +3446,6 @@ bool kvm_vcpu_block(struct kvm_vcpu *vcpu)
 		if (kvm_vcpu_check_block(vcpu) < 0)
 			break;

-		waited = true;
 		schedule();
 	}

@@ -3457,8 +3455,6 @@ bool kvm_vcpu_block(struct kvm_vcpu *vcpu)
 	preempt_enable();

 	vcpu->stat.generic.blocking = 0;
-
-	return waited;
 }

 static inline void update_halt_poll_stats(struct kvm_vcpu *vcpu, ktime_t start,
@@ -3493,35 +3489,32 @@ void kvm_vcpu_halt(struct kvm_vcpu *vcpu)
 {
 	bool halt_poll_allowed = !kvm_arch_no_poll(vcpu);
 	bool do_halt_poll = halt_poll_allowed && vcpu->halt_poll_ns;
-	ktime_t start, cur, poll_end;
+	ktime_t start, cur, poll_end, stop;
 	bool waited = false;
 	u64 halt_ns;

 	start = cur = poll_end = ktime_get();
-	if (do_halt_poll) {
-		ktime_t stop = ktime_add_ns(start, vcpu->halt_poll_ns);
+	stop = do_halt_poll ? start : ktime_add_ns(start, vcpu->halt_poll_ns);
-		do {
-			/*
-			 * This sets KVM_REQ_UNHALT if an interrupt
-			 * arrives.
-			 */
-			if (kvm_vcpu_check_block(vcpu) < 0)
-				goto out;
-			cpu_relax();
-			poll_end = cur = ktime_get();
-		} while (kvm_vcpu_can_poll(cur, stop));
-	}
+	do {
+		/*
+		 * This sets KVM_REQ_UNHALT if an interrupt
+		 * arrives.
+		 */
+		if (kvm_vcpu_check_block(vcpu) < 0)
+			goto out;
+		cpu_relax();
+		poll_end = cur = ktime_get();
+	} while (kvm_vcpu_can_poll(cur, stop));

-	waited = kvm_vcpu_block(vcpu);
+	waited = true;
+	kvm_vcpu_block(vcpu);

 	cur = ktime_get();
-	if (waited) {
-		vcpu->stat.generic.halt_wait_ns +=
-			ktime_to_ns(cur) - ktime_to_ns(poll_end);
-		KVM_STATS_LOG_HIST_UPDATE(vcpu->stat.generic.halt_wait_hist,
-					  ktime_to_ns(cur) - ktime_to_ns(poll_end));
-	}
+	vcpu->stat.generic.halt_wait_ns +=
+		ktime_to_ns(cur) - ktime_to_ns(poll_end);
+	KVM_STATS_LOG_HIST_UPDATE(vcpu->stat.generic.halt_wait_hist,
+				  ktime_to_ns(cur) - ktime_to_ns(poll_end));
 out:
 	/* The total time the vCPU was "halted", including polling time. */
 	halt_ns = ktime_to_ns(cur) - ktime_to_ns(start);
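The "single round of polling" mentioned in the commit message refers to the
poll-then-block shape of kvm_vcpu_halt(): spin on a cheap check until a
deadline, then fall back to a real sleep.  The standalone userspace sketch
below mirrors only that shape; the names (wake_pending, poll_then_block) and
the sleep-based "blocking" are illustrative and have no counterpart in KVM.

/*
 * Userspace analogy of the poll-then-block pattern used by kvm_vcpu_halt().
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <time.h>

static atomic_bool wake_pending;

static uint64_t now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

/* Returns true if the wake condition fired during the polling window. */
static bool poll_then_block(uint64_t poll_window_ns)
{
	uint64_t start = now_ns();
	/* A zero-length window still performs one check, i.e. the single
	 * round of polling the commit message relies on. */
	uint64_t stop = start + poll_window_ns;

	do {
		if (atomic_load(&wake_pending))
			return true;          /* woken while polling */
	} while (now_ns() < stop);

	/* Polling failed: block "for real" (here: a coarse sleep loop). */
	while (!atomic_load(&wake_pending)) {
		struct timespec ts = { .tv_sec = 0, .tv_nsec = 1000000 };

		nanosleep(&ts, NULL);
	}
	return false;                         /* woken only after blocking */
}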
From patchwork Thu Aug 11 21:05:59 2022
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 12941740
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: seanjc@google.com, mlevitsk@redhat.com, vkuznets@redhat.com
Subject: [PATCH v2 3/9] KVM: x86: make kvm_vcpu_{block,halt} return whether vCPU is runnable
Date: Thu, 11 Aug 2022 17:05:59 -0400
Message-Id: <20220811210605.402337-4-pbonzini@redhat.com>
In-Reply-To: <20220811210605.402337-1-pbonzini@redhat.com>
References: <20220811210605.402337-1-pbonzini@redhat.com>

This is currently returned via KVM_REQ_UNHALT, but this is completely
unnecessary since all that the callers do is clear the request; it is
never processed via the usual request loop.  The same condition can be
returned as a positive value from the functions.

No functional change intended.

Signed-off-by: Paolo Bonzini
---
 include/linux/kvm_host.h |  4 ++--
 virt/kvm/kvm_main.c      | 23 ++++++++++++++++++-----
 2 files changed, 20 insertions(+), 7 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index e7bd48d15db8..cbd9577e5447 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1338,8 +1338,8 @@ void kvm_gfn_to_pfn_cache_destroy(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
 void kvm_sigset_activate(struct kvm_vcpu *vcpu);
 void kvm_sigset_deactivate(struct kvm_vcpu *vcpu);

-void kvm_vcpu_halt(struct kvm_vcpu *vcpu);
-void kvm_vcpu_block(struct kvm_vcpu *vcpu);
+int kvm_vcpu_halt(struct kvm_vcpu *vcpu);
+int kvm_vcpu_block(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu);
 bool kvm_vcpu_wake_up(struct kvm_vcpu *vcpu);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 1f049c1d01b4..e827805b7b28 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3402,6 +3402,12 @@ static void shrink_halt_poll_ns(struct kvm_vcpu *vcpu)
 	trace_kvm_halt_poll_ns_shrink(vcpu->vcpu_id, val, old);
 }

+/*
+ * Returns zero if the vCPU should remain in a blocked state,
+ * nonzero if it has been woken up, specifically:
+ * - 1 if it is runnable
+ * - -EINTR if it is not runnable (e.g. has a signal or a timer pending)
+ */
 static int kvm_vcpu_check_block(struct kvm_vcpu *vcpu)
 {
 	int ret = -EINTR;
@@ -3409,6 +3415,7 @@ static int kvm_vcpu_check_block(struct kvm_vcpu *vcpu)

 	if (kvm_arch_vcpu_runnable(vcpu)) {
 		kvm_make_request(KVM_REQ_UNHALT, vcpu);
+		ret = 1;
 		goto out;
 	}
 	if (kvm_cpu_has_pending_timer(vcpu))
@@ -3429,9 +3436,10 @@ static int kvm_vcpu_check_block(struct kvm_vcpu *vcpu)
  * pending.  This is mostly used when halting a vCPU, but may also be used
  * directly for other vCPU non-runnable states, e.g. x86's Wait-For-SIPI.
  */
-void kvm_vcpu_block(struct kvm_vcpu *vcpu)
+int kvm_vcpu_block(struct kvm_vcpu *vcpu)
 {
 	struct rcuwait *wait = kvm_arch_vcpu_get_wait(vcpu);
+	int r;

 	vcpu->stat.generic.blocking = 1;

@@ -3443,7 +3451,8 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
 	for (;;) {
 		set_current_state(TASK_INTERRUPTIBLE);

-		if (kvm_vcpu_check_block(vcpu) < 0)
+		r = kvm_vcpu_check_block(vcpu);
+		if (r != 0)
 			break;

 		schedule();
@@ -3455,6 +3464,7 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
 	preempt_enable();

 	vcpu->stat.generic.blocking = 0;
+	return r;
 }

 static inline void update_halt_poll_stats(struct kvm_vcpu *vcpu, ktime_t start,
@@ -3485,12 +3495,13 @@ static inline void update_halt_poll_stats(struct kvm_vcpu *vcpu, ktime_t start,
 * expensive block+unblock sequence if a wake event arrives soon after the vCPU
 * is halted.
 */
-void kvm_vcpu_halt(struct kvm_vcpu *vcpu)
+int kvm_vcpu_halt(struct kvm_vcpu *vcpu)
 {
 	bool halt_poll_allowed = !kvm_arch_no_poll(vcpu);
 	bool do_halt_poll = halt_poll_allowed && vcpu->halt_poll_ns;
 	ktime_t start, cur, poll_end, stop;
 	bool waited = false;
+	int r;
 	u64 halt_ns;

 	start = cur = poll_end = ktime_get();
@@ -3501,14 +3512,15 @@ void kvm_vcpu_halt(struct kvm_vcpu *vcpu)
 		 * This sets KVM_REQ_UNHALT if an interrupt
 		 * arrives.
 		 */
-		if (kvm_vcpu_check_block(vcpu) < 0)
+		r = kvm_vcpu_check_block(vcpu);
+		if (r != 0)
 			goto out;
 		cpu_relax();
 		poll_end = cur = ktime_get();
 	} while (kvm_vcpu_can_poll(cur, stop));

 	waited = true;
-	kvm_vcpu_block(vcpu);
+	r = kvm_vcpu_block(vcpu);

 	cur = ktime_get();
 	vcpu->stat.generic.halt_wait_ns +=
@@ -3547,6 +3559,7 @@ void kvm_vcpu_halt(struct kvm_vcpu *vcpu)
 	}

 	trace_kvm_vcpu_wakeup(halt_ns, waited, vcpu_valid_wakeup(vcpu));
+	return r;
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_halt);
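The sketch below restates the tri-state return convention this patch
introduces (0 = stay blocked, 1 = runnable, -EINTR = woken for some other
reason such as a signal or pending timer) using toy types; it is illustrative
only, and the toy_* names have no counterpart in KVM.

/* Stand-alone illustration of the kvm_vcpu_check_block() return convention. */
#include <errno.h>
#include <stdbool.h>

struct toy_vcpu {
	bool runnable;
	bool signal_pending;
};

static int toy_check_block(struct toy_vcpu *vcpu)
{
	if (vcpu->runnable)
		return 1;          /* woken up and ready to run the guest */
	if (vcpu->signal_pending)
		return -EINTR;     /* woken up, but not runnable */
	return 0;                  /* keep blocking */
}

/* Caller mirrors what vcpu_block() does with the value in the next patch. */
static bool toy_should_enter_guest(struct toy_vcpu *vcpu)
{
	int r = toy_check_block(vcpu);

	return r > 0;              /* only a "runnable" wakeup re-enters the guest */
}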
From patchwork Thu Aug 11 21:06:00 2022
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 12941743
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: seanjc@google.com, mlevitsk@redhat.com, vkuznets@redhat.com
Subject: [PATCH v2 4/9] KVM: mips, x86: do not rely on KVM_REQ_UNHALT
Date: Thu, 11 Aug 2022 17:06:00 -0400
Message-Id: <20220811210605.402337-5-pbonzini@redhat.com>
In-Reply-To: <20220811210605.402337-1-pbonzini@redhat.com>
References: <20220811210605.402337-1-pbonzini@redhat.com>

KVM_REQ_UNHALT is now available as the return value from kvm_vcpu_halt
or kvm_vcpu_block, so the request can be simply cleared just like all
other architectures do.

No functional change intended.

Signed-off-by: Paolo Bonzini
---
 arch/mips/kvm/emulate.c | 8 ++++----
 arch/x86/kvm/x86.c      | 8 +++++---
 2 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/arch/mips/kvm/emulate.c b/arch/mips/kvm/emulate.c
index b494d8d39290..77d760d45c48 100644
--- a/arch/mips/kvm/emulate.c
+++ b/arch/mips/kvm/emulate.c
@@ -944,6 +944,7 @@ enum hrtimer_restart kvm_mips_count_timeout(struct kvm_vcpu *vcpu)

 enum emulation_result kvm_mips_emul_wait(struct kvm_vcpu *vcpu)
 {
+	int r;
 	kvm_debug("[%#lx] !!!WAIT!!! (%#lx)\n",
 		  vcpu->arch.pc, vcpu->arch.pending_exceptions);

@@ -952,16 +953,15 @@ enum emulation_result kvm_mips_emul_wait(struct kvm_vcpu *vcpu)
 	if (!vcpu->arch.pending_exceptions) {
 		kvm_vz_lose_htimer(vcpu);
 		vcpu->arch.wait = 1;
-		kvm_vcpu_halt(vcpu);
+		r = kvm_vcpu_halt(vcpu);

 		/*
 		 * We we are runnable, then definitely go off to user space to
 		 * check if any I/O interrupts are pending.
 		 */
-		if (kvm_check_request(KVM_REQ_UNHALT, vcpu)) {
-			kvm_clear_request(KVM_REQ_UNHALT, vcpu);
+		kvm_clear_request(KVM_REQ_UNHALT, vcpu);
+		if (r > 0)
 			vcpu->run->exit_reason = KVM_EXIT_IRQ_WINDOW_OPEN;
-		}
 	}

 	return EMULATE_DONE;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c44348bb6ef2..416df0fc7fda 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10611,6 +10611,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 static inline int vcpu_block(struct kvm_vcpu *vcpu)
 {
 	bool hv_timer;
+	int r;

 	if (!kvm_arch_vcpu_runnable(vcpu)) {
 		/*
@@ -10626,15 +10627,16 @@ static inline int vcpu_block(struct kvm_vcpu *vcpu)
 		kvm_vcpu_srcu_read_unlock(vcpu);

 		if (vcpu->arch.mp_state == KVM_MP_STATE_HALTED)
-			kvm_vcpu_halt(vcpu);
+			r = kvm_vcpu_halt(vcpu);
 		else
-			kvm_vcpu_block(vcpu);
+			r = kvm_vcpu_block(vcpu);

 		kvm_vcpu_srcu_read_lock(vcpu);

 		if (hv_timer)
 			kvm_lapic_switch_to_hv_timer(vcpu);

-		if (!kvm_check_request(KVM_REQ_UNHALT, vcpu))
+		kvm_clear_request(KVM_REQ_UNHALT, vcpu);
+		if (r <= 0)
 			return 1;
 	}
From patchwork Thu Aug 11 21:06:01 2022
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 12941741
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: seanjc@google.com, mlevitsk@redhat.com, vkuznets@redhat.com
Subject: [PATCH v2 5/9] KVM: remove KVM_REQ_UNHALT
Date: Thu, 11 Aug 2022 17:06:01 -0400
Message-Id: <20220811210605.402337-6-pbonzini@redhat.com>
In-Reply-To: <20220811210605.402337-1-pbonzini@redhat.com>
References: <20220811210605.402337-1-pbonzini@redhat.com>

KVM_REQ_UNHALT is now unnecessary because it is replaced by the return
value of kvm_vcpu_block/kvm_vcpu_halt.  Remove it.

No functional change intended.

Signed-off-by: Paolo Bonzini
---
 Documentation/virt/kvm/vcpu-requests.rst | 28 +-----------------------
 arch/arm64/kvm/arm.c                     |  1 -
 arch/mips/kvm/emulate.c                  |  1 -
 arch/powerpc/kvm/book3s_pr.c             |  1 -
 arch/powerpc/kvm/book3s_pr_papr.c        |  1 -
 arch/powerpc/kvm/booke.c                 |  1 -
 arch/powerpc/kvm/powerpc.c               |  1 -
 arch/riscv/kvm/vcpu_insn.c               |  1 -
 arch/s390/kvm/kvm-s390.c                 |  2 --
 arch/x86/kvm/x86.c                       |  2 --
 arch/x86/kvm/xen.c                       |  1 -
 include/linux/kvm_host.h                 |  3 +--
 virt/kvm/kvm_main.c                      |  5 -----
 13 files changed, 2 insertions(+), 46 deletions(-)

diff --git a/Documentation/virt/kvm/vcpu-requests.rst b/Documentation/virt/kvm/vcpu-requests.rst
index 31f62b64e07b..87f04c1fa53d 100644
--- a/Documentation/virt/kvm/vcpu-requests.rst
+++ b/Documentation/virt/kvm/vcpu-requests.rst
@@ -97,7 +97,7 @@ VCPU requests are simply bit indices of the ``vcpu->requests`` bitmap.
 This means general bitops, like those documented in [atomic-ops]_ could
 also be used, e.g. ::

-  clear_bit(KVM_REQ_UNHALT & KVM_REQUEST_MASK, &vcpu->requests);
+  clear_bit(KVM_REQ_UNBLOCK & KVM_REQUEST_MASK, &vcpu->requests);

 However, VCPU request users should refrain from doing so, as it would
 break the abstraction.  The first 8 bits are reserved for architecture
@@ -126,17 +126,6 @@ KVM_REQ_UNBLOCK
   or in order to update the interrupt routing and ensure that assigned
   devices will wake up the vCPU.

-KVM_REQ_UNHALT
-
-  This request may be made from the KVM common function kvm_vcpu_block(),
-  which is used to emulate an instruction that causes a CPU to halt until
-  one of an architectural specific set of events and/or interrupts is
-  received (determined by checking kvm_arch_vcpu_runnable()).  When that
-  event or interrupt arrives kvm_vcpu_block() makes the request.  This is
-  in contrast to when kvm_vcpu_block() returns due to any other reason,
-  such as a pending signal, which does not indicate the VCPU's halt
-  emulation should stop, and therefore does not make the request.
-
 KVM_REQ_OUTSIDE_GUEST_MODE

   This "request" ensures the target vCPU has exited guest mode prior to the
@@ -297,21 +286,6 @@ architecture dependent.  kvm_vcpu_block() calls kvm_arch_vcpu_runnable()
 to check if it should awaken.  One reason to do so is to provide
 architectures a function where requests may be checked if necessary.

-Clearing Requests
------------------
-
-Generally it only makes sense for the receiving VCPU thread to clear a
-request.  However, in some circumstances, such as when the requesting
-thread and the receiving VCPU thread are executed serially, such as when
-they are the same thread, or when they are using some form of concurrency
-control to temporarily execute synchronously, then it's possible to know
-that the request may be cleared immediately, rather than waiting for the
-receiving VCPU thread to handle the request in VCPU RUN.  The only current
-examples of this are kvm_vcpu_block() calls made by VCPUs to block
-themselves.  A possible side-effect of that call is to make the
-KVM_REQ_UNHALT request, which may then be cleared immediately when the
-VCPU returns from the call.
-
 References
 ==========

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 986cee6fbc7f..0bbd1ce601a5 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -666,7 +666,6 @@ void kvm_vcpu_wfi(struct kvm_vcpu *vcpu)

 	kvm_vcpu_halt(vcpu);
 	vcpu_clear_flag(vcpu, IN_WFIT);
-	kvm_clear_request(KVM_REQ_UNHALT, vcpu);

 	preempt_disable();
 	vgic_v4_load(vcpu);
diff --git a/arch/mips/kvm/emulate.c b/arch/mips/kvm/emulate.c
index 77d760d45c48..c11143586765 100644
--- a/arch/mips/kvm/emulate.c
+++ b/arch/mips/kvm/emulate.c
@@ -959,7 +959,6 @@ enum emulation_result kvm_mips_emul_wait(struct kvm_vcpu *vcpu)
 		 * We we are runnable, then definitely go off to user space to
 		 * check if any I/O interrupts are pending.
 		 */
-		kvm_clear_request(KVM_REQ_UNHALT, vcpu);
 		if (r > 0)
 			vcpu->run->exit_reason = KVM_EXIT_IRQ_WINDOW_OPEN;
 	}
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index d6abed6e51e6..9fc4dd8f66eb 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -499,7 +499,6 @@ static void kvmppc_set_msr_pr(struct kvm_vcpu *vcpu, u64 msr)
 	if (msr & MSR_POW) {
 		if (!vcpu->arch.pending_exceptions) {
 			kvm_vcpu_halt(vcpu);
-			kvm_clear_request(KVM_REQ_UNHALT, vcpu);
 			vcpu->stat.generic.halt_wakeup++;

 			/* Unset POW bit after we woke up */
diff --git a/arch/powerpc/kvm/book3s_pr_papr.c b/arch/powerpc/kvm/book3s_pr_papr.c
index a1f2978b2a86..b2c89e850d7a 100644
--- a/arch/powerpc/kvm/book3s_pr_papr.c
+++ b/arch/powerpc/kvm/book3s_pr_papr.c
@@ -393,7 +393,6 @@ int kvmppc_h_pr(struct kvm_vcpu *vcpu, unsigned long cmd)
 	case H_CEDE:
 		kvmppc_set_msr_fast(vcpu, kvmppc_get_msr(vcpu) | MSR_EE);
 		kvm_vcpu_halt(vcpu);
-		kvm_clear_request(KVM_REQ_UNHALT, vcpu);
 		vcpu->stat.generic.halt_wakeup++;
 		return EMULATE_DONE;
 	case H_LOGICAL_CI_LOAD:
diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index 06c5830a93f9..7b4920e9fd26 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -719,7 +719,6 @@ int kvmppc_core_prepare_to_enter(struct kvm_vcpu *vcpu)
 	if (vcpu->arch.shared->msr & MSR_WE) {
 		local_irq_enable();
 		kvm_vcpu_halt(vcpu);
-		kvm_clear_request(KVM_REQ_UNHALT, vcpu);
 		hard_irq_disable();

 		kvmppc_set_exit_type(vcpu, EMULATED_MTMSRWE_EXITS);
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 191992fcb2c2..3c384ed1d9fc 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -238,7 +238,6 @@ int kvmppc_kvm_pv(struct kvm_vcpu *vcpu)
 	case EV_HCALL_TOKEN(EV_IDLE):
 		r = EV_SUCCESS;
 		kvm_vcpu_halt(vcpu);
-		kvm_clear_request(KVM_REQ_UNHALT, vcpu);
 		break;
 	default:
 		r = EV_UNIMPLEMENTED;
diff --git a/arch/riscv/kvm/vcpu_insn.c b/arch/riscv/kvm/vcpu_insn.c
index 7eb90a47b571..0bb52761a3f7 100644
--- a/arch/riscv/kvm/vcpu_insn.c
+++ b/arch/riscv/kvm/vcpu_insn.c
@@ -191,7 +191,6 @@ void kvm_riscv_vcpu_wfi(struct kvm_vcpu *vcpu)
 		kvm_vcpu_srcu_read_unlock(vcpu);
 		kvm_vcpu_halt(vcpu);
 		kvm_vcpu_srcu_read_lock(vcpu);
-		kvm_clear_request(KVM_REQ_UNHALT, vcpu);
 	}
 }

diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index edfd4bbd0cba..aa39ea4582bd 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -4343,8 +4343,6 @@ static int kvm_s390_handle_requests(struct kvm_vcpu *vcpu)
 		goto retry;
 	}

-	/* nothing to do, just clear the request */
-	kvm_clear_request(KVM_REQ_UNHALT, vcpu);
 	/* we left the vsie handler, nothing to do, just clear the request */
 	kvm_clear_request(KVM_REQ_VSIE_RESTART, vcpu);

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 416df0fc7fda..7f084613fac8 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10635,7 +10635,6 @@ static inline int vcpu_block(struct kvm_vcpu *vcpu)
 		if (hv_timer)
 			kvm_lapic_switch_to_hv_timer(vcpu);

-		kvm_clear_request(KVM_REQ_UNHALT, vcpu);
 		if (r <= 0)
 			return 1;
 	}
@@ -10842,7 +10841,6 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 			r = 0;
 			goto out;
 		}
-		kvm_clear_request(KVM_REQ_UNHALT, vcpu);
 		r = -EAGAIN;
 		if (signal_pending(current)) {
 			r = -EINTR;
diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index 280cb5dc7341..93c628d3e3a9 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -1065,7 +1065,6 @@ static bool kvm_xen_schedop_poll(struct kvm_vcpu *vcpu, bool longmode,
 		del_timer(&vcpu->arch.xen.poll_timer);

 		vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
-		kvm_clear_request(KVM_REQ_UNHALT, vcpu);
 	}

 	vcpu->arch.xen.poll_evtchn = 0;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index cbd9577e5447..cfe46830783f 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -151,12 +151,11 @@ static inline bool is_error_page(struct page *page)
 #define KVM_REQUEST_NO_ACTION      BIT(10)
 /*
  * Architecture-independent vcpu->requests bit members
- * Bits 4-7 are reserved for more arch-independent bits.
+ * Bits 3-7 are reserved for more arch-independent bits.
  */
 #define KVM_REQ_TLB_FLUSH         (0 | KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
 #define KVM_REQ_VM_DEAD           (1 | KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
 #define KVM_REQ_UNBLOCK           2
-#define KVM_REQ_UNHALT            3
 #define KVM_REQUEST_ARCH_BASE     8

 /*
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index e827805b7b28..18292f028536 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3414,7 +3414,6 @@ static int kvm_vcpu_check_block(struct kvm_vcpu *vcpu)
 	int idx = srcu_read_lock(&vcpu->kvm->srcu);

 	if (kvm_arch_vcpu_runnable(vcpu)) {
-		kvm_make_request(KVM_REQ_UNHALT, vcpu);
 		ret = 1;
 		goto out;
 	}
@@ -3508,10 +3507,6 @@ int kvm_vcpu_halt(struct kvm_vcpu *vcpu)
 	stop = do_halt_poll ? start : ktime_add_ns(start, vcpu->halt_poll_ns);

 	do {
-		/*
-		 * This sets KVM_REQ_UNHALT if an interrupt
-		 * arrives.
-		 */
 		r = kvm_vcpu_check_block(vcpu);
 		if (r != 0)
 			goto out;
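As background for the documentation hunk above, the sketch below models how a
vCPU request constant encodes a bit index in its low byte plus behaviour flags
in the upper bits.  The TOY_* constants copy the layout shown in
include/linux/kvm_host.h, but the helpers are stand-ins for
kvm_make_request()/kvm_check_request(), not the real implementations.

/* Toy model of the vcpu->requests bit encoding. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define TOY_REQUEST_MASK	0xffu		/* low byte = bit index */
#define TOY_REQUEST_NO_WAKEUP	(1u << 8)
#define TOY_REQUEST_WAIT	(1u << 9)

#define TOY_REQ_TLB_FLUSH	(0 | TOY_REQUEST_WAIT | TOY_REQUEST_NO_WAKEUP)
#define TOY_REQ_VM_DEAD		(1 | TOY_REQUEST_WAIT | TOY_REQUEST_NO_WAKEUP)
#define TOY_REQ_UNBLOCK		2
/* bit 3, formerly KVM_REQ_UNHALT, is free again after this patch */

static void toy_make_request(unsigned int req, _Atomic uint64_t *requests)
{
	atomic_fetch_or(requests, 1ull << (req & TOY_REQUEST_MASK));
}

static bool toy_check_request(unsigned int req, _Atomic uint64_t *requests)
{
	uint64_t bit = 1ull << (req & TOY_REQUEST_MASK);

	/* test-and-clear, like kvm_check_request() */
	return atomic_fetch_and(requests, ~bit) & bit;
}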
From patchwork Thu Aug 11 21:06:02 2022
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 12941735
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: seanjc@google.com, mlevitsk@redhat.com, vkuznets@redhat.com
Subject: [PATCH v2 6/9] KVM: x86: make vendor code check for all nested events
Date: Thu, 11 Aug 2022 17:06:02 -0400
Message-Id: <20220811210605.402337-7-pbonzini@redhat.com>
In-Reply-To: <20220811210605.402337-1-pbonzini@redhat.com>
References: <20220811210605.402337-1-pbonzini@redhat.com>

Interrupts, NMIs etc. sent while in guest mode are already handled
properly by the *_interrupt_allowed callbacks, but other events can
cause a vCPU to be runnable that are specific to guest mode.

In the case of VMX there are two, the preemption timer and the
monitor trap.  The VMX preemption timer is already special cased via
the hv_timer_pending callback, but the purpose of the callback can be
easily extended to MTF or in fact any other event that can occur only
in guest mode.

Rename the callback and add an MTF check; kvm_arch_vcpu_runnable()
now will return true if an MTF is pending, without relying on
kvm_vcpu_running()'s call to kvm_check_nested_events().  Until that
call is removed, however, the patch introduces no functional change.

Reported-by: Maxim Levitsky
Reviewed-by: Maxim Levitsky
Signed-off-by: Paolo Bonzini
---
 arch/x86/include/asm/kvm_host.h | 2 +-
 arch/x86/kvm/vmx/nested.c       | 9 ++++++++-
 arch/x86/kvm/x86.c              | 8 ++++----
 3 files changed, 13 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 5ffa578cafe1..293ff678fff5 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1636,7 +1636,7 @@ struct kvm_x86_nested_ops {
 	int (*check_events)(struct kvm_vcpu *vcpu);
 	bool (*handle_page_fault_workaround)(struct kvm_vcpu *vcpu,
 					     struct x86_exception *fault);
-	bool (*hv_timer_pending)(struct kvm_vcpu *vcpu);
+	bool (*has_events)(struct kvm_vcpu *vcpu);
 	void (*triple_fault)(struct kvm_vcpu *vcpu);
 	int (*get_state)(struct kvm_vcpu *vcpu,
 			 struct kvm_nested_state __user *user_kvm_nested_state,
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index ddd4367d4826..9631cdcdd058 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -3876,6 +3876,13 @@ static bool nested_vmx_preemption_timer_pending(struct kvm_vcpu *vcpu)
 	       to_vmx(vcpu)->nested.preemption_timer_expired;
 }

+static bool vmx_has_nested_events(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+
+	return nested_vmx_preemption_timer_pending(vcpu) || vmx->nested.mtf_pending;
+}
+
 static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
@@ -6816,7 +6823,7 @@ struct kvm_x86_nested_ops vmx_nested_ops = {
 	.leave_nested = vmx_leave_nested,
 	.check_events = vmx_check_nested_events,
 	.handle_page_fault_workaround = nested_vmx_handle_page_fault_workaround,
-	.hv_timer_pending = nested_vmx_preemption_timer_pending,
+	.has_events = vmx_has_nested_events,
 	.triple_fault = nested_vmx_triple_fault,
 	.get_state = vmx_get_nested_state,
 	.set_state = vmx_set_nested_state,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 7f084613fac8..0f9f24793b8a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9789,8 +9789,8 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit)
 	}

 	if (is_guest_mode(vcpu) &&
-	    kvm_x86_ops.nested_ops->hv_timer_pending &&
-	    kvm_x86_ops.nested_ops->hv_timer_pending(vcpu))
+	    kvm_x86_ops.nested_ops->has_events &&
+	    kvm_x86_ops.nested_ops->has_events(vcpu))
 		*req_immediate_exit = true;

 	WARN_ON(vcpu->arch.exception.pending);
@@ -12562,8 +12562,8 @@ static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
 		return true;

 	if (is_guest_mode(vcpu) &&
-	    kvm_x86_ops.nested_ops->hv_timer_pending &&
-	    kvm_x86_ops.nested_ops->hv_timer_pending(vcpu))
+	    kvm_x86_ops.nested_ops->has_events &&
+	    kvm_x86_ops.nested_ops->has_events(vcpu))
 		return true;

 	if (kvm_xen_has_pending_events(vcpu))
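The renamed callback keeps the usual "optional ops pointer" convention seen in
the x86.c hunks above: a NULL pointer means the vendor module does not
implement the hook, so callers test the pointer before invoking it.  A minimal
standalone sketch of that pattern follows; all toy_* names are made up and do
not exist in KVM.

/* Optional function-pointer callback in an ops struct, NULL-checked by callers. */
#include <stdbool.h>
#include <stddef.h>

struct toy_vcpu;

struct toy_nested_ops {
	int  (*check_events)(struct toy_vcpu *vcpu);
	bool (*has_events)(struct toy_vcpu *vcpu);	/* optional, may be NULL */
};

static bool toy_nested_event_pending(struct toy_vcpu *vcpu,
				     const struct toy_nested_ops *ops)
{
	/* Same shape as: kvm_x86_ops.nested_ops->has_events &&
	 *                kvm_x86_ops.nested_ops->has_events(vcpu) */
	return ops->has_events && ops->has_events(vcpu);
}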
From patchwork Thu Aug 11 21:06:03 2022
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 12941736
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: seanjc@google.com, mlevitsk@redhat.com, vkuznets@redhat.com, stable@vger.kernel.org
Subject: [PATCH v2 7/9] KVM: nVMX: Make an event request when pending an MTF nested VM-Exit
Date: Thu, 11 Aug 2022 17:06:03 -0400
Message-Id: <20220811210605.402337-8-pbonzini@redhat.com>
In-Reply-To: <20220811210605.402337-1-pbonzini@redhat.com>
References: <20220811210605.402337-1-pbonzini@redhat.com>

From: Sean Christopherson

Set KVM_REQ_EVENT when MTF becomes pending to ensure that KVM will run
through inject_pending_event() and thus vmx_check_nested_events() prior
to re-entering the guest.

MTF currently works by virtue of KVM's hack that calls
kvm_check_nested_events() from kvm_vcpu_running(), but that hack will
be removed in the near future.  Until that call is removed, the patch
introduces no functional change.

Fixes: 5ef8acbdd687 ("KVM: nVMX: Emulate MTF when performing instruction emulation")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson
Signed-off-by: Paolo Bonzini
Reviewed-by: Maxim Levitsky
---
 arch/x86/kvm/vmx/vmx.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index d7f8331d6f7e..940c0c0f8281 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1660,10 +1660,12 @@ static void vmx_update_emulated_instruction(struct kvm_vcpu *vcpu)
 	 */
 	if (nested_cpu_has_mtf(vmcs12) &&
 	    (!vcpu->arch.exception.pending ||
-	     vcpu->arch.exception.nr == DB_VECTOR))
+	     vcpu->arch.exception.nr == DB_VECTOR)) {
 		vmx->nested.mtf_pending = true;
-	else
+		kvm_make_request(KVM_REQ_EVENT, vcpu);
+	} else {
 		vmx->nested.mtf_pending = false;
+	}
 }

 static int vmx_skip_emulated_instruction(struct kvm_vcpu *vcpu)
From patchwork Thu Aug 11 21:06:04 2022
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 12941742
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: seanjc@google.com, mlevitsk@redhat.com, vkuznets@redhat.com
Subject: [PATCH v2 8/9] KVM: x86: lapic does not have to process INIT if it is blocked
Date: Thu, 11 Aug 2022 17:06:04 -0400
Message-Id: <20220811210605.402337-9-pbonzini@redhat.com>
In-Reply-To: <20220811210605.402337-1-pbonzini@redhat.com>
References: <20220811210605.402337-1-pbonzini@redhat.com>

Do not return true from kvm_apic_has_events, and consequently from
kvm_vcpu_has_events, if the vCPU is not going to process an INIT.

Signed-off-by: Paolo Bonzini
---
 arch/x86/include/asm/kvm_host.h | 1 +
 arch/x86/kvm/i8259.c            | 2 +-
 arch/x86/kvm/lapic.h            | 2 +-
 arch/x86/kvm/x86.c              | 5 +++++
 arch/x86/kvm/x86.h              | 5 -----
 5 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 293ff678fff5..1ce4ebc41118 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -2042,6 +2042,7 @@ void __user *__x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa,
 				     u32 size);
 bool kvm_vcpu_is_reset_bsp(struct kvm_vcpu *vcpu);
 bool kvm_vcpu_is_bsp(struct kvm_vcpu *vcpu);
+bool kvm_vcpu_latch_init(struct kvm_vcpu *vcpu);

 bool kvm_intr_is_single_vcpu(struct kvm *kvm, struct kvm_lapic_irq *irq,
 			     struct kvm_vcpu **dest_vcpu);
diff --git a/arch/x86/kvm/i8259.c b/arch/x86/kvm/i8259.c
index e1bb6218bb96..177555eea54e 100644
--- a/arch/x86/kvm/i8259.c
+++ b/arch/x86/kvm/i8259.c
@@ -29,9 +29,9 @@
 #include
 #include
 #include
-#include "irq.h"
+#include <linux/kvm_host.h>

-#include <linux/kvm_host.h>
+#include "irq.h"
 #include "trace.h"

 #define pr_pic_unimpl(fmt, ...)	\
diff --git a/arch/x86/kvm/lapic.h b/arch/x86/kvm/lapic.h
index 117a46df5cc1..12577ddccdfc 100644
--- a/arch/x86/kvm/lapic.h
+++ b/arch/x86/kvm/lapic.h
@@ -225,7 +225,7 @@ static inline bool kvm_vcpu_apicv_active(struct kvm_vcpu *vcpu)

 static inline bool kvm_apic_has_events(struct kvm_vcpu *vcpu)
 {
-	return lapic_in_kernel(vcpu) && vcpu->arch.apic->pending_events;
+	return lapic_in_kernel(vcpu) && vcpu->arch.apic->pending_events && !kvm_vcpu_latch_init(vcpu);
 }

 static inline bool kvm_lowest_prio_delivery(struct kvm_lapic_irq *irq)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 0f9f24793b8a..5e9358ea112b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12529,6 +12529,11 @@ static inline bool kvm_guest_apic_has_interrupt(struct kvm_vcpu *vcpu)
 		static_call(kvm_x86_guest_apic_has_interrupt)(vcpu));
 }

+bool kvm_vcpu_latch_init(struct kvm_vcpu *vcpu)
+{
+	return is_smm(vcpu) || static_call(kvm_x86_apic_init_signal_blocked)(vcpu);
+}
+
 static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
 {
 	if (!list_empty_careful(&vcpu->async_pf.done))
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 1926d2cb8e79..c333e7cf933a 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -267,11 +267,6 @@ static inline bool kvm_check_has_quirk(struct kvm *kvm, u64 quirk)
 	return !(kvm->arch.disabled_quirks & quirk);
 }

-static inline bool kvm_vcpu_latch_init(struct kvm_vcpu *vcpu)
-{
-	return is_smm(vcpu) || static_call(kvm_x86_apic_init_signal_blocked)(vcpu);
-}
-
 void kvm_inject_realmode_interrupt(struct kvm_vcpu *vcpu, int irq, int inc_eip);

 u64 get_kvmclock_ns(struct kvm *kvm);
From patchwork Thu Aug 11 21:06:05 2022
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 12941737
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: seanjc@google.com, mlevitsk@redhat.com, vkuznets@redhat.com
Subject: [PATCH v2 9/9] KVM: x86: never write to memory from kvm_vcpu_check_block
Date: Thu, 11 Aug 2022 17:06:05 -0400
Message-Id: <20220811210605.402337-10-pbonzini@redhat.com>
In-Reply-To: <20220811210605.402337-1-pbonzini@redhat.com>
References: <20220811210605.402337-1-pbonzini@redhat.com>

kvm_vcpu_check_block() is called while not in TASK_RUNNING, and therefore
it cannot sleep.  Writing to guest memory is therefore forbidden, but it
can happen on AMD processors if kvm_check_nested_events() causes a vmexit.

Fortunately, all events that are caught by kvm_check_nested_events() are
also recognized by kvm_vcpu_has_events() through vendor callbacks such as
kvm_x86_interrupt_allowed() or kvm_x86_ops.nested_ops->has_events(), so
remove the call and postpone the actual processing to vcpu_block().

Signed-off-by: Paolo Bonzini
Reviewed-by: Maxim Levitsky
---
 arch/x86/kvm/x86.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 5e9358ea112b..9226fd536783 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10639,6 +10639,17 @@ static inline int vcpu_block(struct kvm_vcpu *vcpu)
 			return 1;
 	}

+	if (is_guest_mode(vcpu)) {
+		/*
+		 * Evaluate nested events before exiting the halted state.
+		 * This allows the halt state to be recorded properly in
+		 * the VMCS12's activity state field (AMD does not have
+		 * a similar field and a vmexit always causes a spurious
+		 * wakeup from HLT).
+		 */
+		kvm_check_nested_events(vcpu);
+	}
+
 	if (kvm_apic_accept_events(vcpu) < 0)
 		return 0;
 	switch(vcpu->arch.mp_state) {
@@ -10662,9 +10673,6 @@ static inline int vcpu_block(struct kvm_vcpu *vcpu)

 static inline bool kvm_vcpu_running(struct kvm_vcpu *vcpu)
 {
-	if (is_guest_mode(vcpu))
-		kvm_check_nested_events(vcpu);
-
 	return (vcpu->arch.mp_state == KVM_MP_STATE_RUNNABLE &&
 		!vcpu->arch.apf.halted);
 }
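A kernel-style sketch (an assumed generic pattern, not KVM code) of the
constraint this last patch restores: the check callback runs between
set_current_state(TASK_INTERRUPTIBLE) and schedule(), so it must not do
anything that can sleep, such as faulting in or writing guest memory.

/* Generic interruptible wait loop; the check must be side-effect free. */
#include <linux/sched.h>
#include <linux/sched/signal.h>

static void toy_wait_for(bool (*condition_met)(void))
{
	for (;;) {
		set_current_state(TASK_INTERRUPTIBLE);

		/*
		 * Only cheap, non-sleeping checks are allowed here; that is
		 * the rule kvm_vcpu_check_block() has to follow.
		 */
		if (condition_met() || signal_pending(current))
			break;

		schedule();
	}
	__set_current_state(TASK_RUNNING);
}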