From patchwork Thu Jul 16 18:22:21 2020
X-Patchwork-Submitter: Thomas Gleixner
X-Patchwork-Id: 11668365
Message-Id: <20200716185425.307587523@linutronix.de>
Date: Thu, 16 Jul 2020 20:22:21 +0200
From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, linux-arch@vger.kernel.org, Will Deacon, Arnd Bergmann,
    Mark Rutland, Kees Cook, Keno Fischer, Paolo Bonzini, kvm@vger.kernel.org
Subject: [patch V3 13/13] x86/kvm: Use generic exit to guest work function
References: <20200716182208.180916541@linutronix.de>
X-Mailing-List: kvm@vger.kernel.org

Use the generic infrastructure to check for and handle pending work before
entering guest mode. This now handles TIF_NOTIFY_RESUME as well, which was
ignored so far. Handling it is important because it covers task work, and
task work will be used to offload the heavy lifting of POSIX CPU timers to
thread context.
Signed-off-by: Thomas Gleixner
Cc: Paolo Bonzini
Cc: kvm@vger.kernel.org
---
 arch/x86/kvm/Kconfig   |    1 +
 arch/x86/kvm/vmx/vmx.c |   11 +++++------
 arch/x86/kvm/x86.c     |   15 ++++++---------
 3 files changed, 12 insertions(+), 15 deletions(-)

--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -42,6 +42,7 @@ config KVM
 	select HAVE_KVM_MSI
 	select HAVE_KVM_CPU_RELAX_INTERCEPT
 	select HAVE_KVM_NO_POLL
+	select KVM_EXIT_TO_GUEST_WORK
 	select KVM_GENERIC_DIRTYLOG_READ_PROTECT
 	select KVM_VFIO
 	select SRCU
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -27,6 +27,7 @@
 #include <...>
 #include <...>
 #include <...>
+#include <...>

 #include <...>
 #include <...>
@@ -5376,14 +5377,12 @@ static int handle_invalid_guest_state(st
 		}

 		/*
-		 * Note, return 1 and not 0, vcpu_run() is responsible for
-		 * morphing the pending signal into the proper return code.
+		 * Note, return 1 and not 0, vcpu_run() will invoke
+		 * exit_to_guest_mode() which will create a proper return
+		 * code.
 		 */
-		if (signal_pending(current))
+		if (__exit_to_guest_mode_work_pending())
 			return 1;
-
-		if (need_resched())
-			schedule();
 	}

 	return 1;
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -56,6 +56,7 @@
 #include <...>
 #include <...>
 #include <...>
+#include <...>

 #include <...>
@@ -1585,7 +1586,7 @@ EXPORT_SYMBOL_GPL(kvm_emulate_wrmsr);
 bool kvm_vcpu_exit_request(struct kvm_vcpu *vcpu)
 {
 	return vcpu->mode == EXITING_GUEST_MODE || kvm_request_pending(vcpu) ||
-		need_resched() || signal_pending(current);
+		exit_to_guest_mode_work_pending();
 }

 EXPORT_SYMBOL_GPL(kvm_vcpu_exit_request);
@@ -8676,15 +8677,11 @@ static int vcpu_run(struct kvm_vcpu *vcp
 			break;
 		}

-		if (signal_pending(current)) {
-			r = -EINTR;
-			vcpu->run->exit_reason = KVM_EXIT_INTR;
-			++vcpu->stat.signal_exits;
-			break;
-		}
-		if (need_resched()) {
+		if (exit_to_guest_mode_work_pending()) {
 			srcu_read_unlock(&kvm->srcu, vcpu->srcu_idx);
-			cond_resched();
+			r = exit_to_guest_mode(vcpu);
+			if (r)
+				return r;
 			vcpu->srcu_idx = srcu_read_lock(&kvm->srcu);
 		}
 	}