From patchwork Fri May 3 18:17:34 2024
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13653278
Date: Fri, 3 May 2024 11:17:34 -0700
Message-ID: <20240503181734.1467938-4-dmatlack@google.com>
In-Reply-To: <20240503181734.1467938-1-dmatlack@google.com>
References: <20240503181734.1467938-1-dmatlack@google.com>
Subject: [PATCH v3 3/3] KVM: Mark a vCPU as preempted/ready iff it's scheduled out while running
From: David Matlack
To: Paolo Bonzini
Cc: Marc Zyngier, Oliver Upton, James Morse, Suzuki K Poulose, Zenghui Yu,
 Tianrui Zhao, Bibo Mao, Huacai Chen, Michael Ellerman, Nicholas Piggin,
 Anup Patel, Atish Patra, Paul Walmsley, Palmer Dabbelt, Albert Ou,
 Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
 David Hildenbrand, Sean Christopherson,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 kvm@vger.kernel.org, loongarch@lists.linux.dev,
 linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
 kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
 David Matlack

Mark a vCPU as preempted/ready if and only if it's scheduled out while
running, i.e. do not mark a vCPU preempted/ready if it's scheduled out
during a non-KVM_RUN ioctl() or when userspace is doing KVM_RUN with
immediate_exit.

Commit 54aa83c90198 ("KVM: x86: do not set st->preempted when going back
to user space") stopped marking a vCPU as preempted when returning to
userspace, but if userspace then invokes a KVM vCPU ioctl() that gets
preempted, the vCPU will be marked preempted/ready.
This is arguably incorrect behavior, since the vCPU was not actually
preempted while the guest was running; it was preempted while doing
something on behalf of userspace.

This commit also avoids KVM dirtying guest memory after userspace has
paused vCPUs, e.g. for Live Migration, which allows userspace to collect
the final dirty bitmap before or in parallel with saving vCPU state,
without having to worry about saving vCPU state triggering writes to
guest memory.

Suggested-by: Sean Christopherson
Signed-off-by: David Matlack
---
 virt/kvm/kvm_main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 2b29851a90bd..3973e62acc7c 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -6302,7 +6302,7 @@ static void kvm_sched_out(struct preempt_notifier *pn,
 {
 	struct kvm_vcpu *vcpu = preempt_notifier_to_vcpu(pn);
 
-	if (current->on_rq) {
+	if (current->on_rq && vcpu->wants_to_run) {
 		WRITE_ONCE(vcpu->preempted, true);
 		WRITE_ONCE(vcpu->ready, true);
 	}