From patchwork Wed May 22 01:40:07 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13670167
Date: Tue, 21 May 2024 18:40:07 -0700
Message-ID: <20240522014013.1672962-1-seanjc@google.com>
Subject: [PATCH v2 0/6] KVM: Fold kvm_arch_sched_in() into kvm_arch_vcpu_load()
From: Sean Christopherson
To: Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen,
    Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
    Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
    Sean Christopherson, Paolo Bonzini
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    kvm@vger.kernel.org, loongarch@lists.linux.dev, linux-mips@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org

Drop kvm_arch_sched_in() and instead add and use kvm_vcpu.scheduled_out to
communicate to kvm_arch_vcpu_load() that the vCPU is being scheduled back in.

While fiddling with an idea for optimizing state management on AMD CPUs, I
wanted to skip re-saving certain host state when a vCPU is scheduled back in,
as the state (theoretically) shouldn't change for the task while it's
scheduled out.  Actually doing that was annoying and unnecessarily brittle
due to having a separate API for the kvm_sched_in() case (the state save
needed to be in kvm_arch_vcpu_load() for the common path).

The other motivation for this is to avoid yet another arch hook, and more
arbitrary ordering, if there's a future need to hook kvm_sched_out() (we've
come close on the x86 side several times).  E.g. kvm_arch_vcpu_put() can
simply check kvm_vcpu.scheduled_out if it needs to do something specific for
the vCPU being scheduled out (see the sketch after the v2 notes below).

v2:
 - Add scheduled_out flag instead of passing a bool to
   kvm_arch_vcpu_load(). [Oliver]
 - Tack on patches to clean up x86's setting of l1tf_flush_l1d in
   kvm_arch_vcpu_load() (the code looked slightly less weird when the flag
   was being set by kvm_arch_sched_in()).
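To make the intended flow concrete, here is a rough sketch (an illustration,
not the literal patches) of how the scheduled_out flag could be plumbed
through the common preempt-notifier callbacks in virt/kvm/kvm_main.c and then
consumed on the arch side.  The exact placement of the WRITE_ONCE() calls and
the example work done under the flag are assumptions for illustration only.

	/*
	 * Sketch: new field in struct kvm_vcpu (include/linux/kvm_host.h).
	 * Only the flag itself comes from the series description; the
	 * surrounding details are illustrative.
	 */
	struct kvm_vcpu {
		/* ... existing fields ... */
		bool scheduled_out;
	};

	/* virt/kvm/kvm_main.c: common code owns the flag, no extra arch hook. */
	static void kvm_sched_in(struct preempt_notifier *pn, int cpu)
	{
		struct kvm_vcpu *vcpu = preempt_notifier_to_vcpu(pn);

		__this_cpu_write(kvm_running_vcpu, vcpu);
		kvm_arch_vcpu_load(vcpu, cpu);	/* observes scheduled_out == true */

		WRITE_ONCE(vcpu->scheduled_out, false);
	}

	static void kvm_sched_out(struct preempt_notifier *pn,
				  struct task_struct *next)
	{
		struct kvm_vcpu *vcpu = preempt_notifier_to_vcpu(pn);

		WRITE_ONCE(vcpu->scheduled_out, true);

		kvm_arch_vcpu_put(vcpu);	/* may also key off scheduled_out */
		__this_cpu_write(kvm_running_vcpu, NULL);
	}

	/* Arch side (e.g. x86): branch on the flag instead of a dedicated hook. */
	void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
	{
		if (vcpu->scheduled_out) {
			/*
			 * Work that only matters when the task is scheduled
			 * back in, e.g. skipping a redundant host state save,
			 * goes here.
			 */
		}

		/* ... normal vCPU load path ... */
	}

With the flag cleared only after kvm_arch_vcpu_load() returns, the arch code
can distinguish "scheduled back in" from a regular vcpu_load() without any
change to the kvm_arch_vcpu_load() signature.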
v1: https://lore.kernel.org/all/20240430193157.419425-1-seanjc@google.com

Sean Christopherson (6):
  KVM: Add a flag to track if a loaded vCPU is scheduled out
  KVM: VMX: Move PLE grow/shrink helpers above vmx_vcpu_load()
  KVM: x86: Fold kvm_arch_sched_in() into kvm_arch_vcpu_load()
  KVM: Delete the now unused kvm_arch_sched_in()
  KVM: x86: Unconditionally set l1tf_flush_l1d during vCPU load
  KVM: x86: Drop now-superfluous setting of l1tf_flush_l1d in vcpu_run()

 arch/arm64/include/asm/kvm_host.h     |  1 -
 arch/loongarch/include/asm/kvm_host.h |  1 -
 arch/mips/include/asm/kvm_host.h      |  1 -
 arch/powerpc/include/asm/kvm_host.h   |  1 -
 arch/riscv/include/asm/kvm_host.h     |  1 -
 arch/s390/include/asm/kvm_host.h      |  1 -
 arch/x86/include/asm/kvm-x86-ops.h    |  1 -
 arch/x86/include/asm/kvm_host.h       |  2 -
 arch/x86/kvm/pmu.c                    |  6 +-
 arch/x86/kvm/svm/svm.c                | 11 +---
 arch/x86/kvm/vmx/main.c               |  2 -
 arch/x86/kvm/vmx/vmx.c                | 80 +++++++++++++--------------
 arch/x86/kvm/vmx/x86_ops.h            |  1 -
 arch/x86/kvm/x86.c                    | 22 +++-----
 include/linux/kvm_host.h              |  3 +-
 virt/kvm/kvm_main.c                   |  5 +-
 16 files changed, 59 insertions(+), 80 deletions(-)


base-commit: 4aad0b1893a141f114ba40ed509066f3c9bc24b0

Acked-by: Kai Huang