From patchwork Wed May 22 01:40:07 2024
From: Sean Christopherson
Date: Tue, 21 May 2024 18:40:07 -0700
Subject: [PATCH v2 0/6] KVM: Fold kvm_arch_sched_in() into kvm_arch_vcpu_load()
Message-ID: <20240522014013.1672962-1-seanjc@google.com>
To: Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen,
    Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
    Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
    Sean Christopherson, Paolo Bonzini
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    kvm@vger.kernel.org, loongarch@lists.linux.dev, linux-mips@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
Reply-To: Sean Christopherson

Drop kvm_arch_sched_in() and instead add and use kvm_vcpu.scheduled_out to
communicate to kvm_arch_vcpu_load() that the vCPU is being scheduled back in.

While fiddling with an idea for optimizing state management on AMD CPUs, I
wanted to skip re-saving certain host state when a vCPU is scheduled back in,
as the state (theoretically) shouldn't change for the task while it's
scheduled out.  Actually doing that was annoying and unnecessarily brittle
due to having a separate API for the kvm_sched_in() case (the state save
needed to be in kvm_arch_vcpu_load() for the common path).

The other motivation for this is to avoid yet another arch hook, and more
arbitrary ordering, if there's a future need to hook kvm_sched_out() (we've
come close on the x86 side several times).  E.g. kvm_arch_vcpu_put() can
simply check kvm_vcpu.scheduled_out if it needs to do something specific for
the vCPU being scheduled out.
v2:
 - Add scheduled_out flag instead of passing a bool to
   kvm_arch_vcpu_load(). [Oliver]
 - Tack on patches to clean up x86's setting of l1tf_flush_l1d in
   kvm_arch_vcpu_load() (the code looked slightly less weird when the flag
   was being set by kvm_arch_sched_in()).

v1: https://lore.kernel.org/all/20240430193157.419425-1-seanjc@google.com

Sean Christopherson (6):
  KVM: Add a flag to track if a loaded vCPU is scheduled out
  KVM: VMX: Move PLE grow/shrink helpers above vmx_vcpu_load()
  KVM: x86: Fold kvm_arch_sched_in() into kvm_arch_vcpu_load()
  KVM: Delete the now unused kvm_arch_sched_in()
  KVM: x86: Unconditionally set l1tf_flush_l1d during vCPU load
  KVM: x86: Drop now-superfluous setting of l1tf_flush_l1d in vcpu_run()

 arch/arm64/include/asm/kvm_host.h     |  1 -
 arch/loongarch/include/asm/kvm_host.h |  1 -
 arch/mips/include/asm/kvm_host.h      |  1 -
 arch/powerpc/include/asm/kvm_host.h   |  1 -
 arch/riscv/include/asm/kvm_host.h     |  1 -
 arch/s390/include/asm/kvm_host.h      |  1 -
 arch/x86/include/asm/kvm-x86-ops.h    |  1 -
 arch/x86/include/asm/kvm_host.h       |  2 -
 arch/x86/kvm/pmu.c                    |  6 +-
 arch/x86/kvm/svm/svm.c                | 11 +---
 arch/x86/kvm/vmx/main.c               |  2 -
 arch/x86/kvm/vmx/vmx.c                | 80 +++++++++++++--------------
 arch/x86/kvm/vmx/x86_ops.h            |  1 -
 arch/x86/kvm/x86.c                    | 22 +++-----
 include/linux/kvm_host.h              |  3 +-
 virt/kvm/kvm_main.c                   |  5 +-
 16 files changed, 59 insertions(+), 80 deletions(-)


base-commit: 4aad0b1893a141f114ba40ed509066f3c9bc24b0

Acked-by: Kai Huang