From patchwork Tue Apr 30 19:31:54 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13649891
Reply-To: Sean Christopherson
Date: Tue, 30 Apr 2024 12:31:54 -0700
In-Reply-To: <20240430193157.419425-1-seanjc@google.com>
References: <20240430193157.419425-1-seanjc@google.com>
Message-ID: <20240430193157.419425-2-seanjc@google.com>
X-Mailer: git-send-email 2.45.0.rc0.197.gbae5840b3b-goog
Subject: [PATCH 1/4] KVM: Plumb in a @sched_in flag to kvm_arch_vcpu_load()
From: Sean Christopherson
To: Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen, Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda, Sean Christopherson, Paolo Bonzini
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, kvm@vger.kernel.org, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org

Add a @sched_in flag to kvm_arch_vcpu_load() to note that the vCPU is
being (re)loaded by kvm_sched_in(), i.e. after the vCPU was previously
scheduled out.

KVM x86 currently uses a dedicated kvm_arch_sched_in() hook, but that's
unnecessarily brittle as the behavior of the arch hook heavily depends on
the arbitrary order of the two arch calls.

A separate hook also makes it unnecessarily difficult to do something
unique when re-loading a vCPU during kvm_sched_in(), e.g. to optimize vCPU
loading if KVM knows that some CPU state couldn't have changed while the
vCPU was scheduled out.
Signed-off-by: Sean Christopherson
---
 arch/arm64/kvm/arm.c            | 2 +-
 arch/arm64/kvm/emulate-nested.c | 4 ++--
 arch/arm64/kvm/reset.c          | 2 +-
 arch/loongarch/kvm/vcpu.c       | 2 +-
 arch/mips/kvm/mmu.c             | 2 +-
 arch/powerpc/kvm/powerpc.c      | 2 +-
 arch/riscv/kvm/vcpu.c           | 4 ++--
 arch/s390/kvm/kvm-s390.c        | 2 +-
 arch/x86/kvm/x86.c              | 2 +-
 include/linux/kvm_host.h        | 2 +-
 virt/kvm/kvm_main.c             | 4 ++--
 11 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index c4a0a35e02c7..30ea103bfacb 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -428,7 +428,7 @@ void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu)
 {
 }
 
-void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu, bool sched_in)
 {
 	struct kvm_s2_mmu *mmu;
 	int *last_ran;
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 4697ba41b3a9..ad5458c47e5e 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -2193,7 +2193,7 @@ void kvm_emulate_nested_eret(struct kvm_vcpu *vcpu)
 	*vcpu_pc(vcpu) = elr;
 	*vcpu_cpsr(vcpu) = spsr;
 
-	kvm_arch_vcpu_load(vcpu, smp_processor_id());
+	kvm_arch_vcpu_load(vcpu, smp_processor_id(), false);
 	preempt_enable();
 }
@@ -2274,7 +2274,7 @@ static int kvm_inject_nested(struct kvm_vcpu *vcpu, u64 esr_el2,
 	 */
 	__kvm_adjust_pc(vcpu);
 
-	kvm_arch_vcpu_load(vcpu, smp_processor_id());
+	kvm_arch_vcpu_load(vcpu, smp_processor_id(), false);
 	preempt_enable();
 
 	return 1;
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 68d1d05672bd..654cf09c81e9 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -262,7 +262,7 @@ void kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 	kvm_timer_vcpu_reset(vcpu);
 
 	if (loaded)
-		kvm_arch_vcpu_load(vcpu, smp_processor_id());
+		kvm_arch_vcpu_load(vcpu, smp_processor_id(), false);
 	preempt_enable();
 }
diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
index 3a8779065f73..61d549c4f8d1 100644
--- a/arch/loongarch/kvm/vcpu.c
+++ b/arch/loongarch/kvm/vcpu.c
@@ -1050,7 +1050,7 @@ static int _kvm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	return 0;
 }
 
-void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu, bool sched_in)
 {
 	unsigned long flags;
diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
index c17157e700c0..6797799f3f32 100644
--- a/arch/mips/kvm/mmu.c
+++ b/arch/mips/kvm/mmu.c
@@ -682,7 +682,7 @@ static void kvm_mips_migrate_count(struct kvm_vcpu *vcpu)
 }
 
 /* Restore ASID once we are scheduled back after preemption */
-void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu, bool sched_in)
 {
 	unsigned long flags;
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index d32abe7fe6ab..8de620716875 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -826,7 +826,7 @@ int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
 	return kvmppc_core_pending_dec(vcpu);
 }
 
-void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu, bool sched_in)
 {
 #ifdef CONFIG_BOOKE
 	/*
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index b5ca9f2e98ac..a7b7f172fa61 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -87,7 +87,7 @@ static void kvm_riscv_reset_vcpu(struct kvm_vcpu *vcpu)
 
 	/* Reset the guest CSRs for hotplug usecase */
 	if (loaded)
-		kvm_arch_vcpu_load(vcpu, smp_processor_id());
+		kvm_arch_vcpu_load(vcpu, smp_processor_id(), false);
 	put_cpu();
 }
@@ -507,7 +507,7 @@ static void kvm_riscv_vcpu_setup_config(struct kvm_vcpu *vcpu)
 	}
 }
 
-void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu, bool sched_in)
 {
 	struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
 	struct kvm_vcpu_config *cfg = &vcpu->arch.cfg;
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 5147b943a864..9f04dc312641 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -3713,7 +3713,7 @@ __u64 kvm_s390_get_cpu_timer(struct kvm_vcpu *vcpu)
 	return value;
 }
 
-void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu, bool sched_in)
 {
 	gmap_enable(vcpu->arch.enabled_gmap);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2d2619d3eee4..925cadb18b55 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5003,7 +5003,7 @@ static bool need_emulate_wbinvd(struct kvm_vcpu *vcpu)
 	return kvm_arch_has_noncoherent_dma(vcpu->kvm);
 }
 
-void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu, bool sched_in)
 {
 	/* Address WBINVD may be executed by guest */
 	if (need_emulate_wbinvd(vcpu)) {
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index afbc99264ffa..2f5e35eb7eab 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1498,7 +1498,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu);
 void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu);
 
-void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
+void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu, bool sched_in);
 void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu);
 int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id);
 int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 658581d4ad68..4a4b29a9bace 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -211,7 +211,7 @@ void vcpu_load(struct kvm_vcpu *vcpu)
 	__this_cpu_write(kvm_running_vcpu, vcpu);
 	preempt_notifier_register(&vcpu->preempt_notifier);
-	kvm_arch_vcpu_load(vcpu, cpu);
+	kvm_arch_vcpu_load(vcpu, cpu, false);
 	put_cpu();
 }
 EXPORT_SYMBOL_GPL(vcpu_load);
@@ -6279,7 +6279,7 @@ static void kvm_sched_in(struct preempt_notifier *pn, int cpu)
 	__this_cpu_write(kvm_running_vcpu, vcpu);
 
 	kvm_arch_sched_in(vcpu, cpu);
-	kvm_arch_vcpu_load(vcpu, cpu);
+	kvm_arch_vcpu_load(vcpu, cpu, true);
 }
 
 static void kvm_sched_out(struct preempt_notifier *pn,

From patchwork Tue Apr 30 19:31:55 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13649892
Reply-To: Sean Christopherson
Date: Tue, 30 Apr 2024 12:31:55 -0700
In-Reply-To: <20240430193157.419425-1-seanjc@google.com>
References: <20240430193157.419425-1-seanjc@google.com>
Message-ID: <20240430193157.419425-3-seanjc@google.com>
X-Mailer: git-send-email 2.45.0.rc0.197.gbae5840b3b-goog
Subject: [PATCH 2/4] KVM: VMX: Move PLE grow/shrink helpers above vmx_vcpu_load()
From: Sean Christopherson
To: Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen, Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda, Sean Christopherson, Paolo Bonzini
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, kvm@vger.kernel.org, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org

Move VMX's {grow,shrink}_ple_window() above vmx_vcpu_load() in
preparation for moving the sched_in logic, which handles shrinking the
PLE window, into vmx_vcpu_load().

No functional change intended.
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/vmx.c | 64 +++++++++++++++++++++---------------------
 1 file changed, 32 insertions(+), 32 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 6780313914f8..cb36db7b6140 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1402,6 +1402,38 @@ static void vmx_write_guest_kernel_gs_base(struct vcpu_vmx *vmx, u64 data)
 }
 #endif
 
+static void grow_ple_window(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+	unsigned int old = vmx->ple_window;
+
+	vmx->ple_window = __grow_ple_window(old, ple_window,
+					    ple_window_grow,
+					    ple_window_max);
+
+	if (vmx->ple_window != old) {
+		vmx->ple_window_dirty = true;
+		trace_kvm_ple_window_update(vcpu->vcpu_id,
+					    vmx->ple_window, old);
+	}
+}
+
+static void shrink_ple_window(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+	unsigned int old = vmx->ple_window;
+
+	vmx->ple_window = __shrink_ple_window(old, ple_window,
+					      ple_window_shrink,
+					      ple_window);
+
+	if (vmx->ple_window != old) {
+		vmx->ple_window_dirty = true;
+		trace_kvm_ple_window_update(vcpu->vcpu_id,
+					    vmx->ple_window, old);
+	}
+}
+
 void vmx_vcpu_load_vmcs(struct kvm_vcpu *vcpu, int cpu,
			struct loaded_vmcs *buddy)
 {
@@ -5871,38 +5903,6 @@ int vmx_vcpu_pre_run(struct kvm_vcpu *vcpu)
 	return 1;
 }
 
-static void grow_ple_window(struct kvm_vcpu *vcpu)
-{
-	struct vcpu_vmx *vmx = to_vmx(vcpu);
-	unsigned int old = vmx->ple_window;
-
-	vmx->ple_window = __grow_ple_window(old, ple_window,
-					    ple_window_grow,
-					    ple_window_max);
-
-	if (vmx->ple_window != old) {
-		vmx->ple_window_dirty = true;
-		trace_kvm_ple_window_update(vcpu->vcpu_id,
-					    vmx->ple_window, old);
-	}
-}
-
-static void shrink_ple_window(struct kvm_vcpu *vcpu)
-{
-	struct vcpu_vmx *vmx = to_vmx(vcpu);
-	unsigned int old = vmx->ple_window;
-
-	vmx->ple_window = __shrink_ple_window(old, ple_window,
-					      ple_window_shrink,
-					      ple_window);
-
-	if (vmx->ple_window != old) {
-		vmx->ple_window_dirty = true;
-		trace_kvm_ple_window_update(vcpu->vcpu_id,
-					    vmx->ple_window, old);
-	}
-}
-
 /*
  * Indicate a busy-waiting vcpu in spinlock. We do not enable the PAUSE
  * exiting, so only get here on cpu with PAUSE-Loop-Exiting.

From patchwork Tue Apr 30 19:31:56 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13649893
Reply-To: Sean Christopherson
Date: Tue, 30 Apr 2024 12:31:56 -0700
In-Reply-To: <20240430193157.419425-1-seanjc@google.com>
References: <20240430193157.419425-1-seanjc@google.com>
Message-ID: <20240430193157.419425-4-seanjc@google.com>
X-Mailer: git-send-email 2.45.0.rc0.197.gbae5840b3b-goog
Subject: [PATCH 3/4] KVM: x86: Fold kvm_arch_sched_in() into kvm_arch_vcpu_load()
From: Sean Christopherson
To: Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen, Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda, Sean Christopherson, Paolo Bonzini
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, kvm@vger.kernel.org, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org

Fold the guts of kvm_arch_sched_in() into kvm_arch_vcpu_load(), keying
off the recently added @sched_in flag as appropriate.

Note, there is a very slight functional change, as PLE shrink updates will
now happen after blasting WBINVD, but that is quite uninteresting.
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm-x86-ops.h |  1 -
 arch/x86/include/asm/kvm_host.h    |  4 +---
 arch/x86/kvm/svm/svm.c             | 13 ++++---------
 arch/x86/kvm/vmx/main.c            |  2 --
 arch/x86/kvm/vmx/vmx.c             | 11 ++++-------
 arch/x86/kvm/vmx/x86_ops.h         |  3 +--
 arch/x86/kvm/x86.c                 | 19 +++++++++++--------
 7 files changed, 21 insertions(+), 32 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 5187fcf4b610..910d06cdb86b 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -103,7 +103,6 @@ KVM_X86_OP(write_tsc_multiplier)
 KVM_X86_OP(get_exit_info)
 KVM_X86_OP(check_intercept)
 KVM_X86_OP(handle_exit_irqoff)
-KVM_X86_OP(sched_in)
 KVM_X86_OP_OPTIONAL(update_cpu_dirty_logging)
 KVM_X86_OP_OPTIONAL(vcpu_blocking)
 KVM_X86_OP_OPTIONAL(vcpu_unblocking)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 01c69840647e..9fd1ec82303d 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1624,7 +1624,7 @@ struct kvm_x86_ops {
 	void (*vcpu_reset)(struct kvm_vcpu *vcpu, bool init_event);
 
 	void (*prepare_switch_to_guest)(struct kvm_vcpu *vcpu);
-	void (*vcpu_load)(struct kvm_vcpu *vcpu, int cpu);
+	void (*vcpu_load)(struct kvm_vcpu *vcpu, int cpu, bool sched_in);
 	void (*vcpu_put)(struct kvm_vcpu *vcpu);
 
 	void (*update_exception_bitmap)(struct kvm_vcpu *vcpu);
@@ -1746,8 +1746,6 @@ struct kvm_x86_ops {
 		struct x86_exception *exception);
 	void (*handle_exit_irqoff)(struct kvm_vcpu *vcpu);
 
-	void (*sched_in)(struct kvm_vcpu *vcpu, int cpu);
-
 	/*
 	 * Size of the CPU's dirty log buffer, i.e. VMX's PML buffer.  A zero
 	 * value indicates CPU dirty logging is unsupported or disabled.
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 0f3b59da0d4a..6d9763dc4fed 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1539,11 +1539,14 @@ static void svm_prepare_host_switch(struct kvm_vcpu *vcpu)
 	to_svm(vcpu)->guest_state_loaded = false;
 }
 
-static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu, bool sched_in)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	struct svm_cpu_data *sd = per_cpu_ptr(&svm_data, cpu);
 
+	if (sched_in && !kvm_pause_in_guest(vcpu->kvm))
+		shrink_ple_window(vcpu);
+
 	if (sd->current_vmcb != svm->vmcb) {
 		sd->current_vmcb = svm->vmcb;
@@ -4548,12 +4551,6 @@ static void svm_handle_exit_irqoff(struct kvm_vcpu *vcpu)
 	vcpu->arch.at_instruction_boundary = true;
 }
 
-static void svm_sched_in(struct kvm_vcpu *vcpu, int cpu)
-{
-	if (!kvm_pause_in_guest(vcpu->kvm))
-		shrink_ple_window(vcpu);
-}
-
 static void svm_setup_mce(struct kvm_vcpu *vcpu)
 {
 	/* [63:9] are reserved. */
@@ -5013,8 +5010,6 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.check_intercept = svm_check_intercept,
 	.handle_exit_irqoff = svm_handle_exit_irqoff,
 
-	.sched_in = svm_sched_in,
-
 	.nested_ops = &svm_nested_ops,
 
 	.deliver_interrupt = svm_deliver_interrupt,
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 7c546ad3e4c9..4fee9a8cc5a1 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -121,8 +121,6 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.check_intercept = vmx_check_intercept,
 	.handle_exit_irqoff = vmx_handle_exit_irqoff,
 
-	.sched_in = vmx_sched_in,
-
 	.cpu_dirty_log_size = PML_ENTITY_NUM,
 	.update_cpu_dirty_logging = vmx_update_cpu_dirty_logging,
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index cb36db7b6140..ccea594187c7 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1505,10 +1505,13 @@ void vmx_vcpu_load_vmcs(struct kvm_vcpu *vcpu, int cpu,
 * Switches to specified vcpu, until a matching vcpu_put(), but assumes
 * vcpu mutex is already taken.
 */
-void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu, bool sched_in)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
+	if (sched_in && !kvm_pause_in_guest(vcpu->kvm))
+		shrink_ple_window(vcpu);
+
 	vmx_vcpu_load_vmcs(vcpu, cpu, NULL);
 
 	vmx_vcpu_pi_load(vcpu, cpu);
@@ -8093,12 +8096,6 @@ void vmx_cancel_hv_timer(struct kvm_vcpu *vcpu)
 }
 #endif
 
-void vmx_sched_in(struct kvm_vcpu *vcpu, int cpu)
-{
-	if (!kvm_pause_in_guest(vcpu->kvm))
-		shrink_ple_window(vcpu);
-}
-
 void vmx_update_cpu_dirty_logging(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index 502704596c83..b7104a5f623e 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -23,7 +23,7 @@ int vmx_vcpu_pre_run(struct kvm_vcpu *vcpu);
 fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit);
 void vmx_vcpu_free(struct kvm_vcpu *vcpu);
 void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
-void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
+void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu, bool sched_in);
 void vmx_vcpu_put(struct kvm_vcpu *vcpu);
 int vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath);
 void vmx_handle_exit_irqoff(struct kvm_vcpu *vcpu);
@@ -112,7 +112,6 @@ u64 vmx_get_l2_tsc_multiplier(struct kvm_vcpu *vcpu);
 void vmx_write_tsc_offset(struct kvm_vcpu *vcpu);
 void vmx_write_tsc_multiplier(struct kvm_vcpu *vcpu);
 void vmx_request_immediate_exit(struct kvm_vcpu *vcpu);
-void vmx_sched_in(struct kvm_vcpu *vcpu, int cpu);
 void vmx_update_cpu_dirty_logging(struct kvm_vcpu *vcpu);
 #ifdef CONFIG_X86_64
 int vmx_set_hv_timer(struct kvm_vcpu *vcpu, u64 guest_deadline_tsc,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 925cadb18b55..9b0a21f2e56e 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5005,6 +5005,16 @@ static bool need_emulate_wbinvd(struct kvm_vcpu *vcpu)
 
 void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu, bool sched_in)
 {
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+
+	if (sched_in) {
+		vcpu->arch.l1tf_flush_l1d = true;
+		if (pmu->version && unlikely(pmu->event_count)) {
+			pmu->need_cleanup = true;
+			kvm_make_request(KVM_REQ_PMU, vcpu);
+		}
+	}
+
 	/* Address WBINVD may be executed by guest */
 	if (need_emulate_wbinvd(vcpu)) {
 		if (static_call(kvm_x86_has_wbinvd_exit)())
@@ -5014,7 +5024,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu, bool sched_in)
 			wbinvd_ipi, NULL, 1);
 	}
 
-	static_call(kvm_x86_vcpu_load)(vcpu, cpu);
+	static_call(kvm_x86_vcpu_load)(vcpu, cpu, sched_in);
 
 	/* Save host pkru register if supported */
 	vcpu->arch.host_pkru = read_pkru();
@@ -12569,14 +12579,7 @@ bool kvm_vcpu_is_bsp(struct kvm_vcpu *vcpu)
 
 void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu)
 {
-	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
-
 	vcpu->arch.l1tf_flush_l1d = true;
-	if (pmu->version && unlikely(pmu->event_count)) {
-		pmu->need_cleanup = true;
-		kvm_make_request(KVM_REQ_PMU, vcpu);
-	}
-
-	static_call(kvm_x86_sched_in)(vcpu, cpu);
 }
 
 void kvm_arch_free_vm(struct kvm *kvm)

From patchwork Tue Apr 30 19:31:57 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13649894
Reply-To: Sean Christopherson
Date: Tue, 30 Apr 2024 12:31:57 -0700
In-Reply-To: <20240430193157.419425-1-seanjc@google.com>
References: <20240430193157.419425-1-seanjc@google.com>
Message-ID: <20240430193157.419425-5-seanjc@google.com>
Subject: [PATCH 4/4] KVM: Delete the now unused kvm_arch_sched_in()
From: Sean Christopherson
To: Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen,
 Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
 Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
 Sean Christopherson, Paolo Bonzini
Cc:
linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 kvm@vger.kernel.org, loongarch@lists.linux.dev, linux-mips@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
 linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org

Delete kvm_arch_sched_in() now that all implementations are nops.

Signed-off-by: Sean Christopherson
---
 arch/arm64/include/asm/kvm_host.h     | 1 -
 arch/loongarch/include/asm/kvm_host.h | 1 -
 arch/mips/include/asm/kvm_host.h      | 1 -
 arch/powerpc/include/asm/kvm_host.h   | 1 -
 arch/riscv/include/asm/kvm_host.h     | 1 -
 arch/s390/include/asm/kvm_host.h      | 1 -
 arch/x86/kvm/pmu.c                    | 6 +++---
 arch/x86/kvm/x86.c                    | 5 -----
 include/linux/kvm_host.h              | 2 --
 virt/kvm/kvm_main.c                   | 1 -
 10 files changed, 3 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 9e8a496fb284..a12d3bb0b590 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1180,7 +1180,6 @@ static inline bool kvm_system_needs_idmapped_vectors(void)
 }
 
 static inline void kvm_arch_sync_events(struct kvm *kvm) {}
-static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
 
 void kvm_arm_init_debug(void);
 void kvm_arm_vcpu_init_debug(struct kvm_vcpu *vcpu);
diff --git a/arch/loongarch/include/asm/kvm_host.h b/arch/loongarch/include/asm/kvm_host.h
index 69305441f40d..64ca60a3ce24 100644
--- a/arch/loongarch/include/asm/kvm_host.h
+++ b/arch/loongarch/include/asm/kvm_host.h
@@ -228,7 +228,6 @@ static inline bool kvm_is_ifetch_fault(struct kvm_vcpu_arch *arch)
 static inline void kvm_arch_hardware_unsetup(void) {}
 static inline void kvm_arch_sync_events(struct kvm *kvm) {}
 static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
-static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
 static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
index 179f320cc231..6743a57c1ab4 100644
--- a/arch/mips/include/asm/kvm_host.h
+++ b/arch/mips/include/asm/kvm_host.h
@@ -890,7 +890,6 @@ static inline void kvm_arch_sync_events(struct kvm *kvm) {}
 static inline void kvm_arch_free_memslot(struct kvm *kvm,
 					 struct kvm_memory_slot *slot) {}
 static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
-static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
 static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 8abac532146e..c4fb6a27fb92 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -897,7 +897,6 @@ struct kvm_vcpu_arch {
 static inline void kvm_arch_sync_events(struct kvm *kvm) {}
 static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
 static inline void kvm_arch_flush_shadow_all(struct kvm *kvm) {}
-static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
 static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index 484d04a92fa6..6cd7a576ef14 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -272,7 +272,6 @@ struct kvm_vcpu_arch {
 };
 
 static inline void kvm_arch_sync_events(struct kvm *kvm) {}
-static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
 
 #define KVM_RISCV_GSTAGE_TLB_MIN_ORDER 12
diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
index 95990461888f..e9fcaf4607a6 100644
--- a/arch/s390/include/asm/kvm_host.h
+++ b/arch/s390/include/asm/kvm_host.h
@@ -1045,7 +1045,6 @@ extern int kvm_s390_gisc_register(struct kvm *kvm, u32 gisc);
 extern int kvm_s390_gisc_unregister(struct kvm *kvm, u32 gisc);
 
 static inline void kvm_arch_sync_events(struct kvm *kvm) {}
-static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
 static inline void kvm_arch_free_memslot(struct kvm *kvm,
 					 struct kvm_memory_slot *slot) {}
 static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index c397b28e3d1b..75346a588e13 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -521,9 +521,9 @@ void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
 	}
 
 	/*
-	 * Unused perf_events are only released if the corresponding MSRs
-	 * weren't accessed during the last vCPU time slice. kvm_arch_sched_in
-	 * triggers KVM_REQ_PMU if cleanup is needed.
+	 * Release unused perf_events if the corresponding guest MSRs weren't
+	 * accessed during the last vCPU time slice (need_cleanup is set when
+	 * the vCPU is scheduled back in).
 	 */
 	if (unlikely(pmu->need_cleanup))
 		kvm_pmu_cleanup(vcpu);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9b0a21f2e56e..17d6ce0d4fa6 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12577,11 +12577,6 @@ bool kvm_vcpu_is_bsp(struct kvm_vcpu *vcpu)
 	return (vcpu->arch.apic_base & MSR_IA32_APICBASE_BSP) != 0;
 }
 
-void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu)
-{
-
-}
-
 void kvm_arch_free_vm(struct kvm *kvm)
 {
 #if IS_ENABLED(CONFIG_HYPERV)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 2f5e35eb7eab..85b6dd7927fe 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1496,8 +1496,6 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
 					struct kvm_guest_debug *dbg);
 int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu);
 
-void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu);
-
 void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu, bool sched_in);
 void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu);
 int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 4a4b29a9bace..b154b22a3b84 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -6278,7 +6278,6 @@ static void kvm_sched_in(struct preempt_notifier *pn, int cpu)
 
 	WRITE_ONCE(vcpu->ready, false);
 	__this_cpu_write(kvm_running_vcpu, vcpu);
-	kvm_arch_sched_in(vcpu, cpu);
 	kvm_arch_vcpu_load(vcpu, cpu, true);
 }