From patchwork Tue Apr 1 16:10:38 2025
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 14035090
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: roy.hopkins@suse.com, seanjc@google.com, thomas.lendacky@amd.com, ashish.kalra@amd.com, michael.roth@amd.com, jroedel@suse.de, nsaenz@amazon.com, anelkz@amazon.de, James.Bottomley@HansenPartnership.com
Subject: [PATCH 01/29] Documentation: kvm: introduce "VM plane" concept
Date: Tue, 1 Apr 2025 18:10:38 +0200
Message-ID: <20250401161106.790710-2-pbonzini@redhat.com>
In-Reply-To: <20250401161106.790710-1-pbonzini@redhat.com>
References: <20250401161106.790710-1-pbonzini@redhat.com>

There have been multiple occurrences of processors introducing a virtual privilege level concept for guests, where the hypervisor hosts multiple copies of a vCPU's register state (or at least most of it) and provides hypercalls or instructions to switch between them.  These include AMD VMPLs, Intel TDX partitions, Microsoft Hyper-V VTLs, and ARM CCA planes.

Include documentation on how the feature will be exposed to userspace, based on a draft made between Plumbers and KVM Forum.

In the past, two main solutions were attempted, mostly in the context of Hyper-V VTLs and SEV-SNP VMPLs:

- use a single vCPU file descriptor, and store multiple copies of the state in a single struct kvm_vcpu.  This requires a lot of changes to provide multiple copies of affected fields, especially MMUs and APICs, as well as complex uAPI extensions to direct existing ioctls to a specific privilege level.  This solution looked marginally okay for SEV-SNP VMPLs, but only because the copies of the register state were hidden in the VMSA (which KVM does not manage); it showed all its problems when applied to Hyper-V VTLs.

- use multiple VM and vCPU file descriptors, and handle the switch entirely in userspace.
This got gnarly pretty fast, for even more reasons than the previous case: for example, VMs could no longer share memslots, including dirty bitmaps and private/shared attributes (a substantial problem for SEV-SNP, since VMPLs share their ASID).  Another problem was the need to share _some_ register state across VTLs and to ensure that vCPUs did not run in parallel; a lot of logic had to be added in userspace so that a higher-privileged VTL properly interrupted a lower-privileged one.  This solution also complicates an in-kernel implementation of the privilege level switch, or even makes it impossible, because the kernel has no knowledge of the relationship between vCPUs that have the same id but belong to different privilege levels.  Especially given the need to accelerate switches in the kernel, it is clear that KVM needs some level of knowledge of that relationship.

For this reason, I proposed a design that only gives the initial set of VM and vCPU file descriptors the full set of ioctls + struct kvm_run; other privilege levels instead only support a small part of the KVM API.  In fact, for the VM file descriptor it is only three ioctls: KVM_CHECK_EXTENSION, KVM_SIGNAL_MSI, KVM_SET_MEMORY_ATTRIBUTES.  For vCPUs it is basically KVM_GET/SET_*.  This solves a lot of the problems of the multiple-file-descriptors solution; most notably, it gets for free the ability to avoid parallel execution of the same vCPU in different privilege levels.  Changes to the userspace API of course exist, but they are relatively small and easier to keep backwards compatible, because they boil down to the introduction of new kinds of file descriptor instead of changing the inputs to all affected ioctls.  It does share some of the code churn issues of the single-file-descriptor solution; on the other hand, a prototype multi-fd VMPL implementation[1] also needed large-scale changes, which therefore seem unavoidable when privilege levels are provided by hardware rather than being a software-only concept, as is the case for VTLs.

[1] https://lore.kernel.org/lkml/cover.1726506534.git.roy.hopkins@suse.com/

Acknowledgements: thanks to everyone who participated in the discussions, you are too many to mention in a small margin.  Thanks to Roy Hopkins, Tom Lendacky, Anel Orazgaliyeva, Nicolas Saenz-Julienne for experimenting with implementations of VTLs and VMPLs.

Ah, and because x86 has three names for it and Arm has one, choose the Arm name for all architectures to avoid bikeshedding and to displease everyone---including the KVM/arm64 folks, probably.

Signed-off-by: Paolo Bonzini --- Documentation/virt/kvm/api.rst | 235 ++++++++++++++++++++--- Documentation/virt/kvm/vcpu-requests.rst | 7 + 2 files changed, 211 insertions(+), 31 deletions(-) diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst index 2a63a244e87a..e1c67bc6df47 100644 --- a/Documentation/virt/kvm/api.rst +++ b/Documentation/virt/kvm/api.rst @@ -56,6 +56,18 @@ be checked with :ref:`KVM_CHECK_EXTENSION `. Some capabilities also need to be enabled for VMs or VCPUs where their functionality is desired (see :ref:`cap_enable` and :ref:`cap_enable_vm`). +On some architectures, a "virtual privilege level" concept may be present +apart from the usual separation between user and supervisor mode, or +between hypervisor and guest mode.
When this is the case, a single vCPU +can have multiple copies of its register state (or at least most of it), +and will switch between them through a special processor instruction, +or through some kind of hypercall. + +KVM calls these privilege levels "planes". Planes other than the +initially-created one (called "plane 0") have a file descriptor each, +and so do the planes of each vCPU. Ioctls for vCPU planes should also +be issued from a single thread, unless specially marked as asynchronous +in the documentation. 2. Restrictions =============== @@ -119,6 +131,11 @@ description: Type: system, vm, or vcpu. + File descriptors for planes other than plane 0 provide a subset + of vm and vcpu ioctls. Those that *are* supported in extra + planes are marked specially in the documentation (for example, + `vcpu (all planes)`). + Parameters: what parameters are accepted by the ioctl. @@ -264,7 +281,7 @@ otherwise. :Capability: basic, KVM_CAP_CHECK_EXTENSION_VM for vm ioctl :Architectures: all -:Type: system ioctl, vm ioctl +:Type: system ioctl, vm ioctl (all planes) :Parameters: extension identifier (KVM_CAP_*) :Returns: 0 if unsupported; 1 (or some other positive integer) if supported @@ -421,7 +438,7 @@ kvm_run' (see below). :Capability: basic :Architectures: all except arm64 -:Type: vcpu ioctl +:Type: vcpu ioctl (all planes) :Parameters: struct kvm_regs (out) :Returns: 0 on success, -1 on error @@ -461,7 +478,7 @@ Reads the general purpose registers from the vcpu. :Capability: basic :Architectures: all except arm64 -:Type: vcpu ioctl +:Type: vcpu ioctl (all planes) :Parameters: struct kvm_regs (in) :Returns: 0 on success, -1 on error @@ -475,7 +492,7 @@ See KVM_GET_REGS for the data structure. :Capability: basic :Architectures: x86, ppc -:Type: vcpu ioctl +:Type: vcpu ioctl (all planes) :Parameters: struct kvm_sregs (out) :Returns: 0 on success, -1 on error @@ -506,7 +523,7 @@ but not yet injected into the cpu core. :Capability: basic :Architectures: x86, ppc -:Type: vcpu ioctl +:Type: vcpu ioctl (all planes) :Parameters: struct kvm_sregs (in) :Returns: 0 on success, -1 on error @@ -519,7 +536,7 @@ data structures. :Capability: basic :Architectures: x86 -:Type: vcpu ioctl +:Type: vcpu ioctl (all planes) :Parameters: struct kvm_translation (in/out) :Returns: 0 on success, -1 on error @@ -645,7 +662,7 @@ This is an asynchronous vcpu ioctl and can be invoked from any thread. :Capability: basic (vcpu), KVM_CAP_GET_MSR_FEATURES (system) :Architectures: x86 -:Type: system ioctl, vcpu ioctl +:Type: system ioctl, vcpu ioctl (all planes) :Parameters: struct kvm_msrs (in/out) :Returns: number of msrs successfully returned; -1 on error @@ -685,7 +702,7 @@ kvm will fill in the 'data' member. :Capability: basic :Architectures: x86 -:Type: vcpu ioctl +:Type: vcpu ioctl (all planes) :Parameters: struct kvm_msrs (in) :Returns: number of msrs successfully set (see below), -1 on error @@ -773,7 +790,7 @@ signal mask. :Capability: basic :Architectures: x86, loongarch -:Type: vcpu ioctl +:Type: vcpu ioctl (all planes) :Parameters: struct kvm_fpu (out) :Returns: 0 on success, -1 on error @@ -811,7 +828,7 @@ Reads the floating point state from the vcpu. :Capability: basic :Architectures: x86, loongarch -:Type: vcpu ioctl +:Type: vcpu ioctl (all planes) :Parameters: struct kvm_fpu (in) :Returns: 0 on success, -1 on error @@ -1126,7 +1143,7 @@ Other flags returned by ``KVM_GET_CLOCK`` are accepted but ignored. 
:Capability: KVM_CAP_VCPU_EVENTS :Extended by: KVM_CAP_INTR_SHADOW :Architectures: x86, arm64 -:Type: vcpu ioctl +:Type: vcpu ioctl (all planes) :Parameters: struct kvm_vcpu_events (out) :Returns: 0 on success, -1 on error @@ -1249,7 +1266,7 @@ directly to the virtual CPU). :Capability: KVM_CAP_VCPU_EVENTS :Extended by: KVM_CAP_INTR_SHADOW :Architectures: x86, arm64 -:Type: vcpu ioctl +:Type: vcpu ioctl (all planes) :Parameters: struct kvm_vcpu_events (in) :Returns: 0 on success, -1 on error @@ -1315,7 +1332,7 @@ See KVM_GET_VCPU_EVENTS for the data structure. :Capability: KVM_CAP_DEBUGREGS :Architectures: x86 -:Type: vcpu ioctl +:Type: vcpu ioctl (all planes) :Parameters: struct kvm_debugregs (out) :Returns: 0 on success, -1 on error @@ -1337,7 +1354,7 @@ Reads debug registers from the vcpu. :Capability: KVM_CAP_DEBUGREGS :Architectures: x86 -:Type: vcpu ioctl +:Type: vcpu ioctl (all planes) :Parameters: struct kvm_debugregs (in) :Returns: 0 on success, -1 on error @@ -1656,7 +1673,7 @@ otherwise it will return EBUSY error. :Capability: KVM_CAP_XSAVE :Architectures: x86 -:Type: vcpu ioctl +:Type: vcpu ioctl (all planes) :Parameters: struct kvm_xsave (out) :Returns: 0 on success, -1 on error @@ -1676,7 +1693,7 @@ This ioctl would copy current vcpu's xsave struct to the userspace. :Capability: KVM_CAP_XSAVE and KVM_CAP_XSAVE2 :Architectures: x86 -:Type: vcpu ioctl +:Type: vcpu ioctl (all planes) :Parameters: struct kvm_xsave (in) :Returns: 0 on success, -1 on error @@ -1704,7 +1721,7 @@ contents of CPUID leaf 0xD on the host. :Capability: KVM_CAP_XCRS :Architectures: x86 -:Type: vcpu ioctl +:Type: vcpu ioctl (all planes) :Parameters: struct kvm_xcrs (out) :Returns: 0 on success, -1 on error @@ -1731,7 +1748,7 @@ This ioctl would copy current vcpu's xcrs to the userspace. :Capability: KVM_CAP_XCRS :Architectures: x86 -:Type: vcpu ioctl +:Type: vcpu ioctl (all planes) :Parameters: struct kvm_xcrs (in) :Returns: 0 on success, -1 on error @@ -2027,7 +2044,7 @@ error. :Capability: KVM_CAP_IRQCHIP :Architectures: x86 -:Type: vcpu ioctl +:Type: vcpu ioctl (all planes) :Parameters: struct kvm_lapic_state (out) :Returns: 0 on success, -1 on error @@ -2058,7 +2075,7 @@ always uses xAPIC format. :Capability: KVM_CAP_IRQCHIP :Architectures: x86 -:Type: vcpu ioctl +:Type: vcpu ioctl (all planes) :Parameters: struct kvm_lapic_state (in) :Returns: 0 on success, -1 on error @@ -2292,7 +2309,7 @@ prior to calling the KVM_RUN ioctl. :Capability: KVM_CAP_ONE_REG :Architectures: all -:Type: vcpu ioctl +:Type: vcpu ioctl (all planes) :Parameters: struct kvm_one_reg (in) :Returns: 0 on success, negative value on failure @@ -2907,7 +2924,7 @@ such as set vcpu counter or reset vcpu, and they have the following id bit patte :Capability: KVM_CAP_ONE_REG :Architectures: all -:Type: vcpu ioctl +:Type: vcpu ioctl (all planes) :Parameters: struct kvm_one_reg (in and out) :Returns: 0 on success, negative value on failure @@ -2961,7 +2978,7 @@ after pausing the vcpu, but before it is resumed. :Capability: KVM_CAP_SIGNAL_MSI :Architectures: x86 arm64 -:Type: vm ioctl +:Type: vm ioctl (all planes) :Parameters: struct kvm_msi (in) :Returns: >0 on delivery, 0 if guest blocked the MSI, and -1 on error @@ -3564,7 +3581,7 @@ VCPU matching underlying host. 
:Capability: basic :Architectures: arm64, mips, riscv -:Type: vcpu ioctl +:Type: vcpu ioctl (all planes) :Parameters: struct kvm_reg_list (in/out) :Returns: 0 on success; -1 on error @@ -4861,7 +4878,7 @@ The acceptable values for the flags field are:: :Capability: KVM_CAP_NESTED_STATE :Architectures: x86 -:Type: vcpu ioctl +:Type: vcpu ioctl (all planes) :Parameters: struct kvm_nested_state (in/out) :Returns: 0 on success, -1 on error @@ -4935,7 +4952,7 @@ to the KVM_CHECK_EXTENSION ioctl(). :Capability: KVM_CAP_NESTED_STATE :Architectures: x86 -:Type: vcpu ioctl +:Type: vcpu ioctl (all planes) :Parameters: struct kvm_nested_state (in) :Returns: 0 on success, -1 on error @@ -5816,7 +5833,7 @@ then ``length`` is returned. :Capability: KVM_CAP_SREGS2 :Architectures: x86 -:Type: vcpu ioctl +:Type: vcpu ioctl (all planes) :Parameters: struct kvm_sregs2 (out) :Returns: 0 on success, -1 on error @@ -5849,7 +5866,7 @@ flags values for ``kvm_sregs2``: :Capability: KVM_CAP_SREGS2 :Architectures: x86 -:Type: vcpu ioctl +:Type: vcpu ioctl (all planes) :Parameters: struct kvm_sregs2 (in) :Returns: 0 on success, -1 on error @@ -6065,7 +6082,7 @@ as the descriptors in Descriptors block. :Capability: KVM_CAP_XSAVE2 :Architectures: x86 -:Type: vcpu ioctl +:Type: vcpu ioctl (all planes) :Parameters: struct kvm_xsave (out) :Returns: 0 on success, -1 on error @@ -6323,7 +6340,7 @@ Returns -EINVAL if called on a protected VM. :Capability: KVM_CAP_MEMORY_ATTRIBUTES :Architectures: x86 -:Type: vm ioctl +:Type: vm ioctl (all planes) :Parameters: struct kvm_memory_attributes (in) :Returns: 0 on success, <0 on error @@ -6458,6 +6475,46 @@ the capability to be present. `flags` must currently be zero. +.. _KVM_CREATE_PLANE: + +4.144 KVM_CREATE_PLANE +---------------------- + +:Capability: KVM_CAP_PLANES +:Architectures: none +:Type: vm ioctl +:Parameters: plane id +:Returns: a VM fd that can be used to control the new plane. + +Creates a new *plane*, i.e. a separate privilege level for the +virtual machine. Each plane has its own memory attributes, +which can be used to enable more restricted permissions than +what is allowed with ``KVM_SET_USER_MEMORY_REGION``. + +Each plane has a numeric id that is used when communicating +with KVM through the :ref:`kvm_run ` struct. While +KVM is currently agnostic to whether low ids are more or less +privileged, it is expected that this will not always be the +case in the future. For example KVM in the future may use +the plane id when planes are supported by hardware (as is the +case for VMPLs in AMD), or if KVM supports accelerated plane +switch operations (as might be the case for Hyper-V VTLs). + +4.145 KVM_CREATE_VCPU_PLANE +--------------------------- + +:Capability: KVM_CAP_PLANES +:Architectures: none +:Type: vm ioctl (non default plane) +:Parameters: vcpu file descriptor for the default plane +:Returns: a vCPU fd that can be used to control the new plane + for the vCPU. + +Adds a vCPU to a plane; the new vCPU's id comes from the vCPU +file descriptor that is passed in the argument. Note that + because of how the API is defined, planes other than plane 0 +can only have a subset of the ids that are available in plane 0. + .. _kvm_run: 5. The kvm_run structure @@ -6493,7 +6550,50 @@ This field is ignored if KVM_CAP_IMMEDIATE_EXIT is not available. :: - __u8 padding1[6]; + /* in/out */ + __u8 plane; + +The plane that will be run (usually 0). + +While this is not yet supported, in the future KVM may handle plane +switch in the kernel. 
In this case, the output value of this field +may differ from the input value. However, automatic switch will +have to be :ref:`explicitly enabled `. + +For backwards compatibility, this field is ignored unless a plane +other than plane 0 has been created. + +:: + + /* in/out */ + __u16 suspended_planes; + +A bitmap of planes whose execution was suspended to run a +higher-privileged plane, usually via a hypercall or due to +an interrupt in the higher-privileged plane. + +KVM right now does not use this field; it may be used in the future +once KVM implements in-kernel plane switch mechanisms. Until that +is the case, userspace can leave this to zero. + +:: + + /* in */ + __u16 req_exit_planes; + +A bitmap of planes for which KVM should exit when they have a pending +interrupt. In general, userspace should set bits corresponding to +planes that are more privileged than ``plane``; because KVM is agnostic +to whether low ids are more or less privileged, these could be the bits +*above* or *below* ``plane``. In some cases it may make sense to request +an exit for all planes---for example, if the higher-priority plane +wants to be informed about interrupts pending in lower-priority planes, +userspace may need to learn about those as well. + +The bit at position ``plane`` is ignored; interrupts for the current +plane are never delivered to userspace. + +:: /* out */ __u32 exit_reason; @@ -7162,6 +7262,44 @@ The valid value for 'flags' is: - KVM_NOTIFY_CONTEXT_INVALID -- the VM context is corrupted and not valid in VMCS. It would run into unknown result if resume the target VM. +:: + + /* KVM_EXIT_PLANE_EVENT */ + struct { + #define KVM_PLANE_EVENT_INTERRUPT 1 + __u16 cause; + __u16 pending_event_planes; + __u16 target; + __u16 padding; + __u32 flags; + __u64 extra; + } plane_event; + +Inform userspace of an event that affects a different plane than the +currently executing one. + +On a ``KVM_EXIT_PLANE_EVENT`` exit, ``pending_event_planes`` is always +set to the set of planes that have a pending interrupt. + +``cause`` provides the event that caused the exit, and the meaning of +``target`` depends on the cause of the exit too. + +Right now the only defined cause is ``KVM_PLANE_EVENT_INTERRUPT``, i.e. +an interrupt was received by a plane whose id is set in the +``req_exit_planes`` bitmap. In this case, ``target`` is the AND of +``req_exit_planes`` and ``pending_event_planes``. + +``flags`` and ``extra`` are currently always 0. + +If userspace wants to switch to the target plane, it should move any +shared state from the current plane to ``target``, and then invoke +``KVM_RUN`` with ``kvm_run->plane`` set to ``target`` (and +``req_exit_planes`` initialized accordingly). Note that it's also +valid to switch planes in response to other userspace exit codes, for +example ``KVM_EXIT_X86_WRMSR`` or ``KVM_EXIT_HYPERCALL``. Immediately +after ``KVM_RUN`` is entered, KVM will check ``req_exit_planes`` and +trigger a ``KVM_EXIT_PLANE_EVENT`` userspace exit if needed. + :: /* Fix the size of the union. */ @@ -8511,6 +8649,26 @@ ENOSYS for the others. When enabled, KVM will exit to userspace with KVM_EXIT_SYSTEM_EVENT of type KVM_SYSTEM_EVENT_SUSPEND to process the guest suspend request. 
+7.46 KVM_CAP_PLANES_FPU +----------------------- + +:Architectures: x86 +:Parameters: arg[0] is 0 if each vCPU plane has a separate FPU, + 1 if the FPU is shared +:Type: vm + +When enabled, such as KVM_SET_XSAVE or KVM_SET_FPU *are* available for +vCPU on all planes, but they will read and write the same data that is presented +to other planes. Note that KVM_GET/SET_XSAVE also allows access to some +registers that are *not* part of FPU state; right now this is just PKRU. +Those are never shared. + +KVM_CAP_PLANES_FPU is experimental; userspace must *not* assume that +KVM_CAP_PLANES_FPU is present on x86 for *any* VM type and different +VM types may or may not allow enabling KVM_CAP_PLANES_FPU. Like for other +capabilities, KVM_CAP_PLANES_FPU can be queried on the VM file descriptor; +KVM_CHECK_EXTENSION returns 1 if it is possible to enable shared FPU mode. + 8. Other capabilities. ====================== @@ -9037,6 +9195,21 @@ KVM exits with the register state of either the L1 or L2 guest depending on which executed at the time of an exit. Userspace must take care to differentiate between these cases. +8.46 KVM_CAP_PLANES +------------------- + +:Capability: KVM_CAP_PLANES +:Architectures: x86 +:Type: system, vm + +The capability returns the maximum plane id that can be passed to +:ref:`KVM_CREATE_PLANE `. Because the maximum +id can vary according to the machine type, it is recommended to +check for this capability on the VM file descriptor. + +When called on the system file descriptor, KVM returns the highest +value supported on any machine type. + 9. Known KVM API problems ========================= diff --git a/Documentation/virt/kvm/vcpu-requests.rst b/Documentation/virt/kvm/vcpu-requests.rst index 06718b9bc959..86ac67b98a74 100644 --- a/Documentation/virt/kvm/vcpu-requests.rst +++ b/Documentation/virt/kvm/vcpu-requests.rst @@ -286,6 +286,13 @@ architecture dependent. kvm_vcpu_block() calls kvm_arch_vcpu_runnable() to check if it should awaken. One reason to do so is to provide architectures a function where requests may be checked if necessary. +VM planes +--------- + +Each plane has its own set of requests. Processing requests from +another plane needs to go through a plane switch, for example via a +`KVM_EXIT_PLANE_EVENT` userspace exit. 
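To make the flow above concrete, here is a minimal userspace sketch (not part of the patch) of how the API described in this document could be driven.  It assumes that KVM_CREATE_PLANE, KVM_CREATE_VCPU_PLANE, the new kvm_run fields and KVM_EXIT_PLANE_EVENT land in <linux/kvm.h> exactly as documented above; the choice of plane 1 as the more privileged plane, the helper name and the minimal error handling are illustrative only::

  #include <linux/kvm.h>
  #include <stdint.h>
  #include <sys/ioctl.h>

  static int run_two_planes(int vm_fd, int vcpu0_fd, struct kvm_run *run)
  {
          /* Create privilege level 1; returns a restricted VM-like fd. */
          int plane1_fd = ioctl(vm_fd, KVM_CREATE_PLANE, 1);

          /*
           * Give the vCPU a plane-1 copy of its state.  The vCPU id is
           * taken from the plane-0 vCPU fd passed as the argument; the
           * returned fd only supports the KVM_GET/SET_* style ioctls.
           */
          int vcpu1_fd = ioctl(plane1_fd, KVM_CREATE_VCPU_PLANE, vcpu0_fd);

          uint8_t plane = 1;      /* assume plane 1 is the privileged one */

          if (plane1_fd < 0 || vcpu1_fd < 0)
                  return -1;

          for (;;) {
                  run->plane = plane;
                  /*
                   * While the lower-privileged plane 0 runs, request an
                   * exit as soon as plane 1 has a pending interrupt.
                   */
                  run->req_exit_planes = (plane == 0) ? (1 << 1) : 0;

                  /* KVM_RUN is always issued on the plane-0 vCPU fd. */
                  if (ioctl(vcpu0_fd, KVM_RUN, 0) < 0)
                          return -1;

                  if (run->exit_reason == KVM_EXIT_PLANE_EVENT &&
                      run->plane_event.cause == KVM_PLANE_EVENT_INTERRUPT) {
                          /*
                           * target = req_exit_planes & pending_event_planes;
                           * move any shared state, then resume there.
                           */
                          plane = __builtin_ffs(run->plane_event.target) - 1;
                          continue;
                  }

                  break;  /* handle the other exit reasons as usual */
          }
          return 0;
  }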
+ References ==========

From patchwork Tue Apr 1 16:10:39 2025
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 14035089
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: roy.hopkins@suse.com, seanjc@google.com, thomas.lendacky@amd.com, ashish.kalra@amd.com, michael.roth@amd.com, jroedel@suse.de, nsaenz@amazon.com, anelkz@amazon.de, James.Bottomley@HansenPartnership.com
Subject: [PATCH 02/29] KVM: API definitions for plane userspace exit
Date: Tue, 1 Apr 2025 18:10:39 +0200
Message-ID: <20250401161106.790710-3-pbonzini@redhat.com>
In-Reply-To: <20250401161106.790710-1-pbonzini@redhat.com>
References: <20250401161106.790710-1-pbonzini@redhat.com>

Copy over the uapi definitions from the Documentation/ directory.

Signed-off-by: Paolo Bonzini --- include/uapi/linux/kvm.h | 25 +++++++++++++++++++++++-- 1 file changed, 23 insertions(+), 2 deletions(-) diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index 1e0a511c43d0..b0cca93ebcb3 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -135,6 +135,16 @@ struct kvm_xen_exit { } u; }; +struct kvm_plane_event_exit { +#define KVM_PLANE_EVENT_INTERRUPT 1 + __u16 cause; + __u16 pending_event_planes; + __u16 target; + __u16 padding; + __u32 flags; + __u64 extra[8]; +}; + struct kvm_tdx_exit { #define KVM_EXIT_TDX_VMCALL 1 __u32 type; @@ -262,7 +272,8 @@ struct kvm_tdx_exit { #define KVM_EXIT_NOTIFY 37 #define KVM_EXIT_LOONGARCH_IOCSR 38 #define KVM_EXIT_MEMORY_FAULT 39 -#define KVM_EXIT_TDX 40 +#define KVM_EXIT_PLANE_EVENT 40 +#define KVM_EXIT_TDX 41 /* For KVM_EXIT_INTERNAL_ERROR */ /* Emulate instruction failed. */ @@ -295,7 +306,13 @@ struct kvm_run { /* in */ __u8 request_interrupt_window; __u8 HINT_UNSAFE_IN_KVM(immediate_exit); - __u8 padding1[6]; + + /* in/out */ + __u8 plane; + __u16 suspended_planes; + + /* in */ + __u16 req_exit_planes; /* out */ __u32 exit_reason; @@ -532,6 +549,8 @@ struct kvm_run { __u64 gpa; __u64 size; } memory_fault; + /* KVM_EXIT_PLANE_EVENT */ + struct kvm_plane_event_exit plane_event; /* KVM_EXIT_TDX */ struct kvm_tdx_exit tdx; /* Fix the size of the union.
*/ @@ -1017,6 +1036,8 @@ struct kvm_enable_cap { #define KVM_CAP_PRE_FAULT_MEMORY 236 #define KVM_CAP_X86_APIC_BUS_CYCLES_NS 237 #define KVM_CAP_X86_GUEST_MODE 238 +#define KVM_CAP_PLANES 239 +#define KVM_CAP_PLANES_FPU 240 struct kvm_irq_routing_irqchip { __u32 irqchip;

From patchwork Tue Apr 1 16:10:40 2025
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 14035091
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: roy.hopkins@suse.com, seanjc@google.com, thomas.lendacky@amd.com, ashish.kalra@amd.com, michael.roth@amd.com, jroedel@suse.de, nsaenz@amazon.com, anelkz@amazon.de, James.Bottomley@HansenPartnership.com
Subject: [PATCH 03/29] KVM: add plane info to structs
Date: Tue, 1 Apr 2025 18:10:40 +0200
Message-ID: <20250401161106.790710-4-pbonzini@redhat.com>
In-Reply-To: <20250401161106.790710-1-pbonzini@redhat.com>
References: <20250401161106.790710-1-pbonzini@redhat.com>

Add some of the data needed to move from one plane to another within a VM, typically from plane N to plane 0.  There is quite some difference here: while separate planes provide very little of the VM file descriptor functionality, they are almost fully functional vCPUs, except that non-zero planes(*) can only be run indirectly through the initial plane.  Therefore, vCPUs use struct kvm_vcpu for all planes, with just a couple of fields, added later, that will only be valid for plane 0.  At the VM level, instead, plane info is stored in a completely different struct.

For now struct kvm_plane has no architecture-specific counterpart, but this may change in the future if needed.  It's possible, for example, that some MMU info becomes per-plane in order to support per-plane RWX permissions.

(*) I will refrain from calling them astral planes.
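For orientation, here is a condensed sketch of the layout described above.  It is not part of the patch; the field names follow the hunks below, everything else is stripped, and the comments are explanatory only::

  /* One kvm_plane per privilege level, owned by the VM. */
  struct kvm_plane {
          struct kvm *kvm;          /* back-pointer to the owning VM */
          int plane;                /* plane id; 0 is the initial plane */
  };

  /* vCPUs reuse struct kvm_vcpu for every plane... */
  struct kvm_vcpu {
          short plane;              /* plane this register state belongs to */
          struct kvm_vcpu *plane0;  /* plane-0 vCPU, which owns struct kvm_run */
          /* ... */
  };

  /* ...while the VM keeps an array of plane objects, plane 0 included. */
  struct kvm {
          struct kvm_plane *planes[KVM_MAX_VCPU_PLANES];
          /* ... */
  };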
Signed-off-by: Paolo Bonzini --- include/linux/kvm_host.h | 17 ++++++++++++++++- include/linux/kvm_types.h | 1 + virt/kvm/kvm_main.c | 32 ++++++++++++++++++++++++++++++++ 3 files changed, 49 insertions(+), 1 deletion(-) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index c8f1facdb600..0e16c34080ef 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -84,6 +84,10 @@ #define KVM_MAX_NR_ADDRESS_SPACES 1 #endif +#ifndef KVM_MAX_VCPU_PLANES +#define KVM_MAX_VCPU_PLANES 1 +#endif + /* * For the normal pfn, the highest 12 bits should be zero, * so we can mask bit 62 ~ bit 52 to indicate the error pfn, @@ -332,7 +336,8 @@ struct kvm_vcpu { #ifdef CONFIG_PROVE_RCU int srcu_depth; #endif - int mode; + short plane; + short mode; u64 requests; unsigned long guest_debug; @@ -367,6 +372,8 @@ struct kvm_vcpu { } async_pf; #endif + struct kvm_vcpu *plane0; + #ifdef CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT /* * Cpu relax intercept or pause loop exit optimization @@ -753,6 +760,11 @@ struct kvm_memslots { int node_idx; }; +struct kvm_plane { + struct kvm *kvm; + int plane; +}; + struct kvm { #ifdef KVM_HAVE_MMU_RWLOCK rwlock_t mmu_lock; @@ -777,6 +789,9 @@ struct kvm { /* The current active memslot set for each address space */ struct kvm_memslots __rcu *memslots[KVM_MAX_NR_ADDRESS_SPACES]; struct xarray vcpu_array; + + struct kvm_plane *planes[KVM_MAX_VCPU_PLANES]; + /* * Protected by slots_lock, but can be read outside if an * incorrect answer is acceptable. diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h index 827ecc0b7e10..7d0a86108d1a 100644 --- a/include/linux/kvm_types.h +++ b/include/linux/kvm_types.h @@ -11,6 +11,7 @@ struct kvm_interrupt; struct kvm_irq_routing_table; struct kvm_memory_slot; struct kvm_one_reg; +struct kvm_plane; struct kvm_run; struct kvm_userspace_memory_region; struct kvm_vcpu; diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index f6c947961b78..67773b6b9576 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -1095,9 +1095,22 @@ void __weak kvm_arch_create_vm_debugfs(struct kvm *kvm) { } +static struct kvm_plane *kvm_create_vm_plane(struct kvm *kvm, unsigned plane_id) +{ + struct kvm_plane *plane = kzalloc(sizeof(struct kvm_plane), GFP_KERNEL_ACCOUNT); + + if (!plane) + return ERR_PTR(-ENOMEM); + + plane->kvm = kvm; + plane->plane = plane_id; + return plane; +} + static struct kvm *kvm_create_vm(unsigned long type, const char *fdname) { struct kvm *kvm = kvm_arch_alloc_vm(); + struct kvm_plane *plane0; struct kvm_memslots *slots; int r, i, j; @@ -1136,6 +1149,13 @@ static struct kvm *kvm_create_vm(unsigned long type, const char *fdname) snprintf(kvm->stats_id, sizeof(kvm->stats_id), "kvm-%d", task_pid_nr(current)); + plane0 = kvm_create_vm_plane(kvm, 0); + if (IS_ERR(plane0)) { + r = PTR_ERR(plane0); + goto out_err_no_plane0; + } + kvm->planes[0] = plane0; + r = -ENOMEM; if (init_srcu_struct(&kvm->srcu)) goto out_err_no_srcu; @@ -1227,6 +1247,8 @@ static struct kvm *kvm_create_vm(unsigned long type, const char *fdname) out_err_no_irq_srcu: cleanup_srcu_struct(&kvm->srcu); out_err_no_srcu: + kfree(kvm->planes[0]); +out_err_no_plane0: kvm_arch_free_vm(kvm); mmdrop(current->mm); return ERR_PTR(r); @@ -1253,6 +1275,10 @@ static void kvm_destroy_devices(struct kvm *kvm) } } +static void kvm_destroy_plane(struct kvm_plane *plane) +{ +} + static void kvm_destroy_vm(struct kvm *kvm) { int i; @@ -1309,6 +1335,11 @@ static void kvm_destroy_vm(struct kvm *kvm) #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES 
xa_destroy(&kvm->mem_attr_array); #endif + for (i = 0; i < ARRAY_SIZE(kvm->planes); i++) { + struct kvm_plane *plane = kvm->planes[i]; + if (plane) + kvm_destroy_plane(plane); + } kvm_arch_free_vm(kvm); preempt_notifier_dec(); kvm_disable_virtualization(); @@ -4110,6 +4141,7 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, unsigned long id) } vcpu->run = page_address(page); + vcpu->plane0 = vcpu; kvm_vcpu_init(vcpu, kvm, id); r = kvm_arch_vcpu_create(vcpu);

From patchwork Tue Apr 1 16:10:41 2025
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 14035092
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: roy.hopkins@suse.com, seanjc@google.com, thomas.lendacky@amd.com, ashish.kalra@amd.com, michael.roth@amd.com, jroedel@suse.de, nsaenz@amazon.com, anelkz@amazon.de, James.Bottomley@HansenPartnership.com
Subject: [PATCH 04/29] KVM: introduce struct kvm_arch_plane
Date: Tue, 1 Apr 2025 18:10:41 +0200
Message-ID: <20250401161106.790710-5-pbonzini@redhat.com>
In-Reply-To: <20250401161106.790710-1-pbonzini@redhat.com>
References: <20250401161106.790710-1-pbonzini@redhat.com>

Signed-off-by: Paolo Bonzini --- arch/arm64/include/asm/kvm_host.h | 5 +++++ arch/loongarch/include/asm/kvm_host.h | 5 +++++ arch/mips/include/asm/kvm_host.h | 5 +++++ arch/powerpc/include/asm/kvm_host.h | 5 +++++ arch/riscv/include/asm/kvm_host.h | 5 +++++ arch/s390/include/asm/kvm_host.h | 5 +++++ arch/x86/include/asm/kvm_host.h | 6 ++++++ include/linux/kvm_host.h | 2 ++ virt/kvm/kvm_main.c | 3 +++ 9 files changed, 41 insertions(+) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index d919557af5e5..b742275cda4d 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -227,6 +227,9 @@ struct kvm_s2_mmu { struct kvm_arch_memory_slot { }; +struct kvm_arch_plane { +}; + /** * struct kvm_smccc_features: Descriptor of the hypercall services exposed to the guests * @ @@ -1334,6 +1337,8 @@ static inline bool kvm_system_needs_idmapped_vectors(void) return cpus_have_final_cap(ARM64_SPECTRE_V3A); } +static inline void kvm_arch_init_plane(struct kvm_plane *plane)
{} +static inline void kvm_arch_free_plane(struct kvm_plane *plane) {} static inline void kvm_arch_sync_events(struct kvm *kvm) {} void kvm_init_host_debug_data(void); diff --git a/arch/loongarch/include/asm/kvm_host.h b/arch/loongarch/include/asm/kvm_host.h index 2281293a5f59..24c1dafac855 100644 --- a/arch/loongarch/include/asm/kvm_host.h +++ b/arch/loongarch/include/asm/kvm_host.h @@ -73,6 +73,9 @@ struct kvm_arch_memory_slot { unsigned long flags; }; +struct kvm_arch_plane { +}; + #define HOST_MAX_PMNUM 16 struct kvm_context { unsigned long vpid_cache; @@ -325,6 +328,8 @@ static inline bool kvm_is_ifetch_fault(struct kvm_vcpu_arch *arch) } /* Misc */ +static inline void kvm_arch_init_plane(struct kvm_plane *plane) {} +static inline void kvm_arch_free_plane(struct kvm_plane *plane) {} static inline void kvm_arch_hardware_unsetup(void) {} static inline void kvm_arch_sync_events(struct kvm *kvm) {} static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {} diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h index f7222eb594ea..d7be72c529b3 100644 --- a/arch/mips/include/asm/kvm_host.h +++ b/arch/mips/include/asm/kvm_host.h @@ -147,6 +147,9 @@ struct kvm_vcpu_stat { struct kvm_arch_memory_slot { }; +struct kvm_arch_plane { +}; + #ifdef CONFIG_CPU_LOONGSON64 struct ipi_state { uint32_t status; @@ -886,6 +889,8 @@ extern unsigned long kvm_mips_get_ramsize(struct kvm *kvm); extern int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu, struct kvm_mips_interrupt *irq); +static inline void kvm_arch_init_plane(struct kvm_plane *plane) {} +static inline void kvm_arch_free_plane(struct kvm_plane *plane) {} static inline void kvm_arch_sync_events(struct kvm *kvm) {} static inline void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot) {} diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h index 6e1108f8fce6..6023f0fd637b 100644 --- a/arch/powerpc/include/asm/kvm_host.h +++ b/arch/powerpc/include/asm/kvm_host.h @@ -256,6 +256,9 @@ struct kvm_arch_memory_slot { #endif /* CONFIG_KVM_BOOK3S_HV_POSSIBLE */ }; +struct kvm_arch_plane { +}; + struct kvm_hpt_info { /* Host virtual (linear mapping) address of guest HPT */ unsigned long virt; @@ -902,6 +905,8 @@ struct kvm_vcpu_arch { #define __KVM_HAVE_ARCH_WQP #define __KVM_HAVE_CREATE_DEVICE +static inline void kvm_arch_init_plane(struct kvm_plane *plane) {} +static inline void kvm_arch_free_plane(struct kvm_plane *plane) {} static inline void kvm_arch_sync_events(struct kvm *kvm) {} static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {} static inline void kvm_arch_flush_shadow_all(struct kvm *kvm) {} diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h index cc33e35cd628..72f862194a0c 100644 --- a/arch/riscv/include/asm/kvm_host.h +++ b/arch/riscv/include/asm/kvm_host.h @@ -97,6 +97,9 @@ struct kvm_vcpu_stat { struct kvm_arch_memory_slot { }; +struct kvm_arch_plane { +}; + struct kvm_vmid { /* * Writes to vmid_version and vmid happen with vmid_lock held @@ -301,6 +304,8 @@ static inline bool kvm_arch_pmi_in_guest(struct kvm_vcpu *vcpu) return IS_ENABLED(CONFIG_GUEST_PERF_EVENTS) && !!vcpu; } +static inline void kvm_arch_init_plane(struct kvm_plane *plane) {} +static inline void kvm_arch_free_plane(struct kvm_plane *plane) {} static inline void kvm_arch_sync_events(struct kvm *kvm) {} #define KVM_RISCV_GSTAGE_TLB_MIN_ORDER 12 diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h index 
9a367866cab0..63b79ce5c8ac 100644 --- a/arch/s390/include/asm/kvm_host.h +++ b/arch/s390/include/asm/kvm_host.h @@ -799,6 +799,9 @@ struct kvm_vm_stat { struct kvm_arch_memory_slot { }; +struct kvm_arch_plane { +}; + struct s390_map_info { struct list_head list; __u64 guest_addr; @@ -1056,6 +1059,8 @@ bool kvm_s390_pv_cpu_is_protected(struct kvm_vcpu *vcpu); extern int kvm_s390_gisc_register(struct kvm *kvm, u32 gisc); extern int kvm_s390_gisc_unregister(struct kvm *kvm, u32 gisc); +static inline void kvm_arch_init_plane(struct kvm_plane *plane) {} +static inline void kvm_arch_free_plane(struct kvm_plane *plane) {} static inline void kvm_arch_sync_events(struct kvm *kvm) {} static inline void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot) {} diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 383b736cc6f1..8240f565a764 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1086,6 +1086,9 @@ struct kvm_arch_memory_slot { unsigned short *gfn_write_track; }; +struct kvm_arch_plane { +}; + /* * Track the mode of the optimized logical map, as the rules for decoding the * destination vary per mode. Enabling the optimized logical map requires all @@ -2357,6 +2360,9 @@ void kvm_make_scan_ioapic_request(struct kvm *kvm); void kvm_make_scan_ioapic_request_mask(struct kvm *kvm, unsigned long *vcpu_bitmap); +static inline void kvm_arch_init_plane(struct kvm_plane *plane) {} +static inline void kvm_arch_free_plane(struct kvm_plane *plane) {} + bool kvm_arch_async_page_not_present(struct kvm_vcpu *vcpu, struct kvm_async_pf *work); void kvm_arch_async_page_present(struct kvm_vcpu *vcpu, diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 0e16c34080ef..6bd9b0b3cbee 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -763,6 +763,8 @@ struct kvm_memslots { struct kvm_plane { struct kvm *kvm; int plane; + + struct kvm_arch_plane arch; }; struct kvm { diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 67773b6b9576..e83db27580da 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -1104,6 +1104,8 @@ static struct kvm_plane *kvm_create_vm_plane(struct kvm *kvm, unsigned plane_id) plane->kvm = kvm; plane->plane = plane_id; + + kvm_arch_init_plane(plane); return plane; } @@ -1277,6 +1279,7 @@ static void kvm_destroy_devices(struct kvm *kvm) static void kvm_destroy_plane(struct kvm_plane *plane) { + kvm_arch_free_plane(plane); } static void kvm_destroy_vm(struct kvm *kvm)

From patchwork Tue Apr 1 16:10:42 2025
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 14035093
2002:a05:600c:cc8:b0:43d:b3:fb1 with SMTP id 5b1f17b1804b1-43ea5f001b7mr45447655e9.27.1743523881889; Tue, 01 Apr 2025 09:11:21 -0700 (PDT) Received: from [192.168.10.48] ([176.206.111.201]) by smtp.gmail.com with ESMTPSA id ffacd0b85a97d-39c0b66ab86sm14896816f8f.51.2025.04.01.09.11.20 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 01 Apr 2025 09:11:20 -0700 (PDT) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: roy.hopkins@suse.com, seanjc@google.com, thomas.lendacky@amd.com, ashish.kalra@amd.com, michael.roth@amd.com, jroedel@suse.de, nsaenz@amazon.com, anelkz@amazon.de, James.Bottomley@HansenPartnership.com Subject: [PATCH 05/29] KVM: add plane support to KVM_SIGNAL_MSI Date: Tue, 1 Apr 2025 18:10:42 +0200 Message-ID: <20250401161106.790710-6-pbonzini@redhat.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250401161106.790710-1-pbonzini@redhat.com> References: <20250401161106.790710-1-pbonzini@redhat.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 struct kvm_kernel_irq_routing_entry is the main tool for sending cross-plane IPIs. Make kvm_send_userspace_msi the first function to accept a struct kvm_plane pointer, in preparation for making it available from plane file descriptors. Signed-off-by: Paolo Bonzini --- include/linux/kvm_host.h | 3 ++- virt/kvm/irqchip.c | 5 ++++- virt/kvm/kvm_main.c | 2 +- 3 files changed, 7 insertions(+), 3 deletions(-) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 6bd9b0b3cbee..98bae5dc3515 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -684,6 +684,7 @@ struct kvm_kernel_irq_routing_entry { u32 data; u32 flags; u32 devid; + u32 plane; } msi; struct kvm_s390_adapter_int adapter; struct kvm_hv_sint hv_sint; @@ -2218,7 +2219,7 @@ static inline int kvm_init_irq_routing(struct kvm *kvm) #endif -int kvm_send_userspace_msi(struct kvm *kvm, struct kvm_msi *msi); +int kvm_send_userspace_msi(struct kvm_plane *plane, struct kvm_msi *msi); void kvm_eventfd_init(struct kvm *kvm); int kvm_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args); diff --git a/virt/kvm/irqchip.c b/virt/kvm/irqchip.c index 162d8ed889f2..84952345e3c2 100644 --- a/virt/kvm/irqchip.c +++ b/virt/kvm/irqchip.c @@ -45,8 +45,10 @@ int kvm_irq_map_chip_pin(struct kvm *kvm, unsigned irqchip, unsigned pin) return irq_rt->chip[irqchip][pin]; } -int kvm_send_userspace_msi(struct kvm *kvm, struct kvm_msi *msi) +int kvm_send_userspace_msi(struct kvm_plane *plane, struct kvm_msi *msi) { + struct kvm *kvm = plane->kvm; + unsigned plane_id = plane->plane; struct kvm_kernel_irq_routing_entry route; if (!kvm_arch_irqchip_in_kernel(kvm) || (msi->flags & ~KVM_MSI_VALID_DEVID)) @@ -57,6 +59,7 @@ int kvm_send_userspace_msi(struct kvm *kvm, struct kvm_msi *msi) route.msi.data = msi->data; route.msi.flags = msi->flags; route.msi.devid = msi->devid; + route.msi.plane = plane_id; return kvm_set_msi(&route, kvm, KVM_USERSPACE_IRQ_SOURCE_ID, 1, false); } diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index e83db27580da..5b44a7f9e52e 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -5207,7 +5207,7 @@ static long kvm_vm_ioctl(struct file *filp, r = -EFAULT; if (copy_from_user(&msi, argp, sizeof(msi))) goto out; - r = kvm_send_userspace_msi(kvm, &msi); + r = kvm_send_userspace_msi(kvm->planes[0], &msi); break; } #endif From patchwork Tue Apr 1 16:10:43 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
AJvYcCXyW47/9OExuFQQzgN4O81GMo6/D/EdgTPYtb5LWe70dimsidXwhxuca326Ld0c/r5cO70=@vger.kernel.org X-Gm-Message-State: AOJu0YzMMcI+jfrUghmr9bkzxQQ07cN6lEu9RXGfT+3aCJAi3mNpLGB6 GKzsifwowMi+733cxcqbuSG+wBZYP+rcoWS0TRs627WWHvjN5PItPAqKqRFGXJr47NdrtpmQfbQ mvOsNr0nRnzISXYi5QT4wpdgljlD3lVu7utqomwY19r0RHITaZw== X-Gm-Gg: ASbGncsCcNAWH5i4QAqwL9zq1WWRuSpQQnhBXi0HhCaDw4AkT6hG6VNnN/M9OcgVGQ4 96CyMT1Rp30zt4u/9Ch68Q6AYMTuvE3778UBq3MeeRfoL1BoOkyBr+xIwdDpPd2Mu7oNh9leqZ/ 1cMRhgBZxngocupcWa5DRcnI7iKpGBd6y/smF9l7eOvDqOma2XmKVLzImvjc3ZCpU7HZ4Gl7LL3 E6sCoQUoKl3GvukcFw7Wq7AxYSdPN8iTmJzzSVw02rVoCDAJPlE2mAe/JpkstanzG40VDQntkRA rdR/DhxKLCyGMEiylDHJ2g== X-Received: by 2002:a05:600c:1d14:b0:43c:fe15:41e1 with SMTP id 5b1f17b1804b1-43ea7c4e878mr34970885e9.4.1743523884837; Tue, 01 Apr 2025 09:11:24 -0700 (PDT) X-Google-Smtp-Source: AGHT+IEw0KCQptOpUzsEJC9h+J15baLRxkJXWFt4YysKwcuqvA/vSlOPdWF/S+38oX+DaBh6C3XLCA== X-Received: by 2002:a05:600c:1d14:b0:43c:fe15:41e1 with SMTP id 5b1f17b1804b1-43ea7c4e878mr34970365e9.4.1743523884345; Tue, 01 Apr 2025 09:11:24 -0700 (PDT) Received: from [192.168.10.48] ([176.206.111.201]) by smtp.gmail.com with ESMTPSA id 5b1f17b1804b1-43ea97895e1sm19840855e9.1.2025.04.01.09.11.22 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 01 Apr 2025 09:11:23 -0700 (PDT) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: roy.hopkins@suse.com, seanjc@google.com, thomas.lendacky@amd.com, ashish.kalra@amd.com, michael.roth@amd.com, jroedel@suse.de, nsaenz@amazon.com, anelkz@amazon.de, James.Bottomley@HansenPartnership.com Subject: [PATCH 06/29] KVM: move mem_attr_array to kvm_plane Date: Tue, 1 Apr 2025 18:10:43 +0200 Message-ID: <20250401161106.790710-7-pbonzini@redhat.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250401161106.790710-1-pbonzini@redhat.com> References: <20250401161106.790710-1-pbonzini@redhat.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Another aspect of the VM that is now different for separate planes is memory attributes, in order to support RWX permissions in the future. The existing vm-level ioctls apply to plane 0 and the underlying functionality operates on struct kvm_plane, which now hosts the mem_attr_array xarray. As a result, the pre/post architecture-specific callbacks also take a plane. Private/shared is a global attribute and only applies to plane 0. Signed-off-by: Paolo Bonzini --- arch/x86/kvm/mmu/mmu.c | 23 ++++++----- include/linux/kvm_host.h | 24 +++++++----- virt/kvm/guest_memfd.c | 3 +- virt/kvm/kvm_main.c | 85 +++++++++++++++++++++++++--------------- 4 files changed, 84 insertions(+), 51 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index a284dce227a0..04e4b041e248 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -7670,9 +7670,11 @@ void kvm_mmu_pre_destroy_vm(struct kvm *kvm) } #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES -bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm, +bool kvm_arch_pre_set_memory_attributes(struct kvm_plane *plane, struct kvm_gfn_range *range) { + struct kvm *kvm = plane->kvm; + /* * Zap SPTEs even if the slot can't be mapped PRIVATE. 
KVM x86 only * supports KVM_MEMORY_ATTRIBUTE_PRIVATE, and so it *seems* like KVM @@ -7714,26 +7716,27 @@ static void hugepage_set_mixed(struct kvm_memory_slot *slot, gfn_t gfn, lpage_info_slot(gfn, slot, level)->disallow_lpage |= KVM_LPAGE_MIXED_FLAG; } -static bool hugepage_has_attrs(struct kvm *kvm, struct kvm_memory_slot *slot, +static bool hugepage_has_attrs(struct kvm_plane *plane, struct kvm_memory_slot *slot, gfn_t gfn, int level, unsigned long attrs) { const unsigned long start = gfn; const unsigned long end = start + KVM_PAGES_PER_HPAGE(level); if (level == PG_LEVEL_2M) - return kvm_range_has_memory_attributes(kvm, start, end, ~0, attrs); + return kvm_range_has_memory_attributes(plane, start, end, ~0, attrs); for (gfn = start; gfn < end; gfn += KVM_PAGES_PER_HPAGE(level - 1)) { if (hugepage_test_mixed(slot, gfn, level - 1) || - attrs != kvm_get_memory_attributes(kvm, gfn)) + attrs != kvm_get_plane_memory_attributes(plane, gfn)) return false; } return true; } -bool kvm_arch_post_set_memory_attributes(struct kvm *kvm, +bool kvm_arch_post_set_memory_attributes(struct kvm_plane *plane, struct kvm_gfn_range *range) { + struct kvm *kvm = plane->kvm; unsigned long attrs = range->arg.attributes; struct kvm_memory_slot *slot = range->slot; int level; @@ -7767,7 +7770,7 @@ bool kvm_arch_post_set_memory_attributes(struct kvm *kvm, */ if (gfn >= slot->base_gfn && gfn + nr_pages <= slot->base_gfn + slot->npages) { - if (hugepage_has_attrs(kvm, slot, gfn, level, attrs)) + if (hugepage_has_attrs(plane, slot, gfn, level, attrs)) hugepage_clear_mixed(slot, gfn, level); else hugepage_set_mixed(slot, gfn, level); @@ -7789,7 +7792,7 @@ bool kvm_arch_post_set_memory_attributes(struct kvm *kvm, */ if (gfn < range->end && (gfn + nr_pages) <= (slot->base_gfn + slot->npages)) { - if (hugepage_has_attrs(kvm, slot, gfn, level, attrs)) + if (hugepage_has_attrs(plane, slot, gfn, level, attrs)) hugepage_clear_mixed(slot, gfn, level); else hugepage_set_mixed(slot, gfn, level); @@ -7801,11 +7804,13 @@ bool kvm_arch_post_set_memory_attributes(struct kvm *kvm, void kvm_mmu_init_memslot_memory_attributes(struct kvm *kvm, struct kvm_memory_slot *slot) { + struct kvm_plane *plane0; int level; if (!kvm_arch_has_private_mem(kvm)) return; + plane0 = kvm->planes[0]; for (level = PG_LEVEL_2M; level <= KVM_MAX_HUGEPAGE_LEVEL; level++) { /* * Don't bother tracking mixed attributes for pages that can't @@ -7825,9 +7830,9 @@ void kvm_mmu_init_memslot_memory_attributes(struct kvm *kvm, * be manually checked as the attributes may already be mixed. 
*/ for (gfn = start; gfn < end; gfn += nr_pages) { - unsigned long attrs = kvm_get_memory_attributes(kvm, gfn); + unsigned long attrs = kvm_get_plane_memory_attributes(plane0, gfn); - if (hugepage_has_attrs(kvm, slot, gfn, level, attrs)) + if (hugepage_has_attrs(plane0, slot, gfn, level, attrs)) hugepage_clear_mixed(slot, gfn, level); else hugepage_set_mixed(slot, gfn, level); diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 98bae5dc3515..4d408d1d5ccc 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -763,6 +763,10 @@ struct kvm_memslots { struct kvm_plane { struct kvm *kvm; +#ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES + /* Protected by slots_locks (for writes) and RCU (for reads) */ + struct xarray mem_attr_array; +#endif int plane; struct kvm_arch_plane arch; @@ -875,10 +879,6 @@ struct kvm { #ifdef CONFIG_HAVE_KVM_PM_NOTIFIER struct notifier_block pm_notifier; -#endif -#ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES - /* Protected by slots_locks (for writes) and RCU (for reads) */ - struct xarray mem_attr_array; #endif char stats_id[KVM_STATS_NAME_SIZE]; }; @@ -2511,20 +2511,26 @@ static inline void kvm_prepare_memory_fault_exit(struct kvm_vcpu *vcpu, } #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES -static inline unsigned long kvm_get_memory_attributes(struct kvm *kvm, gfn_t gfn) +static inline unsigned long kvm_get_plane_memory_attributes(struct kvm_plane *plane, gfn_t gfn) { - return xa_to_value(xa_load(&kvm->mem_attr_array, gfn)); + return xa_to_value(xa_load(&plane->mem_attr_array, gfn)); } -bool kvm_range_has_memory_attributes(struct kvm *kvm, gfn_t start, gfn_t end, +static inline unsigned long kvm_get_memory_attributes(struct kvm *kvm, gfn_t gfn) +{ + return kvm_get_plane_memory_attributes(kvm->planes[0], gfn); +} + +bool kvm_range_has_memory_attributes(struct kvm_plane *plane, gfn_t start, gfn_t end, unsigned long mask, unsigned long attrs); -bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm, +bool kvm_arch_pre_set_memory_attributes(struct kvm_plane *plane, struct kvm_gfn_range *range); -bool kvm_arch_post_set_memory_attributes(struct kvm *kvm, +bool kvm_arch_post_set_memory_attributes(struct kvm_plane *plane, struct kvm_gfn_range *range); static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn) { + /* Private/shared is always in plane 0 */ return IS_ENABLED(CONFIG_KVM_PRIVATE_MEM) && kvm_get_memory_attributes(kvm, gfn) & KVM_MEMORY_ATTRIBUTE_PRIVATE; } diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c index b2aa6bf24d3a..f07102bcaf24 100644 --- a/virt/kvm/guest_memfd.c +++ b/virt/kvm/guest_memfd.c @@ -642,6 +642,7 @@ EXPORT_SYMBOL_GPL(kvm_gmem_get_pfn); long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long npages, kvm_gmem_populate_cb post_populate, void *opaque) { + struct kvm_plane *plane0 = kvm->planes[0]; struct file *file; struct kvm_memory_slot *slot; void __user *p; @@ -694,7 +695,7 @@ long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long (npages - i) < (1 << max_order)); ret = -EINVAL; - while (!kvm_range_has_memory_attributes(kvm, gfn, gfn + (1 << max_order), + while (!kvm_range_has_memory_attributes(plane0, gfn, gfn + (1 << max_order), KVM_MEMORY_ATTRIBUTE_PRIVATE, KVM_MEMORY_ATTRIBUTE_PRIVATE)) { if (!max_order) diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 5b44a7f9e52e..e343905e46d8 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -500,6 +500,7 @@ static inline struct kvm *mmu_notifier_to_kvm(struct mmu_notifier *mn) } 
typedef bool (*gfn_handler_t)(struct kvm *kvm, struct kvm_gfn_range *range); +typedef bool (*plane_gfn_handler_t)(struct kvm_plane *plane, struct kvm_gfn_range *range); typedef void (*on_lock_fn_t)(struct kvm *kvm); @@ -511,7 +512,11 @@ struct kvm_mmu_notifier_range { u64 start; u64 end; union kvm_mmu_notifier_arg arg; - gfn_handler_t handler; + /* The only difference is the type of the first parameter. */ + union { + gfn_handler_t handler; + plane_gfn_handler_t handler_plane; + }; on_lock_fn_t on_lock; bool flush_on_ret; bool may_block; @@ -1105,6 +1110,9 @@ static struct kvm_plane *kvm_create_vm_plane(struct kvm *kvm, unsigned plane_id) plane->kvm = kvm; plane->plane = plane_id; +#ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES + xa_init(&plane->mem_attr_array); +#endif kvm_arch_init_plane(plane); return plane; } @@ -1130,9 +1138,6 @@ static struct kvm *kvm_create_vm(unsigned long type, const char *fdname) spin_lock_init(&kvm->mn_invalidate_lock); rcuwait_init(&kvm->mn_memslots_update_rcuwait); xa_init(&kvm->vcpu_array); -#ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES - xa_init(&kvm->mem_attr_array); -#endif INIT_LIST_HEAD(&kvm->gpc_list); spin_lock_init(&kvm->gpc_lock); @@ -1280,6 +1285,10 @@ static void kvm_destroy_devices(struct kvm *kvm) static void kvm_destroy_plane(struct kvm_plane *plane) { kvm_arch_free_plane(plane); + +#ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES + xa_destroy(&plane->mem_attr_array); +#endif } static void kvm_destroy_vm(struct kvm *kvm) @@ -1335,9 +1344,6 @@ static void kvm_destroy_vm(struct kvm *kvm) } cleanup_srcu_struct(&kvm->irq_srcu); cleanup_srcu_struct(&kvm->srcu); -#ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES - xa_destroy(&kvm->mem_attr_array); -#endif for (i = 0; i < ARRAY_SIZE(kvm->planes); i++) { struct kvm_plane *plane = kvm->planes[i]; if (plane) @@ -2385,9 +2391,9 @@ static int kvm_vm_ioctl_clear_dirty_log(struct kvm *kvm, #endif /* CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT */ #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES -static u64 kvm_supported_mem_attributes(struct kvm *kvm) +static u64 kvm_supported_mem_attributes(struct kvm_plane *plane) { - if (!kvm || kvm_arch_has_private_mem(kvm)) + if (!plane || (!plane->plane && kvm_arch_has_private_mem(plane->kvm))) return KVM_MEMORY_ATTRIBUTE_PRIVATE; return 0; @@ -2397,19 +2403,20 @@ static u64 kvm_supported_mem_attributes(struct kvm *kvm) * Returns true if _all_ gfns in the range [@start, @end) have attributes * such that the bits in @mask match @attrs. 
*/ -bool kvm_range_has_memory_attributes(struct kvm *kvm, gfn_t start, gfn_t end, +bool kvm_range_has_memory_attributes(struct kvm_plane *plane, + gfn_t start, gfn_t end, unsigned long mask, unsigned long attrs) { - XA_STATE(xas, &kvm->mem_attr_array, start); + XA_STATE(xas, &plane->mem_attr_array, start); unsigned long index; void *entry; - mask &= kvm_supported_mem_attributes(kvm); + mask &= kvm_supported_mem_attributes(plane); if (attrs & ~mask) return false; if (end == start + 1) - return (kvm_get_memory_attributes(kvm, start) & mask) == attrs; + return (kvm_get_plane_memory_attributes(plane, start) & mask) == attrs; guard(rcu)(); if (!attrs) @@ -2428,8 +2435,8 @@ bool kvm_range_has_memory_attributes(struct kvm *kvm, gfn_t start, gfn_t end, return true; } -static __always_inline void kvm_handle_gfn_range(struct kvm *kvm, - struct kvm_mmu_notifier_range *range) +static __always_inline void __kvm_handle_gfn_range(struct kvm *kvm, void *arg1, + struct kvm_mmu_notifier_range *range) { struct kvm_gfn_range gfn_range; struct kvm_memory_slot *slot; @@ -2469,7 +2476,7 @@ static __always_inline void kvm_handle_gfn_range(struct kvm *kvm, range->on_lock(kvm); } - ret |= range->handler(kvm, &gfn_range); + ret |= range->handler(arg1, &gfn_range); } } @@ -2480,7 +2487,19 @@ static __always_inline void kvm_handle_gfn_range(struct kvm *kvm, KVM_MMU_UNLOCK(kvm); } -static bool kvm_pre_set_memory_attributes(struct kvm *kvm, +static __always_inline void kvm_handle_gfn_range(struct kvm *kvm, + struct kvm_mmu_notifier_range *range) +{ + __kvm_handle_gfn_range(kvm, kvm, range); +} + +static __always_inline void kvm_plane_handle_gfn_range(struct kvm_plane *plane, + struct kvm_mmu_notifier_range *range) +{ + __kvm_handle_gfn_range(plane->kvm, plane, range); +} + +static bool kvm_pre_set_memory_attributes(struct kvm_plane *plane, struct kvm_gfn_range *range) { /* @@ -2494,20 +2513,21 @@ static bool kvm_pre_set_memory_attributes(struct kvm *kvm, * but it's not obvious that allowing new mappings while the attributes * are in flux is desirable or worth the complexity. */ - kvm_mmu_invalidate_range_add(kvm, range->start, range->end); + kvm_mmu_invalidate_range_add(plane->kvm, range->start, range->end); - return kvm_arch_pre_set_memory_attributes(kvm, range); + return kvm_arch_pre_set_memory_attributes(plane, range); } /* Set @attributes for the gfn range [@start, @end). */ -static int kvm_vm_set_mem_attributes(struct kvm *kvm, gfn_t start, gfn_t end, +static int kvm_vm_set_mem_attributes(struct kvm_plane *plane, gfn_t start, gfn_t end, unsigned long attributes) { + struct kvm *kvm = plane->kvm; struct kvm_mmu_notifier_range pre_set_range = { .start = start, .end = end, .arg.attributes = attributes, - .handler = kvm_pre_set_memory_attributes, + .handler_plane = kvm_pre_set_memory_attributes, .on_lock = kvm_mmu_invalidate_begin, .flush_on_ret = true, .may_block = true, @@ -2516,7 +2536,7 @@ static int kvm_vm_set_mem_attributes(struct kvm *kvm, gfn_t start, gfn_t end, .start = start, .end = end, .arg.attributes = attributes, - .handler = kvm_arch_post_set_memory_attributes, + .handler_plane = kvm_arch_post_set_memory_attributes, .on_lock = kvm_mmu_invalidate_end, .may_block = true, }; @@ -2529,7 +2549,7 @@ static int kvm_vm_set_mem_attributes(struct kvm *kvm, gfn_t start, gfn_t end, mutex_lock(&kvm->slots_lock); /* Nothing to do if the entire range as the desired attributes. 
*/ - if (kvm_range_has_memory_attributes(kvm, start, end, ~0, attributes)) + if (kvm_range_has_memory_attributes(plane, start, end, ~0, attributes)) goto out_unlock; /* @@ -2537,27 +2557,28 @@ static int kvm_vm_set_mem_attributes(struct kvm *kvm, gfn_t start, gfn_t end, * partway through setting the new attributes. */ for (i = start; i < end; i++) { - r = xa_reserve(&kvm->mem_attr_array, i, GFP_KERNEL_ACCOUNT); + r = xa_reserve(&plane->mem_attr_array, i, GFP_KERNEL_ACCOUNT); if (r) goto out_unlock; } - kvm_handle_gfn_range(kvm, &pre_set_range); + kvm_plane_handle_gfn_range(plane, &pre_set_range); for (i = start; i < end; i++) { - r = xa_err(xa_store(&kvm->mem_attr_array, i, entry, + r = xa_err(xa_store(&plane->mem_attr_array, i, entry, GFP_KERNEL_ACCOUNT)); KVM_BUG_ON(r, kvm); } - kvm_handle_gfn_range(kvm, &post_set_range); + kvm_plane_handle_gfn_range(plane, &post_set_range); out_unlock: mutex_unlock(&kvm->slots_lock); return r; } -static int kvm_vm_ioctl_set_mem_attributes(struct kvm *kvm, + +static int kvm_vm_ioctl_set_mem_attributes(struct kvm_plane *plane, struct kvm_memory_attributes *attrs) { gfn_t start, end; @@ -2565,7 +2586,7 @@ static int kvm_vm_ioctl_set_mem_attributes(struct kvm *kvm, /* flags is currently not used. */ if (attrs->flags) return -EINVAL; - if (attrs->attributes & ~kvm_supported_mem_attributes(kvm)) + if (attrs->attributes & ~kvm_supported_mem_attributes(plane)) return -EINVAL; if (attrs->size == 0 || attrs->address + attrs->size < attrs->address) return -EINVAL; @@ -2582,7 +2603,7 @@ static int kvm_vm_ioctl_set_mem_attributes(struct kvm *kvm, */ BUILD_BUG_ON(sizeof(attrs->attributes) != sizeof(unsigned long)); - return kvm_vm_set_mem_attributes(kvm, start, end, attrs->attributes); + return kvm_vm_set_mem_attributes(plane, start, end, attrs->attributes); } #endif /* CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES */ @@ -4867,7 +4888,7 @@ static int kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg) return 1; #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES case KVM_CAP_MEMORY_ATTRIBUTES: - return kvm_supported_mem_attributes(kvm); + return kvm_supported_mem_attributes(kvm ? 
kvm->planes[0] : NULL); #endif #ifdef CONFIG_KVM_PRIVATE_MEM case KVM_CAP_GUEST_MEMFD: @@ -5274,7 +5295,7 @@ static long kvm_vm_ioctl(struct file *filp, if (copy_from_user(&attrs, argp, sizeof(attrs))) goto out; - r = kvm_vm_ioctl_set_mem_attributes(kvm, &attrs); + r = kvm_vm_ioctl_set_mem_attributes(kvm->planes[0], &attrs); break; } #endif /* CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES */ From patchwork Tue Apr 1 16:10:44 2025
:subject:date:message-id:reply-to; bh=KsKBW7BEHbNNA2EQflKemABN/u2ulObIcmH48WXDJ8s=; b=ZNnNSKNaV7aDRvJSi8LVDodV9s5ub8tOcKH7+w3tQoQEEwl5PEryqCpPt81zz0lbGM F+k+ysWrX2n8rhps+DivPlriH/GUYm+ALoNDMZh9VgcvTkBLAvnNXFUABw1DmOTLdXql siwjj1mLW1CIdZyJnfFJrOpXwZNmTE1Y16HIrf0zbWAMNpOkLLrEJc8/z9HEUq4G6+yK qr3Fl3Zm1zqVJ2oIS5a7xbHUT/RELtQMF6ZeXg2Z1kRO2wDoXugy/z1olR5dBN40ziu3 reCW/nuVJvZwyhgWGMsP9jlawt+znCC+knKzHsRzHptnyHnBFgIV5nyAUu2w/aTpeGo2 VKIw== X-Forwarded-Encrypted: i=1; AJvYcCWy4ZqzuLOdAki7RboHFG80ckPHV4nwX20ce3hkmeQDlaU+Y1OCF3jq+qBJP0jinzYMon0=@vger.kernel.org X-Gm-Message-State: AOJu0YzCFKlDgR7ha5gLlGgEyCChyoe90IvkfiSrL8R22CRT7IIN/qqc VryEi5yWo21vghhB1cZ4mASExzRb0cM5+2J4sHDRqi3xi4mu1K8dDuvE00Dalk9g20DGREsvTBL dl1wtkYSoW4tLY6w9qO7wL8w4pEqxDnWfVexG3UT0ndszJ9twzZ5fEBPdWA== X-Gm-Gg: ASbGncswHHGYJvQvxQhhCXUWYbLKUAdFDDnvV6x6727UlV8xpYe92f0ZmMQ6L+RN5cX 7Z+Ii/P4oGDfh4OCcVmSQzUQBytPS2L/i3GdtYUpt6UkSH0vVAJy3bOMnqWTu/F1GyKnxT1s6oc 1d0YIJ3U/6ok2rOiL1PiS4jZnFaJp/PfTPLrnAhci6K34t8a/9g6RQy3yU8oTQ3hRDhVGUECuQz c2VrJmoeen8I7C/MjS6vy8U7DuRaKTl5sBSdJIU28eLI4vQPgvlxUGDR4Ui/u0FFo5twwmiAzbI uEqM2CKlLTkpP8IWqpVfrQ== X-Received: by 2002:a05:6000:2410:b0:39c:142a:35f9 with SMTP id ffacd0b85a97d-39c142a35famr11329187f8f.10.1743523887461; Tue, 01 Apr 2025 09:11:27 -0700 (PDT) X-Google-Smtp-Source: AGHT+IEUBBONFMxXubuB0rFZnmFR8I8IBPlykXIOGysFwqjq21RAjnf88FJ3fTcF64SWlqgmRjwgcw== X-Received: by 2002:a05:6000:2410:b0:39c:142a:35f9 with SMTP id ffacd0b85a97d-39c142a35famr11329136f8f.10.1743523887070; Tue, 01 Apr 2025 09:11:27 -0700 (PDT) Received: from [192.168.10.48] ([176.206.111.201]) by smtp.gmail.com with ESMTPSA id ffacd0b85a97d-39c0b66a9d2sm14476457f8f.43.2025.04.01.09.11.24 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 01 Apr 2025 09:11:25 -0700 (PDT) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: roy.hopkins@suse.com, seanjc@google.com, thomas.lendacky@amd.com, ashish.kalra@amd.com, michael.roth@amd.com, jroedel@suse.de, nsaenz@amazon.com, anelkz@amazon.de, James.Bottomley@HansenPartnership.com Subject: [PATCH 07/29] KVM: do not use online_vcpus to test vCPU validity Date: Tue, 1 Apr 2025 18:10:44 +0200 Message-ID: <20250401161106.790710-8-pbonzini@redhat.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250401161106.790710-1-pbonzini@redhat.com> References: <20250401161106.790710-1-pbonzini@redhat.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Different planes can initialize their vCPUs separately, therefore there is no single online_vcpus value that can be used to test that a vCPU has indeed been fully initialized. Use the shiny new plane field instead, initializing it to an invalid value (-1) while the vCPU is visible in the xarray but may still disappear if the creation fails. Signed-off-by: Paolo Bonzini --- arch/x86/kvm/i8254.c | 3 ++- include/linux/kvm_host.h | 23 ++++++----------------- virt/kvm/kvm_main.c | 20 +++++++++++++------- 3 files changed, 21 insertions(+), 25 deletions(-) diff --git a/arch/x86/kvm/i8254.c b/arch/x86/kvm/i8254.c index d7ab8780ab9e..e3a3e7b90c26 100644 --- a/arch/x86/kvm/i8254.c +++ b/arch/x86/kvm/i8254.c @@ -260,9 +260,10 @@ static void pit_do_work(struct kthread_work *work) * VCPUs and only when LVT0 is in NMI mode. The interrupt can * also be simultaneously delivered through PIC and IOAPIC. 
*/ - if (atomic_read(&kvm->arch.vapics_in_nmi_mode) > 0) + if (atomic_read(&kvm->arch.vapics_in_nmi_mode) > 0) { kvm_for_each_vcpu(i, vcpu, kvm) kvm_apic_nmi_wd_deliver(vcpu); + } } static enum hrtimer_restart pit_timer_fn(struct hrtimer *data) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 4d408d1d5ccc..0db27814294f 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -992,27 +992,16 @@ static inline struct kvm_io_bus *kvm_get_bus(struct kvm *kvm, enum kvm_bus idx) static inline struct kvm_vcpu *kvm_get_vcpu(struct kvm *kvm, int i) { - int num_vcpus = atomic_read(&kvm->online_vcpus); - - /* - * Explicitly verify the target vCPU is online, as the anti-speculation - * logic only limits the CPU's ability to speculate, e.g. given a "bad" - * index, clamping the index to 0 would return vCPU0, not NULL. - */ - if (i >= num_vcpus) + struct kvm_vcpu *vcpu = xa_load(&kvm->vcpu_array, i); + if (vcpu && unlikely(vcpu->plane == -1)) return NULL; - i = array_index_nospec(i, num_vcpus); - - /* Pairs with smp_wmb() in kvm_vm_ioctl_create_vcpu. */ - smp_rmb(); - return xa_load(&kvm->vcpu_array, i); + return vcpu; } -#define kvm_for_each_vcpu(idx, vcpup, kvm) \ - if (atomic_read(&kvm->online_vcpus)) \ - xa_for_each_range(&kvm->vcpu_array, idx, vcpup, 0, \ - (atomic_read(&kvm->online_vcpus) - 1)) +#define kvm_for_each_vcpu(idx, vcpup, kvm) \ + xa_for_each(&kvm->vcpu_array, idx, vcpup) \ + if ((vcpup)->plane == -1) ; else \ static inline struct kvm_vcpu *kvm_get_vcpu_by_id(struct kvm *kvm, int id) { diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index e343905e46d8..eba02cb7cc57 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -4186,6 +4186,11 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, unsigned long id) goto unlock_vcpu_destroy; } + /* + * Store an invalid plane number until fully initialized. xa_insert() has + * release semantics, which ensures the write is visible to kvm_get_vcpu(). + */ + vcpu->plane = -1; vcpu->vcpu_idx = atomic_read(&kvm->online_vcpus); r = xa_insert(&kvm->vcpu_array, vcpu->vcpu_idx, vcpu, GFP_KERNEL_ACCOUNT); WARN_ON_ONCE(r == -EBUSY); @@ -4195,7 +4200,7 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, unsigned long id) /* * Now it's all set up, let userspace reach it. Grab the vCPU's mutex * so that userspace can't invoke vCPU ioctl()s until the vCPU is fully - * visible (per online_vcpus), e.g. so that KVM doesn't get tricked + * visible (valid vcpu->plane), e.g. so that KVM doesn't get tricked * into a NULL-pointer dereference because KVM thinks the _current_ * vCPU doesn't exist. As a bonus, taking vcpu->mutex ensures lockdep * knows it's taken *inside* kvm->lock. @@ -4206,12 +4211,13 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, unsigned long id) if (r < 0) goto kvm_put_xa_erase; - /* - * Pairs with smp_rmb() in kvm_get_vcpu. Store the vcpu - * pointer before kvm->online_vcpu's incremented value. - */ - smp_wmb(); atomic_inc(&kvm->online_vcpus); + + /* + * Pairs with xa_load() in kvm_get_vcpu, ensuring that online_vcpus + * is updated before vcpu->plane. + */ + smp_store_release(&vcpu->plane, 0); mutex_unlock(&vcpu->mutex); mutex_unlock(&kvm->lock); @@ -4355,7 +4361,7 @@ static int kvm_wait_for_vcpu_online(struct kvm_vcpu *vcpu) * In practice, this happy path will always be taken, as a well-behaved * VMM will never invoke a vCPU ioctl() before KVM_CREATE_VCPU returns. 
- if (likely(vcpu->vcpu_idx < atomic_read(&kvm->online_vcpus))) + if (likely(vcpu->plane != -1)) return 0; /* From patchwork Tue Apr 1 16:10:45 2025
From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: roy.hopkins@suse.com, seanjc@google.com, thomas.lendacky@amd.com, ashish.kalra@amd.com, michael.roth@amd.com, jroedel@suse.de, nsaenz@amazon.com, anelkz@amazon.de, James.Bottomley@HansenPartnership.com Subject: [PATCH 08/29] KVM: move vcpu_array to struct kvm_plane Date: Tue, 1 Apr 2025 18:10:45 +0200 Message-ID: <20250401161106.790710-9-pbonzini@redhat.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250401161106.790710-1-pbonzini@redhat.com> References: <20250401161106.790710-1-pbonzini@redhat.com>

Different planes may have only a subset of the vCPUs available in the initial plane, therefore vcpu_array must also be moved to struct kvm_plane. New functions allow accessing the vCPUs of a struct kvm_plane and, as usual, the older names automatically go through kvm->planes[0].
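As an illustration of how the new accessors compose, here is a minimal sketch of a caller that walks the vCPUs of one plane; the helper name kick_plane_vcpus is hypothetical, while kvm_for_each_plane_vcpu, kvm_get_plane_vcpu and the kvm->planes[] array come from the diff below, and kvm_vcpu_kick is the existing KVM helper.

/*
 * Hypothetical caller, for illustration only: iterate the vCPUs that
 * belong to one plane, in contrast with kvm_for_each_vcpu(), which
 * after this patch implicitly iterates kvm->planes[0].
 */
static void kick_plane_vcpus(struct kvm *kvm, int plane_id)
{
	struct kvm_plane *plane = kvm->planes[plane_id];
	struct kvm_vcpu *vcpu;
	unsigned long i;

	if (!plane)	/* planes other than 0 exist only once created */
		return;

	kvm_for_each_plane_vcpu(i, vcpu, plane)
		kvm_vcpu_kick(vcpu);
}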
Signed-off-by: Paolo Bonzini --- include/linux/kvm_host.h | 29 +++++++++++++++++++++-------- virt/kvm/kvm_main.c | 22 +++++++++++++++------- 2 files changed, 36 insertions(+), 15 deletions(-) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 0db27814294f..0a91b556767e 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -763,6 +763,7 @@ struct kvm_memslots { struct kvm_plane { struct kvm *kvm; + struct xarray vcpu_array; #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES /* Protected by slots_locks (for writes) and RCU (for reads) */ struct xarray mem_attr_array; @@ -795,7 +796,6 @@ struct kvm { struct kvm_memslots __memslots[KVM_MAX_NR_ADDRESS_SPACES][2]; /* The current active memslot set for each address space */ struct kvm_memslots __rcu *memslots[KVM_MAX_NR_ADDRESS_SPACES]; - struct xarray vcpu_array; struct kvm_plane *planes[KVM_MAX_VCPU_PLANES]; @@ -990,20 +990,20 @@ static inline struct kvm_io_bus *kvm_get_bus(struct kvm *kvm, enum kvm_bus idx) !refcount_read(&kvm->users_count)); } -static inline struct kvm_vcpu *kvm_get_vcpu(struct kvm *kvm, int i) +static inline struct kvm_vcpu *kvm_get_plane_vcpu(struct kvm_plane *plane, int i) { - struct kvm_vcpu *vcpu = xa_load(&kvm->vcpu_array, i); + struct kvm_vcpu *vcpu = xa_load(&plane->vcpu_array, i); if (vcpu && unlikely(vcpu->plane == -1)) return NULL; return vcpu; } -#define kvm_for_each_vcpu(idx, vcpup, kvm) \ - xa_for_each(&kvm->vcpu_array, idx, vcpup) \ +#define kvm_for_each_plane_vcpu(idx, vcpup, plane_) \ + xa_for_each(&(plane_)->vcpu_array, idx, vcpup) \ if ((vcpup)->plane == -1) ; else \ -static inline struct kvm_vcpu *kvm_get_vcpu_by_id(struct kvm *kvm, int id) +static inline struct kvm_vcpu *kvm_get_plane_vcpu_by_id(struct kvm_plane *plane, int id) { struct kvm_vcpu *vcpu = NULL; unsigned long i; @@ -1011,15 +1011,28 @@ static inline struct kvm_vcpu *kvm_get_vcpu_by_id(struct kvm *kvm, int id) if (id < 0) return NULL; if (id < KVM_MAX_VCPUS) - vcpu = kvm_get_vcpu(kvm, id); + vcpu = kvm_get_plane_vcpu(plane, id); if (vcpu && vcpu->vcpu_id == id) return vcpu; - kvm_for_each_vcpu(i, vcpu, kvm) + kvm_for_each_plane_vcpu(i, vcpu, plane) if (vcpu->vcpu_id == id) return vcpu; return NULL; } +static inline struct kvm_vcpu *kvm_get_vcpu(struct kvm *kvm, int i) +{ + return kvm_get_plane_vcpu(kvm->planes[0], i); +} + +#define kvm_for_each_vcpu(idx, vcpup, kvm) \ + kvm_for_each_plane_vcpu(idx, vcpup, kvm->planes[0]) + +static inline struct kvm_vcpu *kvm_get_vcpu_by_id(struct kvm *kvm, int id) +{ + return kvm_get_plane_vcpu_by_id(kvm->planes[0], id); +} + void kvm_destroy_vcpus(struct kvm *kvm); void vcpu_load(struct kvm_vcpu *vcpu); diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index eba02cb7cc57..cd4dfc399cad 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -481,12 +481,19 @@ static void kvm_vcpu_destroy(struct kvm_vcpu *vcpu) void kvm_destroy_vcpus(struct kvm *kvm) { + int j; unsigned long i; struct kvm_vcpu *vcpu; - kvm_for_each_vcpu(i, vcpu, kvm) { - kvm_vcpu_destroy(vcpu); - xa_erase(&kvm->vcpu_array, i); + for (j = ARRAY_SIZE(kvm->planes) - 1; j >= 0; j--) { + struct kvm_plane *plane = kvm->planes[j]; + if (!plane) + continue; + + kvm_for_each_plane_vcpu(i, vcpu, plane) { + kvm_vcpu_destroy(vcpu); + xa_erase(&plane->vcpu_array, i); + } } atomic_set(&kvm->online_vcpus, 0); @@ -1110,6 +1117,7 @@ static struct kvm_plane *kvm_create_vm_plane(struct kvm *kvm, unsigned plane_id) plane->kvm = kvm; plane->plane = plane_id; + xa_init(&plane->vcpu_array); #ifdef 
CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES xa_init(&plane->mem_attr_array); #endif @@ -1137,7 +1145,6 @@ static struct kvm *kvm_create_vm(unsigned long type, const char *fdname) mutex_init(&kvm->slots_arch_lock); spin_lock_init(&kvm->mn_invalidate_lock); rcuwait_init(&kvm->mn_memslots_update_rcuwait); - xa_init(&kvm->vcpu_array); INIT_LIST_HEAD(&kvm->gpc_list); spin_lock_init(&kvm->gpc_lock); @@ -3930,6 +3937,7 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me, bool yield_to_kernel_mode) { int nr_vcpus, start, i, idx, yielded; struct kvm *kvm = me->kvm; + struct kvm_plane *plane = kvm->planes[me->plane]; struct kvm_vcpu *vcpu; int try = 3; @@ -3967,7 +3975,7 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me, bool yield_to_kernel_mode) if (idx == me->vcpu_idx) continue; - vcpu = xa_load(&kvm->vcpu_array, idx); + vcpu = xa_load(&plane->vcpu_array, idx); if (!READ_ONCE(vcpu->ready)) continue; if (kvm_vcpu_is_blocking(vcpu) && !vcpu_dy_runnable(vcpu)) @@ -4192,7 +4200,7 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, unsigned long id) */ vcpu->plane = -1; vcpu->vcpu_idx = atomic_read(&kvm->online_vcpus); - r = xa_insert(&kvm->vcpu_array, vcpu->vcpu_idx, vcpu, GFP_KERNEL_ACCOUNT); + r = xa_insert(&kvm->planes[0]->vcpu_array, vcpu->vcpu_idx, vcpu, GFP_KERNEL_ACCOUNT); WARN_ON_ONCE(r == -EBUSY); if (r) goto unlock_vcpu_destroy; @@ -4228,7 +4236,7 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, unsigned long id) kvm_put_xa_erase: mutex_unlock(&vcpu->mutex); kvm_put_kvm_no_destroy(kvm); - xa_erase(&kvm->vcpu_array, vcpu->vcpu_idx); + xa_erase(&kvm->planes[0]->vcpu_array, vcpu->vcpu_idx); unlock_vcpu_destroy: mutex_unlock(&kvm->lock); kvm_dirty_ring_free(&vcpu->dirty_ring); From patchwork Tue Apr 1 16:10:46 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 14035097 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 74B751A2C3A for ; Tue, 1 Apr 2025 16:11:37 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.133.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1743523899; cv=none; b=bb/Y6L3z09ryh3OxNLXn7OGiCBZFSqxiXryN+EWCCQ/gnZqr2u9JRIDHjhYWKBKu7BNcQIS4AU5robSu17CshXEpNZOalAIS8pNrrGjv1GcMCw+z1U6NlC209ep/chaqeIT7zKAtAy7a/Er+pnVHBcrtGIkSBdNIbAZaeoBZfZg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1743523899; c=relaxed/simple; bh=pIyJCdOg5EfnYtvOavzOlQjVJCjnaMJWsL07IXTnXPc=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=l2BJ8ij4aM18ntr0x04FqertmSsX4Tqir/z6bOMTShzMKqvyqXl0Yb1jCpG8HZ7UwjWxQJF5EWvuPXuGgQ1npTdUb0giN4vjVmF++ZFD2w9gO0oJWSJ4bAAPCoaQD75j6b3bxGHJj4hQ6V9tLV7kcWu0txW5FAjXuPbl7MrQs0M= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=DEM2U3/U; arc=none smtp.client-ip=170.10.133.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com 
header.b="DEM2U3/U" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1743523896; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=P25Au7R1dwmJRVWNgpsYsEFXlIeAhOMlbUwIILW5U9Q=; b=DEM2U3/U3eeO8QNjncAjrysgyTNYgHeOrpTGxDqd6pK0BRwng0Rq3n2qsUdUVMtcZKDi0L dhf1iTzfdS9S2x7TlX/+E2CaItBXve0E8ncpQa8uCO33THRiIoRgWFOMElxd+jppAFJe+p paMtNMaeSKTsd+0CALb4j4cnqjK4xTs= Received: from mail-wr1-f71.google.com (mail-wr1-f71.google.com [209.85.221.71]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-388-3YZRgz6qNX6hy0r2hEo34A-1; Tue, 01 Apr 2025 12:11:34 -0400 X-MC-Unique: 3YZRgz6qNX6hy0r2hEo34A-1 X-Mimecast-MFC-AGG-ID: 3YZRgz6qNX6hy0r2hEo34A_1743523893 Received: by mail-wr1-f71.google.com with SMTP id ffacd0b85a97d-39c184b20a2so1046255f8f.1 for ; Tue, 01 Apr 2025 09:11:34 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1743523893; x=1744128693; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=P25Au7R1dwmJRVWNgpsYsEFXlIeAhOMlbUwIILW5U9Q=; b=w2xhpC/QCLk1eflAZexnGDUeqh3aY3xO8xSkmjmzlHFoZL6KcxfmRBrOOOcw23pQOw n2+aPPWBZxHOvy15WR6oV5ygSQxX87xxVq9ddQk95qjdc29PSuNimYxZWfGmo9R18kF/ tLRGoIjlxn3eOe9u4OKrUdwSLIx+Xg0Vq5Wqp9YvjljVnd5JMJ/gr18rjM8B7kriRQqL XaeZ/qJWej+s77zFFqnQ7B6X5UOMwtrad5LQv/xRYsHw0ESlWWcLaYCRhF4sI0AJyGJ+ NOaK0vABiMFnnHW4Ba1XKBADBj4IRR+apTtgOaGYVhyzEN6SfWbyy4XAvBkPsk1twUxv bQ5Q== X-Forwarded-Encrypted: i=1; AJvYcCU/YJG57t/0Sd58LNskPzxsyEs9zYDby07Rt3r9545aLQUn9o8OCDmHws64haYvCY6j8L8=@vger.kernel.org X-Gm-Message-State: AOJu0YyWFlNEsE8nbwcLkWYYutLGgN641NvfrX9vqt5nRCEuFPiymJ+w 77/H67aZmD7WJGbCCyr4gtpthLBXJ+4RYSxJd08F9CKJfpJl3gRVBrsWOyo3gCw8K2F0wUBoa8P T6PnYzi0MiAgTPMPD/dBVdPgD/i+hRmOSME55cC3ACTGs7blpAg== X-Gm-Gg: ASbGncuJQMj1YrEeApYRZcX5iHpV1jD8JIRjGTRKHhEHt9XPwoTqlMDk01NXU7sSK3V Lmo+iZAOCK1R4bs1681uVTQh0vX+F7c/WMvTvLbchi3iJIaWpB4+6ol1/LpOTJpQvFddPaIw+MO dGD04KiSnVy/ChkfDFxBJP37ZhYnKQb9/psEyg2xX7pD59O8cawxHLRF8+h+ootRn75f1AUrrVq 80AkQ3X3ZV+zYGnJLr7W3KhkOQqHTPIVR2iAX0J3nncwi+Ry8AiJRmWH/i7pMrcy+Rip5rrkE9r miNdkraCQ3dP8/4MWwaWlA== X-Received: by 2002:a05:6000:2913:b0:391:4ca:490 with SMTP id ffacd0b85a97d-39c120e35d1mr10486559f8f.29.1743523892966; Tue, 01 Apr 2025 09:11:32 -0700 (PDT) X-Google-Smtp-Source: AGHT+IHRUznHPy00F+DVXJZ81FziQ7RotBaykrjs+9d/ucxw73LPo+iiCJX9scu8bMnyNgr21RT/Iw== X-Received: by 2002:a05:6000:2913:b0:391:4ca:490 with SMTP id ffacd0b85a97d-39c120e35d1mr10486516f8f.29.1743523892530; Tue, 01 Apr 2025 09:11:32 -0700 (PDT) Received: from [192.168.10.48] ([176.206.111.201]) by smtp.gmail.com with ESMTPSA id 5b1f17b1804b1-43d82efeacasm203836695e9.23.2025.04.01.09.11.30 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 01 Apr 2025 09:11:30 -0700 (PDT) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: roy.hopkins@suse.com, seanjc@google.com, thomas.lendacky@amd.com, ashish.kalra@amd.com, michael.roth@amd.com, jroedel@suse.de, nsaenz@amazon.com, anelkz@amazon.de, James.Bottomley@HansenPartnership.com Subject: [PATCH 09/29] KVM: implement plane file descriptors ioctl and creation Date: Tue, 1 Apr 2025 18:10:46 +0200 Message-ID: <20250401161106.790710-10-pbonzini@redhat.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: 
<20250401161106.790710-1-pbonzini@redhat.com> References: <20250401161106.790710-1-pbonzini@redhat.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Add the file_operations for planes, the means to create new file descriptors for them, and the KVM_CHECK_EXTENSION implementation for the two new capabilities. KVM_SIGNAL_MSI and KVM_SET_MEMORY_ATTRIBUTES are now available through both vm and plane file descriptors, forward them to the same function that is used by the file_operations for planes. KVM_CHECK_EXTENSION instead remains separate, because it only advertises a very small subset of capabilities when applied to plane file descriptors. Signed-off-by: Paolo Bonzini --- include/linux/kvm_host.h | 19 +++++ include/uapi/linux/kvm.h | 2 + virt/kvm/kvm_main.c | 154 +++++++++++++++++++++++++++++++++------ 3 files changed, 154 insertions(+), 21 deletions(-) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 0a91b556767e..dbca418d64f5 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -342,6 +342,8 @@ struct kvm_vcpu { unsigned long guest_debug; struct mutex mutex; + + /* Shared for all planes */ struct kvm_run *run; #ifndef __KVM_HAVE_ARCH_WQP @@ -922,6 +924,23 @@ static inline void kvm_vm_bugged(struct kvm *kvm) } +#if KVM_MAX_VCPU_PLANES == 1 +static inline int kvm_arch_nr_vcpu_planes(struct kvm *kvm) +{ + return KVM_MAX_VCPU_PLANES; +} + +static inline struct kvm_plane *vcpu_to_plane(struct kvm_vcpu *vcpu) +{ + return vcpu->kvm->planes[0]; +} +#else +static inline struct kvm_plane *vcpu_to_plane(struct kvm_vcpu *vcpu) +{ + return vcpu->kvm->planes[vcpu->plane_id]; +} +#endif + #define KVM_BUG(cond, kvm, fmt...) \ ({ \ bool __ret = !!(cond); \ diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index b0cca93ebcb3..96d25c7fa18f 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -1690,4 +1690,6 @@ struct kvm_pre_fault_memory { __u64 padding[5]; }; +#define KVM_CREATE_PLANE _IO(KVMIO, 0xd6) + #endif /* __LINUX_KVM_H */ diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index cd4dfc399cad..b08fea91dc74 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -4388,6 +4388,80 @@ static int kvm_wait_for_vcpu_online(struct kvm_vcpu *vcpu) return 0; } +static int kvm_plane_ioctl_check_extension(struct kvm_plane *plane, long arg) +{ + switch (arg) { +#ifdef CONFIG_HAVE_KVM_MSI + case KVM_CAP_SIGNAL_MSI: +#endif + case KVM_CAP_CHECK_EXTENSION_VM: + return 1; +#ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES + case KVM_CAP_MEMORY_ATTRIBUTES: + return kvm_supported_mem_attributes(plane); +#endif + default: + return 0; + } +} + +static long __kvm_plane_ioctl(struct kvm_plane *plane, unsigned int ioctl, + unsigned long arg) +{ + void __user *argp = (void __user *)arg; + + switch (ioctl) { +#ifdef CONFIG_HAVE_KVM_MSI + case KVM_SIGNAL_MSI: { + struct kvm_msi msi; + + if (copy_from_user(&msi, argp, sizeof(msi))) + return -EFAULT; + return kvm_send_userspace_msi(plane, &msi); + } +#endif +#ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES + case KVM_SET_MEMORY_ATTRIBUTES: { + struct kvm_memory_attributes attrs; + + if (copy_from_user(&attrs, argp, sizeof(attrs))) + return -EFAULT; + return kvm_vm_ioctl_set_mem_attributes(plane, &attrs); + } +#endif + case KVM_CHECK_EXTENSION: + return kvm_plane_ioctl_check_extension(plane, arg); + default: + return -ENOTTY; + } +} + +static long kvm_plane_ioctl(struct file *filp, unsigned int ioctl, + unsigned long arg) +{ + struct 
kvm_plane *plane = filp->private_data; + + if (plane->kvm->mm != current->mm || plane->kvm->vm_dead) + return -EIO; + + return __kvm_plane_ioctl(plane, ioctl, arg); +} + +static int kvm_plane_release(struct inode *inode, struct file *filp) +{ + struct kvm_plane *plane = filp->private_data; + + kvm_put_kvm(plane->kvm); + return 0; +} + +static struct file_operations kvm_plane_fops = { + .unlocked_ioctl = kvm_plane_ioctl, + .release = kvm_plane_release, + KVM_COMPAT(kvm_plane_ioctl), +}; + + static long kvm_vcpu_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg) { @@ -4878,6 +4952,14 @@ static int kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg) if (kvm) return kvm_arch_nr_memslot_as_ids(kvm); return KVM_MAX_NR_ADDRESS_SPACES; +#endif +#if KVM_MAX_VCPU_PLANES > 1 + case KVM_CAP_PLANES: + if (kvm) + return kvm_arch_nr_vcpu_planes(kvm); + return KVM_MAX_PLANES; + case KVM_CAP_PLANES_FPU: + return kvm_arch_planes_share_fpu(kvm); #endif case KVM_CAP_NR_MEMSLOTS: return KVM_USER_MEM_SLOTS; @@ -5112,6 +5194,48 @@ static int kvm_vm_ioctl_get_stats_fd(struct kvm *kvm) return fd; } +static int kvm_vm_ioctl_create_plane(struct kvm *kvm, unsigned id) +{ + struct kvm_plane *plane; + struct file *file; + int r, fd; + + if (id >= KVM_MAX_VCPU_PLANES) + return -EINVAL; + + guard(mutex)(&kvm->lock); + if (kvm->planes[id]) + return -EEXIST; + + fd = get_unused_fd_flags(O_CLOEXEC); + if (fd < 0) + return fd; + + plane = kvm_create_vm_plane(kvm, id); + if (IS_ERR(plane)) { + r = PTR_ERR(plane); + goto put_fd; + } + + kvm_get_kvm(kvm); + file = anon_inode_getfile("kvm-plane", &kvm_plane_fops, plane, O_RDWR); + if (IS_ERR(file)) { + r = PTR_ERR(file); + goto put_kvm; + } + + kvm->planes[id] = plane; + fd_install(fd, file); + return fd; + +put_kvm: + kvm_put_kvm(kvm); + kfree(plane); +put_fd: + put_unused_fd(fd); + return r; +} + #define SANITY_CHECK_MEM_REGION_FIELD(field) \ do { \ BUILD_BUG_ON(offsetof(struct kvm_userspace_memory_region, field) != \ @@ -5130,6 +5254,9 @@ static long kvm_vm_ioctl(struct file *filp, if (kvm->mm != current->mm || kvm->vm_dead) return -EIO; switch (ioctl) { + case KVM_CREATE_PLANE: + r = kvm_vm_ioctl_create_plane(kvm, arg); + break; case KVM_CREATE_VCPU: r = kvm_vm_ioctl_create_vcpu(kvm, arg); break; @@ -5236,16 +5363,12 @@ static long kvm_vm_ioctl(struct file *filp, break; } #ifdef CONFIG_HAVE_KVM_MSI - case KVM_SIGNAL_MSI: { - struct kvm_msi msi; - - r = -EFAULT; - if (copy_from_user(&msi, argp, sizeof(msi))) - goto out; - r = kvm_send_userspace_msi(kvm->planes[0], &msi); - break; - } + case KVM_SIGNAL_MSI: #endif +#ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES + case KVM_SET_MEMORY_ATTRIBUTES: +#endif /* CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES */ + return __kvm_plane_ioctl(kvm->planes[0], ioctl, arg); #ifdef __KVM_HAVE_IRQ_LINE case KVM_IRQ_LINE_STATUS: case KVM_IRQ_LINE: { @@ -5301,18 +5424,6 @@ static long kvm_vm_ioctl(struct file *filp, break; } #endif /* CONFIG_HAVE_KVM_IRQ_ROUTING */ -#ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES - case KVM_SET_MEMORY_ATTRIBUTES: { - struct kvm_memory_attributes attrs; - - r = -EFAULT; - if (copy_from_user(&attrs, argp, sizeof(attrs))) - goto out; - - r = kvm_vm_ioctl_set_mem_attributes(kvm->planes[0], &attrs); - break; - } -#endif /* CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES */ case KVM_CREATE_DEVICE: { struct kvm_create_device cd; @@ -6467,6 +6578,7 @@ int kvm_init(unsigned vcpu_size, unsigned vcpu_align, struct module *module) kvm_chardev_ops.owner = module; kvm_vm_fops.owner = module; kvm_vcpu_fops.owner = module; + 
kvm_plane_fops.owner = module; kvm_device_fops.owner = module; kvm_preempt_ops.sched_in = kvm_sched_in; From patchwork Tue Apr 1 16:10:47 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 14035099 From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: roy.hopkins@suse.com, seanjc@google.com, thomas.lendacky@amd.com, ashish.kalra@amd.com, michael.roth@amd.com, jroedel@suse.de, nsaenz@amazon.com, anelkz@amazon.de, James.Bottomley@HansenPartnership.com Subject: [PATCH 10/29] KVM: share statistics for same vCPU id on different planes Date: Tue, 1 Apr 2025 18:10:47 +0200 Message-ID: <20250401161106.790710-11-pbonzini@redhat.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250401161106.790710-1-pbonzini@redhat.com> References: <20250401161106.790710-1-pbonzini@redhat.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Statistics are protected by vcpu->mutex; because KVM_RUN takes the plane-0 vCPU mutex, there is no race when statistics for all planes are applied to the plane-0 kvm_vcpu struct. This spares the kernel from implementing the binary stats interface for vCPU plane file descriptors, and spares userspace from gathering info from multiple planes. The disadvantage is a slight loss of information, and an extra pointer dereference when updating stats.
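
The mechanical part of the diff below turns every vcpu->stat.foo access into vcpu->stat->foo: "stat" becomes a pointer, and each plane's vCPU aims it at the statistics block owned by the plane-0 vCPU with the same id. As a rough sketch of that aliasing (a standalone userspace program with made-up structure and field names, not the actual kernel change):

#include <stdio.h>

/*
 * Standalone sketch of the stats-sharing scheme described above.  The
 * structure and field names are made up for illustration; they are not
 * the kernel's.
 */
struct vcpu_stat {
	unsigned long exits;
};

struct vcpu {
	int plane_id;
	struct vcpu_stat *stat;      /* what stat->foo++ updates             */
	struct vcpu_stat stat_data;  /* actual storage, used only by plane 0 */
};

static void init_vcpu(struct vcpu *v, struct vcpu *plane0_vcpu, int plane_id)
{
	v->plane_id = plane_id;
	/* Plane 0 owns the storage; the other planes borrow its pointer. */
	v->stat = (plane_id == 0) ? &v->stat_data : plane0_vcpu->stat;
}

int main(void)
{
	struct vcpu plane0 = { 0 }, plane1 = { 0 };

	init_vcpu(&plane0, &plane0, 0);
	init_vcpu(&plane1, &plane0, 1);

	plane0.stat->exits++;   /* an exit while running plane 0 ...       */
	plane1.stat->exits++;   /* ... and one on plane 1: same counter    */

	printf("exits = %lu\n", plane0.stat_data.exits);   /* prints 2 */
	return 0;
}

Both increments land in the same counter, which is the "slight loss of information" mentioned above: per-plane breakdowns are folded into a single per-vCPU-id total.
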
Signed-off-by: Paolo Bonzini --- arch/arm64/kvm/arm.c | 2 +- arch/arm64/kvm/handle_exit.c | 6 +-- arch/arm64/kvm/hyp/nvhe/gen-hyprel.c | 4 +- arch/arm64/kvm/mmio.c | 4 +- arch/loongarch/kvm/exit.c | 8 ++-- arch/loongarch/kvm/vcpu.c | 2 +- arch/mips/kvm/emulate.c | 2 +- arch/mips/kvm/mips.c | 30 +++++++------- arch/mips/kvm/vz.c | 18 ++++----- arch/powerpc/kvm/book3s.c | 2 +- arch/powerpc/kvm/book3s_hv.c | 46 ++++++++++----------- arch/powerpc/kvm/book3s_hv_rm_xics.c | 8 ++-- arch/powerpc/kvm/book3s_pr.c | 22 +++++----- arch/powerpc/kvm/book3s_pr_papr.c | 2 +- arch/powerpc/kvm/powerpc.c | 4 +- arch/powerpc/kvm/timing.h | 28 ++++++------- arch/riscv/kvm/vcpu.c | 2 +- arch/riscv/kvm/vcpu_exit.c | 10 ++--- arch/riscv/kvm/vcpu_insn.c | 16 ++++---- arch/riscv/kvm/vcpu_sbi.c | 2 +- arch/riscv/kvm/vcpu_sbi_hsm.c | 2 +- arch/s390/kvm/diag.c | 18 ++++----- arch/s390/kvm/intercept.c | 20 +++++----- arch/s390/kvm/interrupt.c | 48 +++++++++++----------- arch/s390/kvm/kvm-s390.c | 8 ++-- arch/s390/kvm/priv.c | 60 ++++++++++++++-------------- arch/s390/kvm/sigp.c | 50 +++++++++++------------ arch/s390/kvm/vsie.c | 2 +- arch/x86/kvm/debugfs.c | 2 +- arch/x86/kvm/hyperv.c | 4 +- arch/x86/kvm/kvm_cache_regs.h | 4 +- arch/x86/kvm/mmu/mmu.c | 18 ++++----- arch/x86/kvm/mmu/tdp_mmu.c | 2 +- arch/x86/kvm/svm/sev.c | 2 +- arch/x86/kvm/svm/svm.c | 18 ++++----- arch/x86/kvm/vmx/tdx.c | 8 ++-- arch/x86/kvm/vmx/vmx.c | 20 +++++----- arch/x86/kvm/x86.c | 40 +++++++++---------- include/linux/kvm_host.h | 5 ++- virt/kvm/kvm_main.c | 19 ++++----- 40 files changed, 285 insertions(+), 283 deletions(-) diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 0160b4924351..94fae442a8b8 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -1187,7 +1187,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) ret = kvm_arm_vcpu_enter_exit(vcpu); vcpu->mode = OUTSIDE_GUEST_MODE; - vcpu->stat.exits++; + vcpu->stat->exits++; /* * Back from guest *************************************************************/ diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c index 512d152233ff..b4f69beedd88 100644 --- a/arch/arm64/kvm/handle_exit.c +++ b/arch/arm64/kvm/handle_exit.c @@ -38,7 +38,7 @@ static int handle_hvc(struct kvm_vcpu *vcpu) { trace_kvm_hvc_arm64(*vcpu_pc(vcpu), vcpu_get_reg(vcpu, 0), kvm_vcpu_hvc_get_imm(vcpu)); - vcpu->stat.hvc_exit_stat++; + vcpu->stat->hvc_exit_stat++; /* Forward hvc instructions to the virtual EL2 if the guest has EL2. */ if (vcpu_has_nv(vcpu)) { @@ -132,10 +132,10 @@ static int kvm_handle_wfx(struct kvm_vcpu *vcpu) if (esr & ESR_ELx_WFx_ISS_WFE) { trace_kvm_wfx_arm64(*vcpu_pc(vcpu), true); - vcpu->stat.wfe_exit_stat++; + vcpu->stat->wfe_exit_stat++; } else { trace_kvm_wfx_arm64(*vcpu_pc(vcpu), false); - vcpu->stat.wfi_exit_stat++; + vcpu->stat->wfi_exit_stat++; } if (esr & ESR_ELx_WFx_ISS_WFxT) { diff --git a/arch/arm64/kvm/hyp/nvhe/gen-hyprel.c b/arch/arm64/kvm/hyp/nvhe/gen-hyprel.c index b63f4e1c1033..b7c3f3b8cc26 100644 --- a/arch/arm64/kvm/hyp/nvhe/gen-hyprel.c +++ b/arch/arm64/kvm/hyp/nvhe/gen-hyprel.c @@ -266,7 +266,7 @@ static void init_elf(const char *path) } /* mmap() the entire ELF file read-only at an arbitrary address. */ - elf.begin = mmap(0, stat.st_size, PROT_READ, MAP_PRIVATE, fd, 0); + elf.begin = mmap(0, stat->st_size, PROT_READ, MAP_PRIVATE, fd, 0); if (elf.begin == MAP_FAILED) { close(fd); fatal_perror("Could not mmap ELF file"); @@ -276,7 +276,7 @@ static void init_elf(const char *path) close(fd); /* Get pointer to the ELF header. 
*/ - assert_ge(stat.st_size, sizeof(*elf.ehdr), "%lu"); + assert_ge(stat->st_size, sizeof(*elf.ehdr), "%lu"); elf.ehdr = elf_ptr(Elf64_Ehdr, 0); /* Check the ELF magic. */ diff --git a/arch/arm64/kvm/mmio.c b/arch/arm64/kvm/mmio.c index ab365e839874..96c5fd5146ba 100644 --- a/arch/arm64/kvm/mmio.c +++ b/arch/arm64/kvm/mmio.c @@ -221,14 +221,14 @@ int io_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa) /* We handled the access successfully in the kernel. */ if (!is_write) memcpy(run->mmio.data, data_buf, len); - vcpu->stat.mmio_exit_kernel++; + vcpu->stat->mmio_exit_kernel++; kvm_handle_mmio_return(vcpu); return 1; } if (is_write) memcpy(run->mmio.data, data_buf, len); - vcpu->stat.mmio_exit_user++; + vcpu->stat->mmio_exit_user++; run->exit_reason = KVM_EXIT_MMIO; return 0; } diff --git a/arch/loongarch/kvm/exit.c b/arch/loongarch/kvm/exit.c index ea321403644a..ee5b3673efc8 100644 --- a/arch/loongarch/kvm/exit.c +++ b/arch/loongarch/kvm/exit.c @@ -31,7 +31,7 @@ static int kvm_emu_cpucfg(struct kvm_vcpu *vcpu, larch_inst inst) rd = inst.reg2_format.rd; rj = inst.reg2_format.rj; - ++vcpu->stat.cpucfg_exits; + ++vcpu->stat->cpucfg_exits; index = vcpu->arch.gprs[rj]; /* @@ -264,7 +264,7 @@ int kvm_complete_iocsr_read(struct kvm_vcpu *vcpu, struct kvm_run *run) int kvm_emu_idle(struct kvm_vcpu *vcpu) { - ++vcpu->stat.idle_exits; + ++vcpu->stat->idle_exits; trace_kvm_exit_idle(vcpu, KVM_TRACE_EXIT_IDLE); if (!kvm_arch_vcpu_runnable(vcpu)) @@ -884,7 +884,7 @@ static int kvm_handle_hypercall(struct kvm_vcpu *vcpu) switch (code) { case KVM_HCALL_SERVICE: - vcpu->stat.hypercall_exits++; + vcpu->stat->hypercall_exits++; kvm_handle_service(vcpu); break; case KVM_HCALL_USER_SERVICE: @@ -893,7 +893,7 @@ static int kvm_handle_hypercall(struct kvm_vcpu *vcpu) break; } - vcpu->stat.hypercall_exits++; + vcpu->stat->hypercall_exits++; vcpu->run->exit_reason = KVM_EXIT_HYPERCALL; vcpu->run->hypercall.nr = KVM_HCALL_USER_SERVICE; vcpu->run->hypercall.args[0] = kvm_read_reg(vcpu, LOONGARCH_GPR_A0); diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c index 552cde722932..470c79e79281 100644 --- a/arch/loongarch/kvm/vcpu.c +++ b/arch/loongarch/kvm/vcpu.c @@ -330,7 +330,7 @@ static int kvm_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu) ret = kvm_handle_fault(vcpu, ecode); } else { WARN(!intr, "vm exiting with suspicious irq\n"); - ++vcpu->stat.int_exits; + ++vcpu->stat->int_exits; } if (ret == RESUME_GUEST) diff --git a/arch/mips/kvm/emulate.c b/arch/mips/kvm/emulate.c index 0feec52222fb..c9f83b500078 100644 --- a/arch/mips/kvm/emulate.c +++ b/arch/mips/kvm/emulate.c @@ -947,7 +947,7 @@ enum emulation_result kvm_mips_emul_wait(struct kvm_vcpu *vcpu) kvm_debug("[%#lx] !!!WAIT!!! 
(%#lx)\n", vcpu->arch.pc, vcpu->arch.pending_exceptions); - ++vcpu->stat.wait_exits; + ++vcpu->stat->wait_exits; trace_kvm_exit(vcpu, KVM_TRACE_EXIT_WAIT); if (!vcpu->arch.pending_exceptions) { kvm_vz_lose_htimer(vcpu); diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c index 60b43ea85c12..77637d201699 100644 --- a/arch/mips/kvm/mips.c +++ b/arch/mips/kvm/mips.c @@ -1199,7 +1199,7 @@ static int __kvm_mips_handle_exit(struct kvm_vcpu *vcpu) case EXCCODE_INT: kvm_debug("[%d]EXCCODE_INT @ %p\n", vcpu->vcpu_id, opc); - ++vcpu->stat.int_exits; + ++vcpu->stat->int_exits; if (need_resched()) cond_resched(); @@ -1210,7 +1210,7 @@ static int __kvm_mips_handle_exit(struct kvm_vcpu *vcpu) case EXCCODE_CPU: kvm_debug("EXCCODE_CPU: @ PC: %p\n", opc); - ++vcpu->stat.cop_unusable_exits; + ++vcpu->stat->cop_unusable_exits; ret = kvm_mips_callbacks->handle_cop_unusable(vcpu); /* XXXKYMA: Might need to return to user space */ if (run->exit_reason == KVM_EXIT_IRQ_WINDOW_OPEN) @@ -1218,7 +1218,7 @@ static int __kvm_mips_handle_exit(struct kvm_vcpu *vcpu) break; case EXCCODE_MOD: - ++vcpu->stat.tlbmod_exits; + ++vcpu->stat->tlbmod_exits; ret = kvm_mips_callbacks->handle_tlb_mod(vcpu); break; @@ -1227,7 +1227,7 @@ static int __kvm_mips_handle_exit(struct kvm_vcpu *vcpu) cause, kvm_read_c0_guest_status(&vcpu->arch.cop0), opc, badvaddr); - ++vcpu->stat.tlbmiss_st_exits; + ++vcpu->stat->tlbmiss_st_exits; ret = kvm_mips_callbacks->handle_tlb_st_miss(vcpu); break; @@ -1235,52 +1235,52 @@ static int __kvm_mips_handle_exit(struct kvm_vcpu *vcpu) kvm_debug("TLB LD fault: cause %#x, PC: %p, BadVaddr: %#lx\n", cause, opc, badvaddr); - ++vcpu->stat.tlbmiss_ld_exits; + ++vcpu->stat->tlbmiss_ld_exits; ret = kvm_mips_callbacks->handle_tlb_ld_miss(vcpu); break; case EXCCODE_ADES: - ++vcpu->stat.addrerr_st_exits; + ++vcpu->stat->addrerr_st_exits; ret = kvm_mips_callbacks->handle_addr_err_st(vcpu); break; case EXCCODE_ADEL: - ++vcpu->stat.addrerr_ld_exits; + ++vcpu->stat->addrerr_ld_exits; ret = kvm_mips_callbacks->handle_addr_err_ld(vcpu); break; case EXCCODE_SYS: - ++vcpu->stat.syscall_exits; + ++vcpu->stat->syscall_exits; ret = kvm_mips_callbacks->handle_syscall(vcpu); break; case EXCCODE_RI: - ++vcpu->stat.resvd_inst_exits; + ++vcpu->stat->resvd_inst_exits; ret = kvm_mips_callbacks->handle_res_inst(vcpu); break; case EXCCODE_BP: - ++vcpu->stat.break_inst_exits; + ++vcpu->stat->break_inst_exits; ret = kvm_mips_callbacks->handle_break(vcpu); break; case EXCCODE_TR: - ++vcpu->stat.trap_inst_exits; + ++vcpu->stat->trap_inst_exits; ret = kvm_mips_callbacks->handle_trap(vcpu); break; case EXCCODE_MSAFPE: - ++vcpu->stat.msa_fpe_exits; + ++vcpu->stat->msa_fpe_exits; ret = kvm_mips_callbacks->handle_msa_fpe(vcpu); break; case EXCCODE_FPE: - ++vcpu->stat.fpe_exits; + ++vcpu->stat->fpe_exits; ret = kvm_mips_callbacks->handle_fpe(vcpu); break; case EXCCODE_MSADIS: - ++vcpu->stat.msa_disabled_exits; + ++vcpu->stat->msa_disabled_exits; ret = kvm_mips_callbacks->handle_msa_disabled(vcpu); break; @@ -1317,7 +1317,7 @@ static int __kvm_mips_handle_exit(struct kvm_vcpu *vcpu) if (signal_pending(current)) { run->exit_reason = KVM_EXIT_INTR; ret = (-EINTR << 2) | RESUME_HOST; - ++vcpu->stat.signal_exits; + ++vcpu->stat->signal_exits; trace_kvm_exit(vcpu, KVM_TRACE_EXIT_SIGNAL); } } diff --git a/arch/mips/kvm/vz.c b/arch/mips/kvm/vz.c index ccab4d76b126..c37fd7b3e608 100644 --- a/arch/mips/kvm/vz.c +++ b/arch/mips/kvm/vz.c @@ -1162,7 +1162,7 @@ static enum emulation_result kvm_vz_gpsi_lwc2(union mips_instruction inst, rd = 
inst.loongson3_lscsr_format.rd; switch (inst.loongson3_lscsr_format.fr) { case 0x8: /* Read CPUCFG */ - ++vcpu->stat.vz_cpucfg_exits; + ++vcpu->stat->vz_cpucfg_exits; hostcfg = read_cpucfg(vcpu->arch.gprs[rs]); switch (vcpu->arch.gprs[rs]) { @@ -1491,38 +1491,38 @@ static int kvm_trap_vz_handle_guest_exit(struct kvm_vcpu *vcpu) trace_kvm_exit(vcpu, KVM_TRACE_EXIT_GEXCCODE_BASE + gexccode); switch (gexccode) { case MIPS_GCTL0_GEXC_GPSI: - ++vcpu->stat.vz_gpsi_exits; + ++vcpu->stat->vz_gpsi_exits; er = kvm_trap_vz_handle_gpsi(cause, opc, vcpu); break; case MIPS_GCTL0_GEXC_GSFC: - ++vcpu->stat.vz_gsfc_exits; + ++vcpu->stat->vz_gsfc_exits; er = kvm_trap_vz_handle_gsfc(cause, opc, vcpu); break; case MIPS_GCTL0_GEXC_HC: - ++vcpu->stat.vz_hc_exits; + ++vcpu->stat->vz_hc_exits; er = kvm_trap_vz_handle_hc(cause, opc, vcpu); break; case MIPS_GCTL0_GEXC_GRR: - ++vcpu->stat.vz_grr_exits; + ++vcpu->stat->vz_grr_exits; er = kvm_trap_vz_no_handler_guest_exit(gexccode, cause, opc, vcpu); break; case MIPS_GCTL0_GEXC_GVA: - ++vcpu->stat.vz_gva_exits; + ++vcpu->stat->vz_gva_exits; er = kvm_trap_vz_no_handler_guest_exit(gexccode, cause, opc, vcpu); break; case MIPS_GCTL0_GEXC_GHFC: - ++vcpu->stat.vz_ghfc_exits; + ++vcpu->stat->vz_ghfc_exits; er = kvm_trap_vz_handle_ghfc(cause, opc, vcpu); break; case MIPS_GCTL0_GEXC_GPA: - ++vcpu->stat.vz_gpa_exits; + ++vcpu->stat->vz_gpa_exits; er = kvm_trap_vz_no_handler_guest_exit(gexccode, cause, opc, vcpu); break; default: - ++vcpu->stat.vz_resvd_exits; + ++vcpu->stat->vz_resvd_exits; er = kvm_trap_vz_no_handler_guest_exit(gexccode, cause, opc, vcpu); break; diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c index d79c5d1098c0..7ea6955cd96c 100644 --- a/arch/powerpc/kvm/book3s.c +++ b/arch/powerpc/kvm/book3s.c @@ -178,7 +178,7 @@ void kvmppc_book3s_dequeue_irqprio(struct kvm_vcpu *vcpu, void kvmppc_book3s_queue_irqprio(struct kvm_vcpu *vcpu, unsigned int vec) { - vcpu->stat.queue_intr++; + vcpu->stat->queue_intr++; set_bit(kvmppc_book3s_vec2irqprio(vec), &vcpu->arch.pending_exceptions); diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 86bff159c51e..6e94ffc0bb6b 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -238,7 +238,7 @@ static void kvmppc_fast_vcpu_kick_hv(struct kvm_vcpu *vcpu) waitp = kvm_arch_vcpu_get_wait(vcpu); if (rcuwait_wake_up(waitp)) - ++vcpu->stat.generic.halt_wakeup; + ++vcpu->stat->generic.halt_wakeup; cpu = READ_ONCE(vcpu->arch.thread_cpu); if (cpu >= 0 && kvmppc_ipi_thread(cpu)) @@ -1633,7 +1633,7 @@ static int kvmppc_handle_exit_hv(struct kvm_vcpu *vcpu, struct kvm_run *run = vcpu->run; int r = RESUME_HOST; - vcpu->stat.sum_exits++; + vcpu->stat->sum_exits++; /* * This can happen if an interrupt occurs in the last stages @@ -1662,13 +1662,13 @@ static int kvmppc_handle_exit_hv(struct kvm_vcpu *vcpu, vcpu->arch.trap = BOOK3S_INTERRUPT_HV_DECREMENTER; fallthrough; case BOOK3S_INTERRUPT_HV_DECREMENTER: - vcpu->stat.dec_exits++; + vcpu->stat->dec_exits++; r = RESUME_GUEST; break; case BOOK3S_INTERRUPT_EXTERNAL: case BOOK3S_INTERRUPT_H_DOORBELL: case BOOK3S_INTERRUPT_H_VIRT: - vcpu->stat.ext_intr_exits++; + vcpu->stat->ext_intr_exits++; r = RESUME_GUEST; break; /* SR/HMI/PMI are HV interrupts that host has handled. 
Resume guest.*/ @@ -1971,7 +1971,7 @@ static int kvmppc_handle_nested_exit(struct kvm_vcpu *vcpu) int r; int srcu_idx; - vcpu->stat.sum_exits++; + vcpu->stat->sum_exits++; /* * This can happen if an interrupt occurs in the last stages @@ -1992,22 +1992,22 @@ static int kvmppc_handle_nested_exit(struct kvm_vcpu *vcpu) switch (vcpu->arch.trap) { /* We're good on these - the host merely wanted to get our attention */ case BOOK3S_INTERRUPT_HV_DECREMENTER: - vcpu->stat.dec_exits++; + vcpu->stat->dec_exits++; r = RESUME_GUEST; break; case BOOK3S_INTERRUPT_EXTERNAL: - vcpu->stat.ext_intr_exits++; + vcpu->stat->ext_intr_exits++; r = RESUME_HOST; break; case BOOK3S_INTERRUPT_H_DOORBELL: case BOOK3S_INTERRUPT_H_VIRT: - vcpu->stat.ext_intr_exits++; + vcpu->stat->ext_intr_exits++; r = RESUME_GUEST; break; /* These need to go to the nested HV */ case BOOK3S_INTERRUPT_NESTED_HV_DECREMENTER: vcpu->arch.trap = BOOK3S_INTERRUPT_HV_DECREMENTER; - vcpu->stat.dec_exits++; + vcpu->stat->dec_exits++; r = RESUME_HOST; break; /* SR/HMI/PMI are HV interrupts that host has handled. Resume guest.*/ @@ -4614,7 +4614,7 @@ static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc) cur = start_poll = ktime_get(); if (vc->halt_poll_ns) { ktime_t stop = ktime_add_ns(start_poll, vc->halt_poll_ns); - ++vc->runner->stat.generic.halt_attempted_poll; + ++vc->runner->stat->generic.halt_attempted_poll; vc->vcore_state = VCORE_POLLING; spin_unlock(&vc->lock); @@ -4631,7 +4631,7 @@ static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc) vc->vcore_state = VCORE_INACTIVE; if (!do_sleep) { - ++vc->runner->stat.generic.halt_successful_poll; + ++vc->runner->stat->generic.halt_successful_poll; goto out; } } @@ -4643,7 +4643,7 @@ static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc) do_sleep = 0; /* If we polled, count this as a successful poll */ if (vc->halt_poll_ns) - ++vc->runner->stat.generic.halt_successful_poll; + ++vc->runner->stat->generic.halt_successful_poll; goto out; } @@ -4657,7 +4657,7 @@ static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc) spin_lock(&vc->lock); vc->vcore_state = VCORE_INACTIVE; trace_kvmppc_vcore_blocked(vc->runner, 1); - ++vc->runner->stat.halt_successful_wait; + ++vc->runner->stat->halt_successful_wait; cur = ktime_get(); @@ -4666,29 +4666,29 @@ static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc) /* Attribute wait time */ if (do_sleep) { - vc->runner->stat.generic.halt_wait_ns += + vc->runner->stat->generic.halt_wait_ns += ktime_to_ns(cur) - ktime_to_ns(start_wait); KVM_STATS_LOG_HIST_UPDATE( - vc->runner->stat.generic.halt_wait_hist, + vc->runner->stat->generic.halt_wait_hist, ktime_to_ns(cur) - ktime_to_ns(start_wait)); /* Attribute failed poll time */ if (vc->halt_poll_ns) { - vc->runner->stat.generic.halt_poll_fail_ns += + vc->runner->stat->generic.halt_poll_fail_ns += ktime_to_ns(start_wait) - ktime_to_ns(start_poll); KVM_STATS_LOG_HIST_UPDATE( - vc->runner->stat.generic.halt_poll_fail_hist, + vc->runner->stat->generic.halt_poll_fail_hist, ktime_to_ns(start_wait) - ktime_to_ns(start_poll)); } } else { /* Attribute successful poll time */ if (vc->halt_poll_ns) { - vc->runner->stat.generic.halt_poll_success_ns += + vc->runner->stat->generic.halt_poll_success_ns += ktime_to_ns(cur) - ktime_to_ns(start_poll); KVM_STATS_LOG_HIST_UPDATE( - vc->runner->stat.generic.halt_poll_success_hist, + vc->runner->stat->generic.halt_poll_success_hist, ktime_to_ns(cur) - ktime_to_ns(start_poll)); } } @@ -4807,7 +4807,7 @@ static int kvmppc_run_vcpu(struct kvm_vcpu *vcpu) 
kvmppc_core_prepare_to_enter(v); if (signal_pending(v->arch.run_task)) { kvmppc_remove_runnable(vc, v, mftb()); - v->stat.signal_exits++; + v->stat->signal_exits++; v->run->exit_reason = KVM_EXIT_INTR; v->arch.ret = -EINTR; wake_up(&v->arch.cpu_run); @@ -4848,7 +4848,7 @@ static int kvmppc_run_vcpu(struct kvm_vcpu *vcpu) if (vcpu->arch.state == KVMPPC_VCPU_RUNNABLE) { kvmppc_remove_runnable(vc, vcpu, mftb()); - vcpu->stat.signal_exits++; + vcpu->stat->signal_exits++; run->exit_reason = KVM_EXIT_INTR; vcpu->arch.ret = -EINTR; } @@ -5047,7 +5047,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, for (;;) { set_current_state(TASK_INTERRUPTIBLE); if (signal_pending(current)) { - vcpu->stat.signal_exits++; + vcpu->stat->signal_exits++; run->exit_reason = KVM_EXIT_INTR; vcpu->arch.ret = -EINTR; break; @@ -5070,7 +5070,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, return vcpu->arch.ret; sigpend: - vcpu->stat.signal_exits++; + vcpu->stat->signal_exits++; run->exit_reason = KVM_EXIT_INTR; vcpu->arch.ret = -EINTR; out: diff --git a/arch/powerpc/kvm/book3s_hv_rm_xics.c b/arch/powerpc/kvm/book3s_hv_rm_xics.c index f2636414d82a..59f740a88581 100644 --- a/arch/powerpc/kvm/book3s_hv_rm_xics.c +++ b/arch/powerpc/kvm/book3s_hv_rm_xics.c @@ -132,7 +132,7 @@ static void icp_rm_set_vcpu_irq(struct kvm_vcpu *vcpu, int hcore; /* Mark the target VCPU as having an interrupt pending */ - vcpu->stat.queue_intr++; + vcpu->stat->queue_intr++; set_bit(BOOK3S_IRQPRIO_EXTERNAL, &vcpu->arch.pending_exceptions); /* Kick self ? Just set MER and return */ @@ -713,14 +713,14 @@ static int ics_rm_eoi(struct kvm_vcpu *vcpu, u32 irq) /* Handle passthrough interrupts */ if (state->host_irq) { - ++vcpu->stat.pthru_all; + ++vcpu->stat->pthru_all; if (state->intr_cpu != -1) { int pcpu = raw_smp_processor_id(); pcpu = cpu_first_thread_sibling(pcpu); - ++vcpu->stat.pthru_host; + ++vcpu->stat->pthru_host; if (state->intr_cpu != pcpu) { - ++vcpu->stat.pthru_bad_aff; + ++vcpu->stat->pthru_bad_aff; xics_opal_set_server(state->host_irq, pcpu); } state->intr_cpu = -1; diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c index 83bcdc80ce51..8cbf7ecc796d 100644 --- a/arch/powerpc/kvm/book3s_pr.c +++ b/arch/powerpc/kvm/book3s_pr.c @@ -493,7 +493,7 @@ static void kvmppc_set_msr_pr(struct kvm_vcpu *vcpu, u64 msr) if (msr & MSR_POW) { if (!vcpu->arch.pending_exceptions) { kvm_vcpu_halt(vcpu); - vcpu->stat.generic.halt_wakeup++; + vcpu->stat->generic.halt_wakeup++; /* Unset POW bit after we woke up */ msr &= ~MSR_POW; @@ -776,13 +776,13 @@ static int kvmppc_handle_pagefault(struct kvm_vcpu *vcpu, return RESUME_HOST; } if (data) - vcpu->stat.sp_storage++; + vcpu->stat->sp_storage++; else if (vcpu->arch.mmu.is_dcbz32(vcpu) && (!(vcpu->arch.hflags & BOOK3S_HFLAG_DCBZ32))) kvmppc_patch_dcbz(vcpu, &pte); } else { /* MMIO */ - vcpu->stat.mmio_exits++; + vcpu->stat->mmio_exits++; vcpu->arch.paddr_accessed = pte.raddr; vcpu->arch.vaddr_accessed = pte.eaddr; r = kvmppc_emulate_mmio(vcpu); @@ -1103,7 +1103,7 @@ static int kvmppc_exit_pr_progint(struct kvm_vcpu *vcpu, unsigned int exit_nr) } } - vcpu->stat.emulated_inst_exits++; + vcpu->stat->emulated_inst_exits++; er = kvmppc_emulate_instruction(vcpu); switch (er) { case EMULATE_DONE: @@ -1138,7 +1138,7 @@ int kvmppc_handle_exit_pr(struct kvm_vcpu *vcpu, unsigned int exit_nr) int r = RESUME_HOST; int s; - vcpu->stat.sum_exits++; + vcpu->stat->sum_exits++; run->exit_reason = KVM_EXIT_UNKNOWN; run->ready_for_interrupt_injection = 1; @@ -1152,7 
+1152,7 @@ int kvmppc_handle_exit_pr(struct kvm_vcpu *vcpu, unsigned int exit_nr) case BOOK3S_INTERRUPT_INST_STORAGE: { ulong shadow_srr1 = vcpu->arch.shadow_srr1; - vcpu->stat.pf_instruc++; + vcpu->stat->pf_instruc++; if (kvmppc_is_split_real(vcpu)) kvmppc_fixup_split_real(vcpu); @@ -1180,7 +1180,7 @@ int kvmppc_handle_exit_pr(struct kvm_vcpu *vcpu, unsigned int exit_nr) int idx = srcu_read_lock(&vcpu->kvm->srcu); r = kvmppc_handle_pagefault(vcpu, kvmppc_get_pc(vcpu), exit_nr); srcu_read_unlock(&vcpu->kvm->srcu, idx); - vcpu->stat.sp_instruc++; + vcpu->stat->sp_instruc++; } else if (vcpu->arch.mmu.is_dcbz32(vcpu) && (!(vcpu->arch.hflags & BOOK3S_HFLAG_DCBZ32))) { /* @@ -1201,7 +1201,7 @@ int kvmppc_handle_exit_pr(struct kvm_vcpu *vcpu, unsigned int exit_nr) { ulong dar = kvmppc_get_fault_dar(vcpu); u32 fault_dsisr = vcpu->arch.fault_dsisr; - vcpu->stat.pf_storage++; + vcpu->stat->pf_storage++; #ifdef CONFIG_PPC_BOOK3S_32 /* We set segments as unused segments when invalidating them. So @@ -1256,13 +1256,13 @@ int kvmppc_handle_exit_pr(struct kvm_vcpu *vcpu, unsigned int exit_nr) case BOOK3S_INTERRUPT_HV_DECREMENTER: case BOOK3S_INTERRUPT_DOORBELL: case BOOK3S_INTERRUPT_H_DOORBELL: - vcpu->stat.dec_exits++; + vcpu->stat->dec_exits++; r = RESUME_GUEST; break; case BOOK3S_INTERRUPT_EXTERNAL: case BOOK3S_INTERRUPT_EXTERNAL_HV: case BOOK3S_INTERRUPT_H_VIRT: - vcpu->stat.ext_intr_exits++; + vcpu->stat->ext_intr_exits++; r = RESUME_GUEST; break; case BOOK3S_INTERRUPT_HMI: @@ -1331,7 +1331,7 @@ int kvmppc_handle_exit_pr(struct kvm_vcpu *vcpu, unsigned int exit_nr) r = RESUME_GUEST; } else { /* Guest syscalls */ - vcpu->stat.syscall_exits++; + vcpu->stat->syscall_exits++; kvmppc_book3s_queue_irqprio(vcpu, exit_nr); r = RESUME_GUEST; } diff --git a/arch/powerpc/kvm/book3s_pr_papr.c b/arch/powerpc/kvm/book3s_pr_papr.c index b2c89e850d7a..8f007a86de40 100644 --- a/arch/powerpc/kvm/book3s_pr_papr.c +++ b/arch/powerpc/kvm/book3s_pr_papr.c @@ -393,7 +393,7 @@ int kvmppc_h_pr(struct kvm_vcpu *vcpu, unsigned long cmd) case H_CEDE: kvmppc_set_msr_fast(vcpu, kvmppc_get_msr(vcpu) | MSR_EE); kvm_vcpu_halt(vcpu); - vcpu->stat.generic.halt_wakeup++; + vcpu->stat->generic.halt_wakeup++; return EMULATE_DONE; case H_LOGICAL_CI_LOAD: return kvmppc_h_pr_logical_ci_load(vcpu); diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c index ce1d91eed231..a39919dbaffb 100644 --- a/arch/powerpc/kvm/powerpc.c +++ b/arch/powerpc/kvm/powerpc.c @@ -352,7 +352,7 @@ int kvmppc_st(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr, struct kvmppc_pte pte; int r = -EINVAL; - vcpu->stat.st++; + vcpu->stat->st++; if (vcpu->kvm->arch.kvm_ops && vcpu->kvm->arch.kvm_ops->store_to_eaddr) r = vcpu->kvm->arch.kvm_ops->store_to_eaddr(vcpu, eaddr, ptr, @@ -395,7 +395,7 @@ int kvmppc_ld(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr, struct kvmppc_pte pte; int rc = -EINVAL; - vcpu->stat.ld++; + vcpu->stat->ld++; if (vcpu->kvm->arch.kvm_ops && vcpu->kvm->arch.kvm_ops->load_from_eaddr) rc = vcpu->kvm->arch.kvm_ops->load_from_eaddr(vcpu, eaddr, ptr, diff --git a/arch/powerpc/kvm/timing.h b/arch/powerpc/kvm/timing.h index 45817ab82bb4..529f32e7aaf1 100644 --- a/arch/powerpc/kvm/timing.h +++ b/arch/powerpc/kvm/timing.h @@ -45,46 +45,46 @@ static inline void kvmppc_account_exit_stat(struct kvm_vcpu *vcpu, int type) */ switch (type) { case EXT_INTR_EXITS: - vcpu->stat.ext_intr_exits++; + vcpu->stat->ext_intr_exits++; break; case DEC_EXITS: - vcpu->stat.dec_exits++; + vcpu->stat->dec_exits++; break; case 
EMULATED_INST_EXITS: - vcpu->stat.emulated_inst_exits++; + vcpu->stat->emulated_inst_exits++; break; case DSI_EXITS: - vcpu->stat.dsi_exits++; + vcpu->stat->dsi_exits++; break; case ISI_EXITS: - vcpu->stat.isi_exits++; + vcpu->stat->isi_exits++; break; case SYSCALL_EXITS: - vcpu->stat.syscall_exits++; + vcpu->stat->syscall_exits++; break; case DTLB_REAL_MISS_EXITS: - vcpu->stat.dtlb_real_miss_exits++; + vcpu->stat->dtlb_real_miss_exits++; break; case DTLB_VIRT_MISS_EXITS: - vcpu->stat.dtlb_virt_miss_exits++; + vcpu->stat->dtlb_virt_miss_exits++; break; case MMIO_EXITS: - vcpu->stat.mmio_exits++; + vcpu->stat->mmio_exits++; break; case ITLB_REAL_MISS_EXITS: - vcpu->stat.itlb_real_miss_exits++; + vcpu->stat->itlb_real_miss_exits++; break; case ITLB_VIRT_MISS_EXITS: - vcpu->stat.itlb_virt_miss_exits++; + vcpu->stat->itlb_virt_miss_exits++; break; case SIGNAL_EXITS: - vcpu->stat.signal_exits++; + vcpu->stat->signal_exits++; break; case DBELL_EXITS: - vcpu->stat.dbell_exits++; + vcpu->stat->dbell_exits++; break; case GDBELL_EXITS: - vcpu->stat.gdbell_exits++; + vcpu->stat->gdbell_exits++; break; } } diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c index 60d684c76c58..55fb16307cc6 100644 --- a/arch/riscv/kvm/vcpu.c +++ b/arch/riscv/kvm/vcpu.c @@ -967,7 +967,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) kvm_riscv_vcpu_enter_exit(vcpu, &trap); vcpu->mode = OUTSIDE_GUEST_MODE; - vcpu->stat.exits++; + vcpu->stat->exits++; /* Syncup interrupts state with HW */ kvm_riscv_vcpu_sync_interrupts(vcpu); diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c index 6e0c18412795..73116dd903e5 100644 --- a/arch/riscv/kvm/vcpu_exit.c +++ b/arch/riscv/kvm/vcpu_exit.c @@ -195,27 +195,27 @@ int kvm_riscv_vcpu_exit(struct kvm_vcpu *vcpu, struct kvm_run *run, switch (trap->scause) { case EXC_INST_ILLEGAL: kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_ILLEGAL_INSN); - vcpu->stat.instr_illegal_exits++; + vcpu->stat->instr_illegal_exits++; ret = vcpu_redirect(vcpu, trap); break; case EXC_LOAD_MISALIGNED: kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_MISALIGNED_LOAD); - vcpu->stat.load_misaligned_exits++; + vcpu->stat->load_misaligned_exits++; ret = vcpu_redirect(vcpu, trap); break; case EXC_STORE_MISALIGNED: kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_MISALIGNED_STORE); - vcpu->stat.store_misaligned_exits++; + vcpu->stat->store_misaligned_exits++; ret = vcpu_redirect(vcpu, trap); break; case EXC_LOAD_ACCESS: kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_ACCESS_LOAD); - vcpu->stat.load_access_exits++; + vcpu->stat->load_access_exits++; ret = vcpu_redirect(vcpu, trap); break; case EXC_STORE_ACCESS: kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_ACCESS_STORE); - vcpu->stat.store_access_exits++; + vcpu->stat->store_access_exits++; ret = vcpu_redirect(vcpu, trap); break; case EXC_INST_ACCESS: diff --git a/arch/riscv/kvm/vcpu_insn.c b/arch/riscv/kvm/vcpu_insn.c index 97dec18e6989..43911b8a3f1b 100644 --- a/arch/riscv/kvm/vcpu_insn.c +++ b/arch/riscv/kvm/vcpu_insn.c @@ -201,14 +201,14 @@ void kvm_riscv_vcpu_wfi(struct kvm_vcpu *vcpu) static int wfi_insn(struct kvm_vcpu *vcpu, struct kvm_run *run, ulong insn) { - vcpu->stat.wfi_exit_stat++; + vcpu->stat->wfi_exit_stat++; kvm_riscv_vcpu_wfi(vcpu); return KVM_INSN_CONTINUE_NEXT_SEPC; } static int wrs_insn(struct kvm_vcpu *vcpu, struct kvm_run *run, ulong insn) { - vcpu->stat.wrs_exit_stat++; + vcpu->stat->wrs_exit_stat++; kvm_vcpu_on_spin(vcpu, vcpu->arch.guest_context.sstatus & SR_SPP); return KVM_INSN_CONTINUE_NEXT_SEPC; } @@ -335,7 +335,7 @@ static 
int csr_insn(struct kvm_vcpu *vcpu, struct kvm_run *run, ulong insn) if (rc > KVM_INSN_EXIT_TO_USER_SPACE) { if (rc == KVM_INSN_CONTINUE_NEXT_SEPC) { run->riscv_csr.ret_value = val; - vcpu->stat.csr_exit_kernel++; + vcpu->stat->csr_exit_kernel++; kvm_riscv_vcpu_csr_return(vcpu, run); rc = KVM_INSN_CONTINUE_SAME_SEPC; } @@ -345,7 +345,7 @@ static int csr_insn(struct kvm_vcpu *vcpu, struct kvm_run *run, ulong insn) /* Exit to user-space for CSR emulation */ if (rc <= KVM_INSN_EXIT_TO_USER_SPACE) { - vcpu->stat.csr_exit_user++; + vcpu->stat->csr_exit_user++; run->exit_reason = KVM_EXIT_RISCV_CSR; } @@ -576,13 +576,13 @@ int kvm_riscv_vcpu_mmio_load(struct kvm_vcpu *vcpu, struct kvm_run *run, if (!kvm_io_bus_read(vcpu, KVM_MMIO_BUS, fault_addr, len, data_buf)) { /* Successfully handled MMIO access in the kernel so resume */ memcpy(run->mmio.data, data_buf, len); - vcpu->stat.mmio_exit_kernel++; + vcpu->stat->mmio_exit_kernel++; kvm_riscv_vcpu_mmio_return(vcpu, run); return 1; } /* Exit to userspace for MMIO emulation */ - vcpu->stat.mmio_exit_user++; + vcpu->stat->mmio_exit_user++; run->exit_reason = KVM_EXIT_MMIO; return 0; @@ -709,13 +709,13 @@ int kvm_riscv_vcpu_mmio_store(struct kvm_vcpu *vcpu, struct kvm_run *run, if (!kvm_io_bus_write(vcpu, KVM_MMIO_BUS, fault_addr, len, run->mmio.data)) { /* Successfully handled MMIO access in the kernel so resume */ - vcpu->stat.mmio_exit_kernel++; + vcpu->stat->mmio_exit_kernel++; kvm_riscv_vcpu_mmio_return(vcpu, run); return 1; } /* Exit to userspace for MMIO emulation */ - vcpu->stat.mmio_exit_user++; + vcpu->stat->mmio_exit_user++; run->exit_reason = KVM_EXIT_MMIO; return 0; diff --git a/arch/riscv/kvm/vcpu_sbi.c b/arch/riscv/kvm/vcpu_sbi.c index d1c83a77735e..b500bcaf7b11 100644 --- a/arch/riscv/kvm/vcpu_sbi.c +++ b/arch/riscv/kvm/vcpu_sbi.c @@ -121,7 +121,7 @@ void kvm_riscv_vcpu_sbi_forward(struct kvm_vcpu *vcpu, struct kvm_run *run) struct kvm_cpu_context *cp = &vcpu->arch.guest_context; vcpu->arch.sbi_context.return_handled = 0; - vcpu->stat.ecall_exit_stat++; + vcpu->stat->ecall_exit_stat++; run->exit_reason = KVM_EXIT_RISCV_SBI; run->riscv_sbi.extension_id = cp->a7; run->riscv_sbi.function_id = cp->a6; diff --git a/arch/riscv/kvm/vcpu_sbi_hsm.c b/arch/riscv/kvm/vcpu_sbi_hsm.c index 3070bb31745d..519671760674 100644 --- a/arch/riscv/kvm/vcpu_sbi_hsm.c +++ b/arch/riscv/kvm/vcpu_sbi_hsm.c @@ -82,7 +82,7 @@ static int kvm_sbi_hsm_vcpu_get_status(struct kvm_vcpu *vcpu) return SBI_ERR_INVALID_PARAM; if (kvm_riscv_vcpu_stopped(target_vcpu)) return SBI_HSM_STATE_STOPPED; - else if (target_vcpu->stat.generic.blocking) + else if (target_vcpu->stat->generic.blocking) return SBI_HSM_STATE_SUSPENDED; else return SBI_HSM_STATE_STARTED; diff --git a/arch/s390/kvm/diag.c b/arch/s390/kvm/diag.c index 74f73141f9b9..359d562f7b81 100644 --- a/arch/s390/kvm/diag.c +++ b/arch/s390/kvm/diag.c @@ -24,7 +24,7 @@ static int diag_release_pages(struct kvm_vcpu *vcpu) start = vcpu->run->s.regs.gprs[(vcpu->arch.sie_block->ipa & 0xf0) >> 4]; end = vcpu->run->s.regs.gprs[vcpu->arch.sie_block->ipa & 0xf] + PAGE_SIZE; - vcpu->stat.instruction_diagnose_10++; + vcpu->stat->instruction_diagnose_10++; if (start & ~PAGE_MASK || end & ~PAGE_MASK || start >= end || start < 2 * PAGE_SIZE) @@ -74,7 +74,7 @@ static int __diag_page_ref_service(struct kvm_vcpu *vcpu) VCPU_EVENT(vcpu, 3, "diag page reference parameter block at 0x%llx", vcpu->run->s.regs.gprs[rx]); - vcpu->stat.instruction_diagnose_258++; + vcpu->stat->instruction_diagnose_258++; if (vcpu->run->s.regs.gprs[rx] & 7) return 
kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION); rc = read_guest_real(vcpu, vcpu->run->s.regs.gprs[rx], &parm, sizeof(parm)); @@ -145,7 +145,7 @@ static int __diag_page_ref_service(struct kvm_vcpu *vcpu) static int __diag_time_slice_end(struct kvm_vcpu *vcpu) { VCPU_EVENT(vcpu, 5, "%s", "diag time slice end"); - vcpu->stat.instruction_diagnose_44++; + vcpu->stat->instruction_diagnose_44++; kvm_vcpu_on_spin(vcpu, true); return 0; } @@ -170,7 +170,7 @@ static int __diag_time_slice_end_directed(struct kvm_vcpu *vcpu) int tid; tid = vcpu->run->s.regs.gprs[(vcpu->arch.sie_block->ipa & 0xf0) >> 4]; - vcpu->stat.instruction_diagnose_9c++; + vcpu->stat->instruction_diagnose_9c++; /* yield to self */ if (tid == vcpu->vcpu_id) @@ -194,7 +194,7 @@ static int __diag_time_slice_end_directed(struct kvm_vcpu *vcpu) VCPU_EVENT(vcpu, 5, "diag time slice end directed to %d: yield forwarded", tid); - vcpu->stat.diag_9c_forward++; + vcpu->stat->diag_9c_forward++; return 0; } @@ -205,7 +205,7 @@ static int __diag_time_slice_end_directed(struct kvm_vcpu *vcpu) return 0; no_yield: VCPU_EVENT(vcpu, 5, "diag time slice end directed to %d: ignored", tid); - vcpu->stat.diag_9c_ignored++; + vcpu->stat->diag_9c_ignored++; return 0; } @@ -215,7 +215,7 @@ static int __diag_ipl_functions(struct kvm_vcpu *vcpu) unsigned long subcode = vcpu->run->s.regs.gprs[reg] & 0xffff; VCPU_EVENT(vcpu, 3, "diag ipl functions, subcode %lx", subcode); - vcpu->stat.instruction_diagnose_308++; + vcpu->stat->instruction_diagnose_308++; switch (subcode) { case 3: vcpu->run->s390_reset_flags = KVM_S390_RESET_CLEAR; @@ -247,7 +247,7 @@ static int __diag_virtio_hypercall(struct kvm_vcpu *vcpu) { int ret; - vcpu->stat.instruction_diagnose_500++; + vcpu->stat->instruction_diagnose_500++; /* No virtio-ccw notification? Get out quickly. 
*/ if (!vcpu->kvm->arch.css_support || (vcpu->run->s.regs.gprs[1] != KVM_S390_VIRTIO_CCW_NOTIFY)) @@ -301,7 +301,7 @@ int kvm_s390_handle_diag(struct kvm_vcpu *vcpu) case 0x500: return __diag_virtio_hypercall(vcpu); default: - vcpu->stat.instruction_diagnose_other++; + vcpu->stat->instruction_diagnose_other++; return -EOPNOTSUPP; } } diff --git a/arch/s390/kvm/intercept.c b/arch/s390/kvm/intercept.c index 610dd44a948b..74d01f67a257 100644 --- a/arch/s390/kvm/intercept.c +++ b/arch/s390/kvm/intercept.c @@ -57,7 +57,7 @@ static int handle_stop(struct kvm_vcpu *vcpu) int rc = 0; uint8_t flags, stop_pending; - vcpu->stat.exit_stop_request++; + vcpu->stat->exit_stop_request++; /* delay the stop if any non-stop irq is pending */ if (kvm_s390_vcpu_has_irq(vcpu, 1)) @@ -93,7 +93,7 @@ static int handle_validity(struct kvm_vcpu *vcpu) { int viwhy = vcpu->arch.sie_block->ipb >> 16; - vcpu->stat.exit_validity++; + vcpu->stat->exit_validity++; trace_kvm_s390_intercept_validity(vcpu, viwhy); KVM_EVENT(3, "validity intercept 0x%x for pid %u (kvm 0x%pK)", viwhy, current->pid, vcpu->kvm); @@ -106,7 +106,7 @@ static int handle_validity(struct kvm_vcpu *vcpu) static int handle_instruction(struct kvm_vcpu *vcpu) { - vcpu->stat.exit_instruction++; + vcpu->stat->exit_instruction++; trace_kvm_s390_intercept_instruction(vcpu, vcpu->arch.sie_block->ipa, vcpu->arch.sie_block->ipb); @@ -249,7 +249,7 @@ static int handle_prog(struct kvm_vcpu *vcpu) psw_t psw; int rc; - vcpu->stat.exit_program_interruption++; + vcpu->stat->exit_program_interruption++; /* * Intercept 8 indicates a loop of specification exceptions @@ -307,7 +307,7 @@ static int handle_external_interrupt(struct kvm_vcpu *vcpu) psw_t newpsw; int rc; - vcpu->stat.exit_external_interrupt++; + vcpu->stat->exit_external_interrupt++; if (kvm_s390_pv_cpu_is_protected(vcpu)) { newpsw = vcpu->arch.sie_block->gpsw; @@ -388,7 +388,7 @@ static int handle_mvpg_pei(struct kvm_vcpu *vcpu) static int handle_partial_execution(struct kvm_vcpu *vcpu) { - vcpu->stat.exit_pei++; + vcpu->stat->exit_pei++; if (vcpu->arch.sie_block->ipa == 0xb254) /* MVPG */ return handle_mvpg_pei(vcpu); @@ -416,7 +416,7 @@ int handle_sthyi(struct kvm_vcpu *vcpu) code = vcpu->run->s.regs.gprs[reg1]; addr = vcpu->run->s.regs.gprs[reg2]; - vcpu->stat.instruction_sthyi++; + vcpu->stat->instruction_sthyi++; VCPU_EVENT(vcpu, 3, "STHYI: fc: %llu addr: 0x%016llx", code, addr); trace_kvm_s390_handle_sthyi(vcpu, code, addr); @@ -465,7 +465,7 @@ static int handle_operexc(struct kvm_vcpu *vcpu) psw_t oldpsw, newpsw; int rc; - vcpu->stat.exit_operation_exception++; + vcpu->stat->exit_operation_exception++; trace_kvm_s390_handle_operexc(vcpu, vcpu->arch.sie_block->ipa, vcpu->arch.sie_block->ipb); @@ -609,10 +609,10 @@ int kvm_handle_sie_intercept(struct kvm_vcpu *vcpu) switch (vcpu->arch.sie_block->icptcode) { case ICPT_EXTREQ: - vcpu->stat.exit_external_request++; + vcpu->stat->exit_external_request++; return 0; case ICPT_IOREQ: - vcpu->stat.exit_io_request++; + vcpu->stat->exit_io_request++; return 0; case ICPT_INST: rc = handle_instruction(vcpu); diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c index 07ff0e10cb7f..7576df5305c3 100644 --- a/arch/s390/kvm/interrupt.c +++ b/arch/s390/kvm/interrupt.c @@ -479,7 +479,7 @@ static int __must_check __deliver_cpu_timer(struct kvm_vcpu *vcpu) struct kvm_s390_local_interrupt *li = &vcpu->arch.local_int; int rc = 0; - vcpu->stat.deliver_cputm++; + vcpu->stat->deliver_cputm++; trace_kvm_s390_deliver_interrupt(vcpu->vcpu_id, KVM_S390_INT_CPU_TIMER, 0, 
0); if (kvm_s390_pv_cpu_is_protected(vcpu)) { @@ -503,7 +503,7 @@ static int __must_check __deliver_ckc(struct kvm_vcpu *vcpu) struct kvm_s390_local_interrupt *li = &vcpu->arch.local_int; int rc = 0; - vcpu->stat.deliver_ckc++; + vcpu->stat->deliver_ckc++; trace_kvm_s390_deliver_interrupt(vcpu->vcpu_id, KVM_S390_INT_CLOCK_COMP, 0, 0); if (kvm_s390_pv_cpu_is_protected(vcpu)) { @@ -707,7 +707,7 @@ static int __must_check __deliver_machine_check(struct kvm_vcpu *vcpu) trace_kvm_s390_deliver_interrupt(vcpu->vcpu_id, KVM_S390_MCHK, mchk.cr14, mchk.mcic); - vcpu->stat.deliver_machine_check++; + vcpu->stat->deliver_machine_check++; rc = __write_machine_check(vcpu, &mchk); } return rc; @@ -719,7 +719,7 @@ static int __must_check __deliver_restart(struct kvm_vcpu *vcpu) int rc = 0; VCPU_EVENT(vcpu, 3, "%s", "deliver: cpu restart"); - vcpu->stat.deliver_restart_signal++; + vcpu->stat->deliver_restart_signal++; trace_kvm_s390_deliver_interrupt(vcpu->vcpu_id, KVM_S390_RESTART, 0, 0); if (kvm_s390_pv_cpu_is_protected(vcpu)) { @@ -746,7 +746,7 @@ static int __must_check __deliver_set_prefix(struct kvm_vcpu *vcpu) clear_bit(IRQ_PEND_SET_PREFIX, &li->pending_irqs); spin_unlock(&li->lock); - vcpu->stat.deliver_prefix_signal++; + vcpu->stat->deliver_prefix_signal++; trace_kvm_s390_deliver_interrupt(vcpu->vcpu_id, KVM_S390_SIGP_SET_PREFIX, prefix.address, 0); @@ -769,7 +769,7 @@ static int __must_check __deliver_emergency_signal(struct kvm_vcpu *vcpu) spin_unlock(&li->lock); VCPU_EVENT(vcpu, 4, "%s", "deliver: sigp emerg"); - vcpu->stat.deliver_emergency_signal++; + vcpu->stat->deliver_emergency_signal++; trace_kvm_s390_deliver_interrupt(vcpu->vcpu_id, KVM_S390_INT_EMERGENCY, cpu_addr, 0); if (kvm_s390_pv_cpu_is_protected(vcpu)) { @@ -802,7 +802,7 @@ static int __must_check __deliver_external_call(struct kvm_vcpu *vcpu) spin_unlock(&li->lock); VCPU_EVENT(vcpu, 4, "%s", "deliver: sigp ext call"); - vcpu->stat.deliver_external_call++; + vcpu->stat->deliver_external_call++; trace_kvm_s390_deliver_interrupt(vcpu->vcpu_id, KVM_S390_INT_EXTERNAL_CALL, extcall.code, 0); @@ -854,7 +854,7 @@ static int __must_check __deliver_prog(struct kvm_vcpu *vcpu) ilen = pgm_info.flags & KVM_S390_PGM_FLAGS_ILC_MASK; VCPU_EVENT(vcpu, 3, "deliver: program irq code 0x%x, ilen:%d", pgm_info.code, ilen); - vcpu->stat.deliver_program++; + vcpu->stat->deliver_program++; trace_kvm_s390_deliver_interrupt(vcpu->vcpu_id, KVM_S390_PROGRAM_INT, pgm_info.code, 0); @@ -1004,7 +1004,7 @@ static int __must_check __deliver_service(struct kvm_vcpu *vcpu) VCPU_EVENT(vcpu, 4, "deliver: sclp parameter 0x%x", ext.ext_params); - vcpu->stat.deliver_service_signal++; + vcpu->stat->deliver_service_signal++; trace_kvm_s390_deliver_interrupt(vcpu->vcpu_id, KVM_S390_INT_SERVICE, ext.ext_params, 0); @@ -1028,7 +1028,7 @@ static int __must_check __deliver_service_ev(struct kvm_vcpu *vcpu) spin_unlock(&fi->lock); VCPU_EVENT(vcpu, 4, "%s", "deliver: sclp parameter event"); - vcpu->stat.deliver_service_signal++; + vcpu->stat->deliver_service_signal++; trace_kvm_s390_deliver_interrupt(vcpu->vcpu_id, KVM_S390_INT_SERVICE, ext.ext_params, 0); @@ -1091,7 +1091,7 @@ static int __must_check __deliver_virtio(struct kvm_vcpu *vcpu) VCPU_EVENT(vcpu, 4, "deliver: virtio parm: 0x%x,parm64: 0x%llx", inti->ext.ext_params, inti->ext.ext_params2); - vcpu->stat.deliver_virtio++; + vcpu->stat->deliver_virtio++; trace_kvm_s390_deliver_interrupt(vcpu->vcpu_id, inti->type, inti->ext.ext_params, @@ -1177,7 +1177,7 @@ static int __must_check __deliver_io(struct kvm_vcpu *vcpu, 
inti->io.subchannel_id >> 1 & 0x3, inti->io.subchannel_nr); - vcpu->stat.deliver_io++; + vcpu->stat->deliver_io++; trace_kvm_s390_deliver_interrupt(vcpu->vcpu_id, inti->type, ((__u32)inti->io.subchannel_id << 16) | @@ -1205,7 +1205,7 @@ static int __must_check __deliver_io(struct kvm_vcpu *vcpu, VCPU_EVENT(vcpu, 4, "%s isc %u", "deliver: I/O (AI/gisa)", isc); memset(&io, 0, sizeof(io)); io.io_int_word = isc_to_int_word(isc); - vcpu->stat.deliver_io++; + vcpu->stat->deliver_io++; trace_kvm_s390_deliver_interrupt(vcpu->vcpu_id, KVM_S390_INT_IO(1, 0, 0, 0), ((__u32)io.subchannel_id << 16) | @@ -1290,7 +1290,7 @@ int kvm_s390_handle_wait(struct kvm_vcpu *vcpu) struct kvm_s390_gisa_interrupt *gi = &vcpu->kvm->arch.gisa_int; u64 sltime; - vcpu->stat.exit_wait_state++; + vcpu->stat->exit_wait_state++; /* fast path */ if (kvm_arch_vcpu_runnable(vcpu)) @@ -1476,7 +1476,7 @@ static int __inject_prog(struct kvm_vcpu *vcpu, struct kvm_s390_irq *irq) { struct kvm_s390_local_interrupt *li = &vcpu->arch.local_int; - vcpu->stat.inject_program++; + vcpu->stat->inject_program++; VCPU_EVENT(vcpu, 3, "inject: program irq code 0x%x", irq->u.pgm.code); trace_kvm_s390_inject_vcpu(vcpu->vcpu_id, KVM_S390_PROGRAM_INT, irq->u.pgm.code, 0); @@ -1518,7 +1518,7 @@ static int __inject_pfault_init(struct kvm_vcpu *vcpu, struct kvm_s390_irq *irq) { struct kvm_s390_local_interrupt *li = &vcpu->arch.local_int; - vcpu->stat.inject_pfault_init++; + vcpu->stat->inject_pfault_init++; VCPU_EVENT(vcpu, 4, "inject: pfault init parameter block at 0x%llx", irq->u.ext.ext_params2); trace_kvm_s390_inject_vcpu(vcpu->vcpu_id, KVM_S390_INT_PFAULT_INIT, @@ -1537,7 +1537,7 @@ static int __inject_extcall(struct kvm_vcpu *vcpu, struct kvm_s390_irq *irq) struct kvm_s390_extcall_info *extcall = &li->irq.extcall; uint16_t src_id = irq->u.extcall.code; - vcpu->stat.inject_external_call++; + vcpu->stat->inject_external_call++; VCPU_EVENT(vcpu, 4, "inject: external call source-cpu:%u", src_id); trace_kvm_s390_inject_vcpu(vcpu->vcpu_id, KVM_S390_INT_EXTERNAL_CALL, @@ -1562,7 +1562,7 @@ static int __inject_set_prefix(struct kvm_vcpu *vcpu, struct kvm_s390_irq *irq) struct kvm_s390_local_interrupt *li = &vcpu->arch.local_int; struct kvm_s390_prefix_info *prefix = &li->irq.prefix; - vcpu->stat.inject_set_prefix++; + vcpu->stat->inject_set_prefix++; VCPU_EVENT(vcpu, 3, "inject: set prefix to %x", irq->u.prefix.address); trace_kvm_s390_inject_vcpu(vcpu->vcpu_id, KVM_S390_SIGP_SET_PREFIX, @@ -1583,7 +1583,7 @@ static int __inject_sigp_stop(struct kvm_vcpu *vcpu, struct kvm_s390_irq *irq) struct kvm_s390_stop_info *stop = &li->irq.stop; int rc = 0; - vcpu->stat.inject_stop_signal++; + vcpu->stat->inject_stop_signal++; trace_kvm_s390_inject_vcpu(vcpu->vcpu_id, KVM_S390_SIGP_STOP, 0, 0); if (irq->u.stop.flags & ~KVM_S390_STOP_SUPP_FLAGS) @@ -1607,7 +1607,7 @@ static int __inject_sigp_restart(struct kvm_vcpu *vcpu) { struct kvm_s390_local_interrupt *li = &vcpu->arch.local_int; - vcpu->stat.inject_restart++; + vcpu->stat->inject_restart++; VCPU_EVENT(vcpu, 3, "%s", "inject: restart int"); trace_kvm_s390_inject_vcpu(vcpu->vcpu_id, KVM_S390_RESTART, 0, 0); @@ -1620,7 +1620,7 @@ static int __inject_sigp_emergency(struct kvm_vcpu *vcpu, { struct kvm_s390_local_interrupt *li = &vcpu->arch.local_int; - vcpu->stat.inject_emergency_signal++; + vcpu->stat->inject_emergency_signal++; VCPU_EVENT(vcpu, 4, "inject: emergency from cpu %u", irq->u.emerg.code); trace_kvm_s390_inject_vcpu(vcpu->vcpu_id, KVM_S390_INT_EMERGENCY, @@ -1641,7 +1641,7 @@ static int 
__inject_mchk(struct kvm_vcpu *vcpu, struct kvm_s390_irq *irq) struct kvm_s390_local_interrupt *li = &vcpu->arch.local_int; struct kvm_s390_mchk_info *mchk = &li->irq.mchk; - vcpu->stat.inject_mchk++; + vcpu->stat->inject_mchk++; VCPU_EVENT(vcpu, 3, "inject: machine check mcic 0x%llx", irq->u.mchk.mcic); trace_kvm_s390_inject_vcpu(vcpu->vcpu_id, KVM_S390_MCHK, 0, @@ -1672,7 +1672,7 @@ static int __inject_ckc(struct kvm_vcpu *vcpu) { struct kvm_s390_local_interrupt *li = &vcpu->arch.local_int; - vcpu->stat.inject_ckc++; + vcpu->stat->inject_ckc++; VCPU_EVENT(vcpu, 3, "%s", "inject: clock comparator external"); trace_kvm_s390_inject_vcpu(vcpu->vcpu_id, KVM_S390_INT_CLOCK_COMP, 0, 0); @@ -1686,7 +1686,7 @@ static int __inject_cpu_timer(struct kvm_vcpu *vcpu) { struct kvm_s390_local_interrupt *li = &vcpu->arch.local_int; - vcpu->stat.inject_cputm++; + vcpu->stat->inject_cputm++; VCPU_EVENT(vcpu, 3, "%s", "inject: cpu timer external"); trace_kvm_s390_inject_vcpu(vcpu->vcpu_id, KVM_S390_INT_CPU_TIMER, 0, 0); diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c index 020502af7dc9..46759021e924 100644 --- a/arch/s390/kvm/kvm-s390.c +++ b/arch/s390/kvm/kvm-s390.c @@ -4133,7 +4133,7 @@ bool kvm_arch_no_poll(struct kvm_vcpu *vcpu) /* do not poll with more than halt_poll_max_steal percent of steal time */ if (get_lowcore()->avg_steal_timer * 100 / (TICK_USEC << 12) >= READ_ONCE(halt_poll_max_steal)) { - vcpu->stat.halt_no_poll_steal++; + vcpu->stat->halt_no_poll_steal++; return true; } return false; @@ -4898,7 +4898,7 @@ int __kvm_s390_handle_dat_fault(struct kvm_vcpu *vcpu, gfn_t gfn, gpa_t gaddr, u trace_kvm_s390_major_guest_pfault(vcpu); if (kvm_arch_setup_async_pf(vcpu)) return 0; - vcpu->stat.pfault_sync++; + vcpu->stat->pfault_sync++; /* Could not setup async pfault, try again synchronously */ flags &= ~FOLL_NOWAIT; goto try_again; @@ -4960,7 +4960,7 @@ static int vcpu_post_run_handle_fault(struct kvm_vcpu *vcpu) switch (current->thread.gmap_int_code & PGM_INT_CODE_MASK) { case 0: - vcpu->stat.exit_null++; + vcpu->stat->exit_null++; break; case PGM_SECURE_STORAGE_ACCESS: case PGM_SECURE_STORAGE_VIOLATION: @@ -5351,7 +5351,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) kvm_sigset_deactivate(vcpu); - vcpu->stat.exit_userspace++; + vcpu->stat->exit_userspace++; out: vcpu_put(vcpu); return rc; diff --git a/arch/s390/kvm/priv.c b/arch/s390/kvm/priv.c index 1a49b89706f8..6ff66373f115 100644 --- a/arch/s390/kvm/priv.c +++ b/arch/s390/kvm/priv.c @@ -31,7 +31,7 @@ static int handle_ri(struct kvm_vcpu *vcpu) { - vcpu->stat.instruction_ri++; + vcpu->stat->instruction_ri++; if (test_kvm_facility(vcpu->kvm, 64)) { VCPU_EVENT(vcpu, 3, "%s", "ENABLE: RI (lazy)"); @@ -52,7 +52,7 @@ int kvm_s390_handle_aa(struct kvm_vcpu *vcpu) static int handle_gs(struct kvm_vcpu *vcpu) { - vcpu->stat.instruction_gs++; + vcpu->stat->instruction_gs++; if (test_kvm_facility(vcpu->kvm, 133)) { VCPU_EVENT(vcpu, 3, "%s", "ENABLE: GS (lazy)"); @@ -87,7 +87,7 @@ static int handle_set_clock(struct kvm_vcpu *vcpu) u8 ar; u64 op2; - vcpu->stat.instruction_sck++; + vcpu->stat->instruction_sck++; if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); @@ -126,7 +126,7 @@ static int handle_set_prefix(struct kvm_vcpu *vcpu) int rc; u8 ar; - vcpu->stat.instruction_spx++; + vcpu->stat->instruction_spx++; if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); @@ -164,7 +164,7 @@ static int 
handle_store_prefix(struct kvm_vcpu *vcpu) int rc; u8 ar; - vcpu->stat.instruction_stpx++; + vcpu->stat->instruction_stpx++; if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); @@ -194,7 +194,7 @@ static int handle_store_cpu_address(struct kvm_vcpu *vcpu) int rc; u8 ar; - vcpu->stat.instruction_stap++; + vcpu->stat->instruction_stap++; if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); @@ -261,7 +261,7 @@ static int handle_iske(struct kvm_vcpu *vcpu) bool unlocked; int rc; - vcpu->stat.instruction_iske++; + vcpu->stat->instruction_iske++; if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); @@ -308,7 +308,7 @@ static int handle_rrbe(struct kvm_vcpu *vcpu) bool unlocked; int rc; - vcpu->stat.instruction_rrbe++; + vcpu->stat->instruction_rrbe++; if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); @@ -359,7 +359,7 @@ static int handle_sske(struct kvm_vcpu *vcpu) bool unlocked; int rc; - vcpu->stat.instruction_sske++; + vcpu->stat->instruction_sske++; if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); @@ -438,7 +438,7 @@ static int handle_sske(struct kvm_vcpu *vcpu) static int handle_ipte_interlock(struct kvm_vcpu *vcpu) { - vcpu->stat.instruction_ipte_interlock++; + vcpu->stat->instruction_ipte_interlock++; if (psw_bits(vcpu->arch.sie_block->gpsw).pstate) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); wait_event(vcpu->kvm->arch.ipte_wq, !ipte_lock_held(vcpu->kvm)); @@ -452,7 +452,7 @@ static int handle_test_block(struct kvm_vcpu *vcpu) gpa_t addr; int reg2; - vcpu->stat.instruction_tb++; + vcpu->stat->instruction_tb++; if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); @@ -486,7 +486,7 @@ static int handle_tpi(struct kvm_vcpu *vcpu) u64 addr; u8 ar; - vcpu->stat.instruction_tpi++; + vcpu->stat->instruction_tpi++; addr = kvm_s390_get_base_disp_s(vcpu, &ar); if (addr & 3) @@ -548,7 +548,7 @@ static int handle_tsch(struct kvm_vcpu *vcpu) struct kvm_s390_interrupt_info *inti = NULL; const u64 isc_mask = 0xffUL << 24; /* all iscs set */ - vcpu->stat.instruction_tsch++; + vcpu->stat->instruction_tsch++; /* a valid schid has at least one bit set */ if (vcpu->run->s.regs.gprs[1]) @@ -593,7 +593,7 @@ static int handle_io_inst(struct kvm_vcpu *vcpu) if (vcpu->arch.sie_block->ipa == 0xb235) return handle_tsch(vcpu); /* Handle in userspace. 
*/ - vcpu->stat.instruction_io_other++; + vcpu->stat->instruction_io_other++; return -EOPNOTSUPP; } else { /* @@ -702,7 +702,7 @@ static int handle_stfl(struct kvm_vcpu *vcpu) int rc; unsigned int fac; - vcpu->stat.instruction_stfl++; + vcpu->stat->instruction_stfl++; if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); @@ -751,7 +751,7 @@ int kvm_s390_handle_lpsw(struct kvm_vcpu *vcpu) int rc; u8 ar; - vcpu->stat.instruction_lpsw++; + vcpu->stat->instruction_lpsw++; if (gpsw->mask & PSW_MASK_PSTATE) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); @@ -780,7 +780,7 @@ static int handle_lpswe(struct kvm_vcpu *vcpu) int rc; u8 ar; - vcpu->stat.instruction_lpswe++; + vcpu->stat->instruction_lpswe++; if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); @@ -804,7 +804,7 @@ static int handle_lpswey(struct kvm_vcpu *vcpu) int rc; u8 ar; - vcpu->stat.instruction_lpswey++; + vcpu->stat->instruction_lpswey++; if (!test_kvm_facility(vcpu->kvm, 193)) return kvm_s390_inject_program_int(vcpu, PGM_OPERATION); @@ -834,7 +834,7 @@ static int handle_stidp(struct kvm_vcpu *vcpu) int rc; u8 ar; - vcpu->stat.instruction_stidp++; + vcpu->stat->instruction_stidp++; if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); @@ -900,7 +900,7 @@ static int handle_stsi(struct kvm_vcpu *vcpu) int rc = 0; u8 ar; - vcpu->stat.instruction_stsi++; + vcpu->stat->instruction_stsi++; VCPU_EVENT(vcpu, 3, "STSI: fc: %u sel1: %u sel2: %u", fc, sel1, sel2); if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) @@ -1044,7 +1044,7 @@ static int handle_epsw(struct kvm_vcpu *vcpu) { int reg1, reg2; - vcpu->stat.instruction_epsw++; + vcpu->stat->instruction_epsw++; kvm_s390_get_regs_rre(vcpu, ®1, ®2); @@ -1076,7 +1076,7 @@ static int handle_pfmf(struct kvm_vcpu *vcpu) unsigned long start, end; unsigned char key; - vcpu->stat.instruction_pfmf++; + vcpu->stat->instruction_pfmf++; kvm_s390_get_regs_rre(vcpu, ®1, ®2); @@ -1256,7 +1256,7 @@ static int handle_essa(struct kvm_vcpu *vcpu) VCPU_EVENT(vcpu, 4, "ESSA: release %d pages", entries); gmap = vcpu->arch.gmap; - vcpu->stat.instruction_essa++; + vcpu->stat->instruction_essa++; if (!vcpu->kvm->arch.use_cmma) return kvm_s390_inject_program_int(vcpu, PGM_OPERATION); @@ -1345,7 +1345,7 @@ int kvm_s390_handle_lctl(struct kvm_vcpu *vcpu) u64 ga; u8 ar; - vcpu->stat.instruction_lctl++; + vcpu->stat->instruction_lctl++; if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); @@ -1384,7 +1384,7 @@ int kvm_s390_handle_stctl(struct kvm_vcpu *vcpu) u64 ga; u8 ar; - vcpu->stat.instruction_stctl++; + vcpu->stat->instruction_stctl++; if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); @@ -1418,7 +1418,7 @@ static int handle_lctlg(struct kvm_vcpu *vcpu) u64 ga; u8 ar; - vcpu->stat.instruction_lctlg++; + vcpu->stat->instruction_lctlg++; if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); @@ -1456,7 +1456,7 @@ static int handle_stctg(struct kvm_vcpu *vcpu) u64 ga; u8 ar; - vcpu->stat.instruction_stctg++; + vcpu->stat->instruction_stctg++; if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); @@ -1508,7 +1508,7 @@ static int handle_tprot(struct kvm_vcpu *vcpu) int 
ret, cc; u8 ar; - vcpu->stat.instruction_tprot++; + vcpu->stat->instruction_tprot++; if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); @@ -1572,7 +1572,7 @@ static int handle_sckpf(struct kvm_vcpu *vcpu) { u32 value; - vcpu->stat.instruction_sckpf++; + vcpu->stat->instruction_sckpf++; if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); @@ -1589,7 +1589,7 @@ static int handle_sckpf(struct kvm_vcpu *vcpu) static int handle_ptff(struct kvm_vcpu *vcpu) { - vcpu->stat.instruction_ptff++; + vcpu->stat->instruction_ptff++; /* we don't emulate any control instructions yet */ kvm_s390_set_psw_cc(vcpu, 3); diff --git a/arch/s390/kvm/sigp.c b/arch/s390/kvm/sigp.c index 55c34cb35428..79cf7f77fec6 100644 --- a/arch/s390/kvm/sigp.c +++ b/arch/s390/kvm/sigp.c @@ -306,61 +306,61 @@ static int handle_sigp_dst(struct kvm_vcpu *vcpu, u8 order_code, switch (order_code) { case SIGP_SENSE: - vcpu->stat.instruction_sigp_sense++; + vcpu->stat->instruction_sigp_sense++; rc = __sigp_sense(vcpu, dst_vcpu, status_reg); break; case SIGP_EXTERNAL_CALL: - vcpu->stat.instruction_sigp_external_call++; + vcpu->stat->instruction_sigp_external_call++; rc = __sigp_external_call(vcpu, dst_vcpu, status_reg); break; case SIGP_EMERGENCY_SIGNAL: - vcpu->stat.instruction_sigp_emergency++; + vcpu->stat->instruction_sigp_emergency++; rc = __sigp_emergency(vcpu, dst_vcpu); break; case SIGP_STOP: - vcpu->stat.instruction_sigp_stop++; + vcpu->stat->instruction_sigp_stop++; rc = __sigp_stop(vcpu, dst_vcpu); break; case SIGP_STOP_AND_STORE_STATUS: - vcpu->stat.instruction_sigp_stop_store_status++; + vcpu->stat->instruction_sigp_stop_store_status++; rc = __sigp_stop_and_store_status(vcpu, dst_vcpu, status_reg); break; case SIGP_STORE_STATUS_AT_ADDRESS: - vcpu->stat.instruction_sigp_store_status++; + vcpu->stat->instruction_sigp_store_status++; rc = __sigp_store_status_at_addr(vcpu, dst_vcpu, parameter, status_reg); break; case SIGP_SET_PREFIX: - vcpu->stat.instruction_sigp_prefix++; + vcpu->stat->instruction_sigp_prefix++; rc = __sigp_set_prefix(vcpu, dst_vcpu, parameter, status_reg); break; case SIGP_COND_EMERGENCY_SIGNAL: - vcpu->stat.instruction_sigp_cond_emergency++; + vcpu->stat->instruction_sigp_cond_emergency++; rc = __sigp_conditional_emergency(vcpu, dst_vcpu, parameter, status_reg); break; case SIGP_SENSE_RUNNING: - vcpu->stat.instruction_sigp_sense_running++; + vcpu->stat->instruction_sigp_sense_running++; rc = __sigp_sense_running(vcpu, dst_vcpu, status_reg); break; case SIGP_START: - vcpu->stat.instruction_sigp_start++; + vcpu->stat->instruction_sigp_start++; rc = __prepare_sigp_re_start(vcpu, dst_vcpu, order_code); break; case SIGP_RESTART: - vcpu->stat.instruction_sigp_restart++; + vcpu->stat->instruction_sigp_restart++; rc = __prepare_sigp_re_start(vcpu, dst_vcpu, order_code); break; case SIGP_INITIAL_CPU_RESET: - vcpu->stat.instruction_sigp_init_cpu_reset++; + vcpu->stat->instruction_sigp_init_cpu_reset++; rc = __prepare_sigp_cpu_reset(vcpu, dst_vcpu, order_code); break; case SIGP_CPU_RESET: - vcpu->stat.instruction_sigp_cpu_reset++; + vcpu->stat->instruction_sigp_cpu_reset++; rc = __prepare_sigp_cpu_reset(vcpu, dst_vcpu, order_code); break; default: - vcpu->stat.instruction_sigp_unknown++; + vcpu->stat->instruction_sigp_unknown++; rc = __prepare_sigp_unknown(vcpu, dst_vcpu); } @@ -387,34 +387,34 @@ static int handle_sigp_order_in_user_space(struct kvm_vcpu *vcpu, u8 order_code, return 0; /* 
update counters as we're directly dropping to user space */ case SIGP_STOP: - vcpu->stat.instruction_sigp_stop++; + vcpu->stat->instruction_sigp_stop++; break; case SIGP_STOP_AND_STORE_STATUS: - vcpu->stat.instruction_sigp_stop_store_status++; + vcpu->stat->instruction_sigp_stop_store_status++; break; case SIGP_STORE_STATUS_AT_ADDRESS: - vcpu->stat.instruction_sigp_store_status++; + vcpu->stat->instruction_sigp_store_status++; break; case SIGP_STORE_ADDITIONAL_STATUS: - vcpu->stat.instruction_sigp_store_adtl_status++; + vcpu->stat->instruction_sigp_store_adtl_status++; break; case SIGP_SET_PREFIX: - vcpu->stat.instruction_sigp_prefix++; + vcpu->stat->instruction_sigp_prefix++; break; case SIGP_START: - vcpu->stat.instruction_sigp_start++; + vcpu->stat->instruction_sigp_start++; break; case SIGP_RESTART: - vcpu->stat.instruction_sigp_restart++; + vcpu->stat->instruction_sigp_restart++; break; case SIGP_INITIAL_CPU_RESET: - vcpu->stat.instruction_sigp_init_cpu_reset++; + vcpu->stat->instruction_sigp_init_cpu_reset++; break; case SIGP_CPU_RESET: - vcpu->stat.instruction_sigp_cpu_reset++; + vcpu->stat->instruction_sigp_cpu_reset++; break; default: - vcpu->stat.instruction_sigp_unknown++; + vcpu->stat->instruction_sigp_unknown++; } VCPU_EVENT(vcpu, 3, "SIGP: order %u for CPU %d handled in userspace", order_code, cpu_addr); @@ -447,7 +447,7 @@ int kvm_s390_handle_sigp(struct kvm_vcpu *vcpu) trace_kvm_s390_handle_sigp(vcpu, order_code, cpu_addr, parameter); switch (order_code) { case SIGP_SET_ARCHITECTURE: - vcpu->stat.instruction_sigp_arch++; + vcpu->stat->instruction_sigp_arch++; rc = __sigp_set_arch(vcpu, parameter, &vcpu->run->s.regs.gprs[r1]); break; diff --git a/arch/s390/kvm/vsie.c b/arch/s390/kvm/vsie.c index a78df3a4f353..904a3d84c1b3 100644 --- a/arch/s390/kvm/vsie.c +++ b/arch/s390/kvm/vsie.c @@ -1456,7 +1456,7 @@ int kvm_s390_handle_vsie(struct kvm_vcpu *vcpu) unsigned long scb_addr; int rc; - vcpu->stat.instruction_sie++; + vcpu->stat->instruction_sie++; if (!test_kvm_cpu_feat(vcpu->kvm, KVM_S390_VM_CPU_FEAT_SIEF2)) return -EOPNOTSUPP; if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) diff --git a/arch/x86/kvm/debugfs.c b/arch/x86/kvm/debugfs.c index 999227fc7c66..ff31d1bb49ec 100644 --- a/arch/x86/kvm/debugfs.c +++ b/arch/x86/kvm/debugfs.c @@ -24,7 +24,7 @@ DEFINE_SIMPLE_ATTRIBUTE(vcpu_timer_advance_ns_fops, vcpu_get_timer_advance_ns, N static int vcpu_get_guest_mode(void *data, u64 *val) { struct kvm_vcpu *vcpu = (struct kvm_vcpu *) data; - *val = vcpu->stat.guest_mode; + *val = vcpu->stat->guest_mode; return 0; } diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c index 6ebeb6cea6c0..c6592e7f40a2 100644 --- a/arch/x86/kvm/hyperv.c +++ b/arch/x86/kvm/hyperv.c @@ -1988,7 +1988,7 @@ int kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu) for (j = 0; j < (entries[i] & ~PAGE_MASK) + 1; j++) kvm_x86_call(flush_tlb_gva)(vcpu, gva + j * PAGE_SIZE); - ++vcpu->stat.tlb_flush; + ++vcpu->stat->tlb_flush; } return 0; @@ -2390,7 +2390,7 @@ static int kvm_hv_hypercall_complete(struct kvm_vcpu *vcpu, u64 result) trace_kvm_hv_hypercall_done(result); kvm_hv_hypercall_set_result(vcpu, result); - ++vcpu->stat.hypercalls; + ++vcpu->stat->hypercalls; ret = kvm_skip_emulated_instruction(vcpu); diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h index 36a8786db291..1b9232aad730 100644 --- a/arch/x86/kvm/kvm_cache_regs.h +++ b/arch/x86/kvm/kvm_cache_regs.h @@ -225,7 +225,7 @@ static inline u64 kvm_read_edx_eax(struct kvm_vcpu *vcpu) static inline void enter_guest_mode(struct 
kvm_vcpu *vcpu) { vcpu->arch.hflags |= HF_GUEST_MASK; - vcpu->stat.guest_mode = 1; + vcpu->stat->guest_mode = 1; } static inline void leave_guest_mode(struct kvm_vcpu *vcpu) @@ -237,7 +237,7 @@ static inline void leave_guest_mode(struct kvm_vcpu *vcpu) kvm_make_request(KVM_REQ_LOAD_EOI_EXITMAP, vcpu); } - vcpu->stat.guest_mode = 0; + vcpu->stat->guest_mode = 0; } static inline bool is_guest_mode(struct kvm_vcpu *vcpu) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 04e4b041e248..2d8953163fa0 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -3014,7 +3014,7 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot, bool write_fault = fault && fault->write; if (unlikely(is_noslot_pfn(pfn))) { - vcpu->stat.pf_mmio_spte_created++; + vcpu->stat->pf_mmio_spte_created++; mark_mmio_spte(vcpu, sptep, gfn, pte_access); return RET_PF_EMULATE; } @@ -3689,7 +3689,7 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) walk_shadow_page_lockless_end(vcpu); if (ret != RET_PF_INVALID) - vcpu->stat.pf_fast++; + vcpu->stat->pf_fast++; return ret; } @@ -4446,7 +4446,7 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work) * truly spurious and never trigger emulation */ if (r == RET_PF_FIXED) - vcpu->stat.pf_fixed++; + vcpu->stat->pf_fixed++; } static inline u8 kvm_max_level_for_order(int order) @@ -6262,7 +6262,7 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err } if (r == RET_PF_INVALID) { - vcpu->stat.pf_taken++; + vcpu->stat->pf_taken++; r = kvm_mmu_do_page_fault(vcpu, cr2_or_gpa, error_code, false, &emulation_type, NULL); @@ -6278,11 +6278,11 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err &emulation_type); if (r == RET_PF_FIXED) - vcpu->stat.pf_fixed++; + vcpu->stat->pf_fixed++; else if (r == RET_PF_EMULATE) - vcpu->stat.pf_emulate++; + vcpu->stat->pf_emulate++; else if (r == RET_PF_SPURIOUS) - vcpu->stat.pf_spurious++; + vcpu->stat->pf_spurious++; /* * None of handle_mmio_page_fault(), kvm_mmu_do_page_fault(), or @@ -6396,7 +6396,7 @@ void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva) * done here for them. */ kvm_mmu_invalidate_addr(vcpu, vcpu->arch.walk_mmu, gva, KVM_MMU_ROOTS_ALL); - ++vcpu->stat.invlpg; + ++vcpu->stat->invlpg; } EXPORT_SYMBOL_GPL(kvm_mmu_invlpg); @@ -6418,7 +6418,7 @@ void kvm_mmu_invpcid_gva(struct kvm_vcpu *vcpu, gva_t gva, unsigned long pcid) if (roots) kvm_mmu_invalidate_addr(vcpu, mmu, gva, roots); - ++vcpu->stat.invlpg; + ++vcpu->stat->invlpg; /* * Mappings not reachable via the current cr3 or the prev_roots will be diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index b23b1b2e60a8..72f81c99d665 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -1181,7 +1181,7 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu, /* If a MMIO SPTE is installed, the MMIO will need to be emulated. 
*/ if (unlikely(is_mmio_spte(vcpu->kvm, new_spte))) { - vcpu->stat.pf_mmio_spte_created++; + vcpu->stat->pf_mmio_spte_created++; trace_mark_mmio_spte(rcu_dereference(iter->sptep), iter->gfn, new_spte); ret = RET_PF_EMULATE; diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c index 0bc708ee2788..827dbe4d2b3b 100644 --- a/arch/x86/kvm/svm/sev.c +++ b/arch/x86/kvm/svm/sev.c @@ -4306,7 +4306,7 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu) svm->sev_es.ghcb_sa); break; case SVM_VMGEXIT_NMI_COMPLETE: - ++vcpu->stat.nmi_window_exits; + ++vcpu->stat->nmi_window_exits; svm->nmi_masked = false; kvm_make_request(KVM_REQ_EVENT, vcpu); ret = 1; diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index f692794d18a2..f6a435ff7e2d 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -1577,7 +1577,7 @@ static void svm_vcpu_put(struct kvm_vcpu *vcpu) svm_prepare_host_switch(vcpu); - ++vcpu->stat.host_state_reload; + ++vcpu->stat->host_state_reload; } static unsigned long svm_get_rflags(struct kvm_vcpu *vcpu) @@ -2238,7 +2238,7 @@ static int io_interception(struct kvm_vcpu *vcpu) int size, in, string; unsigned port; - ++vcpu->stat.io_exits; + ++vcpu->stat->io_exits; string = (io_info & SVM_IOIO_STR_MASK) != 0; in = (io_info & SVM_IOIO_TYPE_MASK) != 0; port = io_info >> 16; @@ -2268,7 +2268,7 @@ static int smi_interception(struct kvm_vcpu *vcpu) static int intr_interception(struct kvm_vcpu *vcpu) { - ++vcpu->stat.irq_exits; + ++vcpu->stat->irq_exits; return 1; } @@ -2592,7 +2592,7 @@ static int iret_interception(struct kvm_vcpu *vcpu) WARN_ON_ONCE(sev_es_guest(vcpu->kvm)); - ++vcpu->stat.nmi_window_exits; + ++vcpu->stat->nmi_window_exits; svm->awaiting_iret_completion = true; svm_clr_iret_intercept(svm); @@ -3254,7 +3254,7 @@ static int interrupt_window_interception(struct kvm_vcpu *vcpu) */ kvm_clear_apicv_inhibit(vcpu->kvm, APICV_INHIBIT_REASON_IRQWIN); - ++vcpu->stat.irq_window_exits; + ++vcpu->stat->irq_window_exits; return 1; } @@ -3664,7 +3664,7 @@ static void svm_inject_nmi(struct kvm_vcpu *vcpu) svm->nmi_masked = true; svm_set_iret_intercept(svm); } - ++vcpu->stat.nmi_injections; + ++vcpu->stat->nmi_injections; } static bool svm_is_vnmi_pending(struct kvm_vcpu *vcpu) @@ -3695,7 +3695,7 @@ static bool svm_set_vnmi_pending(struct kvm_vcpu *vcpu) * the NMI is "injected", but for all intents and purposes, passing the * NMI off to hardware counts as injection. 
*/ - ++vcpu->stat.nmi_injections; + ++vcpu->stat->nmi_injections; return true; } @@ -3716,7 +3716,7 @@ static void svm_inject_irq(struct kvm_vcpu *vcpu, bool reinjected) trace_kvm_inj_virq(vcpu->arch.interrupt.nr, vcpu->arch.interrupt.soft, reinjected); - ++vcpu->stat.irq_injections; + ++vcpu->stat->irq_injections; svm->vmcb->control.event_inj = vcpu->arch.interrupt.nr | SVM_EVTINJ_VALID | type; @@ -4368,7 +4368,7 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu, /* Track VMRUNs that have made past consistency checking */ if (svm->nested.nested_run_pending && svm->vmcb->control.exit_code != SVM_EXIT_ERR) - ++vcpu->stat.nested_run; + ++vcpu->stat->nested_run; svm->nested.nested_run_pending = 0; } diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c index 84369f539fb2..cf894f572321 100644 --- a/arch/x86/kvm/vmx/tdx.c +++ b/arch/x86/kvm/vmx/tdx.c @@ -813,7 +813,7 @@ static void tdx_prepare_switch_to_host(struct kvm_vcpu *vcpu) if (!vt->guest_state_loaded) return; - ++vcpu->stat.host_state_reload; + ++vcpu->stat->host_state_reload; wrmsrl(MSR_KERNEL_GS_BASE, vt->msr_host_kernel_gs_base); if (tdx->guest_entered) { @@ -1082,7 +1082,7 @@ fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit) void tdx_inject_nmi(struct kvm_vcpu *vcpu) { - ++vcpu->stat.nmi_injections; + ++vcpu->stat->nmi_injections; td_management_write8(to_tdx(vcpu), TD_VCPU_PEND_NMI, 1); /* * From KVM's perspective, NMI injection is completed right after @@ -1321,7 +1321,7 @@ static int tdx_emulate_io(struct kvm_vcpu *vcpu) u64 size, write; int ret; - ++vcpu->stat.io_exits; + ++vcpu->stat->io_exits; size = tdx->vp_enter_args.r12; write = tdx->vp_enter_args.r13; @@ -2072,7 +2072,7 @@ int tdx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t fastpath) case EXIT_REASON_EXCEPTION_NMI: return tdx_handle_exception_nmi(vcpu); case EXIT_REASON_EXTERNAL_INTERRUPT: - ++vcpu->stat.irq_exits; + ++vcpu->stat->irq_exits; return 1; case EXIT_REASON_CPUID: return tdx_emulate_cpuid(vcpu); diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 19dc85e5ac37..02458bb0b486 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -1361,7 +1361,7 @@ static void vmx_prepare_switch_to_host(struct vcpu_vmx *vmx) host_state = &vmx->loaded_vmcs->host_state; - ++vmx->vcpu.stat.host_state_reload; + ++vmx->vcpu.stat->host_state_reload; #ifdef CONFIG_X86_64 rdmsrl(MSR_KERNEL_GS_BASE, vmx->msr_guest_kernel_gs_base); @@ -4922,7 +4922,7 @@ void vmx_inject_irq(struct kvm_vcpu *vcpu, bool reinjected) trace_kvm_inj_virq(irq, vcpu->arch.interrupt.soft, reinjected); - ++vcpu->stat.irq_injections; + ++vcpu->stat->irq_injections; if (vmx->rmode.vm86_active) { int inc_eip = 0; if (vcpu->arch.interrupt.soft) @@ -4959,7 +4959,7 @@ void vmx_inject_nmi(struct kvm_vcpu *vcpu) vmx->loaded_vmcs->vnmi_blocked_time = 0; } - ++vcpu->stat.nmi_injections; + ++vcpu->stat->nmi_injections; vmx->loaded_vmcs->nmi_known_unmasked = false; if (vmx->rmode.vm86_active) { @@ -5353,7 +5353,7 @@ static int handle_exception_nmi(struct kvm_vcpu *vcpu) static __always_inline int handle_external_interrupt(struct kvm_vcpu *vcpu) { - ++vcpu->stat.irq_exits; + ++vcpu->stat->irq_exits; return 1; } @@ -5373,7 +5373,7 @@ static int handle_io(struct kvm_vcpu *vcpu) exit_qualification = vmx_get_exit_qual(vcpu); string = (exit_qualification & 16) != 0; - ++vcpu->stat.io_exits; + ++vcpu->stat->io_exits; if (string) return kvm_emulate_instruction(vcpu, 0); @@ -5633,7 +5633,7 @@ static int handle_interrupt_window(struct kvm_vcpu *vcpu) 
kvm_make_request(KVM_REQ_EVENT, vcpu); - ++vcpu->stat.irq_window_exits; + ++vcpu->stat->irq_window_exits; return 1; } @@ -5811,7 +5811,7 @@ static int handle_nmi_window(struct kvm_vcpu *vcpu) return -EIO; exec_controls_clearbit(to_vmx(vcpu), CPU_BASED_NMI_WINDOW_EXITING); - ++vcpu->stat.nmi_window_exits; + ++vcpu->stat->nmi_window_exits; kvm_make_request(KVM_REQ_EVENT, vcpu); return 1; @@ -6062,7 +6062,7 @@ static int handle_notify(struct kvm_vcpu *vcpu) unsigned long exit_qual = vmx_get_exit_qual(vcpu); bool context_invalid = exit_qual & NOTIFY_VM_CONTEXT_INVALID; - ++vcpu->stat.notify_window_exits; + ++vcpu->stat->notify_window_exits; /* * Notify VM exit happened while executing iret from NMI, @@ -6666,7 +6666,7 @@ static noinstr void vmx_l1d_flush(struct kvm_vcpu *vcpu) return; } - vcpu->stat.l1d_flush++; + vcpu->stat->l1d_flush++; if (static_cpu_has(X86_FEATURE_FLUSH_L1D)) { native_wrmsrl(MSR_IA32_FLUSH_CMD, L1D_FLUSH); @@ -7450,7 +7450,7 @@ fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit) */ if (vmx->nested.nested_run_pending && !vmx_get_exit_reason(vcpu).failed_vmentry) - ++vcpu->stat.nested_run; + ++vcpu->stat->nested_run; vmx->nested.nested_run_pending = 0; } diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 98a36df7cf62..2c8bdb139b75 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -949,7 +949,7 @@ static int complete_emulated_insn_gp(struct kvm_vcpu *vcpu, int err) void kvm_inject_page_fault(struct kvm_vcpu *vcpu, struct x86_exception *fault) { - ++vcpu->stat.pf_guest; + ++vcpu->stat->pf_guest; /* * Async #PF in L2 is always forwarded to L1 as a VM-Exit regardless of @@ -3607,7 +3607,7 @@ static void kvmclock_reset(struct kvm_vcpu *vcpu) static void kvm_vcpu_flush_tlb_all(struct kvm_vcpu *vcpu) { - ++vcpu->stat.tlb_flush; + ++vcpu->stat->tlb_flush; kvm_x86_call(flush_tlb_all)(vcpu); /* Flushing all ASIDs flushes the current ASID... */ @@ -3616,7 +3616,7 @@ static void kvm_vcpu_flush_tlb_all(struct kvm_vcpu *vcpu) static void kvm_vcpu_flush_tlb_guest(struct kvm_vcpu *vcpu) { - ++vcpu->stat.tlb_flush; + ++vcpu->stat->tlb_flush; if (!tdp_enabled) { /* @@ -3641,7 +3641,7 @@ static void kvm_vcpu_flush_tlb_guest(struct kvm_vcpu *vcpu) static inline void kvm_vcpu_flush_tlb_current(struct kvm_vcpu *vcpu) { - ++vcpu->stat.tlb_flush; + ++vcpu->stat->tlb_flush; kvm_x86_call(flush_tlb_current)(vcpu); } @@ -5067,11 +5067,11 @@ static void kvm_steal_time_set_preempted(struct kvm_vcpu *vcpu) * preempted if and only if the VM-Exit was due to a host interrupt. 
*/ if (!vcpu->arch.at_instruction_boundary) { - vcpu->stat.preemption_other++; + vcpu->stat->preemption_other++; return; } - vcpu->stat.preemption_reported++; + vcpu->stat->preemption_reported++; if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED)) return; @@ -8874,7 +8874,7 @@ static int handle_emulation_failure(struct kvm_vcpu *vcpu, int emulation_type) { struct kvm *kvm = vcpu->kvm; - ++vcpu->stat.insn_emulation_fail; + ++vcpu->stat->insn_emulation_fail; trace_kvm_emulate_insn_failed(vcpu); if (emulation_type & EMULTYPE_VMWARE_GP) { @@ -9119,7 +9119,7 @@ int x86_decode_emulated_instruction(struct kvm_vcpu *vcpu, int emulation_type, r = x86_decode_insn(ctxt, insn, insn_len, emulation_type); trace_kvm_emulate_insn_start(vcpu); - ++vcpu->stat.insn_emulation; + ++vcpu->stat->insn_emulation; return r; } @@ -9285,7 +9285,7 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, } r = 0; } else if (vcpu->mmio_needed) { - ++vcpu->stat.mmio_exits; + ++vcpu->stat->mmio_exits; if (!vcpu->mmio_is_write) writeback = false; @@ -10011,7 +10011,7 @@ static void kvm_sched_yield(struct kvm_vcpu *vcpu, unsigned long dest_id) struct kvm_vcpu *target = NULL; struct kvm_apic_map *map; - vcpu->stat.directed_yield_attempted++; + vcpu->stat->directed_yield_attempted++; if (single_task_running()) goto no_yield; @@ -10034,7 +10034,7 @@ static void kvm_sched_yield(struct kvm_vcpu *vcpu, unsigned long dest_id) if (kvm_vcpu_yield_to(target) <= 0) goto no_yield; - vcpu->stat.directed_yield_successful++; + vcpu->stat->directed_yield_successful++; no_yield: return; @@ -10061,7 +10061,7 @@ int ____kvm_emulate_hypercall(struct kvm_vcpu *vcpu, int cpl, unsigned long a3 = kvm_rsi_read(vcpu); int op_64_bit = is_64_bit_hypercall(vcpu); - ++vcpu->stat.hypercalls; + ++vcpu->stat->hypercalls; trace_kvm_hypercall(nr, a0, a1, a2, a3); @@ -10916,7 +10916,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu) if (kvm_check_request(KVM_REQ_EVENT, vcpu) || req_int_win || kvm_xen_has_interrupt(vcpu)) { - ++vcpu->stat.req_event; + ++vcpu->stat->req_event; r = kvm_apic_accept_events(vcpu); if (r < 0) { r = 0; @@ -11048,7 +11048,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu) } /* Note, VM-Exits that go down the "slow" path are accounted below. */ - ++vcpu->stat.exits; + ++vcpu->stat->exits; } /* @@ -11099,11 +11099,11 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu) * VM-Exit on SVM and any ticks that occur between VM-Exit and now. * An instruction is required after local_irq_enable() to fully unblock * interrupts on processors that implement an interrupt shadow, the - * stat.exits increment will do nicely. + * stat->exits increment will do nicely. */ kvm_before_interrupt(vcpu, KVM_HANDLING_IRQ); local_irq_enable(); - ++vcpu->stat.exits; + ++vcpu->stat->exits; local_irq_disable(); kvm_after_interrupt(vcpu); @@ -11321,7 +11321,7 @@ static int vcpu_run(struct kvm_vcpu *vcpu) kvm_vcpu_ready_for_interrupt_injection(vcpu)) { r = 0; vcpu->run->exit_reason = KVM_EXIT_IRQ_WINDOW_OPEN; - ++vcpu->stat.request_irq_exits; + ++vcpu->stat->request_irq_exits; break; } @@ -11346,7 +11346,7 @@ static int __kvm_emulate_halt(struct kvm_vcpu *vcpu, int state, int reason) * managed by userspace, in which case userspace is responsible for * handling wake events. 
*/ - ++vcpu->stat.halt_exits; + ++vcpu->stat->halt_exits; if (lapic_in_kernel(vcpu)) { if (kvm_vcpu_has_events(vcpu) || vcpu->arch.pv.pv_unhalted) state = KVM_MP_STATE_RUNNABLE; @@ -11515,7 +11515,7 @@ static void kvm_load_guest_fpu(struct kvm_vcpu *vcpu) static void kvm_put_guest_fpu(struct kvm_vcpu *vcpu) { fpu_swap_kvm_fpstate(&vcpu->arch.guest_fpu, false); - ++vcpu->stat.fpu_reload; + ++vcpu->stat->fpu_reload; trace_kvm_fpu(0); } @@ -11564,7 +11564,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) if (signal_pending(current)) { r = -EINTR; kvm_run->exit_reason = KVM_EXIT_INTR; - ++vcpu->stat.signal_exits; + ++vcpu->stat->signal_exits; } goto out; } diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index dbca418d64f5..d2e0c0e8ff17 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -393,7 +393,8 @@ struct kvm_vcpu { bool ready; bool scheduled_out; struct kvm_vcpu_arch arch; - struct kvm_vcpu_stat stat; + struct kvm_vcpu_stat *stat; + struct kvm_vcpu_stat __stat; char stats_id[KVM_STATS_NAME_SIZE]; struct kvm_dirty_ring dirty_ring; @@ -2489,7 +2490,7 @@ static inline int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu) static inline void kvm_handle_signal_exit(struct kvm_vcpu *vcpu) { vcpu->run->exit_reason = KVM_EXIT_INTR; - vcpu->stat.signal_exits++; + vcpu->stat->signal_exits++; } #endif /* CONFIG_KVM_XFER_TO_GUEST_WORK */ diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index b08fea91dc74..dce89a2f0a31 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -3632,7 +3632,7 @@ bool kvm_vcpu_block(struct kvm_vcpu *vcpu) struct rcuwait *wait = kvm_arch_vcpu_get_wait(vcpu); bool waited = false; - vcpu->stat.generic.blocking = 1; + vcpu->stat->generic.blocking = 1; preempt_disable(); kvm_arch_vcpu_blocking(vcpu); @@ -3654,7 +3654,7 @@ bool kvm_vcpu_block(struct kvm_vcpu *vcpu) kvm_arch_vcpu_unblocking(vcpu); preempt_enable(); - vcpu->stat.generic.blocking = 0; + vcpu->stat->generic.blocking = 0; return waited; } @@ -3662,16 +3662,16 @@ bool kvm_vcpu_block(struct kvm_vcpu *vcpu) static inline void update_halt_poll_stats(struct kvm_vcpu *vcpu, ktime_t start, ktime_t end, bool success) { - struct kvm_vcpu_stat_generic *stats = &vcpu->stat.generic; + struct kvm_vcpu_stat_generic *stats = &vcpu->stat->generic; u64 poll_ns = ktime_to_ns(ktime_sub(end, start)); - ++vcpu->stat.generic.halt_attempted_poll; + ++vcpu->stat->generic.halt_attempted_poll; if (success) { - ++vcpu->stat.generic.halt_successful_poll; + ++vcpu->stat->generic.halt_successful_poll; if (!vcpu_valid_wakeup(vcpu)) - ++vcpu->stat.generic.halt_poll_invalid; + ++vcpu->stat->generic.halt_poll_invalid; stats->halt_poll_success_ns += poll_ns; KVM_STATS_LOG_HIST_UPDATE(stats->halt_poll_success_hist, poll_ns); @@ -3735,9 +3735,9 @@ void kvm_vcpu_halt(struct kvm_vcpu *vcpu) cur = ktime_get(); if (waited) { - vcpu->stat.generic.halt_wait_ns += + vcpu->stat->generic.halt_wait_ns += ktime_to_ns(cur) - ktime_to_ns(poll_end); - KVM_STATS_LOG_HIST_UPDATE(vcpu->stat.generic.halt_wait_hist, + KVM_STATS_LOG_HIST_UPDATE(vcpu->stat->generic.halt_wait_hist, ktime_to_ns(cur) - ktime_to_ns(poll_end)); } out: @@ -3782,7 +3782,7 @@ bool kvm_vcpu_wake_up(struct kvm_vcpu *vcpu) { if (__kvm_vcpu_wake_up(vcpu)) { WRITE_ONCE(vcpu->ready, true); - ++vcpu->stat.generic.halt_wakeup; + ++vcpu->stat->generic.halt_wakeup; return true; } @@ -4174,6 +4174,7 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, unsigned long id) vcpu->run = page_address(page); vcpu->plane0 = vcpu; + vcpu->stat = 
&vcpu->__stat; kvm_vcpu_init(vcpu, kvm, id); r = kvm_arch_vcpu_create(vcpu);

From patchwork Tue Apr 1 16:10:48 2025
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 14035098
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: roy.hopkins@suse.com, seanjc@google.com, thomas.lendacky@amd.com, ashish.kalra@amd.com, michael.roth@amd.com, jroedel@suse.de, nsaenz@amazon.com, anelkz@amazon.de, James.Bottomley@HansenPartnership.com
Subject: [PATCH 11/29] KVM: anticipate allocation of dirty ring
Date: Tue, 1 Apr 2025 18:10:48 +0200
Message-ID: <20250401161106.790710-12-pbonzini@redhat.com>
In-Reply-To: <20250401161106.790710-1-pbonzini@redhat.com>
References: <20250401161106.790710-1-pbonzini@redhat.com>

Put together code that deals with data that is shared by all planes: vcpu->run and dirty ring.
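Since the diff is hard to read in this flattened form, here is a condensed view of the setup/teardown order in kvm_vm_ioctl_create_vcpu() that results from this patch (taken from the diff below, with locking and unrelated code elided; not a standalone snippet, the diff is authoritative):

	/* data shared by all planes is now allocated together */
	vcpu->run = page_address(page);			/* shared kvm_run page */

	if (kvm->dirty_ring_size) {
		r = kvm_dirty_ring_alloc(kvm, &vcpu->dirty_ring,
					 id, kvm->dirty_ring_size);
		if (r)
			goto vcpu_free_run_page;	/* unwind: free only the run page */
	}

	vcpu->plane0 = vcpu;
	vcpu->stat = &vcpu->__stat;
	kvm_vcpu_init(vcpu, kvm, id);

	r = kvm_arch_vcpu_create(vcpu);
	if (r)
		goto vcpu_free_dirty_ring;		/* unwind: dirty ring, then run page */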
Signed-off-by: Paolo Bonzini --- virt/kvm/kvm_main.c | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index dce89a2f0a31..4c7e379fbf7d 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -4173,20 +4173,20 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, unsigned long id) } vcpu->run = page_address(page); + if (kvm->dirty_ring_size) { + r = kvm_dirty_ring_alloc(kvm, &vcpu->dirty_ring, + id, kvm->dirty_ring_size); + if (r) + goto vcpu_free_run_page; + } + vcpu->plane0 = vcpu; vcpu->stat = &vcpu->__stat; kvm_vcpu_init(vcpu, kvm, id); r = kvm_arch_vcpu_create(vcpu); if (r) - goto vcpu_free_run_page; - - if (kvm->dirty_ring_size) { - r = kvm_dirty_ring_alloc(kvm, &vcpu->dirty_ring, - id, kvm->dirty_ring_size); - if (r) - goto arch_vcpu_destroy; - } + goto vcpu_free_dirty_ring; mutex_lock(&kvm->lock); @@ -4240,9 +4240,9 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, unsigned long id) xa_erase(&kvm->planes[0]->vcpu_array, vcpu->vcpu_idx); unlock_vcpu_destroy: mutex_unlock(&kvm->lock); - kvm_dirty_ring_free(&vcpu->dirty_ring); -arch_vcpu_destroy: kvm_arch_vcpu_destroy(vcpu); +vcpu_free_dirty_ring: + kvm_dirty_ring_free(&vcpu->dirty_ring); vcpu_free_run_page: free_page((unsigned long)vcpu->run); vcpu_free:

From patchwork Tue Apr 1 16:10:49 2025
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 14035100
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: roy.hopkins@suse.com, seanjc@google.com, thomas.lendacky@amd.com, ashish.kalra@amd.com, michael.roth@amd.com, jroedel@suse.de, nsaenz@amazon.com, anelkz@amazon.de, James.Bottomley@HansenPartnership.com
Subject: [PATCH 12/29] KVM: share dirty ring for same vCPU id on different planes
Date: Tue, 1 Apr 2025 18:10:49 +0200
Message-ID: <20250401161106.790710-13-pbonzini@redhat.com>
In-Reply-To: <20250401161106.790710-1-pbonzini@redhat.com>
References: <20250401161106.790710-1-pbonzini@redhat.com>

The dirty page ring is read by mmap()-ing the vCPU file descriptor, which is only possible for plane 0.
This is not a problem because it is only filled by KVM_RUN which takes the plane-0 vCPU mutex, and it is therefore possible to share it for vCPUs that have the same id but are on different planes. (TODO: double check). Signed-off-by: Paolo Bonzini --- include/linux/kvm_host.h | 6 ++++-- virt/kvm/dirty_ring.c | 5 +++-- virt/kvm/kvm_main.c | 10 +++++----- 3 files changed, 12 insertions(+), 9 deletions(-) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index d2e0c0e8ff17..b511aed2de8e 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -394,9 +394,11 @@ struct kvm_vcpu { bool scheduled_out; struct kvm_vcpu_arch arch; struct kvm_vcpu_stat *stat; - struct kvm_vcpu_stat __stat; char stats_id[KVM_STATS_NAME_SIZE]; - struct kvm_dirty_ring dirty_ring; + struct kvm_dirty_ring *dirty_ring; + + struct kvm_vcpu_stat __stat; + struct kvm_dirty_ring __dirty_ring; /* * The most recently used memslot by this vCPU and the slots generation diff --git a/virt/kvm/dirty_ring.c b/virt/kvm/dirty_ring.c index d14ffc7513ee..66e6a6a67d13 100644 --- a/virt/kvm/dirty_ring.c +++ b/virt/kvm/dirty_ring.c @@ -172,11 +172,12 @@ int kvm_dirty_ring_reset(struct kvm *kvm, struct kvm_dirty_ring *ring) void kvm_dirty_ring_push(struct kvm_vcpu *vcpu, u32 slot, u64 offset) { - struct kvm_dirty_ring *ring = &vcpu->dirty_ring; + struct kvm_dirty_ring *ring = vcpu->dirty_ring; struct kvm_dirty_gfn *entry; /* It should never get full */ WARN_ON_ONCE(kvm_dirty_ring_full(ring)); + lockdep_assert_held(&vcpu->plane0->mutex); entry = &ring->dirty_gfns[ring->dirty_index & (ring->size - 1)]; @@ -204,7 +205,7 @@ bool kvm_dirty_ring_check_request(struct kvm_vcpu *vcpu) * the dirty ring is reset by userspace. */ if (kvm_check_request(KVM_REQ_DIRTY_RING_SOFT_FULL, vcpu) && - kvm_dirty_ring_soft_full(&vcpu->dirty_ring)) { + kvm_dirty_ring_soft_full(vcpu->dirty_ring)) { kvm_make_request(KVM_REQ_DIRTY_RING_SOFT_FULL, vcpu); vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL; trace_kvm_dirty_ring_exit(vcpu); diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 4c7e379fbf7d..863fd80ddfbe 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -466,7 +466,7 @@ static void kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id) static void kvm_vcpu_destroy(struct kvm_vcpu *vcpu) { kvm_arch_vcpu_destroy(vcpu); - kvm_dirty_ring_free(&vcpu->dirty_ring); + kvm_dirty_ring_free(vcpu->dirty_ring); /* * No need for rcu_read_lock as VCPU_RUN is the only place that changes @@ -4038,7 +4038,7 @@ static vm_fault_t kvm_vcpu_fault(struct vm_fault *vmf) #endif else if (kvm_page_in_dirty_ring(vcpu->kvm, vmf->pgoff)) page = kvm_dirty_ring_get_page( - &vcpu->dirty_ring, + vcpu->dirty_ring, vmf->pgoff - KVM_DIRTY_LOG_PAGE_OFFSET); else return kvm_arch_vcpu_fault(vcpu, vmf); @@ -4174,7 +4174,7 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, unsigned long id) vcpu->run = page_address(page); if (kvm->dirty_ring_size) { - r = kvm_dirty_ring_alloc(kvm, &vcpu->dirty_ring, + r = kvm_dirty_ring_alloc(kvm, &vcpu->__dirty_ring, id, kvm->dirty_ring_size); if (r) goto vcpu_free_run_page; @@ -4242,7 +4242,7 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, unsigned long id) mutex_unlock(&kvm->lock); kvm_arch_vcpu_destroy(vcpu); vcpu_free_dirty_ring: - kvm_dirty_ring_free(&vcpu->dirty_ring); + kvm_dirty_ring_free(&vcpu->__dirty_ring); vcpu_free_run_page: free_page((unsigned long)vcpu->run); vcpu_free: @@ -5047,7 +5047,7 @@ static int kvm_vm_ioctl_reset_dirty_pages(struct kvm *kvm) mutex_lock(&kvm->slots_lock); 
kvm_for_each_vcpu(i, vcpu, kvm) - cleared += kvm_dirty_ring_reset(vcpu->kvm, &vcpu->dirty_ring); + cleared += kvm_dirty_ring_reset(vcpu->kvm, vcpu->dirty_ring); mutex_unlock(&kvm->slots_lock);

From patchwork Tue Apr 1 16:10:50 2025
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 14035101
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: roy.hopkins@suse.com, seanjc@google.com, thomas.lendacky@amd.com, ashish.kalra@amd.com, michael.roth@amd.com, jroedel@suse.de, nsaenz@amazon.com, anelkz@amazon.de, James.Bottomley@HansenPartnership.com
Subject: [PATCH 13/29] KVM: implement vCPU creation for extra planes
Date: Tue, 1 Apr 2025 18:10:50 +0200
Message-ID: <20250401161106.790710-14-pbonzini@redhat.com>
In-Reply-To: <20250401161106.790710-1-pbonzini@redhat.com>
References: <20250401161106.790710-1-pbonzini@redhat.com>

For userspace to have fun with planes it is probably useful to let them create vCPUs on the non-zero planes as well. Since such vCPUs are backed by the same struct kvm_vcpu, these are regular vCPU file descriptors except that they only allow a small subset of ioctls (mostly get/set) and they share some of the backing resources, notably vcpu->run. TODO: prefault might be useful on non-default planes as well? Signed-off-by: Paolo Bonzini --- Documentation/virt/kvm/locking.rst | 3 + include/linux/kvm_host.h | 4 +- include/uapi/linux/kvm.h | 1 + virt/kvm/kvm_main.c | 167 +++++++++++++++++++++++------ 4 files changed, 142 insertions(+), 33 deletions(-) diff --git a/Documentation/virt/kvm/locking.rst b/Documentation/virt/kvm/locking.rst index ae8bce7fecbe..ad22344deb28 100644 --- a/Documentation/virt/kvm/locking.rst +++ b/Documentation/virt/kvm/locking.rst @@ -26,6 +26,9 @@ The acquisition orders for mutexes are as follows: are taken on the waiting side when modifying memslots, so MMU notifiers must not take either kvm->slots_lock or kvm->slots_arch_lock.
+- when VMs have multiple planes, vcpu->mutex for plane 0 can taken + outside vcpu->mutex for the same id and another plane + cpus_read_lock() vs kvm_lock: - Taking cpus_read_lock() outside of kvm_lock is problematic, despite that diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index b511aed2de8e..99fd90c5d71b 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -343,6 +343,9 @@ struct kvm_vcpu { struct mutex mutex; + /* Only valid on plane 0 */ + bool wants_to_run; + /* Shared for all planes */ struct kvm_run *run; @@ -388,7 +391,6 @@ struct kvm_vcpu { bool dy_eligible; } spin_loop; #endif - bool wants_to_run; bool preempted; bool ready; bool scheduled_out; diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index 96d25c7fa18f..24fa002cd7c1 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -1691,5 +1691,6 @@ struct kvm_pre_fault_memory { }; #define KVM_CREATE_PLANE _IO(KVMIO, 0xd6) +#define KVM_CREATE_VCPU_PLANE _IO(KVMIO, 0xd7) #endif /* __LINUX_KVM_H */ diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 863fd80ddfbe..06fa2a6ad96f 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -438,11 +438,11 @@ void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc) } #endif -static void kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id) +static void kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm_plane *plane, unsigned id) { mutex_init(&vcpu->mutex); vcpu->cpu = -1; - vcpu->kvm = kvm; + vcpu->kvm = plane->kvm; vcpu->vcpu_id = id; vcpu->pid = NULL; rwlock_init(&vcpu->pid_lock); @@ -459,8 +459,13 @@ static void kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id) vcpu->last_used_slot = NULL; /* Fill the stats id string for the vcpu */ - snprintf(vcpu->stats_id, sizeof(vcpu->stats_id), "kvm-%d/vcpu-%d", - task_pid_nr(current), id); + if (plane->plane) { + snprintf(vcpu->stats_id, sizeof(vcpu->stats_id), "kvm-%d/vcpu-%d:%d", + task_pid_nr(current), id, plane->plane); + } else { + snprintf(vcpu->stats_id, sizeof(vcpu->stats_id), "kvm-%d/vcpu-%d", + task_pid_nr(current), id); + } } static void kvm_vcpu_destroy(struct kvm_vcpu *vcpu) @@ -475,7 +480,9 @@ static void kvm_vcpu_destroy(struct kvm_vcpu *vcpu) */ put_pid(vcpu->pid); - free_page((unsigned long)vcpu->run); + if (!vcpu->plane) + free_page((unsigned long)vcpu->run); + kmem_cache_free(kvm_vcpu_cache, vcpu); } @@ -4026,6 +4033,9 @@ static vm_fault_t kvm_vcpu_fault(struct vm_fault *vmf) struct kvm_vcpu *vcpu = vmf->vma->vm_file->private_data; struct page *page; + if (vcpu->plane) + return VM_FAULT_SIGBUS; + if (vmf->pgoff == 0) page = virt_to_page(vcpu->run); #ifdef CONFIG_X86 @@ -4113,7 +4123,10 @@ static void kvm_create_vcpu_debugfs(struct kvm_vcpu *vcpu) if (!debugfs_initialized()) return; - snprintf(dir_name, sizeof(dir_name), "vcpu%d", vcpu->vcpu_id); + if (vcpu->plane) + snprintf(dir_name, sizeof(dir_name), "vcpu%d:%d", vcpu->vcpu_id, vcpu->plane); + else + snprintf(dir_name, sizeof(dir_name), "vcpu%d", vcpu->vcpu_id); debugfs_dentry = debugfs_create_dir(dir_name, vcpu->kvm->debugfs_dentry); debugfs_create_file("pid", 0444, debugfs_dentry, vcpu, @@ -4126,9 +4139,10 @@ static void kvm_create_vcpu_debugfs(struct kvm_vcpu *vcpu) /* * Creates some virtual cpus. Good luck creating more than one. 
*/ -static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, unsigned long id) +static int kvm_vm_ioctl_create_vcpu(struct kvm_plane *plane, struct kvm_vcpu *plane0_vcpu, unsigned long id) { int r; + struct kvm *kvm = plane->kvm; struct kvm_vcpu *vcpu; struct page *page; @@ -4165,24 +4179,33 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, unsigned long id) goto vcpu_decrement; } - BUILD_BUG_ON(sizeof(struct kvm_run) > PAGE_SIZE); - page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO); - if (!page) { - r = -ENOMEM; - goto vcpu_free; - } - vcpu->run = page_address(page); + if (plane->plane) { + page = NULL; + vcpu->run = plane0_vcpu->run; + } else { + WARN_ON(plane0_vcpu != NULL); + plane0_vcpu = vcpu; - if (kvm->dirty_ring_size) { - r = kvm_dirty_ring_alloc(kvm, &vcpu->__dirty_ring, - id, kvm->dirty_ring_size); - if (r) - goto vcpu_free_run_page; + BUILD_BUG_ON(sizeof(struct kvm_run) > PAGE_SIZE); + page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO); + if (!page) { + r = -ENOMEM; + goto vcpu_free; + } + vcpu->run = page_address(page); + + if (kvm->dirty_ring_size) { + r = kvm_dirty_ring_alloc(kvm, &vcpu->__dirty_ring, + id, kvm->dirty_ring_size); + if (r) + goto vcpu_free_run_page; + } } - vcpu->plane0 = vcpu; - vcpu->stat = &vcpu->__stat; - kvm_vcpu_init(vcpu, kvm, id); + vcpu->plane0 = plane0_vcpu; + vcpu->stat = &plane0_vcpu->__stat; + vcpu->dirty_ring = &plane0_vcpu->__dirty_ring; + kvm_vcpu_init(vcpu, plane, id); r = kvm_arch_vcpu_create(vcpu); if (r) @@ -4190,7 +4213,7 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, unsigned long id) mutex_lock(&kvm->lock); - if (kvm_get_vcpu_by_id(kvm, id)) { + if (kvm_get_plane_vcpu_by_id(plane, id)) { r = -EEXIST; goto unlock_vcpu_destroy; } @@ -4200,8 +4223,13 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, unsigned long id) * release semantics, which ensures the write is visible to kvm_get_vcpu(). */ vcpu->plane = -1; - vcpu->vcpu_idx = atomic_read(&kvm->online_vcpus); - r = xa_insert(&kvm->planes[0]->vcpu_array, vcpu->vcpu_idx, vcpu, GFP_KERNEL_ACCOUNT); + if (plane->plane) + vcpu->vcpu_idx = atomic_read(&kvm->online_vcpus); + else + vcpu->vcpu_idx = plane0_vcpu->vcpu_idx; + + r = xa_insert(&plane->vcpu_array, vcpu->vcpu_idx, + vcpu, GFP_KERNEL_ACCOUNT); WARN_ON_ONCE(r == -EBUSY); if (r) goto unlock_vcpu_destroy; @@ -4220,13 +4248,14 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, unsigned long id) if (r < 0) goto kvm_put_xa_erase; - atomic_inc(&kvm->online_vcpus); + if (!plane0_vcpu) + atomic_inc(&kvm->online_vcpus); /* * Pairs with xa_load() in kvm_get_vcpu, ensuring that online_vcpus * is updated before vcpu->plane. 
*/ - smp_store_release(&vcpu->plane, 0); + smp_store_release(&vcpu->plane, plane->plane); mutex_unlock(&vcpu->mutex); mutex_unlock(&kvm->lock); @@ -4237,14 +4266,15 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, unsigned long id) kvm_put_xa_erase: mutex_unlock(&vcpu->mutex); kvm_put_kvm_no_destroy(kvm); - xa_erase(&kvm->planes[0]->vcpu_array, vcpu->vcpu_idx); + xa_erase(&plane->vcpu_array, vcpu->vcpu_idx); unlock_vcpu_destroy: mutex_unlock(&kvm->lock); kvm_arch_vcpu_destroy(vcpu); vcpu_free_dirty_ring: kvm_dirty_ring_free(&vcpu->__dirty_ring); vcpu_free_run_page: - free_page((unsigned long)vcpu->run); + if (page) + __free_page(page); vcpu_free: kmem_cache_free(kvm_vcpu_cache, vcpu); vcpu_decrement: @@ -4406,6 +4436,35 @@ static int kvm_plane_ioctl_check_extension(struct kvm_plane *plane, long arg) } } +static int kvm_plane_ioctl_create_vcpu(struct kvm_plane *plane, long arg) +{ + int r = -EINVAL; + struct file *file; + struct kvm_vcpu *vcpu; + int fd; + + if (arg != (int)arg) + return -EBADF; + + fd = arg; + file = fget(fd); + if (!file) + return -EBADF; + + if (file->f_op != &kvm_vcpu_fops) + goto err; + + vcpu = file->private_data; + if (vcpu->kvm != plane->kvm) + goto err; + + r = kvm_vm_ioctl_create_vcpu(plane, vcpu, vcpu->vcpu_id); + +err: + fput(file); + return r; +} + static long __kvm_plane_ioctl(struct kvm_plane *plane, unsigned int ioctl, unsigned long arg) { @@ -4432,6 +4491,8 @@ static long __kvm_plane_ioctl(struct kvm_plane *plane, unsigned int ioctl, #endif case KVM_CHECK_EXTENSION: return kvm_plane_ioctl_check_extension(plane, arg); + case KVM_CREATE_VCPU_PLANE: + return kvm_plane_ioctl_create_vcpu(plane, arg); default: return -ENOTTY; } @@ -4463,6 +4524,44 @@ static struct file_operations kvm_plane_fops = { }; +static inline bool kvm_arch_is_vcpu_plane_ioctl(unsigned ioctl) +{ + switch (ioctl) { + case KVM_GET_DEBUGREGS: + case KVM_SET_DEBUGREGS: + case KVM_GET_FPU: + case KVM_SET_FPU: + case KVM_GET_LAPIC: + case KVM_SET_LAPIC: + case KVM_GET_MSRS: + case KVM_SET_MSRS: + case KVM_GET_NESTED_STATE: + case KVM_SET_NESTED_STATE: + case KVM_GET_ONE_REG: + case KVM_SET_ONE_REG: + case KVM_GET_REGS: + case KVM_SET_REGS: + case KVM_GET_SREGS: + case KVM_SET_SREGS: + case KVM_GET_SREGS2: + case KVM_SET_SREGS2: + case KVM_GET_VCPU_EVENTS: + case KVM_SET_VCPU_EVENTS: + case KVM_GET_XCRS: + case KVM_SET_XCRS: + case KVM_GET_XSAVE: + case KVM_GET_XSAVE2: + case KVM_SET_XSAVE: + + case KVM_GET_REG_LIST: + case KVM_TRANSLATE: + return true; + + default: + return false; + } +} + static long kvm_vcpu_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg) { @@ -4475,6 +4574,9 @@ static long kvm_vcpu_ioctl(struct file *filp, if (vcpu->kvm->mm != current->mm || vcpu->kvm->vm_dead) return -EIO; + if (vcpu->plane && !kvm_arch_is_vcpu_plane_ioctl(ioctl)) + return -EINVAL; + if (unlikely(_IOC_TYPE(ioctl) != KVMIO)) return -EINVAL; @@ -4958,7 +5060,7 @@ static int kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg) case KVM_CAP_PLANES: if (kvm) return kvm_arch_nr_vcpu_planes(kvm); - return KVM_MAX_PLANES; + return KVM_MAX_VCPU_PLANES; case KVM_CAP_PLANES_FPU: return kvm_arch_planes_share_fpu(kvm); #endif @@ -5201,7 +5303,8 @@ static int kvm_vm_ioctl_create_plane(struct kvm *kvm, unsigned id) struct file *file; int r, fd; - if (id >= KVM_MAX_VCPU_PLANES) + if (id >= kvm_arch_nr_vcpu_planes(kvm) + || WARN_ON_ONCE(id >= KVM_MAX_VCPU_PLANES)) return -EINVAL; guard(mutex)(&kvm->lock); @@ -5259,7 +5362,7 @@ static long kvm_vm_ioctl(struct file *filp, r = 
kvm_vm_ioctl_create_plane(kvm, arg); break; case KVM_CREATE_VCPU: - r = kvm_vm_ioctl_create_vcpu(kvm, arg); + r = kvm_vm_ioctl_create_vcpu(kvm->planes[0], NULL, arg); break; case KVM_ENABLE_CAP: { struct kvm_enable_cap cap;

From patchwork Tue Apr 1 16:10:51 2025
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 14035102
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: roy.hopkins@suse.com, seanjc@google.com, thomas.lendacky@amd.com, ashish.kalra@amd.com, michael.roth@amd.com, jroedel@suse.de, nsaenz@amazon.com, anelkz@amazon.de, James.Bottomley@HansenPartnership.com
Subject: [PATCH 14/29] KVM: pass plane to kvm_arch_vcpu_create
Date: Tue, 1 Apr 2025 18:10:51 +0200
Message-ID: <20250401161106.790710-15-pbonzini@redhat.com>
In-Reply-To: <20250401161106.790710-1-pbonzini@redhat.com>
References: <20250401161106.790710-1-pbonzini@redhat.com>

Pass the plane to architecture-specific code, so that it can also share
backing data between plane 0 and the non-zero planes.
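For context, this hook is reached from userspace through the uAPI added earlier in the series: a plane file descriptor is created from the VM fd, and the plane is then asked for a vCPU by passing it an existing plane-0 vCPU fd. The following is a minimal sketch of that flow, not part of the series itself; it assumes a kernel with these patches applied, so that KVM_CREATE_PLANE and KVM_CREATE_VCPU_PLANE are available from the patched <linux/kvm.h>, and it omits error handling.

```c
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int create_second_plane_vcpu(void)
{
	int kvm = open("/dev/kvm", O_RDWR);
	int vm = ioctl(kvm, KVM_CREATE_VM, 0);

	/* The in-kernel irqchip must exist before any extra plane
	 * (a requirement added later in this series). */
	ioctl(vm, KVM_CREATE_IRQCHIP, 0);

	/* Plane-0 vCPU; its fd is later handed to the new plane. */
	int vcpu0 = ioctl(vm, KVM_CREATE_VCPU, 0);

	/* New plane with id 1, returned as a file descriptor. */
	int plane1 = ioctl(vm, KVM_CREATE_PLANE, 1);

	/*
	 * The argument is the plane-0 vCPU fd; the new vCPU reuses its id
	 * and shares its kvm_run page, as kvm_vm_ioctl_create_vcpu() above
	 * shows, and kvm_arch_vcpu_create() is called with the new plane.
	 */
	return ioctl(plane1, KVM_CREATE_VCPU_PLANE, vcpu0);
}
```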
Signed-off-by: Paolo Bonzini --- arch/arm64/kvm/arm.c | 2 +- arch/loongarch/kvm/vcpu.c | 2 +- arch/mips/kvm/mips.c | 2 +- arch/powerpc/kvm/powerpc.c | 2 +- arch/riscv/kvm/vcpu.c | 2 +- arch/s390/kvm/kvm-s390.c | 2 +- arch/x86/kvm/x86.c | 2 +- include/linux/kvm_host.h | 2 +- virt/kvm/kvm_main.c | 2 +- 9 files changed, 9 insertions(+), 9 deletions(-) diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 94fae442a8b8..3df9a7c164a3 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -427,7 +427,7 @@ int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id) return 0; } -int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) +int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu, struct kvm_plane *plane) { int err; diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c index 470c79e79281..71b0fd05917f 100644 --- a/arch/loongarch/kvm/vcpu.c +++ b/arch/loongarch/kvm/vcpu.c @@ -1479,7 +1479,7 @@ int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id) return 0; } -int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) +int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu, struct kvm_plane *plane) { unsigned long timer_hz; struct loongarch_csrs *csr; diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c index 77637d201699..fec95594c041 100644 --- a/arch/mips/kvm/mips.c +++ b/arch/mips/kvm/mips.c @@ -275,7 +275,7 @@ int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id) return 0; } -int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) +int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu, struct kvm_plane *plane) { int err, size; void *gebase, *p, *handler, *refill_start, *refill_end; diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c index a39919dbaffb..359ca3924461 100644 --- a/arch/powerpc/kvm/powerpc.c +++ b/arch/powerpc/kvm/powerpc.c @@ -762,7 +762,7 @@ static enum hrtimer_restart kvmppc_decrementer_wakeup(struct hrtimer *timer) return HRTIMER_NORESTART; } -int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) +int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu, struct kvm_plane *plane) { int err; diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c index 55fb16307cc6..0f114c01484e 100644 --- a/arch/riscv/kvm/vcpu.c +++ b/arch/riscv/kvm/vcpu.c @@ -107,7 +107,7 @@ int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id) return 0; } -int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) +int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu, struct kvm_plane *plane) { int rc; struct kvm_cpu_context *cntx; diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c index 46759021e924..8e3f8bc04a42 100644 --- a/arch/s390/kvm/kvm-s390.c +++ b/arch/s390/kvm/kvm-s390.c @@ -3970,7 +3970,7 @@ int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id) return 0; } -int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) +int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu, struct kvm_plane *plane) { struct sie_page *sie_page; int rc; diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 2c8bdb139b75..9f699f056ce6 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -12293,7 +12293,7 @@ int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id) return kvm_x86_call(vcpu_precreate)(kvm); } -int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) +int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu, struct kvm_plane *plane) { struct page *page; int r; diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 99fd90c5d71b..16a8b3adb76d 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -1622,7 +1622,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu); void 
kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu); void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu); int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id); -int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu); +int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu, struct kvm_plane *plane); void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu); void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu); diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 06fa2a6ad96f..cb04fe6f8a2c 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -4207,7 +4207,7 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm_plane *plane, struct kvm_vcpu *pl vcpu->dirty_ring = &plane0_vcpu->__dirty_ring; kvm_vcpu_init(vcpu, plane, id); - r = kvm_arch_vcpu_create(vcpu); + r = kvm_arch_vcpu_create(vcpu, plane); if (r) goto vcpu_free_dirty_ring; From patchwork Tue Apr 1 16:10:52 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 14035103 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id EF60C21D5B3 for ; Tue, 1 Apr 2025 16:11:55 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.129.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1743523917; cv=none; b=Ub6yzVqIxsmmdDbg/fHcwJ8lbWFVq3selG9wwZ3fv66eFT2LDeWjwcRpI4trEYokeb9ul1cjhqskEwyVQpj/baHBFSQ9b6lIlZ65iQsN0nIenfOKAk4NpzBlmUZih/D3aybxKcaPFfmLIvYOTMsUWdqohbHSuntXmh2XjhY9kMA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1743523917; c=relaxed/simple; bh=aUHeMAtza3lALIkrYa7Cju0Ow+Rzlcu6X1bDd+6rBGE=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=rjH2akuUixla7ZMgt0Bw012F7004H9XmdgypX0Zyh8c4AfafUC1Xnwi87Vg3yEE9WRMAdPfelI2wwj1hXtVnoW+wpTvTOV3TwTRrphSP5LaRiYl4YlqO/n+Ft/zm37Ec12mAoygn+0VkV87h3hawON90TfV3qWTlounE2+zKVCY= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=Ft2cE/M0; arc=none smtp.client-ip=170.10.129.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="Ft2cE/M0" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1743523915; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=DC/nMXHGZlSx0sJ7rLf3alDuDl6Zb/0M1re23sBSw4g=; b=Ft2cE/M07xFZ3ogA6Poez8rkp3tIhUC+FPEPrFuFXfynQ0LcD+J1srGC4S6hLujVS0Nd5g cDutjg8jwFjUXESWi4XyFIYdebAeV/7uM8xJD3QrVAOoc21kU3sHErnu1bt580X+gowZax OMQwot1VHw5q2A4WBozLLt2ZL1GyJLU= Received: from mail-wm1-f70.google.com (mail-wm1-f70.google.com [209.85.128.70]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-63-TnQrgOVsNO2kF6Y532hU3w-1; Tue, 01 Apr 2025 12:11:52 -0400 X-MC-Unique: TnQrgOVsNO2kF6Y532hU3w-1 
X-Mimecast-MFC-AGG-ID: TnQrgOVsNO2kF6Y532hU3w_1743523911 Received: by mail-wm1-f70.google.com with SMTP id 5b1f17b1804b1-43e9a3d2977so20138225e9.1 for ; Tue, 01 Apr 2025 09:11:51 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1743523911; x=1744128711; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=DC/nMXHGZlSx0sJ7rLf3alDuDl6Zb/0M1re23sBSw4g=; b=vu1qWLH4yWY2hyCYooAobR0n2Ikmi9x3Q9EItpEw9tnwpWYmZmnoJFMeEPosx4vF90 7l1OwA++3yn5Y9qDyyWeucludRySa02CtvLKo3BylGYa1ny4GifZ98Xi3NNDfJ25UG5Q t3cS7DTe1LfcrwYR9Vy+zRTHebm5Tp7Un3hwDkaZ8ahHdtAQtHxFpDq+R6ZSUiX86muL Dclb1wdPodfSoTbBWrnz5pNOKwiZVUgJ6nFztDAZ3aXjw6rqCkY7iKOq4l/Qq69S8eqM WQh46T435rti8wY6F6T6TlA7HkVFrbK/mtBhvUZ6dd0Rm0/qQCz2tXPYxZMen7QCRM8X dVUg== X-Forwarded-Encrypted: i=1; AJvYcCWHVlUTFGkhcFEIBMINlwceFCJ4Rff4lWC0+/UE9zjzfhitehLih5WwPaeVKJGgj/YtsoY=@vger.kernel.org X-Gm-Message-State: AOJu0YxpfRbRKJEIih6zxCmoADqSXVUwuamBdFghkDu7UI0ivg3GM3cI 6u4Is1kFZWwNGhrRFmW+QkIZbqcJVi+hG8WDNalzmWOeKyLBH4ZrJRF8z4ZFcJcKAn1/57bttvR 3Sajb4JfEl/thqRqyDA/aVc+14QUfPYkV6p0yKyrrX/uCFEfuPw== X-Gm-Gg: ASbGncvqp1zWiIsAtVUylQIMs6WF1BNkb0Z2a8tM77KKTZzN5P4Q6ePH7c14+pJY69I KrEltdfOF5MF4CzUvPFPC7VUhZPDGtqVK+wEVVYg9EkY9MM2iYHx3nXtchV+qs/RbRyjLSSloGV sDxY15YFiLePWopfVkwWtLyn/coJ2Md6z9oh+27b91JUidfjy0Jt+7IgArcwdKEbMAxtLH7N8/j lRFfl3F/sFYSyXJ/jr/LxIXH+owONkqnh4liWdEYhY0nzj7pkO03x7z5n37HvZCr4bcNw0KuetA k+liHgD//Xt0ygKuhJFBBQ== X-Received: by 2002:a05:600c:444d:b0:43c:fe85:e4ba with SMTP id 5b1f17b1804b1-43db6248f51mr106824685e9.15.1743523910758; Tue, 01 Apr 2025 09:11:50 -0700 (PDT) X-Google-Smtp-Source: AGHT+IE6cteiO2nE177KrCSu7vAEZTo6A+KIZLn6H0SoIEEXCGg9tK9PGpoiT8WxAi5lNC52sLjZ4w== X-Received: by 2002:a05:600c:444d:b0:43c:fe85:e4ba with SMTP id 5b1f17b1804b1-43db6248f51mr106824335e9.15.1743523910433; Tue, 01 Apr 2025 09:11:50 -0700 (PDT) Received: from [192.168.10.48] ([176.206.111.201]) by smtp.gmail.com with ESMTPSA id ffacd0b85a97d-39c0b6588d0sm14276926f8f.7.2025.04.01.09.11.48 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 01 Apr 2025 09:11:48 -0700 (PDT) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: roy.hopkins@suse.com, seanjc@google.com, thomas.lendacky@amd.com, ashish.kalra@amd.com, michael.roth@amd.com, jroedel@suse.de, nsaenz@amazon.com, anelkz@amazon.de, James.Bottomley@HansenPartnership.com Subject: [PATCH 15/29] KVM: x86: pass vcpu to kvm_pv_send_ipi() Date: Tue, 1 Apr 2025 18:10:52 +0200 Message-ID: <20250401161106.790710-16-pbonzini@redhat.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250401161106.790710-1-pbonzini@redhat.com> References: <20250401161106.790710-1-pbonzini@redhat.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Signed-off-by: Paolo Bonzini --- arch/x86/include/asm/kvm_host.h | 2 +- arch/x86/kvm/lapic.c | 4 ++-- arch/x86/kvm/x86.c | 2 +- 3 files changed, 4 insertions(+), 4 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 8240f565a764..e29694a97a19 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -2334,7 +2334,7 @@ int kvm_cpu_get_extint(struct kvm_vcpu *v); int kvm_cpu_get_interrupt(struct kvm_vcpu *v); void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event); -int kvm_pv_send_ipi(struct kvm *kvm, unsigned long ipi_bitmap_low, +int kvm_pv_send_ipi(struct kvm_vcpu 
*source, unsigned long ipi_bitmap_low, unsigned long ipi_bitmap_high, u32 min, unsigned long icr, int op_64_bit); diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c index d8d11d9fd30a..c078269f7b1d 100644 --- a/arch/x86/kvm/lapic.c +++ b/arch/x86/kvm/lapic.c @@ -861,7 +861,7 @@ static int __pv_send_ipi(unsigned long *ipi_bitmap, struct kvm_apic_map *map, return count; } -int kvm_pv_send_ipi(struct kvm *kvm, unsigned long ipi_bitmap_low, +int kvm_pv_send_ipi(struct kvm_vcpu *source, unsigned long ipi_bitmap_low, unsigned long ipi_bitmap_high, u32 min, unsigned long icr, int op_64_bit) { @@ -879,7 +879,7 @@ int kvm_pv_send_ipi(struct kvm *kvm, unsigned long ipi_bitmap_low, irq.trig_mode = icr & APIC_INT_LEVELTRIG; rcu_read_lock(); - map = rcu_dereference(kvm->arch.apic_map); + map = rcu_dereference(source->kvm->arch.apic_map); count = -EOPNOTSUPP; if (likely(map)) { diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 9f699f056ce6..a527a425c55d 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -10101,7 +10101,7 @@ int ____kvm_emulate_hypercall(struct kvm_vcpu *vcpu, int cpl, if (!guest_pv_has(vcpu, KVM_FEATURE_PV_SEND_IPI)) break; - ret = kvm_pv_send_ipi(vcpu->kvm, a0, a1, a2, a3, op_64_bit); + ret = kvm_pv_send_ipi(vcpu, a0, a1, a2, a3, op_64_bit); break; case KVM_HC_SCHED_YIELD: if (!guest_pv_has(vcpu, KVM_FEATURE_PV_SCHED_YIELD)) From patchwork Tue Apr 1 16:10:53 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 14035104 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 092D820E026 for ; Tue, 1 Apr 2025 16:11:56 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.133.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1743523918; cv=none; b=cNKtJbN13o9EcbqwrMkwJB95Hg8UeYwpGG7cfJRrMU4M8mtlgg95uRCTTFw1XrAirUCpQEaHjzXNhdydEKaJ6THIKjdD+FcOOwWUX4H61NyUQ3JOi2Tmz1W7MJNVIJEtmPHLXLuFxojVQXOQa1divjiHYKfMhfhRhwkNR2fIp+0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1743523918; c=relaxed/simple; bh=4+ShkaclNBVkSqlXTUWslO5wly515CBbMa70/QPy/14=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=bMjDAqKzjax7RkyIwHROoFZ3TSUXcUKZNUUyZIz98zYYkUJtYseAEzeeEsBodVBPxbqlZrGa5LVUpyNTk2yuOkq0L04WSUQeSN4GAtR7ZWdQK1hinBW32xWsM/qzLGLFw/SYGye4sSBt/jH2vpiJ9OtTiRkRtYosLSxc6oWPAfI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=a5ziWGK0; arc=none smtp.client-ip=170.10.133.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="a5ziWGK0" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1743523916; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: 
in-reply-to:in-reply-to:references:references; bh=UaINNrC6inNW3Mym/pTKp2ET1N4DSwNzn/ffTCwZc4A=; b=a5ziWGK0oKoIHNuJiQnGEyZq+DPHg7XmLjZcn5tlBwoJUbdMHwT0ayn/ErPo3Bf3xIvA/7 5qXhc5Lo0TQ+dZDYi8+RjHPVO9FWf26rMuHiaIUyQGS2GEBS4PDu68Tg42LXkYu5U0oTif VC1VRMZoqL22QsTLmjy8nnsN4yipIU4= Received: from mail-wr1-f70.google.com (mail-wr1-f70.google.com [209.85.221.70]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-78-7ccnJkkLNaiumHLPY130Eg-1; Tue, 01 Apr 2025 12:11:55 -0400 X-MC-Unique: 7ccnJkkLNaiumHLPY130Eg-1 X-Mimecast-MFC-AGG-ID: 7ccnJkkLNaiumHLPY130Eg_1743523914 Received: by mail-wr1-f70.google.com with SMTP id ffacd0b85a97d-3912d5f6689so3444512f8f.1 for ; Tue, 01 Apr 2025 09:11:54 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1743523913; x=1744128713; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=UaINNrC6inNW3Mym/pTKp2ET1N4DSwNzn/ffTCwZc4A=; b=siN0CsUWCkNJJ1vRwMT+X0pZT6Ajva3ehfkHDrFLSagUNsFPj8P9aA7U99KWgQOUaC H3RO1gZNP2upZyB5uClqn5z+Xh77cIPMLtmGllEmhwmy9Corh8gQ0t0iU+2iNP6CIYYO ZpAXVQ/Nb5LsiGw3SYGrTEWgrqZQdxq86fqo3qsBnAnMOX4Ow8/aQuZVaTl5qkmnIvjQ Q+QEwIyaxNDxPzpF1graBpSjEW1G9nXuScGbQiVd6cG5R6HCCvKyXylt054fyuLdDwit HHzg0DrZZLRZMWTnZV/5ua0+i6k+Oi7GwJ8PImYoudeseJUNisnXvoGlvF5mD/dHBVVY dT6A== X-Forwarded-Encrypted: i=1; AJvYcCX/9w6z2Y8K8FffYZcePB5DWKor1gZVAH3sovJuKy3P5dbnGdiHYCi2ILgMDfRpfIGUjh8=@vger.kernel.org X-Gm-Message-State: AOJu0YyStOvTvzQB1SieUaVCwV+0yhVMhczJJt2vGKFevnEkJPrYGocG pV8XKiX8CPwirhHEtz1DbNgLCabFa1HA6VWkAh2/MQqzdHxkhozQKnurt2xmXVl1IDXezyEY73V cY7qHPqDgXq/fXQeVf4K9Vbk0gXDpughU4vHu0oUjVeqe1JtEaEs2LKmmIw== X-Gm-Gg: ASbGncs167WX/ZW6cj3XU+n0lw6ZSafM/+XA28fVu9FPzT1YV5Hh0IPjCwZbzhTDcdn OCCn7KChLzmMvowTgmRFfHEdY1YrH/sJXVkdzcIBU0+y5qwIeU0455UXGi5x/p8m0P4h9Gt7qsN uGYQsz0rShOxCRmBtI7uULsEfFHMmWtsIfyjKNq/I+Onl2Xe7b2wYAUkv3G32l3oS+r02/HbyXo qu0ZylycRoWJrkm5Wvj8k68Zdzv1ckYKg/hqgkW44wyaoJ+O6tGtTjDK7a409h7vGfM6Y6Eue/N Etf8qcy56cj/QdKHki15Qw== X-Received: by 2002:a05:6000:430e:b0:39c:268e:ae04 with SMTP id ffacd0b85a97d-39c268eae24mr2570930f8f.0.1743523913170; Tue, 01 Apr 2025 09:11:53 -0700 (PDT) X-Google-Smtp-Source: AGHT+IEHP7UEfcGfLr0vO0H9IZqm3DR9p/GuTN1cmhrTGdbGPRl5CO1PoNYMm3svPGs26HA+6q4WUQ== X-Received: by 2002:a05:6000:430e:b0:39c:268e:ae04 with SMTP id ffacd0b85a97d-39c268eae24mr2570871f8f.0.1743523912597; Tue, 01 Apr 2025 09:11:52 -0700 (PDT) Received: from [192.168.10.48] ([176.206.111.201]) by smtp.gmail.com with ESMTPSA id ffacd0b85a97d-39c0b7a41b4sm14291601f8f.85.2025.04.01.09.11.50 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 01 Apr 2025 09:11:51 -0700 (PDT) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: roy.hopkins@suse.com, seanjc@google.com, thomas.lendacky@amd.com, ashish.kalra@amd.com, michael.roth@amd.com, jroedel@suse.de, nsaenz@amazon.com, anelkz@amazon.de, James.Bottomley@HansenPartnership.com Subject: [PATCH 16/29] KVM: x86: split "if" in __kvm_set_or_clear_apicv_inhibit Date: Tue, 1 Apr 2025 18:10:53 +0200 Message-ID: <20250401161106.790710-17-pbonzini@redhat.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250401161106.790710-1-pbonzini@redhat.com> References: <20250401161106.790710-1-pbonzini@redhat.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Signed-off-by: Paolo Bonzini --- arch/x86/kvm/x86.c | 23 
++++++++++++----------- 1 file changed, 12 insertions(+), 11 deletions(-) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index a527a425c55d..f70d9a572455 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -10637,6 +10637,7 @@ void __kvm_set_or_clear_apicv_inhibit(struct kvm *kvm, enum kvm_apicv_inhibit reason, bool set) { unsigned long old, new; + bool changed; lockdep_assert_held_write(&kvm->arch.apicv_update_lock); @@ -10644,10 +10645,10 @@ void __kvm_set_or_clear_apicv_inhibit(struct kvm *kvm, return; old = new = kvm->arch.apicv_inhibit_reasons; - set_or_clear_apicv_inhibit(&new, reason, set); + changed = (!!old != !!new); - if (!!old != !!new) { + if (changed) { /* * Kick all vCPUs before setting apicv_inhibit_reasons to avoid * false positives in the sanity check WARN in vcpu_enter_guest(). @@ -10661,16 +10662,16 @@ void __kvm_set_or_clear_apicv_inhibit(struct kvm *kvm, * servicing the request with a stale apicv_inhibit_reasons. */ kvm_make_all_cpus_request(kvm, KVM_REQ_APICV_UPDATE); - kvm->arch.apicv_inhibit_reasons = new; - if (new) { - unsigned long gfn = gpa_to_gfn(APIC_DEFAULT_PHYS_BASE); - int idx = srcu_read_lock(&kvm->srcu); + } - kvm_zap_gfn_range(kvm, gfn, gfn+1); - srcu_read_unlock(&kvm->srcu, idx); - } - } else { - kvm->arch.apicv_inhibit_reasons = new; + kvm->arch.apicv_inhibit_reasons = new; + + if (changed && set) { + unsigned long gfn = gpa_to_gfn(APIC_DEFAULT_PHYS_BASE); + int idx = srcu_read_lock(&kvm->srcu); + + kvm_zap_gfn_range(kvm, gfn, gfn+1); + srcu_read_unlock(&kvm->srcu, idx); } } From patchwork Tue Apr 1 16:10:54 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 14035105 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E1ECE221569 for ; Tue, 1 Apr 2025 16:11:59 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.133.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1743523921; cv=none; b=tgjf/mUpifJJxlTqa/WF8UfuVWzYRDFrun/HF1nj/36ppblO7d0ZvmTDi6dVLc//z2iREbMJ1tB2gfdWO7IzsYI/g5nbWUjqafwQhXdN66Iz2r/k9/7uuxz0hO++EIXurUznbu+02cf3r2APLzITTAwXEQzVcCBGx511dbOnlic= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1743523921; c=relaxed/simple; bh=2CI/mkWBrTgnbik9OCHRAn1y+W1g2QV9UjWHcVB514o=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=NOdkVkc8kU1IqT6xPxZ5lHLF4jDgADmiGQNN0MtBjt1maO3OJmYGQO+lVcI+JeYoQv5zS1ecaCSI1GaIOOQ/F/uv90rGBcqEA8cIbgaNT3I32F9soAirsBodS3a2GNYVG63myPjWZgqqTYNWXqUDFtIUfpRbmeMVNHkmK68ppW8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=Xr20Axfc; arc=none smtp.client-ip=170.10.133.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="Xr20Axfc" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1743523919; 
h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=jloFR1LDGvNeXK6NZWVsMX9b3AeMl8ttvkfYFFUhR2s=; b=Xr20Axfc8Hi19FQm961vQVoduSaE+7HSCBt0AX64eeHJduUsFLe34N8UkiK7nHAAj3Ka2I /nX9MfNq5Mrqqt+Yd3iXOeSmt/F9r84RxMTeOhzxM4IyOxcfU0z09ut4rffdz1do1ODdsL Q2xh9aw0EuK0RK02TMeyoPVndrcBAqA= Received: from mail-wm1-f72.google.com (mail-wm1-f72.google.com [209.85.128.72]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-544-hf8uBGwOMmOPy9gxzkVrCA-1; Tue, 01 Apr 2025 12:11:57 -0400 X-MC-Unique: hf8uBGwOMmOPy9gxzkVrCA-1 X-Mimecast-MFC-AGG-ID: hf8uBGwOMmOPy9gxzkVrCA_1743523916 Received: by mail-wm1-f72.google.com with SMTP id 5b1f17b1804b1-43d51bd9b41so51391015e9.3 for ; Tue, 01 Apr 2025 09:11:57 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1743523916; x=1744128716; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=jloFR1LDGvNeXK6NZWVsMX9b3AeMl8ttvkfYFFUhR2s=; b=kbrts2+UVFYOhl41SzKo1q8dm8A7+LPVruVqt/PGZx0VtFXxliygX9nLjN4IFDk80E 5+UoPLNGg9NOnzq5fclJV9vIQews4SyaU8MZIJ7i5BGcPNopaQX7jVQGEdNcD2w8bHmH HEQiKxE7P2GWm9EjM2rSNFfU0aDg+Jvg9e1I9G6pbirsRucOd/gBX8+z7eTD6olIciN0 yp1fr9ZFaRom1AZt6+UKIAr58g3m2Hu3bdWeS0vE1L/wxGuXi4d1nyrUjLxOeroyu8mk ZU8bRD4QBly3i5tERBojooseQ7TJR9orVsOsx7/hxOgPGbZ+Z1nBOtAv6Xqew3R6/yY4 I4/Q== X-Forwarded-Encrypted: i=1; AJvYcCVQp2RWTpwq8hqcrZuXmCjOHU9OdJof/OjSX7WEv7y+6imC+nBETQtJ4z7Q7RAg8RZGORs=@vger.kernel.org X-Gm-Message-State: AOJu0YwFd9mdm7I+rRJOZw2MUMTpAnEW+gbGzixE48S05dnzp49l0/wx 6AvwYIU/HHhE1aKit6Rfx/8oGOxk5cTNxq3KAf9SWzENmbSz6Kg9LP3t8vI9UDVu35FEAqKq85D rn8au38YqSt8LECw4KJmYGy3hc5ik+4yqKLVRMtn8IZGagSgheQ== X-Gm-Gg: ASbGncuiXEOFz/tgNot11OoSMsFEEOvlFOrrFknUZdUTeTtJOPKxYylRZolQN4PIjiU 117bqb+64Cif11oNfGXVQcJDmIYdwF8tWN9e+Tvw/TJiaPldxb85cT7D11B5p865jc6I+cKuLD+ Azy+CQaJ+qLAwcy5dx3/kKdXC63M/8HwS/ZODwzNyYEhQNgtAnNWo53ZCKkt1cQO/vHJClDhNXg AuFNmgTuuqAc5ZTG25QKFsx4yF5qGgrjkY9ivrPGU/kyI5WfACz7Y27Sdx0jC1nfVjI1mep3ObU 9oyvGCNKi1cyIWrunnuzFw== X-Received: by 2002:a05:600c:3b9d:b0:43c:fe15:41c9 with SMTP id 5b1f17b1804b1-43db6227a09mr118810525e9.9.1743523916376; Tue, 01 Apr 2025 09:11:56 -0700 (PDT) X-Google-Smtp-Source: AGHT+IGeSHmn91hmeZ7x284dZn+qQrcC5V3D8Ul3VDkKtIvDkX2ROgQp+v6dSPuSJJ1Vzfw+gbjPKg== X-Received: by 2002:a05:600c:3b9d:b0:43c:fe15:41c9 with SMTP id 5b1f17b1804b1-43db6227a09mr118810055e9.9.1743523915943; Tue, 01 Apr 2025 09:11:55 -0700 (PDT) Received: from [192.168.10.48] ([176.206.111.201]) by smtp.gmail.com with ESMTPSA id 5b1f17b1804b1-43d82efe9d1sm203529975e9.24.2025.04.01.09.11.53 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 01 Apr 2025 09:11:53 -0700 (PDT) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: roy.hopkins@suse.com, seanjc@google.com, thomas.lendacky@amd.com, ashish.kalra@amd.com, michael.roth@amd.com, jroedel@suse.de, nsaenz@amazon.com, anelkz@amazon.de, James.Bottomley@HansenPartnership.com Subject: [PATCH 17/29] KVM: x86: block creating irqchip if planes are active Date: Tue, 1 Apr 2025 18:10:54 +0200 Message-ID: <20250401161106.790710-18-pbonzini@redhat.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250401161106.790710-1-pbonzini@redhat.com> References: <20250401161106.790710-1-pbonzini@redhat.com> Precedence: bulk X-Mailing-List: 
kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Force creating the irqchip before planes, so that APICV_INHIBIT_REASON_ABSENT only needs to be removed from plane 0. Signed-off-by: Paolo Bonzini --- Documentation/virt/kvm/api.rst | 6 ++++-- arch/x86/kvm/x86.c | 4 ++-- include/linux/kvm_host.h | 1 + virt/kvm/kvm_main.c | 1 + 4 files changed, 8 insertions(+), 4 deletions(-) diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst index e1c67bc6df47..16d836b954dc 100644 --- a/Documentation/virt/kvm/api.rst +++ b/Documentation/virt/kvm/api.rst @@ -882,6 +882,8 @@ On s390, a dummy irq routing table is created. Note that on s390 the KVM_CAP_S390_IRQCHIP vm capability needs to be enabled before KVM_CREATE_IRQCHIP can be used. +The interrupt controller must be created before any extra VM planes. + 4.25 KVM_IRQ_LINE ----------------- @@ -7792,8 +7794,8 @@ used in the IRQ routing table. The first args[0] MSI routes are reserved for the IOAPIC pins. Whenever the LAPIC receives an EOI for these routes, a KVM_EXIT_IOAPIC_EOI vmexit will be reported to userspace. -Fails if VCPU has already been created, or if the irqchip is already in the -kernel (i.e. KVM_CREATE_IRQCHIP has already been called). +Fails if VCPUs or planes have already been created, or if the irqchip is +already in the kernel (i.e. KVM_CREATE_IRQCHIP has already been called). 7.6 KVM_CAP_S390_RI ------------------- diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index f70d9a572455..653886e6e1c8 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -6561,7 +6561,7 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm, r = -EEXIST; if (irqchip_in_kernel(kvm)) goto split_irqchip_unlock; - if (kvm->created_vcpus) + if (kvm->created_vcpus || kvm->has_planes) goto split_irqchip_unlock; /* Pairs with irqchip_in_kernel. 
*/ smp_wmb(); @@ -7087,7 +7087,7 @@ int kvm_arch_vm_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg) goto create_irqchip_unlock; r = -EINVAL; - if (kvm->created_vcpus) + if (kvm->created_vcpus || kvm->has_planes) goto create_irqchip_unlock; r = kvm_pic_init(kvm); diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 16a8b3adb76d..152dc5845309 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -883,6 +883,7 @@ struct kvm { bool dirty_ring_with_bitmap; bool vm_bugged; bool vm_dead; + bool has_planes; #ifdef CONFIG_HAVE_KVM_PM_NOTIFIER struct notifier_block pm_notifier; diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index cb04fe6f8a2c..db38894f6fa3 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -5316,6 +5316,7 @@ static int kvm_vm_ioctl_create_plane(struct kvm *kvm, unsigned id) return fd; plane = kvm_create_vm_plane(kvm, id); + kvm->has_planes = true; if (IS_ERR(plane)) { r = PTR_ERR(plane); goto put_fd;

From patchwork Tue Apr 1 16:10:55 2025
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 14035106
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: roy.hopkins@suse.com, seanjc@google.com, thomas.lendacky@amd.com, ashish.kalra@amd.com, michael.roth@amd.com, jroedel@suse.de, nsaenz@amazon.com, anelkz@amazon.de, James.Bottomley@HansenPartnership.com
Subject: [PATCH 18/29] KVM: x86: track APICv inhibits per plane
Date: Tue, 1 Apr 2025 18:10:55 +0200
Message-ID: <20250401161106.790710-19-pbonzini@redhat.com>
In-Reply-To: <20250401161106.790710-1-pbonzini@redhat.com>
References: <20250401161106.790710-1-pbonzini@redhat.com>

As a first step towards per-plane APIC maps, track APICv inhibits per plane. Most of the inhibits are set or cleared when building the map, and the virtual machine as a whole will have the OR of the inhibits of the individual plane.
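To make the aggregation rule concrete, here is a small self-contained model of the semantics described above; it is illustrative only and not kernel code. Each plane keeps its own inhibit mask, the VM-wide mask is the OR of all of them, and a reason only stops inhibiting APICv once every plane has cleared it. The actual patch below avoids walking all planes except when a bit is being cleared and the answer is not already obvious from the global mask.

```c
#include <stdbool.h>
#include <stdio.h>

#define NR_PLANES 4   /* illustrative; the series caps this at KVM_MAX_VCPU_PLANES */

static unsigned long plane_inhibits[NR_PLANES];

/* VM-wide view: the OR of every plane's inhibit mask. */
static unsigned long vm_inhibits(void)
{
	unsigned long global = 0;

	for (int i = 0; i < NR_PLANES; i++)
		global |= plane_inhibits[i];
	return global;
}

/* Returns true if the VM-wide "APICv active" state flipped. */
static bool set_or_clear(int plane, unsigned long bit, bool set)
{
	bool was_active = !vm_inhibits();

	if (set)
		plane_inhibits[plane] |= bit;
	else
		plane_inhibits[plane] &= ~bit;

	return was_active != !vm_inhibits();
}

int main(void)
{
	set_or_clear(1, 1ul << 3, true);                   /* plane 1 inhibits: VM-wide state flips */
	set_or_clear(2, 1ul << 3, true);                   /* plane 2 too: no VM-wide change */
	printf("%d\n", set_or_clear(1, 1ul << 3, false));  /* plane 2 still inhibits -> prints 0 */
	printf("%d\n", set_or_clear(2, 1ul << 3, false));  /* last plane clears it -> prints 1 */
	return 0;
}
```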
Signed-off-by: Paolo Bonzini --- arch/x86/include/asm/kvm_host.h | 21 +++++---- arch/x86/kvm/hyperv.c | 2 +- arch/x86/kvm/i8254.c | 4 +- arch/x86/kvm/lapic.c | 15 +++--- arch/x86/kvm/svm/sev.c | 2 +- arch/x86/kvm/svm/svm.c | 3 +- arch/x86/kvm/x86.c | 83 +++++++++++++++++++++++++-------- include/linux/kvm_host.h | 2 +- 8 files changed, 90 insertions(+), 42 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index e29694a97a19..d07ab048d7cc 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1087,6 +1087,7 @@ struct kvm_arch_memory_slot { }; struct kvm_arch_plane { + unsigned long apicv_inhibit_reasons; }; /* @@ -1299,11 +1300,13 @@ enum kvm_apicv_inhibit { /* * PIT (i8254) 're-inject' mode, relies on EOI intercept, * which AVIC doesn't support for edge triggered interrupts. + * Applied only to plane 0. */ APICV_INHIBIT_REASON_PIT_REINJ, /* - * AVIC is disabled because SEV doesn't support it. + * AVIC is disabled because SEV doesn't support it. Sticky and applied + * only to plane 0. */ APICV_INHIBIT_REASON_SEV, @@ -2232,21 +2235,21 @@ gpa_t kvm_mmu_gva_to_gpa_system(struct kvm_vcpu *vcpu, gva_t gva, bool kvm_apicv_activated(struct kvm *kvm); bool kvm_vcpu_apicv_activated(struct kvm_vcpu *vcpu); void __kvm_vcpu_update_apicv(struct kvm_vcpu *vcpu); -void __kvm_set_or_clear_apicv_inhibit(struct kvm *kvm, +void __kvm_set_or_clear_apicv_inhibit(struct kvm_plane *plane, enum kvm_apicv_inhibit reason, bool set); -void kvm_set_or_clear_apicv_inhibit(struct kvm *kvm, +void kvm_set_or_clear_apicv_inhibit(struct kvm_plane *plane, enum kvm_apicv_inhibit reason, bool set); -static inline void kvm_set_apicv_inhibit(struct kvm *kvm, +static inline void kvm_set_apicv_inhibit(struct kvm_plane *plane, enum kvm_apicv_inhibit reason) { - kvm_set_or_clear_apicv_inhibit(kvm, reason, true); + kvm_set_or_clear_apicv_inhibit(plane, reason, true); } -static inline void kvm_clear_apicv_inhibit(struct kvm *kvm, +static inline void kvm_clear_apicv_inhibit(struct kvm_plane *plane, enum kvm_apicv_inhibit reason) { - kvm_set_or_clear_apicv_inhibit(kvm, reason, false); + kvm_set_or_clear_apicv_inhibit(plane, reason, false); } int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 error_code, @@ -2360,8 +2363,8 @@ void kvm_make_scan_ioapic_request(struct kvm *kvm); void kvm_make_scan_ioapic_request_mask(struct kvm *kvm, unsigned long *vcpu_bitmap); -static inline void kvm_arch_init_plane(struct kvm_plane *plane) {} -static inline void kvm_arch_free_plane(struct kvm_plane *plane) {} +void kvm_arch_init_plane(struct kvm_plane *plane); +void kvm_arch_free_plane(struct kvm_plane *plane); bool kvm_arch_async_page_not_present(struct kvm_vcpu *vcpu, struct kvm_async_pf *work); diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c index c6592e7f40a2..a522b467be48 100644 --- a/arch/x86/kvm/hyperv.c +++ b/arch/x86/kvm/hyperv.c @@ -145,7 +145,7 @@ static void synic_update_vector(struct kvm_vcpu_hv_synic *synic, * Inhibit APICv if any vCPU is using SynIC's AutoEOI, which relies on * the hypervisor to manually inject IRQs. 
*/ - __kvm_set_or_clear_apicv_inhibit(vcpu->kvm, + __kvm_set_or_clear_apicv_inhibit(vcpu_to_plane(vcpu), APICV_INHIBIT_REASON_HYPERV, !!hv->synic_auto_eoi_used); diff --git a/arch/x86/kvm/i8254.c b/arch/x86/kvm/i8254.c index e3a3e7b90c26..ded1a9565c36 100644 --- a/arch/x86/kvm/i8254.c +++ b/arch/x86/kvm/i8254.c @@ -306,13 +306,13 @@ void kvm_pit_set_reinject(struct kvm_pit *pit, bool reinject) * So, deactivate APICv when PIT is in reinject mode. */ if (reinject) { - kvm_set_apicv_inhibit(kvm, APICV_INHIBIT_REASON_PIT_REINJ); + kvm_set_apicv_inhibit(kvm->planes[0], APICV_INHIBIT_REASON_PIT_REINJ); /* The initial state is preserved while ps->reinject == 0. */ kvm_pit_reset_reinject(pit); kvm_register_irq_ack_notifier(kvm, &ps->irq_ack_notifier); kvm_register_irq_mask_notifier(kvm, 0, &pit->mask_notifier); } else { - kvm_clear_apicv_inhibit(kvm, APICV_INHIBIT_REASON_PIT_REINJ); + kvm_clear_apicv_inhibit(kvm->planes[0], APICV_INHIBIT_REASON_PIT_REINJ); kvm_unregister_irq_ack_notifier(kvm, &ps->irq_ack_notifier); kvm_unregister_irq_mask_notifier(kvm, 0, &pit->mask_notifier); } diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c index c078269f7b1d..4077c8d1e37e 100644 --- a/arch/x86/kvm/lapic.c +++ b/arch/x86/kvm/lapic.c @@ -377,6 +377,7 @@ enum { static void kvm_recalculate_apic_map(struct kvm *kvm) { + struct kvm_plane *plane = kvm->planes[0]; struct kvm_apic_map *new, *old = NULL; struct kvm_vcpu *vcpu; unsigned long i; @@ -456,19 +457,19 @@ static void kvm_recalculate_apic_map(struct kvm *kvm) * map also applies to APICv. */ if (!new) - kvm_set_apicv_inhibit(kvm, APICV_INHIBIT_REASON_PHYSICAL_ID_ALIASED); + kvm_set_apicv_inhibit(plane, APICV_INHIBIT_REASON_PHYSICAL_ID_ALIASED); else - kvm_clear_apicv_inhibit(kvm, APICV_INHIBIT_REASON_PHYSICAL_ID_ALIASED); + kvm_clear_apicv_inhibit(plane, APICV_INHIBIT_REASON_PHYSICAL_ID_ALIASED); if (!new || new->logical_mode == KVM_APIC_MODE_MAP_DISABLED) - kvm_set_apicv_inhibit(kvm, APICV_INHIBIT_REASON_LOGICAL_ID_ALIASED); + kvm_set_apicv_inhibit(plane, APICV_INHIBIT_REASON_LOGICAL_ID_ALIASED); else - kvm_clear_apicv_inhibit(kvm, APICV_INHIBIT_REASON_LOGICAL_ID_ALIASED); + kvm_clear_apicv_inhibit(plane, APICV_INHIBIT_REASON_LOGICAL_ID_ALIASED); if (xapic_id_mismatch) - kvm_set_apicv_inhibit(kvm, APICV_INHIBIT_REASON_APIC_ID_MODIFIED); + kvm_set_apicv_inhibit(plane, APICV_INHIBIT_REASON_APIC_ID_MODIFIED); else - kvm_clear_apicv_inhibit(kvm, APICV_INHIBIT_REASON_APIC_ID_MODIFIED); + kvm_clear_apicv_inhibit(plane, APICV_INHIBIT_REASON_APIC_ID_MODIFIED); old = rcu_dereference_protected(kvm->arch.apic_map, lockdep_is_held(&kvm->arch.apic_map_lock)); @@ -2630,7 +2631,7 @@ static void __kvm_apic_set_base(struct kvm_vcpu *vcpu, u64 value) if ((value & MSR_IA32_APICBASE_ENABLE) && apic->base_address != APIC_DEFAULT_PHYS_BASE) { - kvm_set_apicv_inhibit(apic->vcpu->kvm, + kvm_set_apicv_inhibit(vcpu_to_plane(vcpu), APICV_INHIBIT_REASON_APIC_BASE_MODIFIED); } } diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c index 827dbe4d2b3b..130d895f1d95 100644 --- a/arch/x86/kvm/svm/sev.c +++ b/arch/x86/kvm/svm/sev.c @@ -458,7 +458,7 @@ static int __sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp, INIT_LIST_HEAD(&sev->mirror_vms); sev->need_init = false; - kvm_set_apicv_inhibit(kvm, APICV_INHIBIT_REASON_SEV); + kvm_set_apicv_inhibit(kvm->planes[0], APICV_INHIBIT_REASON_SEV); return 0; diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index f6a435ff7e2d..917bfe8db101 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -3926,7 
+3926,8 @@ static void svm_enable_irq_window(struct kvm_vcpu *vcpu) * the VM wide AVIC inhibition. */ if (!is_guest_mode(vcpu)) - kvm_set_apicv_inhibit(vcpu->kvm, APICV_INHIBIT_REASON_IRQWIN); + kvm_set_apicv_inhibit(vcpu_to_plane(vcpu), + APICV_INHIBIT_REASON_IRQWIN); svm_set_vintr(svm); } diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 653886e6e1c8..382d8ace131f 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -6567,7 +6567,7 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm, smp_wmb(); kvm->arch.irqchip_mode = KVM_IRQCHIP_SPLIT; kvm->arch.nr_reserved_ioapic_pins = cap->args[0]; - kvm_clear_apicv_inhibit(kvm, APICV_INHIBIT_REASON_ABSENT); + kvm_clear_apicv_inhibit(kvm->planes[0], APICV_INHIBIT_REASON_ABSENT); r = 0; split_irqchip_unlock: mutex_unlock(&kvm->lock); @@ -7109,7 +7109,7 @@ int kvm_arch_vm_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg) /* Write kvm->irq_routing before enabling irqchip_in_kernel. */ smp_wmb(); kvm->arch.irqchip_mode = KVM_IRQCHIP_KERNEL; - kvm_clear_apicv_inhibit(kvm, APICV_INHIBIT_REASON_ABSENT); + kvm_clear_apicv_inhibit(kvm->planes[0], APICV_INHIBIT_REASON_ABSENT); create_irqchip_unlock: mutex_unlock(&kvm->lock); break; @@ -9996,14 +9996,18 @@ static void set_or_clear_apicv_inhibit(unsigned long *inhibits, trace_kvm_apicv_inhibit_changed(reason, set, *inhibits); } -static void kvm_apicv_init(struct kvm *kvm) +static void kvm_apicv_init(struct kvm *kvm, unsigned long *apicv_inhibit_reasons) { - enum kvm_apicv_inhibit reason = enable_apicv ? APICV_INHIBIT_REASON_ABSENT : - APICV_INHIBIT_REASON_DISABLED; + enum kvm_apicv_inhibit reason; - set_or_clear_apicv_inhibit(&kvm->arch.apicv_inhibit_reasons, reason, true); + if (!enable_apicv) + reason = APICV_INHIBIT_REASON_DISABLED; + else if (!irqchip_kernel(kvm)) + reason = APICV_INHIBIT_REASON_ABSENT; + else + return; - init_rwsem(&kvm->arch.apicv_update_lock); + set_or_clear_apicv_inhibit(apicv_inhibit_reasons, reason, true); } static void kvm_sched_yield(struct kvm_vcpu *vcpu, unsigned long dest_id) @@ -10633,10 +10637,22 @@ static void kvm_vcpu_update_apicv(struct kvm_vcpu *vcpu) __kvm_vcpu_update_apicv(vcpu); } -void __kvm_set_or_clear_apicv_inhibit(struct kvm *kvm, +static bool kvm_compute_apicv_inhibit(struct kvm *kvm, + enum kvm_apicv_inhibit reason) +{ + int i; + for (i = 0; i < KVM_MAX_VCPU_PLANES; i++) + if (test_bit(reason, &kvm->planes[i]->arch.apicv_inhibit_reasons)) + return true; + + return false; +} + +void __kvm_set_or_clear_apicv_inhibit(struct kvm_plane *plane, enum kvm_apicv_inhibit reason, bool set) { - unsigned long old, new; + struct kvm *kvm = plane->kvm; + unsigned long local, global; bool changed; lockdep_assert_held_write(&kvm->arch.apicv_update_lock); @@ -10644,9 +10660,24 @@ void __kvm_set_or_clear_apicv_inhibit(struct kvm *kvm, if (!(kvm_x86_ops.required_apicv_inhibits & BIT(reason))) return; - old = new = kvm->arch.apicv_inhibit_reasons; - set_or_clear_apicv_inhibit(&new, reason, set); - changed = (!!old != !!new); + local = plane->arch.apicv_inhibit_reasons; + set_or_clear_apicv_inhibit(&local, reason, set); + + /* Could this flip change the global state? */ + global = kvm->arch.apicv_inhibit_reasons; + if ((local & BIT(reason)) == (global & BIT(reason))) { + /* Easy case 1, the bit is now the same as for the whole VM. */ + changed = false; + } else if (set) { + /* Easy case 2, maybe the bit flipped globally from clear to set? 
*/ + changed = !global; + set_or_clear_apicv_inhibit(&global, reason, set); + } else { + /* Harder case, check if no other plane had this inhibit. */ + set = kvm_compute_apicv_inhibit(kvm, reason); + set_or_clear_apicv_inhibit(&global, reason, set); + changed = !global; + } if (changed) { /* @@ -10664,7 +10695,8 @@ void __kvm_set_or_clear_apicv_inhibit(struct kvm *kvm, kvm_make_all_cpus_request(kvm, KVM_REQ_APICV_UPDATE); } - kvm->arch.apicv_inhibit_reasons = new; + plane->arch.apicv_inhibit_reasons = local; + kvm->arch.apicv_inhibit_reasons = global; if (changed && set) { unsigned long gfn = gpa_to_gfn(APIC_DEFAULT_PHYS_BASE); @@ -10675,14 +10707,17 @@ void __kvm_set_or_clear_apicv_inhibit(struct kvm *kvm, } } -void kvm_set_or_clear_apicv_inhibit(struct kvm *kvm, +void kvm_set_or_clear_apicv_inhibit(struct kvm_plane *plane, enum kvm_apicv_inhibit reason, bool set) { + struct kvm *kvm; + if (!enable_apicv) return; + kvm = plane->kvm; down_write(&kvm->arch.apicv_update_lock); - __kvm_set_or_clear_apicv_inhibit(kvm, reason, set); + __kvm_set_or_clear_apicv_inhibit(plane, reason, set); up_write(&kvm->arch.apicv_update_lock); } EXPORT_SYMBOL_GPL(kvm_set_or_clear_apicv_inhibit); @@ -12083,24 +12118,26 @@ int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu, return ret; } -static void kvm_arch_vcpu_guestdbg_update_apicv_inhibit(struct kvm *kvm) +static void kvm_arch_vcpu_guestdbg_update_apicv_inhibit(struct kvm_plane *plane) { bool set = false; + struct kvm *kvm; struct kvm_vcpu *vcpu; unsigned long i; if (!enable_apicv) return; + kvm = plane->kvm; down_write(&kvm->arch.apicv_update_lock); - kvm_for_each_vcpu(i, vcpu, kvm) { + kvm_for_each_plane_vcpu(i, vcpu, plane) { if (vcpu->guest_debug & KVM_GUESTDBG_BLOCKIRQ) { set = true; break; } } - __kvm_set_or_clear_apicv_inhibit(kvm, APICV_INHIBIT_REASON_BLOCKIRQ, set); + __kvm_set_or_clear_apicv_inhibit(plane, APICV_INHIBIT_REASON_BLOCKIRQ, set); up_write(&kvm->arch.apicv_update_lock); } @@ -12156,7 +12193,7 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu, kvm_x86_call(update_exception_bitmap)(vcpu); - kvm_arch_vcpu_guestdbg_update_apicv_inhibit(vcpu->kvm); + kvm_arch_vcpu_guestdbg_update_apicv_inhibit(vcpu_to_plane(vcpu)); r = 0; @@ -12732,6 +12769,11 @@ void kvm_arch_free_vm(struct kvm *kvm) } +void kvm_arch_init_plane(struct kvm_plane *plane) +{ + kvm_apicv_init(plane->kvm, &plane->arch.apicv_inhibit_reasons); +} + int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) { int ret; @@ -12767,6 +12809,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) set_bit(KVM_IRQFD_RESAMPLE_IRQ_SOURCE_ID, &kvm->arch.irq_sources_bitmap); + init_rwsem(&kvm->arch.apicv_update_lock); raw_spin_lock_init(&kvm->arch.tsc_write_lock); mutex_init(&kvm->arch.apic_map_lock); seqcount_raw_spinlock_init(&kvm->arch.pvclock_sc, &kvm->arch.tsc_write_lock); @@ -12789,7 +12832,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) INIT_DELAYED_WORK(&kvm->arch.kvmclock_update_work, kvmclock_update_fn); INIT_DELAYED_WORK(&kvm->arch.kvmclock_sync_work, kvmclock_sync_fn); - kvm_apicv_init(kvm); + kvm_apicv_init(kvm, &kvm->arch.apicv_inhibit_reasons); kvm_hv_init_vm(kvm); kvm_xen_init_vm(kvm); diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 152dc5845309..5cade1c04646 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -943,7 +943,7 @@ static inline struct kvm_plane *vcpu_to_plane(struct kvm_vcpu *vcpu) #else static inline struct kvm_plane *vcpu_to_plane(struct kvm_vcpu *vcpu) { - return 
vcpu->kvm->planes[vcpu->plane_id]; + return vcpu->kvm->planes[vcpu->plane]; } #endif

From patchwork Tue Apr 1 16:10:56 2025
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: roy.hopkins@suse.com, seanjc@google.com, thomas.lendacky@amd.com, ashish.kalra@amd.com, michael.roth@amd.com, jroedel@suse.de, nsaenz@amazon.com, anelkz@amazon.de, James.Bottomley@HansenPartnership.com
Subject: [PATCH 19/29] KVM: x86: move APIC map to kvm_arch_plane
Date: Tue, 1 Apr 2025 18:10:56 +0200
Message-ID: <20250401161106.790710-20-pbonzini@redhat.com>
In-Reply-To: <20250401161106.790710-1-pbonzini@redhat.com>
References: <20250401161106.790710-1-pbonzini@redhat.com>

IRQs need to be directed to the appropriate plane (typically, but not always, the same as the vCPU that is running). Because each plane has a separate struct kvm_vcpu *, the map that holds the pointers to them must be individual to the plane as well. This works fine as long as all IRQs (even those directed at multiple CPUs) only target a single plane.
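(For readers following the series: the toy program below is a self-contained illustration, not kernel code. Only the idea of a per-plane apic_map mirrors the patch; all struct and function names are invented. It shows why a cached APIC-ID-to-vCPU table has to live in the plane: each plane has its own vCPU objects, so one per-VM table could only ever describe a single plane.)

#include <stdio.h>

#define NR_PLANES   2
#define NR_VCPUS    2
#define MAX_APIC_ID 16

struct vcpu { int plane; int apic_id; };

struct plane {
	/* models plane->arch.apic_map: APIC ID -> vCPU of *this* plane */
	struct vcpu *apic_map[MAX_APIC_ID];
	struct vcpu vcpus[NR_VCPUS];
};

struct vm { struct plane planes[NR_PLANES]; };

/* Delivery must look up the map of the target plane; a per-VM map could
 * only point at one plane's vCPU objects. */
static struct vcpu *pick_dest(struct vm *vm, int plane_id, int apic_id)
{
	return vm->planes[plane_id].apic_map[apic_id];
}

int main(void)
{
	static struct vm vm;
	int p, i;

	for (p = 0; p < NR_PLANES; p++)
		for (i = 0; i < NR_VCPUS; i++) {
			struct vcpu *v = &vm.planes[p].vcpus[i];
			v->plane = p;
			v->apic_id = i;
			vm.planes[p].apic_map[i] = v;	/* "recalculate" the map */
		}

	printf("IRQ for plane 1, APIC ID 0 lands on a vCPU of plane %d\n",
	       pick_dest(&vm, 1, 0)->plane);
	return 0;
}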
Signed-off-by: Paolo Bonzini --- arch/x86/include/asm/kvm_host.h | 7 +-- arch/x86/kvm/lapic.c | 94 +++++++++++++++++++-------------- arch/x86/kvm/svm/sev.c | 2 +- arch/x86/kvm/x86.c | 10 ++-- 4 files changed, 67 insertions(+), 46 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index d07ab048d7cc..f832352cf4d3 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1087,6 +1087,10 @@ struct kvm_arch_memory_slot { }; struct kvm_arch_plane { + struct mutex apic_map_lock; + struct kvm_apic_map __rcu *apic_map; + atomic_t apic_map_dirty; + unsigned long apicv_inhibit_reasons; }; @@ -1381,9 +1385,6 @@ struct kvm_arch { struct kvm_ioapic *vioapic; struct kvm_pit *vpit; atomic_t vapics_in_nmi_mode; - struct mutex apic_map_lock; - struct kvm_apic_map __rcu *apic_map; - atomic_t apic_map_dirty; bool apic_access_memslot_enabled; bool apic_access_memslot_inhibited; diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c index 4077c8d1e37e..6ed5f5b4f878 100644 --- a/arch/x86/kvm/lapic.c +++ b/arch/x86/kvm/lapic.c @@ -375,9 +375,9 @@ enum { DIRTY }; -static void kvm_recalculate_apic_map(struct kvm *kvm) +static void kvm_recalculate_apic_map(struct kvm_plane *plane) { - struct kvm_plane *plane = kvm->planes[0]; + struct kvm *kvm = plane->kvm; struct kvm_apic_map *new, *old = NULL; struct kvm_vcpu *vcpu; unsigned long i; @@ -385,27 +385,27 @@ static void kvm_recalculate_apic_map(struct kvm *kvm) bool xapic_id_mismatch; int r; - /* Read kvm->arch.apic_map_dirty before kvm->arch.apic_map. */ - if (atomic_read_acquire(&kvm->arch.apic_map_dirty) == CLEAN) + /* Read plane->arch.apic_map_dirty before plane->arch.apic_map. */ + if (atomic_read_acquire(&plane->arch.apic_map_dirty) == CLEAN) return; WARN_ONCE(!irqchip_in_kernel(kvm), "Dirty APIC map without an in-kernel local APIC"); - mutex_lock(&kvm->arch.apic_map_lock); + mutex_lock(&plane->arch.apic_map_lock); retry: /* - * Read kvm->arch.apic_map_dirty before kvm->arch.apic_map (if clean) + * Read plane->arch.apic_map_dirty before plane->arch.apic_map (if clean) * or the APIC registers (if dirty). Note, on retry the map may have * not yet been marked dirty by whatever task changed a vCPU's x2APIC * ID, i.e. the map may still show up as in-progress. In that case * this task still needs to retry and complete its calculation. */ - if (atomic_cmpxchg_acquire(&kvm->arch.apic_map_dirty, + if (atomic_cmpxchg_acquire(&plane->arch.apic_map_dirty, DIRTY, UPDATE_IN_PROGRESS) == CLEAN) { /* Someone else has updated the map. 
*/ - mutex_unlock(&kvm->arch.apic_map_lock); + mutex_unlock(&plane->arch.apic_map_lock); return; } @@ -418,7 +418,7 @@ static void kvm_recalculate_apic_map(struct kvm *kvm) */ xapic_id_mismatch = false; - kvm_for_each_vcpu(i, vcpu, kvm) + kvm_for_each_plane_vcpu(i, vcpu, plane) if (kvm_apic_present(vcpu)) max_id = max(max_id, kvm_x2apic_id(vcpu->arch.apic)); @@ -432,7 +432,7 @@ static void kvm_recalculate_apic_map(struct kvm *kvm) new->max_apic_id = max_id; new->logical_mode = KVM_APIC_MODE_SW_DISABLED; - kvm_for_each_vcpu(i, vcpu, kvm) { + kvm_for_each_plane_vcpu(i, vcpu, plane) { if (!kvm_apic_present(vcpu)) continue; @@ -471,21 +471,29 @@ static void kvm_recalculate_apic_map(struct kvm *kvm) else kvm_clear_apicv_inhibit(plane, APICV_INHIBIT_REASON_APIC_ID_MODIFIED); - old = rcu_dereference_protected(kvm->arch.apic_map, - lockdep_is_held(&kvm->arch.apic_map_lock)); - rcu_assign_pointer(kvm->arch.apic_map, new); + old = rcu_dereference_protected(plane->arch.apic_map, + lockdep_is_held(&plane->arch.apic_map_lock)); + rcu_assign_pointer(plane->arch.apic_map, new); /* - * Write kvm->arch.apic_map before clearing apic->apic_map_dirty. + * Write plane->arch.apic_map before clearing apic->apic_map_dirty. * If another update has come in, leave it DIRTY. */ - atomic_cmpxchg_release(&kvm->arch.apic_map_dirty, + atomic_cmpxchg_release(&plane->arch.apic_map_dirty, UPDATE_IN_PROGRESS, CLEAN); - mutex_unlock(&kvm->arch.apic_map_lock); + mutex_unlock(&plane->arch.apic_map_lock); if (old) kvfree_rcu(old, rcu); - kvm_make_scan_ioapic_request(kvm); + if (plane->plane == 0) + kvm_make_scan_ioapic_request(kvm); +} + +static inline void kvm_mark_apic_map_dirty(struct kvm_vcpu *vcpu) +{ + struct kvm_plane *plane = vcpu_to_plane(vcpu); + + atomic_set_release(&plane->arch.apic_map_dirty, DIRTY); } static inline void apic_set_spiv(struct kvm_lapic *apic, u32 val) @@ -501,7 +509,7 @@ static inline void apic_set_spiv(struct kvm_lapic *apic, u32 val) else static_branch_inc(&apic_sw_disabled.key); - atomic_set_release(&apic->vcpu->kvm->arch.apic_map_dirty, DIRTY); + kvm_mark_apic_map_dirty(apic->vcpu); } /* Check if there are APF page ready requests pending */ @@ -514,19 +522,19 @@ static inline void apic_set_spiv(struct kvm_lapic *apic, u32 val) static inline void kvm_apic_set_xapic_id(struct kvm_lapic *apic, u8 id) { kvm_lapic_set_reg(apic, APIC_ID, id << 24); - atomic_set_release(&apic->vcpu->kvm->arch.apic_map_dirty, DIRTY); + kvm_mark_apic_map_dirty(apic->vcpu); } static inline void kvm_apic_set_ldr(struct kvm_lapic *apic, u32 id) { kvm_lapic_set_reg(apic, APIC_LDR, id); - atomic_set_release(&apic->vcpu->kvm->arch.apic_map_dirty, DIRTY); + kvm_mark_apic_map_dirty(apic->vcpu); } static inline void kvm_apic_set_dfr(struct kvm_lapic *apic, u32 val) { kvm_lapic_set_reg(apic, APIC_DFR, val); - atomic_set_release(&apic->vcpu->kvm->arch.apic_map_dirty, DIRTY); + kvm_mark_apic_map_dirty(apic->vcpu); } static inline void kvm_apic_set_x2apic_id(struct kvm_lapic *apic, u32 id) @@ -537,7 +545,7 @@ static inline void kvm_apic_set_x2apic_id(struct kvm_lapic *apic, u32 id) kvm_lapic_set_reg(apic, APIC_ID, id); kvm_lapic_set_reg(apic, APIC_LDR, ldr); - atomic_set_release(&apic->vcpu->kvm->arch.apic_map_dirty, DIRTY); + kvm_mark_apic_map_dirty(apic->vcpu); } static inline int apic_lvt_enabled(struct kvm_lapic *apic, int lvt_type) @@ -866,6 +874,7 @@ int kvm_pv_send_ipi(struct kvm_vcpu *source, unsigned long ipi_bitmap_low, unsigned long ipi_bitmap_high, u32 min, unsigned long icr, int op_64_bit) { + struct kvm_plane *plane = 
vcpu_to_plane(source); struct kvm_apic_map *map; struct kvm_lapic_irq irq = {0}; int cluster_size = op_64_bit ? 64 : 32; @@ -880,7 +889,7 @@ int kvm_pv_send_ipi(struct kvm_vcpu *source, unsigned long ipi_bitmap_low, irq.trig_mode = icr & APIC_INT_LEVELTRIG; rcu_read_lock(); - map = rcu_dereference(source->kvm->arch.apic_map); + map = rcu_dereference(plane->arch.apic_map); count = -EOPNOTSUPP; if (likely(map)) { @@ -1152,7 +1161,7 @@ static bool kvm_apic_is_broadcast_dest(struct kvm *kvm, struct kvm_lapic **src, * means that the interrupt should be dropped. In this case, *bitmap would be * zero and *dst undefined. */ -static inline bool kvm_apic_map_get_dest_lapic(struct kvm *kvm, +static inline bool kvm_apic_map_get_dest_lapic(struct kvm_plane *plane, struct kvm_lapic **src, struct kvm_lapic_irq *irq, struct kvm_apic_map *map, struct kvm_lapic ***dst, unsigned long *bitmap) @@ -1166,7 +1175,7 @@ static inline bool kvm_apic_map_get_dest_lapic(struct kvm *kvm, } else if (irq->shorthand) return false; - if (!map || kvm_apic_is_broadcast_dest(kvm, src, irq, map)) + if (!map || kvm_apic_is_broadcast_dest(plane->kvm, src, irq, map)) return false; if (irq->dest_mode == APIC_DEST_PHYSICAL) { @@ -1207,7 +1216,7 @@ static inline bool kvm_apic_map_get_dest_lapic(struct kvm *kvm, bitmap, 16); if (!(*dst)[lowest]) { - kvm_apic_disabled_lapic_found(kvm); + kvm_apic_disabled_lapic_found(plane->kvm); *bitmap = 0; return true; } @@ -1221,6 +1230,7 @@ static inline bool kvm_apic_map_get_dest_lapic(struct kvm *kvm, bool kvm_irq_delivery_to_apic_fast(struct kvm *kvm, struct kvm_lapic *src, struct kvm_lapic_irq *irq, int *r, struct dest_map *dest_map) { + struct kvm_plane *plane = kvm->planes[0]; struct kvm_apic_map *map; unsigned long bitmap; struct kvm_lapic **dst = NULL; @@ -1228,6 +1238,10 @@ bool kvm_irq_delivery_to_apic_fast(struct kvm *kvm, struct kvm_lapic *src, bool ret; *r = -1; + if (KVM_BUG_ON(!plane, kvm)) { + *r = 0; + return true; + } if (irq->shorthand == APIC_DEST_SELF) { if (KVM_BUG_ON(!src, kvm)) { @@ -1239,9 +1253,9 @@ bool kvm_irq_delivery_to_apic_fast(struct kvm *kvm, struct kvm_lapic *src, } rcu_read_lock(); - map = rcu_dereference(kvm->arch.apic_map); + map = rcu_dereference(plane->arch.apic_map); - ret = kvm_apic_map_get_dest_lapic(kvm, &src, irq, map, &dst, &bitmap); + ret = kvm_apic_map_get_dest_lapic(plane, &src, irq, map, &dst, &bitmap); if (ret) { *r = 0; for_each_set_bit(i, &bitmap, 16) { @@ -1272,6 +1286,7 @@ bool kvm_irq_delivery_to_apic_fast(struct kvm *kvm, struct kvm_lapic *src, bool kvm_intr_is_single_vcpu_fast(struct kvm *kvm, struct kvm_lapic_irq *irq, struct kvm_vcpu **dest_vcpu) { + struct kvm_plane *plane = kvm->planes[0]; struct kvm_apic_map *map; unsigned long bitmap; struct kvm_lapic **dst = NULL; @@ -1281,9 +1296,9 @@ bool kvm_intr_is_single_vcpu_fast(struct kvm *kvm, struct kvm_lapic_irq *irq, return false; rcu_read_lock(); - map = rcu_dereference(kvm->arch.apic_map); + map = rcu_dereference(plane->arch.apic_map); - if (kvm_apic_map_get_dest_lapic(kvm, NULL, irq, map, &dst, &bitmap) && + if (kvm_apic_map_get_dest_lapic(plane, NULL, irq, map, &dst, &bitmap) && hweight16(bitmap) == 1) { unsigned long i = find_first_bit(&bitmap, 16); @@ -1407,6 +1422,7 @@ static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode, void kvm_bitmap_or_dest_vcpus(struct kvm *kvm, struct kvm_lapic_irq *irq, unsigned long *vcpu_bitmap) { + struct kvm_plane *plane = kvm->planes[0]; struct kvm_lapic **dest_vcpu = NULL; struct kvm_lapic *src = NULL; struct kvm_apic_map *map; @@ 
-1416,9 +1432,9 @@ void kvm_bitmap_or_dest_vcpus(struct kvm *kvm, struct kvm_lapic_irq *irq, bool ret; rcu_read_lock(); - map = rcu_dereference(kvm->arch.apic_map); + map = rcu_dereference(plane->arch.apic_map); - ret = kvm_apic_map_get_dest_lapic(kvm, &src, irq, map, &dest_vcpu, + ret = kvm_apic_map_get_dest_lapic(plane, &src, irq, map, &dest_vcpu, &bitmap); if (ret) { for_each_set_bit(i, &bitmap, 16) { @@ -2420,7 +2436,7 @@ static int kvm_lapic_reg_write(struct kvm_lapic *apic, u32 reg, u32 val) * was toggled, the APIC ID changed, etc... The maps are marked dirty * on relevant changes, i.e. this is a nop for most writes. */ - kvm_recalculate_apic_map(apic->vcpu->kvm); + kvm_recalculate_apic_map(vcpu_to_plane(apic->vcpu)); return ret; } @@ -2610,7 +2626,7 @@ static void __kvm_apic_set_base(struct kvm_vcpu *vcpu, u64 value) kvm_make_request(KVM_REQ_APF_READY, vcpu); } else { static_branch_inc(&apic_hw_disabled.key); - atomic_set_release(&apic->vcpu->kvm->arch.apic_map_dirty, DIRTY); + kvm_mark_apic_map_dirty(apic->vcpu); } } @@ -2657,7 +2673,7 @@ int kvm_apic_set_base(struct kvm_vcpu *vcpu, u64 value, bool host_initiated) } __kvm_apic_set_base(vcpu, value); - kvm_recalculate_apic_map(vcpu->kvm); + kvm_recalculate_apic_map(vcpu_to_plane(vcpu)); return 0; } EXPORT_SYMBOL_GPL(kvm_apic_set_base); @@ -2823,7 +2839,7 @@ void kvm_lapic_reset(struct kvm_vcpu *vcpu, bool init_event) vcpu->arch.apic_arb_prio = 0; vcpu->arch.apic_attention = 0; - kvm_recalculate_apic_map(vcpu->kvm); + kvm_recalculate_apic_map(vcpu_to_plane(apic->vcpu)); } /* @@ -3115,13 +3131,13 @@ int kvm_apic_set_state(struct kvm_vcpu *vcpu, struct kvm_lapic_state *s) r = kvm_apic_state_fixup(vcpu, s, true); if (r) { - kvm_recalculate_apic_map(vcpu->kvm); + kvm_recalculate_apic_map(vcpu_to_plane(apic->vcpu)); return r; } memcpy(vcpu->arch.apic->regs, s->regs, sizeof(*s)); - atomic_set_release(&apic->vcpu->kvm->arch.apic_map_dirty, DIRTY); - kvm_recalculate_apic_map(vcpu->kvm); + kvm_mark_apic_map_dirty(apic->vcpu); + kvm_recalculate_apic_map(vcpu_to_plane(apic->vcpu)); kvm_apic_set_version(vcpu); apic_update_ppr(apic); diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c index 130d895f1d95..9d4492862c11 100644 --- a/arch/x86/kvm/svm/sev.c +++ b/arch/x86/kvm/svm/sev.c @@ -458,7 +458,7 @@ static int __sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp, INIT_LIST_HEAD(&sev->mirror_vms); sev->need_init = false; - kvm_set_apicv_inhibit(kvm->planes[0], APICV_INHIBIT_REASON_SEV); + kvm_set_apicv_inhibit(kvm->planes[0], APICV_INHIBIT_REASON_SEV); return 0; diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 382d8ace131f..19e3bb33bf7d 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -10021,7 +10021,7 @@ static void kvm_sched_yield(struct kvm_vcpu *vcpu, unsigned long dest_id) goto no_yield; rcu_read_lock(); - map = rcu_dereference(vcpu->kvm->arch.apic_map); + map = rcu_dereference(vcpu_to_plane(vcpu)->arch.apic_map); if (likely(map) && dest_id <= map->max_apic_id && map->phys_map[dest_id]) target = map->phys_map[dest_id]->vcpu; @@ -12771,6 +12771,7 @@ void kvm_arch_free_vm(struct kvm *kvm) void kvm_arch_init_plane(struct kvm_plane *plane) { + mutex_init(&plane->arch.apic_map_lock); kvm_apicv_init(plane->kvm, &plane->arch.apicv_inhibit_reasons); } @@ -12811,7 +12812,6 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) init_rwsem(&kvm->arch.apicv_update_lock); raw_spin_lock_init(&kvm->arch.tsc_write_lock); - mutex_init(&kvm->arch.apic_map_lock); seqcount_raw_spinlock_init(&kvm->arch.pvclock_sc,
&kvm->arch.tsc_write_lock); kvm->arch.kvmclock_offset = -get_kvmclock_base_ns(); @@ -12960,6 +12960,11 @@ void kvm_arch_pre_destroy_vm(struct kvm *kvm) static_call_cond(kvm_x86_vm_pre_destroy)(kvm); } +void kvm_arch_free_plane(struct kvm_plane *plane) +{ + kvfree(rcu_dereference_check(plane->arch.apic_map, 1)); +} + void kvm_arch_destroy_vm(struct kvm *kvm) { if (current->mm == kvm->mm) { @@ -12981,7 +12986,6 @@ void kvm_arch_destroy_vm(struct kvm *kvm) kvm_free_msr_filter(srcu_dereference_check(kvm->arch.msr_filter, &kvm->srcu, 1)); kvm_pic_destroy(kvm); kvm_ioapic_destroy(kvm); - kvfree(rcu_dereference_check(kvm->arch.apic_map, 1)); kfree(srcu_dereference_check(kvm->arch.pmu_event_filter, &kvm->srcu, 1)); kvm_mmu_uninit_vm(kvm); kvm_page_track_cleanup(kvm);

From patchwork Tue Apr 1 16:10:57 2025
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: roy.hopkins@suse.com, seanjc@google.com, thomas.lendacky@amd.com, ashish.kalra@amd.com, michael.roth@amd.com, jroedel@suse.de, nsaenz@amazon.com, anelkz@amazon.de, James.Bottomley@HansenPartnership.com
Subject: [PATCH 20/29] KVM: x86: add planes support for interrupt delivery
Date: Tue, 1 Apr 2025 18:10:57 +0200
Message-ID: <20250401161106.790710-21-pbonzini@redhat.com>
In-Reply-To: <20250401161106.790710-1-pbonzini@redhat.com>
References: <20250401161106.790710-1-pbonzini@redhat.com>

Plumb the destination plane into struct kvm_lapic_irq and propagate it everywhere. The in-kernel IOAPIC only targets plane 0.
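(Illustration only, not kernel code: the minimal C program below models the plumbing this patch adds — an IRQ descriptor that carries a destination plane, with zero-initialization keeping legacy sources such as the in-kernel IOAPIC pointed at plane 0. Only the field name "plane" is taken from the patch; everything else is invented for the example.)

#include <stdio.h>

/* Simplified stand-in for struct kvm_lapic_irq after this patch. */
struct lapic_irq {
	unsigned int vector;
	unsigned int dest_id;
	unsigned int plane;	/* new field: which plane receives the interrupt */
};

struct plane { const char *name; };
struct vm    { struct plane *planes[2]; };

/* Delivery selects the vCPU set of irq->plane instead of a per-VM set. */
static void deliver(struct vm *vm, const struct lapic_irq *irq)
{
	printf("vector 0x%x -> APIC ID %u in %s\n",
	       irq->vector, irq->dest_id, vm->planes[irq->plane]->name);
}

int main(void)
{
	struct plane p0 = { "plane 0" }, p1 = { "plane 1" };
	struct vm vm = { { &p0, &p1 } };

	/* Zero-initialized, like the "= { 0 }" added to the IOAPIC paths:
	 * a source that never sets .plane keeps targeting plane 0. */
	struct lapic_irq ioapic_irq = { .vector = 0x30, .dest_id = 1 };

	/* A source that knows its plane (e.g. an IPI sent by a plane-1 vCPU)
	 * fills the field in explicitly. */
	struct lapic_irq ipi = { .vector = 0xf0, .dest_id = 0, .plane = 1 };

	deliver(&vm, &ioapic_irq);
	deliver(&vm, &ipi);
	return 0;
}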
Signed-off-by: Paolo Bonzini --- arch/x86/include/asm/kvm_host.h | 1 + arch/x86/kvm/hyperv.c | 1 + arch/x86/kvm/ioapic.c | 4 ++-- arch/x86/kvm/irq_comm.c | 14 +++++++++++--- arch/x86/kvm/lapic.c | 8 ++++---- arch/x86/kvm/x86.c | 8 +++++--- arch/x86/kvm/xen.c | 1 + 7 files changed, 25 insertions(+), 12 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index f832352cf4d3..283d8a4b5b14 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1661,6 +1661,7 @@ struct kvm_lapic_irq { u16 delivery_mode; u16 dest_mode; bool level; + u8 plane; u16 trig_mode; u32 shorthand; u32 dest_id; diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c index a522b467be48..cd1ff31038d2 100644 --- a/arch/x86/kvm/hyperv.c +++ b/arch/x86/kvm/hyperv.c @@ -491,6 +491,7 @@ static int synic_set_irq(struct kvm_vcpu_hv_synic *synic, u32 sint) irq.delivery_mode = APIC_DM_FIXED; irq.vector = vector; irq.level = 1; + irq.plane = vcpu->plane; ret = kvm_irq_delivery_to_apic(vcpu->kvm, vcpu->arch.apic, &irq, NULL); trace_kvm_hv_synic_set_irq(vcpu->vcpu_id, sint, irq.vector, ret); diff --git a/arch/x86/kvm/ioapic.c b/arch/x86/kvm/ioapic.c index 995eb5054360..c538867afceb 100644 --- a/arch/x86/kvm/ioapic.c +++ b/arch/x86/kvm/ioapic.c @@ -402,7 +402,7 @@ static void ioapic_write_indirect(struct kvm_ioapic *ioapic, u32 val) ioapic_service(ioapic, index, false); } if (e->fields.delivery_mode == APIC_DM_FIXED) { - struct kvm_lapic_irq irq; + struct kvm_lapic_irq irq = { 0 }; irq.vector = e->fields.vector; irq.delivery_mode = e->fields.delivery_mode << 8; @@ -442,7 +442,7 @@ static void ioapic_write_indirect(struct kvm_ioapic *ioapic, u32 val) static int ioapic_service(struct kvm_ioapic *ioapic, int irq, bool line_status) { union kvm_ioapic_redirect_entry *entry = &ioapic->redirtbl[irq]; - struct kvm_lapic_irq irqe; + struct kvm_lapic_irq irqe = { 0 }; int ret; if (entry->fields.mask || diff --git a/arch/x86/kvm/irq_comm.c b/arch/x86/kvm/irq_comm.c index 8136695f7b96..94f9db50384e 100644 --- a/arch/x86/kvm/irq_comm.c +++ b/arch/x86/kvm/irq_comm.c @@ -48,6 +48,7 @@ int kvm_irq_delivery_to_apic(struct kvm *kvm, struct kvm_lapic *src, struct kvm_lapic_irq *irq, struct dest_map *dest_map) { int r = -1; + struct kvm_plane *plane = kvm->planes[irq->plane]; struct kvm_vcpu *vcpu, *lowest = NULL; unsigned long i, dest_vcpu_bitmap[BITS_TO_LONGS(KVM_MAX_VCPUS)]; unsigned int dest_vcpus = 0; @@ -63,7 +64,7 @@ int kvm_irq_delivery_to_apic(struct kvm *kvm, struct kvm_lapic *src, memset(dest_vcpu_bitmap, 0, sizeof(dest_vcpu_bitmap)); - kvm_for_each_vcpu(i, vcpu, kvm) { + kvm_for_each_plane_vcpu(i, vcpu, plane) { if (!kvm_apic_present(vcpu)) continue; @@ -92,7 +93,7 @@ int kvm_irq_delivery_to_apic(struct kvm *kvm, struct kvm_lapic *src, int idx = kvm_vector_to_index(irq->vector, dest_vcpus, dest_vcpu_bitmap, KVM_MAX_VCPUS); - lowest = kvm_get_vcpu(kvm, idx); + lowest = kvm_get_plane_vcpu(plane, idx); } if (lowest) @@ -119,13 +120,20 @@ void kvm_set_msi_irq(struct kvm *kvm, struct kvm_kernel_irq_routing_entry *e, irq->msi_redir_hint = msg.arch_addr_lo.redirect_hint; irq->level = 1; irq->shorthand = APIC_DEST_NOSHORT; + irq->plane = e->msi.plane; } EXPORT_SYMBOL_GPL(kvm_set_msi_irq); static inline bool kvm_msi_route_invalid(struct kvm *kvm, struct kvm_kernel_irq_routing_entry *e) { - return kvm->arch.x2apic_format && (e->msi.address_hi & 0xff); + if (kvm->arch.x2apic_format && (e->msi.address_hi & 0xff)) + return true; + + if (!kvm->planes[e->msi.plane]) + return true; + + 
return false; } int kvm_set_msi(struct kvm_kernel_irq_routing_entry *e, diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c index 6ed5f5b4f878..16a0e2387f2c 100644 --- a/arch/x86/kvm/lapic.c +++ b/arch/x86/kvm/lapic.c @@ -1223,14 +1223,13 @@ static inline bool kvm_apic_map_get_dest_lapic(struct kvm_plane *plane, } *bitmap = (lowest >= 0) ? 1 << lowest : 0; - return true; } bool kvm_irq_delivery_to_apic_fast(struct kvm *kvm, struct kvm_lapic *src, struct kvm_lapic_irq *irq, int *r, struct dest_map *dest_map) { - struct kvm_plane *plane = kvm->planes[0]; + struct kvm_plane *plane = kvm->planes[irq->plane]; struct kvm_apic_map *map; unsigned long bitmap; struct kvm_lapic **dst = NULL; @@ -1286,7 +1285,7 @@ bool kvm_irq_delivery_to_apic_fast(struct kvm *kvm, struct kvm_lapic *src, bool kvm_intr_is_single_vcpu_fast(struct kvm *kvm, struct kvm_lapic_irq *irq, struct kvm_vcpu **dest_vcpu) { - struct kvm_plane *plane = kvm->planes[0]; + struct kvm_plane *plane = kvm->planes[irq->plane]; struct kvm_apic_map *map; unsigned long bitmap; struct kvm_lapic **dst = NULL; @@ -1422,7 +1421,7 @@ static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode, void kvm_bitmap_or_dest_vcpus(struct kvm *kvm, struct kvm_lapic_irq *irq, unsigned long *vcpu_bitmap) { - struct kvm_plane *plane = kvm->planes[0]; + struct kvm_plane *plane = kvm->planes[irq->plane]; struct kvm_lapic **dest_vcpu = NULL; struct kvm_lapic *src = NULL; struct kvm_apic_map *map; @@ -1544,6 +1543,7 @@ void kvm_apic_send_ipi(struct kvm_lapic *apic, u32 icr_low, u32 icr_high) irq.trig_mode = icr_low & APIC_INT_LEVELTRIG; irq.shorthand = icr_low & APIC_SHORT_MASK; irq.msi_redir_hint = false; + irq.plane = apic->vcpu->plane; if (apic_x2apic_mode(apic)) irq.dest_id = icr_high; else diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 19e3bb33bf7d..ce8e623052a7 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -9949,7 +9949,7 @@ static int kvm_pv_clock_pairing(struct kvm_vcpu *vcpu, gpa_t paddr, * * @apicid - apicid of vcpu to be kicked. 
*/ -static void kvm_pv_kick_cpu_op(struct kvm *kvm, int apicid) +static void kvm_pv_kick_cpu_op(struct kvm *kvm, unsigned plane_id, int apicid) { /* * All other fields are unused for APIC_DM_REMRD, but may be consumed by @@ -9960,6 +9960,7 @@ static void kvm_pv_kick_cpu_op(struct kvm *kvm, int apicid) .dest_mode = APIC_DEST_PHYSICAL, .shorthand = APIC_DEST_NOSHORT, .dest_id = apicid, + .plane = plane_id, }; kvm_irq_delivery_to_apic(kvm, NULL, &lapic_irq, NULL); @@ -10092,7 +10093,7 @@ int ____kvm_emulate_hypercall(struct kvm_vcpu *vcpu, int cpl, if (!guest_pv_has(vcpu, KVM_FEATURE_PV_UNHALT)) break; - kvm_pv_kick_cpu_op(vcpu->kvm, a1); + kvm_pv_kick_cpu_op(vcpu->kvm, vcpu->plane, a1); kvm_sched_yield(vcpu, a1); ret = 0; break; @@ -13559,7 +13560,8 @@ void kvm_arch_async_page_present(struct kvm_vcpu *vcpu, { struct kvm_lapic_irq irq = { .delivery_mode = APIC_DM_FIXED, - .vector = vcpu->arch.apf.vec + .vector = vcpu->arch.apf.vec, + .plane = vcpu->plane, }; if (work->wakeup_all) diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c index 7449be30d701..ac9c69f2190b 100644 --- a/arch/x86/kvm/xen.c +++ b/arch/x86/kvm/xen.c @@ -625,6 +625,7 @@ void kvm_xen_inject_vcpu_vector(struct kvm_vcpu *v) irq.shorthand = APIC_DEST_NOSHORT; irq.delivery_mode = APIC_DM_FIXED; irq.level = 1; + irq.plane = v->plane; kvm_irq_delivery_to_apic(v->kvm, NULL, &irq, NULL); }

From patchwork Tue Apr 1 16:10:58 2025
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: roy.hopkins@suse.com, seanjc@google.com, thomas.lendacky@amd.com, ashish.kalra@amd.com, michael.roth@amd.com, jroedel@suse.de, nsaenz@amazon.com, anelkz@amazon.de, James.Bottomley@HansenPartnership.com
Subject: [PATCH 21/29] KVM: x86: add infrastructure to share FPU across planes
Date: Tue, 1 Apr 2025 18:10:58 +0200
Message-ID: <20250401161106.790710-22-pbonzini@redhat.com>
In-Reply-To: <20250401161106.790710-1-pbonzini@redhat.com>
References: <20250401161106.790710-1-pbonzini@redhat.com>

Wrap fpu_alloc_guest_fpstate() and fpu_free_guest_fpstate() so that only one FPU
exists for vCPUs that are in different planes but share the same vCPU id. This API could be handy for VTL implementation but it may be tricky because for some registers sharing would be a bad idea (even MPX right now if it weren't deprecated, but APX in the future could be worse). Signed-off-by: Paolo Bonzini --- arch/x86/include/asm/kvm_host.h | 3 +++ arch/x86/kvm/x86.c | 47 ++++++++++++++++++++++++++++----- 2 files changed, 44 insertions(+), 6 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 283d8a4b5b14..9ac39f128a53 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1347,6 +1347,7 @@ struct kvm_arch { unsigned int indirect_shadow_pages; u8 mmu_valid_gen; u8 vm_type; + bool planes_share_fpu; bool has_private_mem; bool has_protected_state; bool pre_fault_allowed; @@ -2447,4 +2448,6 @@ int memslot_rmap_alloc(struct kvm_memory_slot *slot, unsigned long npages); */ #define KVM_EXIT_HYPERCALL_MBZ GENMASK_ULL(31, 1) +bool kvm_arch_planes_share_fpu(struct kvm *kvm); + #endif /* _ASM_X86_KVM_HOST_H */ diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index ce8e623052a7..ebdbd08a840b 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -6626,6 +6626,17 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm, kvm->arch.triple_fault_event = cap->args[0]; r = 0; break; + case KVM_CAP_PLANES_FPU: + r = -EINVAL; + if (atomic_read(&kvm->online_vcpus)) + break; + if (cap->args[0] > 1) + break; + if (cap->args[0] && kvm->arch.has_protected_state) + break; + kvm->arch.planes_share_fpu = cap->args[0]; + r = 0; + break; case KVM_CAP_X86_USER_SPACE_MSR: r = -EINVAL; if (cap->args[0] & ~KVM_MSR_EXIT_REASON_VALID_MASK) @@ -12332,6 +12343,27 @@ int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id) return kvm_x86_call(vcpu_precreate)(kvm); } +static void kvm_free_guest_fpstate(struct kvm_vcpu *vcpu, unsigned plane) +{ + if (plane == 0 || !vcpu->kvm->arch.planes_share_fpu) + fpu_free_guest_fpstate(&vcpu->arch.guest_fpu); +} + +static int kvm_init_guest_fpstate(struct kvm_vcpu *vcpu, struct kvm_vcpu *plane0_vcpu) +{ + if (plane0_vcpu && vcpu->kvm->arch.planes_share_fpu) { + vcpu->arch.guest_fpu = plane0_vcpu->arch.guest_fpu; + return 0; + } + + if (!fpu_alloc_guest_fpstate(&vcpu->arch.guest_fpu)) { + pr_err("failed to allocate vcpu's fpu\n"); + return -ENOMEM; + } + + return 0; +} + int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu, struct kvm_plane *plane) { struct page *page; @@ -12378,10 +12410,8 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu, struct kvm_plane *plane) if (!alloc_emulate_ctxt(vcpu)) goto free_wbinvd_dirty_mask; - if (!fpu_alloc_guest_fpstate(&vcpu->arch.guest_fpu)) { - pr_err("failed to allocate vcpu's fpu\n"); + if (kvm_init_guest_fpstate(vcpu, plane->plane ? 
vcpu->plane0 : NULL) < 0) goto free_emulate_ctxt; - } kvm_async_pf_hash_reset(vcpu); @@ -12413,7 +12443,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu, struct kvm_plane *plane) return 0; free_guest_fpu: - fpu_free_guest_fpstate(&vcpu->arch.guest_fpu); + kvm_free_guest_fpstate(vcpu, plane->plane); free_emulate_ctxt: kmem_cache_free(x86_emulator_cache, vcpu->arch.emulate_ctxt); free_wbinvd_dirty_mask: @@ -12459,7 +12489,7 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu) kmem_cache_free(x86_emulator_cache, vcpu->arch.emulate_ctxt); free_cpumask_var(vcpu->arch.wbinvd_dirty_mask); - fpu_free_guest_fpstate(&vcpu->arch.guest_fpu); + kvm_free_guest_fpstate(vcpu, vcpu->plane); kvm_xen_destroy_vcpu(vcpu); kvm_hv_vcpu_uninit(vcpu); @@ -12824,7 +12854,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) kvm->arch.apic_bus_cycle_ns = APIC_BUS_CYCLE_NS_DEFAULT; kvm->arch.guest_can_read_msr_platform_info = true; kvm->arch.enable_pmu = enable_pmu; - + kvm->arch.planes_share_fpu = false; #if IS_ENABLED(CONFIG_HYPERV) spin_lock_init(&kvm->arch.hv_root_tdp_lock); kvm->arch.hv_root_tdp = INVALID_PAGE; @@ -13881,6 +13911,11 @@ int kvm_handle_invpcid(struct kvm_vcpu *vcpu, unsigned long type, gva_t gva) } EXPORT_SYMBOL_GPL(kvm_handle_invpcid); +bool kvm_arch_planes_share_fpu(struct kvm *kvm) +{ + return !kvm || kvm->arch.planes_share_fpu; +} + static int complete_sev_es_emulated_mmio(struct kvm_vcpu *vcpu) { struct kvm_run *run = vcpu->run;

From patchwork Tue Apr 1 16:10:59 2025
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: roy.hopkins@suse.com, seanjc@google.com, thomas.lendacky@amd.com, ashish.kalra@amd.com, michael.roth@amd.com, jroedel@suse.de, nsaenz@amazon.com, anelkz@amazon.de, James.Bottomley@HansenPartnership.com
Subject: [PATCH 22/29] KVM: x86: implement initial plane support
Date: Tue, 1 Apr 2025 18:10:59 +0200
Message-ID: <20250401161106.790710-23-pbonzini@redhat.com>
In-Reply-To: <20250401161106.790710-1-pbonzini@redhat.com>
References: <20250401161106.790710-1-pbonzini@redhat.com>

Implement more of the shared state,
namely the PIO emulation area and ioctl(KVM_RUN). Signed-off-by: Paolo Bonzini --- arch/x86/kvm/x86.c | 45 +++++++++++++++++++++++++++++++++++---------- 1 file changed, 35 insertions(+), 10 deletions(-) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index ebdbd08a840b..d2b43d9b6543 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -11567,7 +11567,7 @@ static void kvm_put_guest_fpu(struct kvm_vcpu *vcpu) trace_kvm_fpu(0); } -int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) +static int kvm_vcpu_ioctl_run_plane(struct kvm_vcpu *vcpu) { struct kvm_queued_exception *ex = &vcpu->arch.exception; struct kvm_run *kvm_run = vcpu->run; @@ -11585,7 +11585,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) kvm_vcpu_srcu_read_lock(vcpu); if (unlikely(vcpu->arch.mp_state == KVM_MP_STATE_UNINITIALIZED)) { - if (!vcpu->wants_to_run) { + if (!vcpu->plane0->wants_to_run) { r = -EINTR; goto out; } @@ -11664,7 +11664,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) WARN_ON_ONCE(vcpu->mmio_needed); } - if (!vcpu->wants_to_run) { + if (!vcpu->plane0->wants_to_run) { r = -EINTR; goto out; } @@ -11687,6 +11687,25 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) return r; } +int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) +{ + int plane_id = READ_ONCE(vcpu->run->plane); + struct kvm_plane *plane = vcpu->kvm->planes[plane_id]; + int r; + + if (plane_id) { + vcpu = kvm_get_plane_vcpu(plane, vcpu->vcpu_id); + mutex_lock_nested(&vcpu->mutex, 1); + } + + r = kvm_vcpu_ioctl_run_plane(vcpu); + + if (plane_id) + mutex_unlock(&vcpu->mutex); + + return r; +} + static void __get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) { if (vcpu->arch.emulate_regs_need_sync_to_vcpu) { @@ -12366,7 +12385,7 @@ static int kvm_init_guest_fpstate(struct kvm_vcpu *vcpu, struct kvm_vcpu *plane0 int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu, struct kvm_plane *plane) { - struct page *page; + struct page *page = NULL; int r; vcpu->arch.last_vmentry_cpu = -1; @@ -12390,10 +12409,15 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu, struct kvm_plane *plane) r = -ENOMEM; - page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO); - if (!page) - goto fail_free_lapic; - vcpu->arch.pio_data = page_address(page); + if (plane->plane) { + page = NULL; + vcpu->arch.pio_data = vcpu->plane0->arch.pio_data; + } else { + page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO); + if (!page) + goto fail_free_lapic; + vcpu->arch.pio_data = page_address(page); + } vcpu->arch.mce_banks = kcalloc(KVM_MAX_MCE_BANKS * 4, sizeof(u64), GFP_KERNEL_ACCOUNT); @@ -12451,7 +12475,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu, struct kvm_plane *plane) fail_free_mce_banks: kfree(vcpu->arch.mce_banks); kfree(vcpu->arch.mci_ctl2_banks); - free_page((unsigned long)vcpu->arch.pio_data); + __free_page(page); fail_free_lapic: kvm_free_lapic(vcpu); fail_mmu_destroy: @@ -12500,7 +12524,8 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu) idx = srcu_read_lock(&vcpu->kvm->srcu); kvm_mmu_destroy(vcpu); srcu_read_unlock(&vcpu->kvm->srcu, idx); - free_page((unsigned long)vcpu->arch.pio_data); + if (!vcpu->plane) + free_page((unsigned long)vcpu->arch.pio_data); kvfree(vcpu->arch.cpuid_entries); } From patchwork Tue Apr 1 16:11:00 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 14035111 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 
bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 18D0D2222DD for ; Tue, 1 Apr 2025 16:12:16 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.133.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1743523937; cv=none; b=L3j+1T+UNSwlLwvSAZxaFQbJ4EVu5L52bK76FryH1ThhxnEqAQKqsKTGmRQjkIKTNBbaCMO1JRw8TKaebw/YhdT55cUR4MSjFJzAsfAOw3e0T/424H1MUE/YhHxYTC2dFwnaTrmJQMkPwK9d3E4+EpChckZFEL8GvgH7543mauQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1743523937; c=relaxed/simple; bh=rEEG/JqTWYMnquhcA1NsEvfPwDFWFVv9Eii5wJCc9Lk=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=fW/F1UeDmWmZgVqvPvAtbF5dIPJjxdh50zFLhEZ5+I3NhqJ4DssbAefEzdUV7tNuGy/yJqwdvNPFserVUjBfbRa9czQByI9159i58hQPWY6FemN/yKHk3RXSG18O+5UvhXHVBjDRS27iiKMLo/R7Xz507tLk0IHv04CUrvp2KDk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=fdcRZeMp; arc=none smtp.client-ip=170.10.133.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="fdcRZeMp" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1743523935; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=clEaJtGkNoCtPvE/vwlOxKGgMTuZLXXi/afw5Ojz1dw=; b=fdcRZeMp+6Cv7Rqo498oJ1QBv8fQH+8f8F/eGvNd4cxAQnIJEDk332GaZb8EqursOYf6Dr HVOeeEF7vypGM+KSX5Nn8FXH4WhddoYHnJ2/WcO2Vnv+V0wkM25YKGIvyuTcTdGUpFSZf9 P0TNN3VOD5yHNlKfQGgWlpN/u1u+U68= Received: from mail-wm1-f70.google.com (mail-wm1-f70.google.com [209.85.128.70]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-388-I4aRB9-UOpe-paTa5YVDKw-1; Tue, 01 Apr 2025 12:12:13 -0400 X-MC-Unique: I4aRB9-UOpe-paTa5YVDKw-1 X-Mimecast-MFC-AGG-ID: I4aRB9-UOpe-paTa5YVDKw_1743523932 Received: by mail-wm1-f70.google.com with SMTP id 5b1f17b1804b1-43d51bd9b41so51393245e9.3 for ; Tue, 01 Apr 2025 09:12:13 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1743523932; x=1744128732; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=clEaJtGkNoCtPvE/vwlOxKGgMTuZLXXi/afw5Ojz1dw=; b=AjBzXmusE5PHJ4odf7OvpAvr57UKPQ/ds1l+i4CFcX0RyS1tV2k573+wfzMj6bt732 LMArp7a6AlAsPmxyyWdg/Va/RitrsdPGtQZDU1cFKsb0Dw1Nv5qTMI2N/ahkBf9uFYZA 1LALLFse1r5ITNxta07uJucn0QlsvRwUBia07Qvrd6ax6ZJNpaPsjqljkUHVKR7heeZ7 iwL0CiGgZFm/S7uXQ7HRgz16YQ6LH61qNYAYXOFKOcY76GxbLFIZPDb1mkB2qI6mTq9c 9iggdqR8X4PIOtLD44G8AZ0pC6B6D7QGNA6HjwDoK9EhvuppircmrV6RFhln02AMBIAR dyRw== X-Forwarded-Encrypted: i=1; AJvYcCVqaTCd8A7nGZKwPzMxNrIJj/HBZXk49S76jnWM2Her3xta9sIyQUZzBA8P7rWf5BFV6Fw=@vger.kernel.org X-Gm-Message-State: AOJu0YxHolMTMC1VL50YevrB8AvMwvGJcxdk0gJ1lpMgfjUiSRI6SuW7 f3ujOtBDRQPsqpz174JwM4Wvx98JnCwp1Iq4bzTYmKEbi6EXcUpx3RQH6Z6Bb/aWkRyM5g+jV10 kh3AfoHgWTtU8KxK6gXBNC090HKG9Kd4/EZauoMOHSD9UeLyn3w== X-Gm-Gg: 
ASbGncu9muelWyqRx1qSOgT/M6ZibCwlwGssEQvnyeVPInUZ0C33mGsb1BdLbo6hjwY GFAsMFxR7TUsCrfCt35iQ9bVAcXulsLQMG1/qv+e5ppRW4kj2lqw06EDBCaUmVKd+SjmsnttLrk y9sz4q3xSMIYOM8cBWIduCt9aGe8BEIciSWCbrN53VYQ8MmthVU2razoLFFM5AN4yZ3dSuE/GVD iEv0HsZxPSiNkYd/nRHSV3Ke3/rMvC8Xw2tGb8vnhMT8/Gu8kVZD0Hs0o60pXy8vbI7s5qydr42 9pxNw210utAvsXAPlydSmA== X-Received: by 2002:a05:600c:3584:b0:439:6118:c188 with SMTP id 5b1f17b1804b1-43db62b7c5emr104954805e9.19.1743523932506; Tue, 01 Apr 2025 09:12:12 -0700 (PDT) X-Google-Smtp-Source: AGHT+IGcsGAEYE+5PwlQjdEBXjpJ/Slimxfn6OxxsnrMh8halxw+3MQX3DryS9bQZc9lJ/fr1kYiAQ== X-Received: by 2002:a05:600c:3584:b0:439:6118:c188 with SMTP id 5b1f17b1804b1-43db62b7c5emr104954555e9.19.1743523932128; Tue, 01 Apr 2025 09:12:12 -0700 (PDT) Received: from [192.168.10.48] ([176.206.111.201]) by smtp.gmail.com with ESMTPSA id 5b1f17b1804b1-43d8fba3b13sm165471445e9.3.2025.04.01.09.12.10 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 01 Apr 2025 09:12:10 -0700 (PDT) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: roy.hopkins@suse.com, seanjc@google.com, thomas.lendacky@amd.com, ashish.kalra@amd.com, michael.roth@amd.com, jroedel@suse.de, nsaenz@amazon.com, anelkz@amazon.de, James.Bottomley@HansenPartnership.com Subject: [PATCH 23/29] KVM: x86: extract kvm_post_set_cpuid Date: Tue, 1 Apr 2025 18:11:00 +0200 Message-ID: <20250401161106.790710-24-pbonzini@redhat.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250401161106.790710-1-pbonzini@redhat.com> References: <20250401161106.790710-1-pbonzini@redhat.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 CPU state depends on CPUID info and is initialized by KVM_SET_CPUID2, but KVM_SET_CPUID2 does not exist for non-default planes. Instead, they just copy over the CPUID info of plane 0. Extract the tail of KVM_SET_CPUID2 so that it can be executed as part of KVM_CREATE_VCPU_PLANE. 
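(Illustration only: the small C program below models the refactoring pattern this patch applies — the common tail of the CPUID "set" path is extracted so that both the userspace-driven setter and the plane-creation path can run it. The names post_set_cpuid, set_cpuid and init_plane_vcpu, and the structs, are invented for the sketch; only the split itself mirrors the patch.)

#include <stdio.h>

struct cpuid_info { unsigned int max_leaf; unsigned int features; };
struct vcpu_model { struct cpuid_info cpuid; int plane; };

/* The extracted tail: validation plus "after set" updates, shared by both entry points. */
static int post_set_cpuid(struct vcpu_model *vcpu)
{
	if (vcpu->cpuid.max_leaf == 0)
		return -1;			/* reject nonsensical CPUID */
	/* ... recompute state derived from vcpu->cpuid here ... */
	return 0;
}

/* Plane-0 path: CPUID comes from userspace (think KVM_SET_CPUID2). */
static int set_cpuid(struct vcpu_model *vcpu, const struct cpuid_info *in)
{
	vcpu->cpuid = *in;
	return post_set_cpuid(vcpu);
}

/* Non-default plane path: CPUID is copied from the plane-0 vCPU,
 * then the same extracted tail runs. */
static int init_plane_vcpu(struct vcpu_model *vcpu, const struct vcpu_model *plane0)
{
	vcpu->cpuid = plane0->cpuid;
	return post_set_cpuid(vcpu);
}

int main(void)
{
	struct vcpu_model v0 = { .plane = 0 }, v1 = { .plane = 1 };
	struct cpuid_info info = { .max_leaf = 0x1f, .features = 0xabcd };

	if (set_cpuid(&v0, &info) == 0 && init_plane_vcpu(&v1, &v0) == 0)
		printf("plane 1 vCPU inherits max_leaf 0x%x\n", v1.cpuid.max_leaf);
	return 0;
}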
Signed-off-by: Paolo Bonzini --- arch/x86/kvm/cpuid.c | 38 ++++++++++++++++++++++++-------------- arch/x86/kvm/cpuid.h | 1 + 2 files changed, 25 insertions(+), 14 deletions(-) diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c index f760a8a5d719..142decb3a736 100644 --- a/arch/x86/kvm/cpuid.c +++ b/arch/x86/kvm/cpuid.c @@ -488,6 +488,29 @@ u64 kvm_vcpu_reserved_gpa_bits_raw(struct kvm_vcpu *vcpu) return rsvd_bits(cpuid_maxphyaddr(vcpu), 63); } +int kvm_post_set_cpuid(struct kvm_vcpu *vcpu) +{ + int r; + +#ifdef CONFIG_KVM_HYPERV + if (kvm_cpuid_has_hyperv(vcpu)) { + r = kvm_hv_vcpu_init(vcpu); + if (r) + return r; + } +#endif + + r = kvm_check_cpuid(vcpu); + if (r) + return r; + +#ifdef CONFIG_KVM_XEN + vcpu->arch.xen.cpuid = kvm_get_hypervisor_cpuid(vcpu, XEN_SIGNATURE); +#endif + kvm_vcpu_after_set_cpuid(vcpu); + return 0; +} + static int kvm_set_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid_entry2 *e2, int nent) { @@ -529,23 +552,10 @@ static int kvm_set_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid_entry2 *e2, goto success; } -#ifdef CONFIG_KVM_HYPERV - if (kvm_cpuid_has_hyperv(vcpu)) { - r = kvm_hv_vcpu_init(vcpu); - if (r) - goto err; - } -#endif - - r = kvm_check_cpuid(vcpu); + r = kvm_post_set_cpuid(vcpu); if (r) goto err; -#ifdef CONFIG_KVM_XEN - vcpu->arch.xen.cpuid = kvm_get_hypervisor_cpuid(vcpu, XEN_SIGNATURE); -#endif - kvm_vcpu_after_set_cpuid(vcpu); - success: kvfree(e2); return 0; diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h index d3f5ae15a7ca..05cc1245f570 100644 --- a/arch/x86/kvm/cpuid.h +++ b/arch/x86/kvm/cpuid.h @@ -42,6 +42,7 @@ static inline struct kvm_cpuid_entry2 *kvm_find_cpuid_entry(struct kvm_vcpu *vcp int kvm_dev_ioctl_get_cpuid(struct kvm_cpuid2 *cpuid, struct kvm_cpuid_entry2 __user *entries, unsigned int type); +int kvm_post_set_cpuid(struct kvm_vcpu *vcpu); int kvm_vcpu_ioctl_set_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid *cpuid, struct kvm_cpuid_entry __user *entries); From patchwork Tue Apr 1 16:11:01 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 14035112 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 8DC2922258B for ; Tue, 1 Apr 2025 16:12:18 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.133.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1743523940; cv=none; b=pNrtnEUoE9vERWmHHIzTlT1wawTCb55q5vuMV+HHkIuL/CRzsQpzMUjqnrI1m1ibaRYq9U1fCtKybKxM+VIdteZca85nzNzzQ/bwx59fP8uba0dyu3ziVmRU1R4sq5bBh75TjyWn3ttZ0g0IKq+oF+g28jowAFhL6GIW5w0ydaU= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1743523940; c=relaxed/simple; bh=RqolX2nBYsBB1RvK+C0gqm089aXRe/tRZbLNpfcHISU=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=pJfqsVykdZ9YIj/TFUs05i53qyENN1ArKABGwwsVipSfxbJPzIQuEohBUZqnsm/Gsj710UOISkaZdJyNWh9xDcEOBXaeMAZr2Dp9H59jGFhJ4wDrlforAxF6hgobVtUpTHQPpOzASz9iCFpbqLP4rZCEl8AIclh4OacLYr0GrhI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=F1ZcQn58; arc=none smtp.client-ip=170.10.133.124 Authentication-Results: 
From patchwork Tue Apr 1 16:11:01 2025
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 14035112
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: roy.hopkins@suse.com, seanjc@google.com, thomas.lendacky@amd.com,
    ashish.kalra@amd.com, michael.roth@amd.com, jroedel@suse.de, nsaenz@amazon.com,
anelkz@amazon.de, James.Bottomley@HansenPartnership.com Subject: [PATCH 24/29] KVM: x86: initialize CPUID for non-default planes Date: Tue, 1 Apr 2025 18:11:01 +0200 Message-ID: <20250401161106.790710-25-pbonzini@redhat.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250401161106.790710-1-pbonzini@redhat.com> References: <20250401161106.790710-1-pbonzini@redhat.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Copy the initial CPUID from plane 0. To avoid mismatches, block KVM_SET_CPUID{,2} after KVM_CREATE_VCPU_PLANE similar to how it is blocked after KVM_RUN; this is handled by a tiny bit of architecture independent code. Signed-off-by: Paolo Bonzini --- Documentation/virt/kvm/api.rst | 4 +++- arch/x86/kvm/cpuid.c | 19 ++++++++++++++++++- arch/x86/kvm/cpuid.h | 1 + arch/x86/kvm/x86.c | 7 ++++++- include/linux/kvm_host.h | 1 + virt/kvm/kvm_main.c | 1 + 6 files changed, 30 insertions(+), 3 deletions(-) diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst index 16d836b954dc..3739d16b7164 100644 --- a/Documentation/virt/kvm/api.rst +++ b/Documentation/virt/kvm/api.rst @@ -736,7 +736,9 @@ Caveat emptor: configuration (if there is) is not corrupted. Userspace can get a copy of the resulting CPUID configuration through KVM_GET_CPUID2 in case. - Using KVM_SET_CPUID{,2} after KVM_RUN, i.e. changing the guest vCPU model - after running the guest, may cause guest instability. + after running the guest, is forbidden; so is using the ioctls after + KVM_CREATE_VCPU_PLANE, because all planes must have the same CPU + capabilities. - Using heterogeneous CPUID configurations, modulo APIC IDs, topology, etc... may cause guest instability. diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c index 142decb3a736..44e6d4989bdd 100644 --- a/arch/x86/kvm/cpuid.c +++ b/arch/x86/kvm/cpuid.c @@ -545,7 +545,7 @@ static int kvm_set_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid_entry2 *e2, * KVM_SET_CPUID{,2} again. To support this legacy behavior, check * whether the supplied CPUID data is equal to what's already set. 
*/ - if (kvm_vcpu_has_run(vcpu)) { + if (kvm_vcpu_has_run(vcpu) || vcpu->has_planes) { r = kvm_cpuid_check_equal(vcpu, e2, nent); if (r) goto err; @@ -567,6 +567,23 @@ static int kvm_set_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid_entry2 *e2, return r; } +int kvm_dup_cpuid(struct kvm_vcpu *vcpu, struct kvm_vcpu *source) +{ + if (WARN_ON_ONCE(vcpu->arch.cpuid_entries || vcpu->arch.cpuid_nent)) + return -EEXIST; + + vcpu->arch.cpuid_entries = kmemdup(source->arch.cpuid_entries, + source->arch.cpuid_nent * sizeof(struct kvm_cpuid_entry2), + GFP_KERNEL_ACCOUNT); + if (!vcpu->arch.cpuid_entries) + return -ENOMEM; + + memcpy(vcpu->arch.cpu_caps, source->arch.cpu_caps, sizeof(source->arch.cpu_caps)); + vcpu->arch.cpuid_nent = source->arch.cpuid_nent; + + return 0; +} + /* when an old userspace process fills a new kernel module */ int kvm_vcpu_ioctl_set_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid *cpuid, diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h index 05cc1245f570..a5983c635a70 100644 --- a/arch/x86/kvm/cpuid.h +++ b/arch/x86/kvm/cpuid.h @@ -42,6 +42,7 @@ static inline struct kvm_cpuid_entry2 *kvm_find_cpuid_entry(struct kvm_vcpu *vcp int kvm_dev_ioctl_get_cpuid(struct kvm_cpuid2 *cpuid, struct kvm_cpuid_entry2 __user *entries, unsigned int type); +int kvm_dup_cpuid(struct kvm_vcpu *vcpu, struct kvm_vcpu *source); int kvm_post_set_cpuid(struct kvm_vcpu *vcpu); int kvm_vcpu_ioctl_set_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid *cpuid, diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index d2b43d9b6543..be4d7b97367b 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -12412,6 +12412,11 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu, struct kvm_plane *plane) if (plane->plane) { page = NULL; vcpu->arch.pio_data = vcpu->plane0->arch.pio_data; + r = kvm_dup_cpuid(vcpu, vcpu->plane0); + if (r < 0) + goto fail_free_lapic; + + r = -ENOMEM; } else { page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO); if (!page) @@ -12459,7 +12464,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu, struct kvm_plane *plane) kvm_xen_init_vcpu(vcpu); vcpu_load(vcpu); - kvm_vcpu_after_set_cpuid(vcpu); + WARN_ON_ONCE(kvm_post_set_cpuid(vcpu)); kvm_set_tsc_khz(vcpu, vcpu->kvm->arch.default_tsc_khz); kvm_vcpu_reset(vcpu, false); kvm_init_mmu(vcpu); diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 5cade1c04646..0b764951f461 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -344,6 +344,7 @@ struct kvm_vcpu { struct mutex mutex; /* Only valid on plane 0 */ + bool has_planes; bool wants_to_run; /* Shared for all planes */ diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index db38894f6fa3..3a04fdf0865d 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -4182,6 +4182,7 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm_plane *plane, struct kvm_vcpu *pl if (plane->plane) { page = NULL; vcpu->run = plane0_vcpu->run; + plane0_vcpu->has_planes = true; } else { WARN_ON(plane0_vcpu != NULL); plane0_vcpu = vcpu; From patchwork Tue Apr 1 16:11:02 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 14035113 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 00D8322331F for ; Tue, 1 Apr 2025 16:12:20 +0000 (UTC) Authentication-Results: 
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: [PATCH 25/29] KVM: x86: handle interrupt priorities for planes
Date: Tue, 1 Apr 2025 18:11:02 +0200
Message-ID: <20250401161106.790710-26-pbonzini@redhat.com>
MIME-Version: 1.0

Force a userspace exit if an interrupt is delivered to a higher-priority
plane, where priority is represented by vcpu->run->req_exit_planes.  The
set of planes with a pending IRR is manipulated atomically and stored in
the plane-0 vCPU, since it is handy to reach from the target vCPU.

TODO: haven't put much thought into IPI virtualization.

Signed-off-by: Paolo Bonzini
---
 arch/x86/include/asm/kvm_host.h |  7 +++++
 arch/x86/kvm/lapic.c            | 36 +++++++++++++++++++++++--
 arch/x86/kvm/x86.c              | 48 +++++++++++++++++++++++++++++++++
 include/linux/kvm_host.h        |  2 ++
 4 files changed, 91 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 9ac39f128a53..0344e8bed319 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -125,6 +125,7 @@
 #define KVM_REQ_HV_TLB_FLUSH \
 	KVM_ARCH_REQ_FLAGS(32, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
 #define KVM_REQ_UPDATE_PROTECTED_GUEST_STATE	KVM_ARCH_REQ(34)
+#define KVM_REQ_PLANE_INTERRUPT			KVM_ARCH_REQ(35)
 
 #define CR0_RESERVED_BITS                                               \
 	(~(unsigned long)(X86_CR0_PE | X86_CR0_MP | X86_CR0_EM | X86_CR0_TS \
@@ -864,6 +865,12 @@ struct kvm_vcpu_arch {
 	u64 xcr0;
 	u64 guest_supported_xcr0;
 
+	/*
+	 * Only valid in plane0. The bitmask of planes that received
+	 * an interrupt, to be checked against req_exit_planes.
+ */ + atomic_t irr_pending_planes; + struct kvm_pio_request pio; void *pio_data; void *sev_pio_data; diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c index 16a0e2387f2c..21dbc539cbe7 100644 --- a/arch/x86/kvm/lapic.c +++ b/arch/x86/kvm/lapic.c @@ -1311,6 +1311,39 @@ bool kvm_intr_is_single_vcpu_fast(struct kvm *kvm, struct kvm_lapic_irq *irq, return ret; } +static void kvm_lapic_deliver_interrupt(struct kvm_vcpu *vcpu, struct kvm_lapic *apic, + int delivery_mode, int trig_mode, int vector) +{ + struct kvm_vcpu *plane0_vcpu = vcpu->plane0; + struct kvm_plane *running_plane; + u16 req_exit_planes; + + kvm_x86_call(deliver_interrupt)(apic, delivery_mode, trig_mode, vector); + + /* + * test_and_set_bit implies a memory barrier, so IRR is written before + * reading irr_pending_planes below... + */ + if (!test_and_set_bit(vcpu->plane, &plane0_vcpu->arch.irr_pending_planes)) { + /* + * ... and also running_plane and req_exit_planes are read after writing + * irr_pending_planes. Both barriers pair with kvm_arch_vcpu_ioctl_run(). + */ + smp_mb__after_atomic(); + + running_plane = READ_ONCE(plane0_vcpu->running_plane); + if (!running_plane) + return; + + req_exit_planes = READ_ONCE(plane0_vcpu->req_exit_planes); + if (!(req_exit_planes & BIT(vcpu->plane))) + return; + + kvm_make_request(KVM_REQ_PLANE_INTERRUPT, + kvm_get_plane_vcpu(running_plane, vcpu->vcpu_id)); + } +} + /* * Add a pending IRQ into lapic. * Return 1 if successfully added and 0 if discarded. @@ -1352,8 +1385,7 @@ static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode, apic->regs + APIC_TMR); } - kvm_x86_call(deliver_interrupt)(apic, delivery_mode, - trig_mode, vector); + kvm_lapic_deliver_interrupt(vcpu, apic, delivery_mode, trig_mode, vector); break; case APIC_DM_REMRD: diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index be4d7b97367b..4546d1049f43 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -10960,6 +10960,19 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu) goto out; } } + if (kvm_check_request(KVM_REQ_PLANE_INTERRUPT, vcpu)) { + u16 irr_pending_planes = atomic_read(&vcpu->plane0->arch.irr_pending_planes); + u16 target = irr_pending_planes & vcpu->plane0->req_exit_planes; + if (target) { + vcpu->run->exit_reason = KVM_EXIT_PLANE_EVENT; + vcpu->run->plane_event.cause = KVM_PLANE_EVENT_INTERRUPT; + vcpu->run->plane_event.flags = 0; + vcpu->run->plane_event.pending_event_planes = irr_pending_planes; + vcpu->run->plane_event.target = target; + r = 0; + goto out; + } + } } if (kvm_check_request(KVM_REQ_EVENT, vcpu) || req_int_win || @@ -11689,8 +11702,11 @@ static int kvm_vcpu_ioctl_run_plane(struct kvm_vcpu *vcpu) int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) { + struct kvm_vcpu *plane0_vcpu = vcpu; int plane_id = READ_ONCE(vcpu->run->plane); struct kvm_plane *plane = vcpu->kvm->planes[plane_id]; + u16 req_exit_planes = READ_ONCE(vcpu->run->req_exit_planes) & ~BIT(plane_id); + u16 irr_pending_planes; int r; if (plane_id) { @@ -11698,8 +11714,40 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) mutex_lock_nested(&vcpu->mutex, 1); } + if (plane0_vcpu->has_planes) { + plane0_vcpu->req_exit_planes = req_exit_planes; + plane0_vcpu->running_plane = plane; + + /* + * Check for cross-plane interrupts that happened while outside KVM_RUN; + * write running_plane and req_exit_planes before reading irr_pending_planes. + * If an interrupt hasn't set irr_pending_planes yet, it will trigger + * KVM_REQ_PLANE_INTERRUPT itself in kvm_lapic_deliver_interrupt(). 
+ */ + smp_mb__before_atomic(); + + irr_pending_planes = atomic_fetch_and(~BIT(plane_id), &plane0_vcpu->arch.irr_pending_planes); + if (req_exit_planes & irr_pending_planes) + kvm_make_request(KVM_REQ_PLANE_INTERRUPT, vcpu); + } + r = kvm_vcpu_ioctl_run_plane(vcpu); + if (plane0_vcpu->has_planes) { + smp_store_release(&plane0_vcpu->running_plane, NULL); + + /* + * Clear irr_pending_planes before reading IRR; pairs with + * kvm_lapic_deliver_interrupt(). If this side doesn't see IRR set, + * the other side will certainly see the cleared bit irr_pending_planes + * and set it, and vice versa. + */ + clear_bit(plane_id, &plane0_vcpu->arch.irr_pending_planes); + smp_mb__after_atomic(); + if (kvm_lapic_find_highest_irr(vcpu)) + atomic_or(BIT(plane_id), &plane0_vcpu->arch.irr_pending_planes); + } + if (plane_id) mutex_unlock(&vcpu->mutex); diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 0b764951f461..442aed2b9cc6 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -346,6 +346,8 @@ struct kvm_vcpu { /* Only valid on plane 0 */ bool has_planes; bool wants_to_run; + u16 req_exit_planes; + struct kvm_plane *running_plane; /* Shared for all planes */ struct kvm_run *run; From patchwork Tue Apr 1 16:11:03 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 14035114 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 71DB92236E1 for ; Tue, 1 Apr 2025 16:12:23 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.129.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1743523945; cv=none; b=WnduadQwdLO2krBzykQnB8azhxuAN3uVt/hY5M+0xAAtTOQpnWOIT1Q/bGoL98zwNVjDELhwnivAo539fQa0VcCVIUtPDsBaGcGE/pxGZYBotnWM4sVgi53xyvMJsiLx2ix/MelEFOmjzcZg1bLMenCufYivQpWCBF+ngp7ehCc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1743523945; c=relaxed/simple; bh=Jr48n7DTkbCOMB/d0GEJ1ZQWRT8vQDsoeXOK//lT2/0=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=QMx6vUW7NEInjj4isWX1wWT+cQKndBQeK+HwyWUU/Hf+GAOJW0tSspXxCnXfotTPLKcgnJaa5QFkodkzzXSrh9/I+hMscOHo/Na/Lla369GVPv0t7Hw2WnQCqCDjyelwEfB+/d6f6VbHeJ+wDiqo4AiC1ZnKG5av5v9FMF2A87c= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=X4TYjrWV; arc=none smtp.client-ip=170.10.129.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="X4TYjrWV" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1743523942; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=JMmSY+BAN+od5q2oN5V+h7YwfbF4XF3XqSY7nT/C4e4=; b=X4TYjrWVzd7DLnPuVWUXT9q5HKDXSU7oTjbW7VDJQFBxlqug5rM6WhTTh3UTqnx+HqbMga 
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: [PATCH 26/29] KVM: x86: enable up to 16 planes
Date: Tue, 1 Apr 2025 18:11:03 +0200
Message-ID: <20250401161106.790710-27-pbonzini@redhat.com>
MIME-Version: 1.0

Allow up to 16 VM planes, it's a nice round number.

FIXME: online_vcpus is used by x86 code that deals with TSC
synchronization.  Maybe kvmclock should be moved to planex.
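[Editorial note: a hypothetical userspace sketch, not from the patch, of how a
VMM could create every plane a VM supports.  It assumes, as the selftests
later in this series do, that KVM_CHECK_EXTENSION(KVM_CAP_PLANES) reports the
number of planes and that plane 0 always exists implicitly, so only planes
1..n-1 are created with KVM_CREATE_PLANE.]

	#include <errno.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>	/* needs the uapi additions from this series */

	static int create_extra_planes(int kvm_fd, int vm_fd, int *plane_fds, int max)
	{
		int n = ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_PLANES);
		int i;

		if (n <= 1)
			return 0;	/* only plane 0, nothing to create */
		if (n > max)
			n = max;

		for (i = 1; i < n; i++) {
			/* The plane id is passed as the ioctl argument. */
			plane_fds[i] = ioctl(vm_fd, KVM_CREATE_PLANE, (unsigned long)i);
			if (plane_fds[i] < 0)
				return -errno;
		}
		return n;
	}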
Signed-off-by: Paolo Bonzini
---
 arch/x86/include/asm/kvm_host.h | 3 +++
 arch/x86/kvm/x86.c              | 6 ++++++
 2 files changed, 9 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 0344e8bed319..d0cb177b6f52 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -2339,6 +2339,8 @@ enum {
 # define kvm_memslots_for_spte_role(kvm, role) __kvm_memslots(kvm, 0)
 #endif
 
+#define KVM_MAX_VCPU_PLANES 16
+
 int kvm_cpu_has_injectable_intr(struct kvm_vcpu *v);
 int kvm_cpu_has_interrupt(struct kvm_vcpu *vcpu);
 int kvm_cpu_has_extint(struct kvm_vcpu *v);
@@ -2455,6 +2457,7 @@ int memslot_rmap_alloc(struct kvm_memory_slot *slot, unsigned long npages);
  */
 #define KVM_EXIT_HYPERCALL_MBZ		GENMASK_ULL(31, 1)
 
+int kvm_arch_nr_vcpu_planes(struct kvm *kvm);
 bool kvm_arch_planes_share_fpu(struct kvm *kvm);
 
 #endif /* _ASM_X86_KVM_HOST_H */

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 4546d1049f43..86d1a567f62e 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -13989,6 +13989,12 @@ int kvm_handle_invpcid(struct kvm_vcpu *vcpu, unsigned long type, gva_t gva)
 }
 EXPORT_SYMBOL_GPL(kvm_handle_invpcid);
 
+int kvm_arch_nr_vcpu_planes(struct kvm *kvm)
+{
+	/* TODO: use kvm_x86_ops so that SNP can use planes for VTPLs. */
+	return kvm->arch.has_protected_state ? 1 : KVM_MAX_VCPU_PLANES;
+}
+
 bool kvm_arch_planes_share_fpu(struct kvm *kvm)
 {
 	return !kvm || kvm->arch.planes_share_fpu;
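[Editorial note: a hedged sketch of how generic code is presumably expected to
consult the new arch hook when validating a KVM_CREATE_PLANE request; the
helper name is made up, and the error codes follow the behavior checked by the
selftests later in the series (plane 0 already exists, ids at or beyond the
arch limit are rejected).]

	static int kvm_vm_check_plane_id(struct kvm *kvm, unsigned long id)
	{
		if (id == 0)
			return -EEXIST;		/* plane 0 always exists */
		if (id >= kvm_arch_nr_vcpu_planes(kvm))
			return -EINVAL;		/* beyond the arch limit */
		return 0;
	}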
From patchwork Tue Apr 1 16:11:04 2025
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 14035115
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: [PATCH 27/29] selftests: kvm: introduce basic test for VM planes
Date: Tue, 1 Apr 2025 18:11:04 +0200
Message-ID: <20250401161106.790710-28-pbonzini@redhat.com>
MIME-Version:
1.0 Check a few error cases and ensure that a vCPU can have a second plane added to it. For now, all interactions happen through the bare __vm_ioctl() interface or even directly through the ioctl() system call. Signed-off-by: Paolo Bonzini --- tools/testing/selftests/kvm/Makefile.kvm | 1 + tools/testing/selftests/kvm/plane_test.c | 108 +++++++++++++++++++++++ 2 files changed, 109 insertions(+) create mode 100644 tools/testing/selftests/kvm/plane_test.c diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm index f62b0a5aba35..b1d0b410cc03 100644 --- a/tools/testing/selftests/kvm/Makefile.kvm +++ b/tools/testing/selftests/kvm/Makefile.kvm @@ -57,6 +57,7 @@ TEST_GEN_PROGS_COMMON += guest_print_test TEST_GEN_PROGS_COMMON += kvm_binary_stats_test TEST_GEN_PROGS_COMMON += kvm_create_max_vcpus TEST_GEN_PROGS_COMMON += kvm_page_table_test +TEST_GEN_PROGS_COMMON += plane_test TEST_GEN_PROGS_COMMON += set_memory_region_test # Compiled test targets diff --git a/tools/testing/selftests/kvm/plane_test.c b/tools/testing/selftests/kvm/plane_test.c new file mode 100644 index 000000000000..43c8de13490a --- /dev/null +++ b/tools/testing/selftests/kvm/plane_test.c @@ -0,0 +1,108 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2025 Red Hat, Inc. + * + * Test for architecture-neutral VM plane functionality + */ +#include +#include +#include +#include + +#include "test_util.h" + +#include "kvm_util.h" +#include "asm/kvm.h" +#include "linux/kvm.h" + +void test_create_plane_errors(int max_planes) +{ + struct kvm_vm *vm; + struct kvm_vcpu *vcpu; + int planefd, plane_vcpufd; + + vm = vm_create_barebones(); + vcpu = __vm_vcpu_add(vm, 0); + + planefd = __vm_ioctl(vm, KVM_CREATE_PLANE, (void *)(unsigned long)0); + TEST_ASSERT(planefd == -1 && errno == EEXIST, + "Creating existing plane, expecting EEXIST. ret: %d, errno: %d", + planefd, errno); + + planefd = __vm_ioctl(vm, KVM_CREATE_PLANE, (void *)(unsigned long)max_planes); + TEST_ASSERT(planefd == -1 && errno == EINVAL, + "Creating plane %d, expecting EINVAL. ret: %d, errno: %d", + max_planes, planefd, errno); + + plane_vcpufd = __vm_ioctl(vm, KVM_CREATE_VCPU_PLANE, (void *)(unsigned long)vcpu->fd); + TEST_ASSERT(plane_vcpufd == -1 && errno == ENOTTY, + "Creating vCPU for plane 0, expecting ENOTTY. ret: %d, errno: %d", + plane_vcpufd, errno); + + kvm_vm_free(vm); + ksft_test_result_pass("error conditions\n"); +} + +void test_create_plane(void) +{ + struct kvm_vm *vm; + struct kvm_vcpu *vcpu; + int r, planefd, plane_vcpufd; + + vm = vm_create_barebones(); + vcpu = __vm_vcpu_add(vm, 0); + + planefd = __vm_ioctl(vm, KVM_CREATE_PLANE, (void *)(unsigned long)1); + TEST_ASSERT(planefd >= 0, "Creating new plane, got error: %d", + errno); + + r = ioctl(planefd, KVM_CHECK_EXTENSION, KVM_CAP_PLANES); + TEST_ASSERT(r == 0, + "Checking KVM_CHECK_EXTENSION(KVM_CAP_PLANES). ret: %d", r); + + r = ioctl(planefd, KVM_CHECK_EXTENSION, KVM_CAP_CHECK_EXTENSION_VM); + TEST_ASSERT(r == 1, + "Checking KVM_CHECK_EXTENSION(KVM_CAP_CHECK_EXTENSION_VM). ret: %d", r); + + r = __vm_ioctl(vm, KVM_CREATE_PLANE, (void *)(unsigned long)1); + TEST_ASSERT(r == -1 && errno == EEXIST, + "Creating existing plane, expecting EEXIST. 
ret: %d, errno: %d", + r, errno); + + plane_vcpufd = ioctl(planefd, KVM_CREATE_VCPU_PLANE, (void *)(unsigned long)vcpu->fd); + TEST_ASSERT(plane_vcpufd >= 0, "Creating vCPU for plane 1, got error: %d", errno); + + r = ioctl(planefd, KVM_CREATE_VCPU_PLANE, (void *)(unsigned long)vcpu->fd); + TEST_ASSERT(r == -1 && errno == EEXIST, + "Creating vCPU again for plane 1. ret: %d, errno: %d", + r, errno); + + r = ioctl(planefd, KVM_RUN, (void *)(unsigned long)0); + TEST_ASSERT(r == -1 && errno == ENOTTY, + "Running plane vCPU again for plane 1. ret: %d, errno: %d", + r, errno); + + close(plane_vcpufd); + close(planefd); + + kvm_vm_free(vm); + ksft_test_result_pass("basic planefd and plane_vcpufd operation\n"); +} + +int main(int argc, char *argv[]) +{ + int cap_planes = kvm_check_cap(KVM_CAP_PLANES); + TEST_REQUIRE(cap_planes); + + ksft_print_header(); + ksft_set_plan(2); + + pr_info("# KVM_CAP_PLANES: %d\n", cap_planes); + + test_create_plane_errors(cap_planes); + + if (cap_planes > 1) + test_create_plane(); + + ksft_finished(); +} From patchwork Tue Apr 1 16:11:05 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 14035116 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E2089224887 for ; Tue, 1 Apr 2025 16:12:30 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.129.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1743523952; cv=none; b=hqqvLnF2BN2A4W0dHG+Yh9OUuqMp0JyJ84i6LA/Oo4+Pr/pnY4SgSQa82U8V6+0k32yRpjw+9xo5coS6n61kKZd3WDl6Xqwdu6onU2c2dlD/RIl0F3rVO5WCvROlzj8hXjatDmRV1jbf2jvXVWbwplGd2puu/xsXoYxvEpRqmfs= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1743523952; c=relaxed/simple; bh=IzgYPa9TLRk/unXxPit+XhBgPDXmtIE8BQwrHks1a1I=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=A7j2wIRroUy3mf3raLH+rcWP+9dOWT5nZz2rMboayDVzCtCTxd6xk+qog7uxs91GOhSWBHeqrjQVMYt4cb2PiPK8JHkzRjz4MUSFfmbxFUarNtZAri+eo6oAEDOtsMgGHvz4jO3k0J3F005dyYCYH7EegGIJ8fr+M2t1oMCtzAo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=FskCZzmj; arc=none smtp.client-ip=170.10.129.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="FskCZzmj" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1743523949; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=ThjFpyMC5Hp17v5ZX3gmbcUwmxsf/SI/3VCI26Zru1Q=; b=FskCZzmjo+biTgYld/gpiRMW4B2e+sWznDvvJVRCIvziwmnB3QKsy9mCsAlXScE629cxVF S4+ESB1xsk5/2ZJjZSx3CbH41qf1YybYW8ci4LOnm2VR8Z9FOeiXwm64kxVkbKlPLBCL4Y 80i3NnAkJxHEo/zphfUKEWla89n1HMw= Received: from mail-wr1-f72.google.com (mail-wr1-f72.google.com [209.85.221.72]) by 
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: [PATCH 28/29] selftests: kvm: add plane infrastructure
Date: Tue, 1 Apr 2025 18:11:05 +0200
Message-ID: <20250401161106.790710-29-pbonzini@redhat.com>
MIME-Version: 1.0

Allow creating plane and vCPU-plane file descriptors, and close them
when the VM is freed.  Rewrite the previous test using the new
infrastructure (separate for easier review).
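[Editorial note: a small usage sketch of the helpers introduced below.  It
mirrors how the x86 test added in the next patch uses them; the function name
is made up and error handling is delegated to the TEST_ASSERT-based wrappers.]

	#include "kvm_util.h"

	static void plane_infra_smoke_test(void)
	{
		struct kvm_vm *vm = vm_create_barebones();
		struct kvm_vcpu *vcpu = __vm_vcpu_add(vm, 0);
		struct kvm_plane *plane = vm_plane_add(vm, 1);
		struct kvm_plane_vcpu *plane_vcpu = __vm_plane_vcpu_add(vcpu, plane);
		struct kvm_regs regs;

		/* Plane-1 vCPU state is reached through the plane_vcpu fd. */
		plane_vcpu_ioctl(plane_vcpu, KVM_GET_REGS, &regs);

		/* Plane and plane-vCPU fds are closed by kvm_vm_free(). */
		kvm_vm_free(vm);
	}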
Signed-off-by: Paolo Bonzini --- .../testing/selftests/kvm/include/kvm_util.h | 48 ++++++++++++++ tools/testing/selftests/kvm/lib/kvm_util.c | 65 ++++++++++++++++++- tools/testing/selftests/kvm/plane_test.c | 21 +++--- 3 files changed, 119 insertions(+), 15 deletions(-) diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h index 373912464fb4..c1dfe071357e 100644 --- a/tools/testing/selftests/kvm/include/kvm_util.h +++ b/tools/testing/selftests/kvm/include/kvm_util.h @@ -67,6 +67,20 @@ struct kvm_vcpu { uint32_t dirty_gfns_count; }; +struct kvm_plane { + struct list_head list; + uint32_t id; + int fd; + struct kvm_vm *vm; +}; + +struct kvm_plane_vcpu { + struct list_head list; + uint32_t id; + int fd; + struct kvm_vcpu *plane0; +}; + struct userspace_mem_regions { struct rb_root gpa_tree; struct rb_root hva_tree; @@ -93,6 +107,8 @@ struct kvm_vm { unsigned int va_bits; uint64_t max_gfn; struct list_head vcpus; + struct list_head planes; + struct list_head plane_vcpus; struct userspace_mem_regions regions; struct sparsebit *vpages_valid; struct sparsebit *vpages_mapped; @@ -338,6 +354,21 @@ do { \ __TEST_ASSERT_VM_VCPU_IOCTL(!ret, #cmd, ret, vm); \ }) +static __always_inline void static_assert_is_plane(struct kvm_plane *plane) { } + +#define __plane_ioctl(plane, cmd, arg) \ +({ \ + static_assert_is_plane(plane); \ + kvm_do_ioctl((plane)->fd, cmd, arg); \ +}) + +#define plane_ioctl(plane, cmd, arg) \ +({ \ + int ret = __plane_ioctl(plane, cmd, arg); \ + \ + __TEST_ASSERT_VM_VCPU_IOCTL(!ret, #cmd, ret, (plane)->vm); \ +}) + static __always_inline void static_assert_is_vcpu(struct kvm_vcpu *vcpu) { } #define __vcpu_ioctl(vcpu, cmd, arg) \ @@ -353,6 +384,21 @@ static __always_inline void static_assert_is_vcpu(struct kvm_vcpu *vcpu) { } __TEST_ASSERT_VM_VCPU_IOCTL(!ret, #cmd, ret, (vcpu)->vm); \ }) +static __always_inline void static_assert_is_plane_vcpu(struct kvm_plane_vcpu *plane_vcpu) { } + +#define __plane_vcpu_ioctl(plane_vcpu, cmd, arg) \ +({ \ + static_assert_is_plane_vcpu(plane_vcpu); \ + kvm_do_ioctl((plane_vcpu)->fd, cmd, arg); \ +}) + +#define plane_vcpu_ioctl(plane_vcpu, cmd, arg) \ +({ \ + int ret = __plane_vcpu_ioctl(plane_vcpu, cmd, arg); \ + \ + __TEST_ASSERT_VM_VCPU_IOCTL(!ret, #cmd, ret, (plane_vcpu)->plane0->vm); \ +}) + /* * Looks up and returns the value corresponding to the capability * (KVM_CAP_*) given by cap. 
@@ -601,6 +647,8 @@ void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags); void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa); void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot); struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id); +struct kvm_plane *vm_plane_add(struct kvm_vm *vm, int plane_id); +struct kvm_plane_vcpu *__vm_plane_vcpu_add(struct kvm_vcpu *vcpu, struct kvm_plane *plane); void vm_populate_vaddr_bitmap(struct kvm_vm *vm); vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min); vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min); diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index 815bc45dd8dc..a2f233945e1c 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -279,6 +279,8 @@ struct kvm_vm *____vm_create(struct vm_shape shape) TEST_ASSERT(vm != NULL, "Insufficient Memory"); INIT_LIST_HEAD(&vm->vcpus); + INIT_LIST_HEAD(&vm->planes); + INIT_LIST_HEAD(&vm->plane_vcpus); vm->regions.gpa_tree = RB_ROOT; vm->regions.hva_tree = RB_ROOT; hash_init(vm->regions.slot_hash); @@ -757,10 +759,22 @@ static void vm_vcpu_rm(struct kvm_vm *vm, struct kvm_vcpu *vcpu) void kvm_vm_release(struct kvm_vm *vmp) { - struct kvm_vcpu *vcpu, *tmp; + struct kvm_vcpu *vcpu, *tmp_vcpu; + struct kvm_plane_vcpu *plane_vcpu, *tmp_plane_vcpu; + struct kvm_plane *plane, *tmp_plane; int ret; - list_for_each_entry_safe(vcpu, tmp, &vmp->vcpus, list) + list_for_each_entry_safe(plane_vcpu, tmp_plane_vcpu, &vmp->plane_vcpus, list) { + close(plane_vcpu->fd); + free(plane_vcpu); + } + + list_for_each_entry_safe(plane, tmp_plane, &vmp->planes, list) { + close(plane->fd); + free(plane); + } + + list_for_each_entry_safe(vcpu, tmp_vcpu, &vmp->vcpus, list) vm_vcpu_rm(vmp, vcpu); ret = close(vmp->fd); @@ -1314,6 +1328,52 @@ static bool vcpu_exists(struct kvm_vm *vm, uint32_t vcpu_id) return false; } +/* + * Adds a virtual CPU to the VM specified by vm with the ID given by vcpu_id. + * No additional vCPU setup is done. Returns the vCPU. + */ +struct kvm_plane *vm_plane_add(struct kvm_vm *vm, int plane_id) +{ + struct kvm_plane *plane; + + /* Allocate and initialize new vcpu structure. */ + plane = calloc(1, sizeof(*plane)); + TEST_ASSERT(plane != NULL, "Insufficient Memory"); + + plane->fd = __vm_ioctl(vm, KVM_CREATE_PLANE, (void *)(unsigned long)plane_id); + TEST_ASSERT_VM_VCPU_IOCTL(plane->fd >= 0, KVM_CREATE_PLANE, plane->fd, vm); + plane->vm = vm; + plane->id = plane_id; + + /* Add to linked-list of extra-plane VCPUs. */ + list_add(&plane->list, &vm->planes); + + return plane; +} + +/* + * Adds a virtual CPU to the VM specified by vm with the ID given by vcpu_id. + * No additional vCPU setup is done. Returns the vCPU. + */ +struct kvm_plane_vcpu *__vm_plane_vcpu_add(struct kvm_vcpu *vcpu, struct kvm_plane *plane) +{ + struct kvm_plane_vcpu *plane_vcpu; + + /* Allocate and initialize new vcpu structure. */ + plane_vcpu = calloc(1, sizeof(*plane_vcpu)); + TEST_ASSERT(plane_vcpu != NULL, "Insufficient Memory"); + + plane_vcpu->fd = __plane_ioctl(plane, KVM_CREATE_VCPU_PLANE, (void *)(unsigned long)vcpu->fd); + TEST_ASSERT_VM_VCPU_IOCTL(plane_vcpu->fd >= 0, KVM_CREATE_VCPU_PLANE, plane_vcpu->fd, plane->vm); + plane_vcpu->id = vcpu->id; + plane_vcpu->plane0 = vcpu; + + /* Add to linked-list of extra-plane VCPUs. 
*/ + list_add(&plane_vcpu->list, &plane->vm->plane_vcpus); + + return plane_vcpu; +} + /* * Adds a virtual CPU to the VM specified by vm with the ID given by vcpu_id. * No additional vCPU setup is done. Returns the vCPU. @@ -2021,6 +2081,7 @@ static struct exit_reason { KVM_EXIT_STRING(NOTIFY), KVM_EXIT_STRING(LOONGARCH_IOCSR), KVM_EXIT_STRING(MEMORY_FAULT), + KVM_EXIT_STRING(PLANE_EVENT), }; /* diff --git a/tools/testing/selftests/kvm/plane_test.c b/tools/testing/selftests/kvm/plane_test.c index 43c8de13490a..9cf3ab76b3cd 100644 --- a/tools/testing/selftests/kvm/plane_test.c +++ b/tools/testing/selftests/kvm/plane_test.c @@ -47,20 +47,19 @@ void test_create_plane(void) { struct kvm_vm *vm; struct kvm_vcpu *vcpu; - int r, planefd, plane_vcpufd; + struct kvm_plane *plane; + int r; vm = vm_create_barebones(); vcpu = __vm_vcpu_add(vm, 0); - planefd = __vm_ioctl(vm, KVM_CREATE_PLANE, (void *)(unsigned long)1); - TEST_ASSERT(planefd >= 0, "Creating new plane, got error: %d", - errno); + plane = vm_plane_add(vm, 1); - r = ioctl(planefd, KVM_CHECK_EXTENSION, KVM_CAP_PLANES); + r = __plane_ioctl(plane, KVM_CHECK_EXTENSION, (void *)(unsigned long)KVM_CAP_PLANES); TEST_ASSERT(r == 0, "Checking KVM_CHECK_EXTENSION(KVM_CAP_PLANES). ret: %d", r); - r = ioctl(planefd, KVM_CHECK_EXTENSION, KVM_CAP_CHECK_EXTENSION_VM); + r = __plane_ioctl(plane, KVM_CHECK_EXTENSION, (void *)(unsigned long)KVM_CAP_CHECK_EXTENSION_VM); TEST_ASSERT(r == 1, "Checking KVM_CHECK_EXTENSION(KVM_CAP_CHECK_EXTENSION_VM). ret: %d", r); @@ -69,22 +68,18 @@ void test_create_plane(void) "Creating existing plane, expecting EEXIST. ret: %d, errno: %d", r, errno); - plane_vcpufd = ioctl(planefd, KVM_CREATE_VCPU_PLANE, (void *)(unsigned long)vcpu->fd); - TEST_ASSERT(plane_vcpufd >= 0, "Creating vCPU for plane 1, got error: %d", errno); + __vm_plane_vcpu_add(vcpu, plane); - r = ioctl(planefd, KVM_CREATE_VCPU_PLANE, (void *)(unsigned long)vcpu->fd); + r = __plane_ioctl(plane, KVM_CREATE_VCPU_PLANE, (void *)(unsigned long)vcpu->fd); TEST_ASSERT(r == -1 && errno == EEXIST, "Creating vCPU again for plane 1. ret: %d, errno: %d", r, errno); - r = ioctl(planefd, KVM_RUN, (void *)(unsigned long)0); + r = __plane_ioctl(plane, KVM_RUN, (void *)(unsigned long)0); TEST_ASSERT(r == -1 && errno == ENOTTY, "Running plane vCPU again for plane 1. 
ret: %d, errno: %d",
 		    r, errno);
 
-	close(plane_vcpufd);
-	close(planefd);
-
 	kvm_vm_free(vm);
 	ksft_test_result_pass("basic planefd and plane_vcpufd operation\n");
 }
From patchwork Tue Apr 1 16:11:06 2025
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 14035117
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: [PATCH 29/29] selftests: kvm: add x86-specific plane test
Date: Tue, 1 Apr 2025 18:11:06 +0200
Message-ID: <20250401161106.790710-30-pbonzini@redhat.com>
MIME-Version: 1.0

Add a new test for x86-specific behavior such as vCPU state sharing
and interrupts.
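[Editorial note: a hedged illustration, not part of the test below, of the
shape of a VMM run loop that uses req_exit_planes to be told when an interrupt
arrives for a higher-priority plane.  Field names follow the kvm_run usage in
the earlier "handle interrupt priorities" patch; the priority policy itself is
entirely up to userspace.]

	#include <sys/ioctl.h>
	#include <linux/kvm.h>	/* needs the uapi additions from this series */

	/* Run @plane on the plane-0 vCPU fd; return the plane to run next. */
	static int run_one_plane(int vcpu_fd, struct kvm_run *run,
				 int plane, __u16 higher_prio_planes)
	{
		run->plane = plane;			/* plane to enter */
		run->req_exit_planes = higher_prio_planes;

		if (ioctl(vcpu_fd, KVM_RUN, 0) < 0)
			return -1;

		if (run->exit_reason == KVM_EXIT_PLANE_EVENT &&
		    run->plane_event.cause == KVM_PLANE_EVENT_INTERRUPT) {
			/*
			 * Switch to the highest-numbered plane with a pending
			 * interrupt; ->target is nonzero for this exit.
			 */
			return 31 - __builtin_clz((unsigned int)run->plane_event.target);
		}

		/* Handle other exit reasons as usual, then keep running @plane. */
		return plane;
	}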
Signed-off-by: Paolo Bonzini
---
 tools/testing/selftests/kvm/Makefile.kvm      |   1 +
 .../selftests/kvm/include/x86/processor.h     |   1 +
 .../testing/selftests/kvm/lib/x86/processor.c |  15 +
 tools/testing/selftests/kvm/x86/plane_test.c  | 270 ++++++++++++++++++
 4 files changed, 287 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86/plane_test.c

diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
index b1d0b410cc03..9d94db9d750f 100644
--- a/tools/testing/selftests/kvm/Makefile.kvm
+++ b/tools/testing/selftests/kvm/Makefile.kvm
@@ -82,6 +82,7 @@ TEST_GEN_PROGS_x86 += x86/kvm_pv_test
 TEST_GEN_PROGS_x86 += x86/monitor_mwait_test
 TEST_GEN_PROGS_x86 += x86/nested_emulation_test
 TEST_GEN_PROGS_x86 += x86/nested_exceptions_test
+TEST_GEN_PROGS_x86 += x86/plane_test
 TEST_GEN_PROGS_x86 += x86/platform_info_test
 TEST_GEN_PROGS_x86 += x86/pmu_counters_test
 TEST_GEN_PROGS_x86 += x86/pmu_event_filter_test
diff --git a/tools/testing/selftests/kvm/include/x86/processor.h b/tools/testing/selftests/kvm/include/x86/processor.h
index 32ab6ca7ec32..cf2095f3a7d5 100644
--- a/tools/testing/selftests/kvm/include/x86/processor.h
+++ b/tools/testing/selftests/kvm/include/x86/processor.h
@@ -1106,6 +1106,7 @@ static inline void vcpu_clear_cpuid_feature(struct kvm_vcpu *vcpu,
 
 uint64_t vcpu_get_msr(struct kvm_vcpu *vcpu, uint64_t msr_index);
 int _vcpu_set_msr(struct kvm_vcpu *vcpu, uint64_t msr_index, uint64_t msr_value);
+int _plane_vcpu_set_msr(struct kvm_plane_vcpu *plane_vcpu, uint64_t msr_index, uint64_t msr_value);
 
 /*
  * Assert on an MSR access(es) and pretty print the MSR name when possible.
diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testing/selftests/kvm/lib/x86/processor.c
index bd5a802fa7a5..b4431ca7fbca 100644
--- a/tools/testing/selftests/kvm/lib/x86/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86/processor.c
@@ -917,6 +917,21 @@ uint64_t vcpu_get_msr(struct kvm_vcpu *vcpu, uint64_t msr_index)
 	return buffer.entry.data;
 }
 
+int _plane_vcpu_set_msr(struct kvm_plane_vcpu *plane_vcpu, uint64_t msr_index, uint64_t msr_value)
+{
+	struct {
+		struct kvm_msrs header;
+		struct kvm_msr_entry entry;
+	} buffer = {};
+
+	memset(&buffer, 0, sizeof(buffer));
+	buffer.header.nmsrs = 1;
+	buffer.entry.index = msr_index;
+	buffer.entry.data = msr_value;
+
+	return __plane_vcpu_ioctl(plane_vcpu, KVM_SET_MSRS, &buffer.header);
+}
+
 int _vcpu_set_msr(struct kvm_vcpu *vcpu, uint64_t msr_index, uint64_t msr_value)
 {
 	struct {
diff --git a/tools/testing/selftests/kvm/x86/plane_test.c b/tools/testing/selftests/kvm/x86/plane_test.c
new file mode 100644
index 000000000000..0fdd8a066723
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86/plane_test.c
@@ -0,0 +1,270 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2025 Red Hat, Inc.
+ *
+ * Test for x86-specific VM plane functionality
+ */
+#include 
+#include 
+#include 
+#include 
+
+#include "test_util.h"
+
+#include "kvm_util.h"
+#include "processor.h"
+#include "apic.h"
+#include "asm/kvm.h"
+#include "linux/kvm.h"
+
+static void test_plane_regs(void)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	struct kvm_plane *plane;
+	struct kvm_plane_vcpu *plane_vcpu;
+
+	struct kvm_regs regs0, regs1;
+
+	vm = vm_create_barebones();
+	vcpu = __vm_vcpu_add(vm, 0);
+	plane = vm_plane_add(vm, 1);
+	plane_vcpu = __vm_plane_vcpu_add(vcpu, plane);
+
+	vcpu_ioctl(vcpu, KVM_GET_REGS, &regs0);
+	plane_vcpu_ioctl(plane_vcpu, KVM_GET_REGS, &regs1);
+	regs0.rax = 0x12345678;
+	regs1.rax = 0x87654321;
+
+	vcpu_ioctl(vcpu, KVM_SET_REGS, &regs0);
+	plane_vcpu_ioctl(plane_vcpu, KVM_SET_REGS, &regs1);
+
+	vcpu_ioctl(vcpu, KVM_GET_REGS, &regs0);
+	plane_vcpu_ioctl(plane_vcpu, KVM_GET_REGS, &regs1);
+	TEST_ASSERT_EQ(regs0.rax, 0x12345678);
+	TEST_ASSERT_EQ(regs1.rax, 0x87654321);
+
+	kvm_vm_free(vm);
+	ksft_test_result_pass("get/set regs for planes\n");
+}
+
+/* Offset of XMM0 in the legacy XSAVE area. */
+#define XSTATE_BV_OFFSET	(0x200/4)
+#define XMM_OFFSET		(0xa0/4)
+#define PKRU_OFFSET		(0xa80/4)
+
+static void test_plane_fpu_nonshared(void)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	struct kvm_plane *plane;
+	struct kvm_plane_vcpu *plane_vcpu;
+
+	struct kvm_xsave xsave0, xsave1;
+
+	vm = vm_create_barebones();
+	TEST_ASSERT_EQ(vm_check_cap(vm, KVM_CAP_PLANES_FPU), false);
+
+	vcpu = __vm_vcpu_add(vm, 0);
+	vcpu_init_cpuid(vcpu, kvm_get_supported_cpuid());
+	vcpu_set_cpuid(vcpu);
+
+	plane = vm_plane_add(vm, 1);
+	plane_vcpu = __vm_plane_vcpu_add(vcpu, plane);
+
+	vcpu_ioctl(vcpu, KVM_GET_XSAVE, &xsave0);
+	xsave0.region[XSTATE_BV_OFFSET] |= XFEATURE_MASK_FP | XFEATURE_MASK_SSE;
+	xsave0.region[XMM_OFFSET] = 0x12345678;
+	vcpu_ioctl(vcpu, KVM_SET_XSAVE, &xsave0);
+
+	plane_vcpu_ioctl(plane_vcpu, KVM_GET_XSAVE, &xsave1);
+	xsave1.region[XSTATE_BV_OFFSET] |= XFEATURE_MASK_FP | XFEATURE_MASK_SSE;
+	xsave1.region[XMM_OFFSET] = 0x87654321;
+	plane_vcpu_ioctl(plane_vcpu, KVM_SET_XSAVE, &xsave1);
+
+	memset(&xsave0, 0, sizeof(xsave0));
+	vcpu_ioctl(vcpu, KVM_GET_XSAVE, &xsave0);
+	TEST_ASSERT_EQ(xsave0.region[XMM_OFFSET], 0x12345678);
+
+	memset(&xsave1, 0, sizeof(xsave0));
+	plane_vcpu_ioctl(plane_vcpu, KVM_GET_XSAVE, &xsave1);
+	TEST_ASSERT_EQ(xsave1.region[XMM_OFFSET], 0x87654321);
+
+	ksft_test_result_pass("get/set FPU not shared across planes\n");
+}
+
+static void test_plane_fpu_shared(void)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	struct kvm_plane *plane;
+	struct kvm_plane_vcpu *plane_vcpu;
+
+	struct kvm_xsave xsave0, xsave1;
+
+	vm = vm_create_barebones();
+	vm_enable_cap(vm, KVM_CAP_PLANES_FPU, 1ul);
+	TEST_ASSERT_EQ(vm_check_cap(vm, KVM_CAP_PLANES_FPU), true);
+
+	vcpu = __vm_vcpu_add(vm, 0);
+	vcpu_init_cpuid(vcpu, kvm_get_supported_cpuid());
+	vcpu_set_cpuid(vcpu);
+
+	plane = vm_plane_add(vm, 1);
+	plane_vcpu = __vm_plane_vcpu_add(vcpu, plane);
+
+	vcpu_ioctl(vcpu, KVM_GET_XSAVE, &xsave0);
+
+	xsave0.region[XSTATE_BV_OFFSET] |= XFEATURE_MASK_FP | XFEATURE_MASK_SSE;
+	xsave0.region[XMM_OFFSET] = 0x12345678;
+	vcpu_ioctl(vcpu, KVM_SET_XSAVE, &xsave0);
+	plane_vcpu_ioctl(plane_vcpu, KVM_GET_XSAVE, &xsave1);
+	TEST_ASSERT_EQ(xsave1.region[XMM_OFFSET], 0x12345678);
+
+	xsave1.region[XSTATE_BV_OFFSET] |= XFEATURE_MASK_FP | XFEATURE_MASK_SSE;
+	xsave1.region[XMM_OFFSET] = 0x87654321;
+	plane_vcpu_ioctl(plane_vcpu, KVM_SET_XSAVE, &xsave1);
+	vcpu_ioctl(vcpu, KVM_GET_XSAVE, &xsave0);
+	TEST_ASSERT_EQ(xsave0.region[XMM_OFFSET], 0x87654321);
+
+	ksft_test_result_pass("get/set FPU shared across planes\n");
+
+	if (!this_cpu_has(X86_FEATURE_PKU)) {
+		ksft_test_result_skip("get/set PKRU with shared FPU\n");
+		goto exit;
+	}
+
+	xsave0.region[XSTATE_BV_OFFSET] = XFEATURE_MASK_PKRU;
+	xsave0.region[PKRU_OFFSET] = 0xffffffff;
+	vcpu_ioctl(vcpu, KVM_SET_XSAVE, &xsave0);
+	plane_vcpu_ioctl(plane_vcpu, KVM_GET_XSAVE, &xsave0);
+
+	xsave0.region[XSTATE_BV_OFFSET] = XFEATURE_MASK_PKRU;
+	xsave0.region[PKRU_OFFSET] = 0xaaaaaaaa;
+	vcpu_ioctl(vcpu, KVM_SET_XSAVE, &xsave0);
+	plane_vcpu_ioctl(plane_vcpu, KVM_GET_XSAVE, &xsave1);
+	assert(xsave1.region[PKRU_OFFSET] == 0xffffffff);
+
+	xsave1.region[XSTATE_BV_OFFSET] = XFEATURE_MASK_PKRU;
+	xsave1.region[PKRU_OFFSET] = 0x55555555;
+	plane_vcpu_ioctl(plane_vcpu, KVM_SET_XSAVE, &xsave1);
+	vcpu_ioctl(vcpu, KVM_GET_XSAVE, &xsave0);
+	assert(xsave0.region[PKRU_OFFSET] == 0xaaaaaaaa);
+
+	ksft_test_result_pass("get/set PKRU with shared FPU\n");
+
+exit:
+	kvm_vm_free(vm);
+}
+
+#define APIC_SPIV	0xF0
+#define APIC_IRR	0x200
+
+#define MYVEC 192
+
+#define MAKE_MSI(cpu, vector) ((struct kvm_msi){			\
+	.address_lo = APIC_DEFAULT_GPA + (((cpu) & 0xff) << 8),		\
+	.address_hi = (cpu) & ~0xff,					\
+	.data = (vector),						\
+})
+
+static bool has_irr(struct kvm_lapic_state *apic, int vector)
+{
+	int word = vector >> 5;
+	int bit_in_word = vector & 31;
+	int bit = (APIC_IRR + word * 16) * CHAR_BIT + (bit_in_word & 31);
+
+	return apic->regs[bit >> 3] & (1 << (bit & 7));
+}
+
+static void do_enable_lapic(struct kvm_lapic_state *apic)
+{
+	/* set bit 8 */
+	apic->regs[APIC_SPIV + 1] |= 1;
+}
+
+static void test_plane_msi(void)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	struct kvm_plane *plane;
+	struct kvm_plane_vcpu *plane_vcpu;
+	int r;
+
+	struct kvm_msi msi = MAKE_MSI(0, MYVEC);
+	struct kvm_lapic_state lapic0, lapic1;
+
+	vm = __vm_create(VM_SHAPE_DEFAULT, 1, 0);
+
+	vcpu = __vm_vcpu_add(vm, 0);
+	vcpu_init_cpuid(vcpu, kvm_get_supported_cpuid());
+	vcpu_set_cpuid(vcpu);
+
+	plane = vm_plane_add(vm, 1);
+	plane_vcpu = __vm_plane_vcpu_add(vcpu, plane);
+
+	vcpu_set_msr(vcpu, MSR_IA32_APICBASE,
+		     APIC_DEFAULT_GPA | MSR_IA32_APICBASE_ENABLE | X2APIC_ENABLE);
+	vcpu_ioctl(vcpu, KVM_GET_LAPIC, &lapic0);
+	do_enable_lapic(&lapic0);
+	vcpu_ioctl(vcpu, KVM_SET_LAPIC, &lapic0);
+
+	_plane_vcpu_set_msr(plane_vcpu, MSR_IA32_APICBASE,
+			    APIC_DEFAULT_GPA | MSR_IA32_APICBASE_ENABLE | X2APIC_ENABLE);
+	plane_vcpu_ioctl(plane_vcpu, KVM_GET_LAPIC, &lapic1);
+	do_enable_lapic(&lapic1);
+	plane_vcpu_ioctl(plane_vcpu, KVM_SET_LAPIC, &lapic1);
+
+	r = __plane_ioctl(plane, KVM_SIGNAL_MSI, &msi);
+	TEST_ASSERT(r == 1,
+		    "Delivering interrupt to plane 1. ret: %d, errno: %d", r, errno);
+
+	vcpu_ioctl(vcpu, KVM_GET_LAPIC, &lapic0);
+	TEST_ASSERT(!has_irr(&lapic0, MYVEC), "Vector clear in plane 0");
+	plane_vcpu_ioctl(plane_vcpu, KVM_GET_LAPIC, &lapic1);
+	TEST_ASSERT(has_irr(&lapic1, MYVEC), "Vector set in plane 1");
+
+	/* req_exit_planes always has priority */
+	vcpu->run->req_exit_planes = (1 << 1);
+	vcpu_run(vcpu);
+	TEST_ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_PLANE_EVENT);
+	TEST_ASSERT_EQ(vcpu->run->plane_event.cause, KVM_PLANE_EVENT_INTERRUPT);
+	TEST_ASSERT_EQ(vcpu->run->plane_event.pending_event_planes, (1 << 1));
+	TEST_ASSERT_EQ(vcpu->run->plane_event.target, (1 << 1));
+
+	r = __vm_ioctl(vm, KVM_SIGNAL_MSI, &msi);
+	TEST_ASSERT(r == 1,
+		    "Delivering interrupt to plane 0. ret: %d, errno: %d", r, errno);
+	vcpu_ioctl(vcpu, KVM_GET_LAPIC, &lapic0);
+	TEST_ASSERT(has_irr(&lapic0, MYVEC), "Vector set in plane 0");
+
+	/* req_exit_planes ignores current plane; current plane is cleared */
+	vcpu->run->plane = 1;
+	vcpu->run->req_exit_planes = (1 << 0) | (1 << 1);
+	vcpu_run(vcpu);
+	TEST_ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_PLANE_EVENT);
+	TEST_ASSERT_EQ(vcpu->run->plane_event.cause, KVM_PLANE_EVENT_INTERRUPT);
+	TEST_ASSERT_EQ(vcpu->run->plane_event.pending_event_planes, (1 << 0));
+	TEST_ASSERT_EQ(vcpu->run->plane_event.target, (1 << 0));
+
+	kvm_vm_free(vm);
+	ksft_test_result_pass("signal MSI for planes\n");
+}
+
+int main(int argc, char *argv[])
+{
+	int cap_planes = kvm_check_cap(KVM_CAP_PLANES);
+	TEST_REQUIRE(cap_planes && cap_planes > 1);
+
+	ksft_print_header();
+	ksft_set_plan(5);
+
+	pr_info("# KVM_CAP_PLANES: %d\n", cap_planes);
+
+	test_plane_regs();
+	test_plane_fpu_nonshared();
+	test_plane_fpu_shared();
+	test_plane_msi();
+
+	ksft_finished();
+}
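One detail worth spelling out is the bit arithmetic in has_irr() above:
struct kvm_lapic_state mirrors the APIC register page, with each 32-bit
register occupying 16 bytes of regs[] at its architectural offset, so the
eight IRR registers start at offset 0x200. Below is a small stand-alone
sketch of the same byte/bit calculation for the vector the test uses;
irr_position() is an illustrative helper name, not something added by the
patch:

	#include <assert.h>
	#include <limits.h>

	#define APIC_IRR 0x200

	/* Return the byte index and bit position of 'vector' in the IRR. */
	static void irr_position(int vector, int *byte, int *bit)
	{
		int word = vector >> 5;		/* 32 IRR bits per register */
		int abs_bit = (APIC_IRR + word * 16) * CHAR_BIT + (vector & 31);

		*byte = abs_bit >> 3;		/* index into apic->regs[] */
		*bit = abs_bit & 7;
	}

	int main(void)
	{
		int byte, bit;

		irr_position(192, &byte, &bit);		/* MYVEC == 192 */
		assert(byte == 0x260 && bit == 0);	/* IRR word 6, bit 0 */
		return 0;
	}

For MYVEC == 192 this lands in IRR register 6, i.e. byte 0x260, bit 0 of
the saved APIC state, which is what the has_irr() assertions in
test_plane_msi() rely on.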