From patchwork Wed Aug 21 15:38:43 2024
X-Patchwork-Submitter: Steven Price
X-Patchwork-Id: 13771827
From: Steven Price <steven.price@arm.com>
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
 Oliver Upton, Suzuki K Poulose, Zenghui Yu,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 Joey Gouly, Alexandru Elisei, Christoffer Dall, Fuad Tabba,
 linux-coco@lists.linux.dev, Ganapatrao Kulkarni, Gavin Shan,
 Shanker Donthineni, Alper Gun
Subject: [PATCH v4 42/43] arm64: kvm: Expose support for private memory
Date: Wed, 21 Aug 2024 16:38:43 +0100
Message-Id: <20240821153844.60084-43-steven.price@arm.com>
In-Reply-To: <20240821153844.60084-1-steven.price@arm.com>
References: <20240821153844.60084-1-steven.price@arm.com>

Select KVM_GENERIC_PRIVATE_MEM and provide the necessary support
functions.

Signed-off-by: Steven Price <steven.price@arm.com>
---
Changes since v2:
 * Switch kvm_arch_has_private_mem() to a macro to avoid the overhead
   of a function call.
 * Guard the definitions of kvm_arch_{pre,post}_set_memory_attributes()
   with #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES.
 * Return early from kvm_arch_post_set_memory_attributes() if the
   WARN_ON fires.
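
For context, the userspace path that ends up in the hooks below is the
generic KVM memory-attributes UAPI. A minimal sketch follows (illustration
only, not part of the patch; it assumes a realm-capable vm_fd already
exists and 6.8+ UAPI headers):

/*
 * Illustration only, not part of this patch: flip a GPA range to
 * private via the generic memory-attributes UAPI. This is the call
 * that ends up in kvm_arch_post_set_memory_attributes() below, which
 * then unmaps the now-stale shared mapping from stage-2.
 * vm_fd is assumed to be an already-created (realm-capable) VM fd.
 */
#include <linux/kvm.h>
#include <sys/ioctl.h>

static int set_range_private(int vm_fd, __u64 gpa, __u64 size)
{
	struct kvm_memory_attributes attr = {
		.address    = gpa,
		.size       = size,
		.attributes = KVM_MEMORY_ATTRIBUTE_PRIVATE,
	};

	/* Returns 0 on success, -1/errno on failure. */
	return ioctl(vm_fd, KVM_SET_MEMORY_ATTRIBUTES, &attr);
}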
---
 arch/arm64/include/asm/kvm_host.h |  6 ++++++
 arch/arm64/kvm/Kconfig            |  1 +
 arch/arm64/kvm/mmu.c              | 22 ++++++++++++++++++++++
 3 files changed, 29 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 178e02dc4927..a669ff7b0958 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1367,6 +1367,12 @@ struct kvm *kvm_arch_alloc_vm(void);
 
 #define vcpu_is_protected(vcpu)	kvm_vm_is_protected((vcpu)->kvm)
 
+#ifdef CONFIG_KVM_PRIVATE_MEM
+#define kvm_arch_has_private_mem(kvm) ((kvm)->arch.is_realm)
+#else
+#define kvm_arch_has_private_mem(kvm) false
+#endif
+
 int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature);
 bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
 
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index 58f09370d17e..8da57e74c86a 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -37,6 +37,7 @@ menuconfig KVM
 	select HAVE_KVM_VCPU_RUN_PID_CHANGE
 	select SCHED_INFO
 	select GUEST_PERF_EVENTS if PERF_EVENTS
+	select KVM_GENERIC_PRIVATE_MEM
 	help
 	  Support hosting virtualized guest machines.
 
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 2110dd4c18db..c8bfeda9e1e6 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -2284,6 +2284,28 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 	return ret;
 }
 
+#ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
+bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm,
+					struct kvm_gfn_range *range)
+{
+	WARN_ON_ONCE(!kvm_arch_has_private_mem(kvm));
+	return false;
+}
+
+bool kvm_arch_post_set_memory_attributes(struct kvm *kvm,
+					 struct kvm_gfn_range *range)
+{
+	if (WARN_ON_ONCE(!kvm_arch_has_private_mem(kvm)))
+		return false;
+
+	if (range->arg.attributes & KVM_MEMORY_ATTRIBUTE_PRIVATE)
+		range->only_shared = true;
+	kvm_unmap_gfn_range(kvm, range);
+
+	return false;
+}
+#endif
+
 void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
 {
 }
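
A second sketch (again illustration only, not part of the patch): selecting
KVM_GENERIC_PRIVATE_MEM is what lets userspace back a memslot with
guest_memfd, so the private and shared views of a range come from different
backings. The names are the generic KVM UAPI; the VM fd, slot number and
addresses are assumed example values.

/*
 * Illustration only, not part of this patch: create a guest_memfd and
 * bind it to a memslot so the range has both a shared backing
 * (ordinary host memory) and a private backing (the guest_memfd).
 * vm_fd is assumed to be an existing VM fd; slot 0 and the caller's
 * gpa/size are arbitrary example values.
 */
#include <linux/kvm.h>
#include <sys/ioctl.h>

static int add_private_memslot(int vm_fd, void *shared_va,
			       __u64 gpa, __u64 size)
{
	struct kvm_create_guest_memfd gmem = {
		.size = size,
	};
	int gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);

	if (gmem_fd < 0)
		return gmem_fd;

	struct kvm_userspace_memory_region2 region = {
		.slot			= 0,
		.flags			= KVM_MEM_GUEST_MEMFD,
		.guest_phys_addr	= gpa,
		.memory_size		= size,
		/* Shared faults are served from ordinary host memory... */
		.userspace_addr		= (__u64)(unsigned long)shared_va,
		/* ...private faults are served from the guest_memfd. */
		.guest_memfd		= gmem_fd,
		.guest_memfd_offset	= 0,
	};

	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION2, &region);
}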