From patchwork Mon Dec 18 16:11:39 2023
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 13497244
Date: Mon, 18 Dec 2023 08:11:39 -0800
In-Reply-To: <20231218161146.3554657-1-pgonda@google.com>
Message-Id: <20231218161146.3554657-2-pgonda@google.com>
References: <20231218161146.3554657-1-pgonda@google.com>
Subject: [PATCH V7 1/8] KVM: selftests: Extend VM creation's @mode to allow control of VM subtype
From: Peter Gonda
To: kvm@vger.kernel.org
Cc: Peter Gonda, Paolo Bonzini, Sean Christopherson, Vishal Annapurve,
    Ackerley Tng, Andrew Jones, Tom Lendacky, Michael Roth

Carve out space in the @mode passed to the
various VM creation helpers to allow using the mode to control the subtype of VM, e.g. to identify x86's SEV VMs (which are "regular" VMs as far as KVM is concerned). Cc: Paolo Bonzini Cc: Sean Christopherson Cc: Vishal Annapurve Cc: Ackerley Tng Cc: Andrew Jones Cc: Tom Lendacky Cc: Michael Roth Signed-off-by: Peter Gonda Signed-off-by: Sean Christopherson --- .../selftests/kvm/include/kvm_util_base.h | 82 ++++++++++++------- tools/testing/selftests/kvm/lib/guest_modes.c | 2 +- tools/testing/selftests/kvm/lib/kvm_util.c | 34 ++++---- 3 files changed, 73 insertions(+), 45 deletions(-) diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h index a18db6a7b3cf..ca99cc41685d 100644 --- a/tools/testing/selftests/kvm/include/kvm_util_base.h +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h @@ -43,6 +43,48 @@ typedef uint64_t vm_paddr_t; /* Virtual Machine (Guest) physical address */ typedef uint64_t vm_vaddr_t; /* Virtual Machine (Guest) virtual address */ +enum vm_guest_mode { + VM_MODE_P52V48_4K, + VM_MODE_P52V48_64K, + VM_MODE_P48V48_4K, + VM_MODE_P48V48_16K, + VM_MODE_P48V48_64K, + VM_MODE_P40V48_4K, + VM_MODE_P40V48_16K, + VM_MODE_P40V48_64K, + VM_MODE_PXXV48_4K, /* For 48bits VA but ANY bits PA */ + VM_MODE_P47V64_4K, + VM_MODE_P44V64_4K, + VM_MODE_P36V48_4K, + VM_MODE_P36V48_16K, + VM_MODE_P36V48_64K, + VM_MODE_P36V47_16K, + NUM_VM_MODES, +}; + +enum vm_subtype { + VM_SUBTYPE_DEFAULT, + VM_SUBTYPE_SEV, + NUM_VM_SUBTYPES, +}; + +/* + * There are currently two flavors of "modes" that tests can control. The + * primary mode defines the physical and virtual address widths, and page sizes + * configured in hardware. The VM type allows creating alternative types of + * VMs, e.g. architecture specific flavors of protected VMs. + * + * Valid values for the primary mask are "enum vm_guest_mode", and valid values + * for the type mask are "enum vm_subtype". + */ +#define VM_MODE_PRIMARY_MASK GENMASK(7, 0) +#define VM_MODE_SUBTYPE_SHIFT 8 +#define VM_MODE_SUBTYPE_MASK GENMASK(15, 8) + +/* 8 bits in each mask above, i.e. 255 possible values */ +_Static_assert(NUM_VM_MODES < 256); +_Static_assert(NUM_VM_SUBTYPES < 256); + struct userspace_mem_region { struct kvm_userspace_memory_region region; struct sparsebit *unused_phy_pages; @@ -88,7 +130,8 @@ enum kvm_mem_region_type { }; struct kvm_vm { - int mode; + enum vm_guest_mode mode; + enum vm_subtype subtype; unsigned long type; int kvm_fd; int fd; @@ -169,28 +212,9 @@ static inline struct userspace_mem_region *vm_get_mem_region(struct kvm_vm *vm, #define DEFAULT_GUEST_STACK_VADDR_MIN 0xab6000 #define DEFAULT_STACK_PGS 5 -enum vm_guest_mode { - VM_MODE_P52V48_4K, - VM_MODE_P52V48_64K, - VM_MODE_P48V48_4K, - VM_MODE_P48V48_16K, - VM_MODE_P48V48_64K, - VM_MODE_P40V48_4K, - VM_MODE_P40V48_16K, - VM_MODE_P40V48_64K, - VM_MODE_PXXV48_4K, /* For 48bits VA but ANY bits PA */ - VM_MODE_P47V64_4K, - VM_MODE_P44V64_4K, - VM_MODE_P36V48_4K, - VM_MODE_P36V48_16K, - VM_MODE_P36V48_64K, - VM_MODE_P36V47_16K, - NUM_VM_MODES, -}; - #if defined(__aarch64__) -extern enum vm_guest_mode vm_mode_default; +extern uint32_t vm_mode_default; #define VM_MODE_DEFAULT vm_mode_default #define MIN_PAGE_SHIFT 12U @@ -713,8 +737,8 @@ vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm); * __vm_create() does NOT create vCPUs, @nr_runnable_vcpus is used purely to * calculate the amount of memory needed for per-vCPU data, e.g. stacks. 
*/ -struct kvm_vm *____vm_create(enum vm_guest_mode mode); -struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus, +struct kvm_vm *____vm_create(uint32_t mode); +struct kvm_vm *__vm_create(uint32_t mode, uint32_t nr_runnable_vcpus, uint64_t nr_extra_pages); static inline struct kvm_vm *vm_create_barebones(void) @@ -727,7 +751,7 @@ static inline struct kvm_vm *vm_create(uint32_t nr_runnable_vcpus) return __vm_create(VM_MODE_DEFAULT, nr_runnable_vcpus, 0); } -struct kvm_vm *__vm_create_with_vcpus(enum vm_guest_mode mode, uint32_t nr_vcpus, +struct kvm_vm *__vm_create_with_vcpus(uint32_t mode, uint32_t nr_vcpus, uint64_t extra_mem_pages, void *guest_code, struct kvm_vcpu *vcpus[]); @@ -761,11 +785,11 @@ void kvm_parse_vcpu_pinning(const char *pcpus_string, uint32_t vcpu_to_pcpu[], int nr_vcpus); unsigned long vm_compute_max_gfn(struct kvm_vm *vm); -unsigned int vm_calc_num_guest_pages(enum vm_guest_mode mode, size_t size); -unsigned int vm_num_host_pages(enum vm_guest_mode mode, unsigned int num_guest_pages); -unsigned int vm_num_guest_pages(enum vm_guest_mode mode, unsigned int num_host_pages); -static inline unsigned int -vm_adjust_num_guest_pages(enum vm_guest_mode mode, unsigned int num_guest_pages) +unsigned int vm_calc_num_guest_pages(uint32_t mode, size_t size); +unsigned int vm_num_host_pages(uint32_t mode, unsigned int num_guest_pages); +unsigned int vm_num_guest_pages(uint32_t mode, unsigned int num_host_pages); +static inline unsigned int vm_adjust_num_guest_pages(uint32_t mode, + unsigned int num_guest_pages) { unsigned int n; n = vm_num_guest_pages(mode, vm_num_host_pages(mode, num_guest_pages)); diff --git a/tools/testing/selftests/kvm/lib/guest_modes.c b/tools/testing/selftests/kvm/lib/guest_modes.c index 1df3ce4b16fd..0f6f2e2200b0 100644 --- a/tools/testing/selftests/kvm/lib/guest_modes.c +++ b/tools/testing/selftests/kvm/lib/guest_modes.c @@ -6,7 +6,7 @@ #ifdef __aarch64__ #include "processor.h" -enum vm_guest_mode vm_mode_default; +uint32_t vm_mode_default; #endif struct guest_mode guest_modes[NUM_VM_MODES]; diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index 7a8af1821f5d..bb8bbebbd935 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -209,7 +209,7 @@ __weak void vm_vaddr_populate_bitmap(struct kvm_vm *vm) (1ULL << (vm->va_bits - 1)) >> vm->page_shift); } -struct kvm_vm *____vm_create(enum vm_guest_mode mode) +struct kvm_vm *____vm_create(uint32_t mode) { struct kvm_vm *vm; @@ -221,13 +221,16 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode) vm->regions.hva_tree = RB_ROOT; hash_init(vm->regions.slot_hash); - vm->mode = mode; vm->type = 0; + vm->subtype = (mode & VM_MODE_SUBTYPE_MASK) >> VM_MODE_SUBTYPE_SHIFT; + vm->mode = mode & VM_MODE_PRIMARY_MASK; + pr_debug("%s: mode='%s'\n", __func__, vm_guest_mode_string(vm->mode)); - vm->pa_bits = vm_guest_mode_params[mode].pa_bits; - vm->va_bits = vm_guest_mode_params[mode].va_bits; - vm->page_size = vm_guest_mode_params[mode].page_size; - vm->page_shift = vm_guest_mode_params[mode].page_shift; + + vm->pa_bits = vm_guest_mode_params[vm->mode].pa_bits; + vm->va_bits = vm_guest_mode_params[vm->mode].va_bits; + vm->page_size = vm_guest_mode_params[vm->mode].page_size; + vm->page_shift = vm_guest_mode_params[vm->mode].page_shift; /* Setup mode specific traits. 
 */
	switch (vm->mode) {
@@ -285,7 +288,7 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode)
 		vm->pgtable_levels = 5;
 		break;
 	default:
-		TEST_FAIL("Unknown guest mode, mode: 0x%x", mode);
+		TEST_FAIL("Unknown guest mode, mode: 0x%x", vm->mode);
 	}

 #ifdef __aarch64__
@@ -308,7 +311,7 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode)
 	return vm;
 }

-static uint64_t vm_nr_pages_required(enum vm_guest_mode mode,
+static uint64_t vm_nr_pages_required(uint32_t mode,
 				     uint32_t nr_runnable_vcpus,
 				     uint64_t extra_mem_pages)
 {
@@ -347,17 +350,18 @@ static uint64_t vm_nr_pages_required(enum vm_guest_mode mode,
 	return vm_adjust_num_guest_pages(mode, nr_pages);
 }

-struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus,
+struct kvm_vm *__vm_create(uint32_t mode, uint32_t nr_runnable_vcpus,
 			   uint64_t nr_extra_pages)
 {
-	uint64_t nr_pages = vm_nr_pages_required(mode, nr_runnable_vcpus,
+	uint32_t primary_mode = mode & VM_MODE_PRIMARY_MASK;
+	uint64_t nr_pages = vm_nr_pages_required(primary_mode, nr_runnable_vcpus,
 						 nr_extra_pages);
 	struct userspace_mem_region *slot0;
 	struct kvm_vm *vm;
 	int i;

 	pr_debug("%s: mode='%s' pages='%ld'\n", __func__,
-		 vm_guest_mode_string(mode), nr_pages);
+		 vm_guest_mode_string(primary_mode), nr_pages);

 	vm = ____vm_create(mode);

@@ -400,7 +404,7 @@ struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus,
  * extra_mem_pages is only used to calculate the maximum page table size,
  * no real memory allocation for non-slot0 memory in this function.
  */
-struct kvm_vm *__vm_create_with_vcpus(enum vm_guest_mode mode, uint32_t nr_vcpus,
+struct kvm_vm *__vm_create_with_vcpus(uint32_t mode, uint32_t nr_vcpus,
 				      uint64_t extra_mem_pages,
 				      void *guest_code, struct kvm_vcpu *vcpus[])
 {
@@ -2030,7 +2034,7 @@ static inline int getpageshift(void)
 }

 unsigned int
-vm_num_host_pages(enum vm_guest_mode mode, unsigned int num_guest_pages)
+vm_num_host_pages(uint32_t mode, unsigned int num_guest_pages)
 {
 	return vm_calc_num_pages(num_guest_pages,
 				 vm_guest_mode_params[mode].page_shift,
@@ -2038,13 +2042,13 @@ vm_num_host_pages(enum vm_guest_mode mode, unsigned int num_guest_pages)
 }

 unsigned int
-vm_num_guest_pages(enum vm_guest_mode mode, unsigned int num_host_pages)
+vm_num_guest_pages(uint32_t mode, unsigned int num_host_pages)
 {
 	return vm_calc_num_pages(num_host_pages, getpageshift(),
 				 vm_guest_mode_params[mode].page_shift, false);
 }

-unsigned int vm_calc_num_guest_pages(enum vm_guest_mode mode, size_t size)
+unsigned int vm_calc_num_guest_pages(uint32_t mode, size_t size)
 {
 	unsigned int n;
 	n = DIV_ROUND_UP(size, vm_guest_mode_params[mode].page_size);
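To illustrate the packed encoding, here is a minimal caller-side sketch. The helper name is hypothetical, and the x86 wiring that actually consumes VM_SUBTYPE_SEV lands later in this series:

/* Hypothetical helper: pack a subtype into the upper byte of @mode. */
static struct kvm_vm *create_vm_with_subtype(enum vm_subtype subtype,
					     uint32_t nr_vcpus)
{
	/* Bits 7:0 select the paging geometry, bits 15:8 the subtype. */
	uint32_t mode = VM_MODE_DEFAULT |
			((uint32_t)subtype << VM_MODE_SUBTYPE_SHIFT);

	return __vm_create(mode, nr_vcpus, 0);
}

____vm_create() splits the value back apart with VM_MODE_PRIMARY_MASK and VM_MODE_SUBTYPE_MASK, so existing callers that pass a bare enum vm_guest_mode keep working unchanged.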
From patchwork Mon Dec 18 16:11:40 2023
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 13497245
Date: Mon, 18 Dec 2023 08:11:40 -0800
In-Reply-To: <20231218161146.3554657-1-pgonda@google.com>
Message-Id: <20231218161146.3554657-3-pgonda@google.com>
References: <20231218161146.3554657-1-pgonda@google.com>
Subject: [PATCH V7 2/8] KVM: selftests: Make sparsebit structs const where appropriate
From: Peter Gonda
To: kvm@vger.kernel.org
Cc: Michael Roth, Paolo Bonzini, Sean Christopherson, Vishal Annapurve,
    Ackerley Tng, Andrew Jones, Tom Lendacky, Peter Gonda

From: Michael Roth

Subsequent patches will introduce an encryption bitmap in kvm_util that
tests should be able to access in read-only fashion. This will be done
via a const sparsebit*. To avoid warnings or the need to add casts
everywhere, add const to the various sparsebit functions that are
applicable for read-only usage of sparsebit.
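As an illustration of the read-only usage this enables, a sketch of a consumer that inspects a bitmap through a const pointer with no casts (the function itself is hypothetical; the accessors are the ones const-ified below):

/* Hypothetical read-only consumer of a sparsebit owned by kvm_util. */
static bool range_fully_set(const struct sparsebit *sbit,
			    sparsebit_idx_t idx, sparsebit_num_t num)
{
	return sparsebit_any_set(sbit) && sparsebit_is_set_num(sbit, idx, num);
}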
Cc: Paolo Bonzini Cc: Sean Christopherson Cc: Vishal Annapurve Cc: Ackerley Tng Cc: Andrew Jones Cc: Tom Lendacky Cc: Michael Roth Signed-off-by: Michael Roth Signed-off-by: Peter Gonda --- .../testing/selftests/kvm/include/sparsebit.h | 36 +++++++------- tools/testing/selftests/kvm/lib/sparsebit.c | 48 +++++++++---------- 2 files changed, 42 insertions(+), 42 deletions(-) diff --git a/tools/testing/selftests/kvm/include/sparsebit.h b/tools/testing/selftests/kvm/include/sparsebit.h index 12a9a4b9cead..fb5170d57fcb 100644 --- a/tools/testing/selftests/kvm/include/sparsebit.h +++ b/tools/testing/selftests/kvm/include/sparsebit.h @@ -30,26 +30,26 @@ typedef uint64_t sparsebit_num_t; struct sparsebit *sparsebit_alloc(void); void sparsebit_free(struct sparsebit **sbitp); -void sparsebit_copy(struct sparsebit *dstp, struct sparsebit *src); +void sparsebit_copy(struct sparsebit *dstp, const struct sparsebit *src); -bool sparsebit_is_set(struct sparsebit *sbit, sparsebit_idx_t idx); -bool sparsebit_is_set_num(struct sparsebit *sbit, +bool sparsebit_is_set(const struct sparsebit *sbit, sparsebit_idx_t idx); +bool sparsebit_is_set_num(const struct sparsebit *sbit, sparsebit_idx_t idx, sparsebit_num_t num); -bool sparsebit_is_clear(struct sparsebit *sbit, sparsebit_idx_t idx); -bool sparsebit_is_clear_num(struct sparsebit *sbit, +bool sparsebit_is_clear(const struct sparsebit *sbit, sparsebit_idx_t idx); +bool sparsebit_is_clear_num(const struct sparsebit *sbit, sparsebit_idx_t idx, sparsebit_num_t num); -sparsebit_num_t sparsebit_num_set(struct sparsebit *sbit); -bool sparsebit_any_set(struct sparsebit *sbit); -bool sparsebit_any_clear(struct sparsebit *sbit); -bool sparsebit_all_set(struct sparsebit *sbit); -bool sparsebit_all_clear(struct sparsebit *sbit); -sparsebit_idx_t sparsebit_first_set(struct sparsebit *sbit); -sparsebit_idx_t sparsebit_first_clear(struct sparsebit *sbit); -sparsebit_idx_t sparsebit_next_set(struct sparsebit *sbit, sparsebit_idx_t prev); -sparsebit_idx_t sparsebit_next_clear(struct sparsebit *sbit, sparsebit_idx_t prev); -sparsebit_idx_t sparsebit_next_set_num(struct sparsebit *sbit, +sparsebit_num_t sparsebit_num_set(const struct sparsebit *sbit); +bool sparsebit_any_set(const struct sparsebit *sbit); +bool sparsebit_any_clear(const struct sparsebit *sbit); +bool sparsebit_all_set(const struct sparsebit *sbit); +bool sparsebit_all_clear(const struct sparsebit *sbit); +sparsebit_idx_t sparsebit_first_set(const struct sparsebit *sbit); +sparsebit_idx_t sparsebit_first_clear(const struct sparsebit *sbit); +sparsebit_idx_t sparsebit_next_set(const struct sparsebit *sbit, sparsebit_idx_t prev); +sparsebit_idx_t sparsebit_next_clear(const struct sparsebit *sbit, sparsebit_idx_t prev); +sparsebit_idx_t sparsebit_next_set_num(const struct sparsebit *sbit, sparsebit_idx_t start, sparsebit_num_t num); -sparsebit_idx_t sparsebit_next_clear_num(struct sparsebit *sbit, +sparsebit_idx_t sparsebit_next_clear_num(const struct sparsebit *sbit, sparsebit_idx_t start, sparsebit_num_t num); void sparsebit_set(struct sparsebit *sbitp, sparsebit_idx_t idx); @@ -62,9 +62,9 @@ void sparsebit_clear_num(struct sparsebit *sbitp, sparsebit_idx_t start, sparsebit_num_t num); void sparsebit_clear_all(struct sparsebit *sbitp); -void sparsebit_dump(FILE *stream, struct sparsebit *sbit, +void sparsebit_dump(FILE *stream, const struct sparsebit *sbit, unsigned int indent); -void sparsebit_validate_internal(struct sparsebit *sbit); +void sparsebit_validate_internal(const struct sparsebit *sbit); #ifdef 
__cplusplus } diff --git a/tools/testing/selftests/kvm/lib/sparsebit.c b/tools/testing/selftests/kvm/lib/sparsebit.c index 88cb6b84e6f3..cfed9d26cc71 100644 --- a/tools/testing/selftests/kvm/lib/sparsebit.c +++ b/tools/testing/selftests/kvm/lib/sparsebit.c @@ -202,7 +202,7 @@ static sparsebit_num_t node_num_set(struct node *nodep) /* Returns a pointer to the node that describes the * lowest bit index. */ -static struct node *node_first(struct sparsebit *s) +static struct node *node_first(const struct sparsebit *s) { struct node *nodep; @@ -216,7 +216,7 @@ static struct node *node_first(struct sparsebit *s) * lowest bit index > the index of the node pointed to by np. * Returns NULL if no node with a higher index exists. */ -static struct node *node_next(struct sparsebit *s, struct node *np) +static struct node *node_next(const struct sparsebit *s, struct node *np) { struct node *nodep = np; @@ -244,7 +244,7 @@ static struct node *node_next(struct sparsebit *s, struct node *np) * highest index < the index of the node pointed to by np. * Returns NULL if no node with a lower index exists. */ -static struct node *node_prev(struct sparsebit *s, struct node *np) +static struct node *node_prev(const struct sparsebit *s, struct node *np) { struct node *nodep = np; @@ -273,7 +273,7 @@ static struct node *node_prev(struct sparsebit *s, struct node *np) * subtree and duplicates the bit settings to the newly allocated nodes. * Returns the newly allocated copy of subtree. */ -static struct node *node_copy_subtree(struct node *subtree) +static struct node *node_copy_subtree(const struct node *subtree) { struct node *root; @@ -307,7 +307,7 @@ static struct node *node_copy_subtree(struct node *subtree) * index is within the bits described by the mask bits or the number of * contiguous bits set after the mask. Returns NULL if there is no such node. */ -static struct node *node_find(struct sparsebit *s, sparsebit_idx_t idx) +static struct node *node_find(const struct sparsebit *s, sparsebit_idx_t idx) { struct node *nodep; @@ -393,7 +393,7 @@ static struct node *node_add(struct sparsebit *s, sparsebit_idx_t idx) } /* Returns whether all the bits in the sparsebit array are set. */ -bool sparsebit_all_set(struct sparsebit *s) +bool sparsebit_all_set(const struct sparsebit *s) { /* * If any nodes there must be at least one bit set. Only case @@ -775,7 +775,7 @@ static void node_reduce(struct sparsebit *s, struct node *nodep) /* Returns whether the bit at the index given by idx, within the * sparsebit array is set or not. */ -bool sparsebit_is_set(struct sparsebit *s, sparsebit_idx_t idx) +bool sparsebit_is_set(const struct sparsebit *s, sparsebit_idx_t idx) { struct node *nodep; @@ -921,7 +921,7 @@ static inline sparsebit_idx_t node_first_clear(struct node *nodep, int start) * used by test cases after they detect an unexpected condition, as a means * to capture diagnostic information. */ -static void sparsebit_dump_internal(FILE *stream, struct sparsebit *s, +static void sparsebit_dump_internal(FILE *stream, const struct sparsebit *s, unsigned int indent) { /* Dump the contents of s */ @@ -969,7 +969,7 @@ void sparsebit_free(struct sparsebit **sbitp) * sparsebit_alloc(). It can though already have bits set, which * if different from src will be cleared. 
*/ -void sparsebit_copy(struct sparsebit *d, struct sparsebit *s) +void sparsebit_copy(struct sparsebit *d, const struct sparsebit *s) { /* First clear any bits already set in the destination */ sparsebit_clear_all(d); @@ -981,7 +981,7 @@ void sparsebit_copy(struct sparsebit *d, struct sparsebit *s) } /* Returns whether num consecutive bits starting at idx are all set. */ -bool sparsebit_is_set_num(struct sparsebit *s, +bool sparsebit_is_set_num(const struct sparsebit *s, sparsebit_idx_t idx, sparsebit_num_t num) { sparsebit_idx_t next_cleared; @@ -1005,14 +1005,14 @@ bool sparsebit_is_set_num(struct sparsebit *s, } /* Returns whether the bit at the index given by idx. */ -bool sparsebit_is_clear(struct sparsebit *s, +bool sparsebit_is_clear(const struct sparsebit *s, sparsebit_idx_t idx) { return !sparsebit_is_set(s, idx); } /* Returns whether num consecutive bits starting at idx are all cleared. */ -bool sparsebit_is_clear_num(struct sparsebit *s, +bool sparsebit_is_clear_num(const struct sparsebit *s, sparsebit_idx_t idx, sparsebit_num_t num) { sparsebit_idx_t next_set; @@ -1041,13 +1041,13 @@ bool sparsebit_is_clear_num(struct sparsebit *s, * value. Use sparsebit_any_set(), instead of sparsebit_num_set() > 0, * to determine if the sparsebit array has any bits set. */ -sparsebit_num_t sparsebit_num_set(struct sparsebit *s) +sparsebit_num_t sparsebit_num_set(const struct sparsebit *s) { return s->num_set; } /* Returns whether any bit is set in the sparsebit array. */ -bool sparsebit_any_set(struct sparsebit *s) +bool sparsebit_any_set(const struct sparsebit *s) { /* * Nodes only describe set bits. If any nodes then there @@ -1070,20 +1070,20 @@ bool sparsebit_any_set(struct sparsebit *s) } /* Returns whether all the bits in the sparsebit array are cleared. */ -bool sparsebit_all_clear(struct sparsebit *s) +bool sparsebit_all_clear(const struct sparsebit *s) { return !sparsebit_any_set(s); } /* Returns whether all the bits in the sparsebit array are set. */ -bool sparsebit_any_clear(struct sparsebit *s) +bool sparsebit_any_clear(const struct sparsebit *s) { return !sparsebit_all_set(s); } /* Returns the index of the first set bit. Abort if no bits are set. */ -sparsebit_idx_t sparsebit_first_set(struct sparsebit *s) +sparsebit_idx_t sparsebit_first_set(const struct sparsebit *s) { struct node *nodep; @@ -1097,7 +1097,7 @@ sparsebit_idx_t sparsebit_first_set(struct sparsebit *s) /* Returns the index of the first cleared bit. Abort if * no bits are cleared. */ -sparsebit_idx_t sparsebit_first_clear(struct sparsebit *s) +sparsebit_idx_t sparsebit_first_clear(const struct sparsebit *s) { struct node *nodep1, *nodep2; @@ -1151,7 +1151,7 @@ sparsebit_idx_t sparsebit_first_clear(struct sparsebit *s) /* Returns index of next bit set within s after the index given by prev. * Returns 0 if there are no bits after prev that are set. */ -sparsebit_idx_t sparsebit_next_set(struct sparsebit *s, +sparsebit_idx_t sparsebit_next_set(const struct sparsebit *s, sparsebit_idx_t prev) { sparsebit_idx_t lowest_possible = prev + 1; @@ -1244,7 +1244,7 @@ sparsebit_idx_t sparsebit_next_set(struct sparsebit *s, /* Returns index of next bit cleared within s after the index given by prev. * Returns 0 if there are no bits after prev that are cleared. 
 */
-sparsebit_idx_t sparsebit_next_clear(struct sparsebit *s,
+sparsebit_idx_t sparsebit_next_clear(const struct sparsebit *s,
 	sparsebit_idx_t prev)
 {
 	sparsebit_idx_t lowest_possible = prev + 1;
@@ -1300,7 +1300,7 @@ sparsebit_idx_t sparsebit_next_clear(struct sparsebit *s,
  * and returns the index of the first sequence of num consecutively set
  * bits. Returns a value of 0 of no such sequence exists.
  */
-sparsebit_idx_t sparsebit_next_set_num(struct sparsebit *s,
+sparsebit_idx_t sparsebit_next_set_num(const struct sparsebit *s,
 	sparsebit_idx_t start, sparsebit_num_t num)
 {
 	sparsebit_idx_t idx;
@@ -1335,7 +1335,7 @@ sparsebit_idx_t sparsebit_next_set_num(struct sparsebit *s,
  * and returns the index of the first sequence of num consecutively cleared
  * bits. Returns a value of 0 of no such sequence exists.
  */
-sparsebit_idx_t sparsebit_next_clear_num(struct sparsebit *s,
+sparsebit_idx_t sparsebit_next_clear_num(const struct sparsebit *s,
 	sparsebit_idx_t start, sparsebit_num_t num)
 {
 	sparsebit_idx_t idx;
@@ -1583,7 +1583,7 @@ static size_t display_range(FILE *stream, sparsebit_idx_t low,
  * contiguous bits. This is done because '-' is used to specify command-line
  * options, and sometimes ranges are specified as command-line arguments.
  */
-void sparsebit_dump(FILE *stream, struct sparsebit *s,
+void sparsebit_dump(FILE *stream, const struct sparsebit *s,
 	unsigned int indent)
 {
 	size_t current_line_len = 0;
@@ -1681,7 +1681,7 @@ void sparsebit_dump(FILE *stream, struct sparsebit *s,
  * s. On error, diagnostic information is printed to stderr and
  * abort is called.
  */
-void sparsebit_validate_internal(struct sparsebit *s)
+void sparsebit_validate_internal(const struct sparsebit *s)
 {
 	bool error_detected = false;
 	struct node *nodep, *prev = NULL;
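A usage sketch of the now const-qualified iteration API, relying only on the documented contract that sparsebit_next_set() returns 0 when no set bit follows @prev (the helper name is hypothetical):

/* Hypothetical helper: count set bits by walking them read-only. */
static sparsebit_num_t count_set_by_walking(const struct sparsebit *sbit)
{
	sparsebit_num_t n = 0;
	sparsebit_idx_t idx;

	if (!sparsebit_any_set(sbit))
		return 0;

	idx = sparsebit_first_set(sbit);
	for (;;) {
		n++;
		idx = sparsebit_next_set(sbit, idx);
		if (!idx)
			break;
	}
	return n;
}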
From patchwork Mon Dec 18 16:11:41 2023
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 13497246
Date: Mon, 18 Dec 2023 08:11:41 -0800
In-Reply-To: <20231218161146.3554657-1-pgonda@google.com>
Message-Id: <20231218161146.3554657-4-pgonda@google.com>
References: <20231218161146.3554657-1-pgonda@google.com>
Subject: [PATCH V7 3/8] KVM: selftests: add hooks for managing protected guest memory
From: Peter Gonda
To: kvm@vger.kernel.org
Cc: Peter Gonda, Paolo Bonzini, Sean Christopherson, Vishal Annapurve,
    Ackerley Tng, Andrew Jones, Tom Lendacky, Michael Roth

Add kvm_vm.protected metadata. A protected VM's memory, and potentially
its registers and other state, may not be accessible to KVM. Combined
with a new protected_phy_pages bitmap, this will allow the selftests to
check whether a given page is accessible.
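A sketch of the kind of check this metadata is meant to enable (the assertion helper is hypothetical; a later patch in this series wraps the same logic up as vm_is_gpa_protected()):

/* Hypothetical helper: fail the test if the host touches a protected page. */
static void assert_gpa_host_accessible(struct kvm_vm *vm,
				       struct userspace_mem_region *region,
				       vm_paddr_t gpa)
{
	sparsebit_idx_t pg = gpa >> vm->page_shift;

	TEST_ASSERT(!vm->protected ||
		    !sparsebit_is_set(region->protected_phy_pages, pg),
		    "GPA 0x%lx is protected; host access is not safe", gpa);
}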
Cc: Paolo Bonzini Cc: Sean Christopherson Cc: Vishal Annapurve Cc: Ackerley Tng cc: Andrew Jones Cc: Tom Lendacky Cc: Michael Roth Originally-by: Michael Roth Signed-off-by: Peter Gonda --- .../selftests/kvm/include/kvm_util_base.h | 15 +++++++++++++-- tools/testing/selftests/kvm/lib/kvm_util.c | 16 +++++++++++++--- 2 files changed, 26 insertions(+), 5 deletions(-) diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h index ca99cc41685d..71c0ed6a1197 100644 --- a/tools/testing/selftests/kvm/include/kvm_util_base.h +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h @@ -88,6 +88,7 @@ _Static_assert(NUM_VM_SUBTYPES < 256); struct userspace_mem_region { struct kvm_userspace_memory_region region; struct sparsebit *unused_phy_pages; + struct sparsebit *protected_phy_pages; int fd; off_t offset; enum vm_mem_backing_src_type backing_src_type; @@ -155,6 +156,9 @@ struct kvm_vm { vm_vaddr_t handlers; uint32_t dirty_ring_size; + /* VM protection enabled: SEV, etc*/ + bool protected; + /* Cache of information for binary stats interface */ int stats_fd; struct kvm_stats_header stats_header; @@ -727,10 +731,17 @@ const char *exit_reason_str(unsigned int exit_reason); vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min, uint32_t memslot); -vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num, - vm_paddr_t paddr_min, uint32_t memslot); +vm_paddr_t __vm_phy_pages_alloc(struct kvm_vm *vm, size_t num, + vm_paddr_t paddr_min, uint32_t memslot, + bool protected); vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm); +static inline vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num, + vm_paddr_t paddr_min, uint32_t memslot) +{ + return __vm_phy_pages_alloc(vm, num, paddr_min, memslot, vm->protected); +} + /* * ____vm_create() does KVM_CREATE_VM and little else. __vm_create() also * loads the test binary into guest memory and creates an IRQ chip (x86 only). diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index bb8bbebbd935..6b94b84ce2e0 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -693,6 +693,7 @@ static void __vm_mem_region_delete(struct kvm_vm *vm, vm_ioctl(vm, KVM_SET_USER_MEMORY_REGION, ®ion->region); sparsebit_free(®ion->unused_phy_pages); + sparsebit_free(®ion->protected_phy_pages); ret = munmap(region->mmap_start, region->mmap_size); TEST_ASSERT(!ret, __KVM_SYSCALL_ERROR("munmap()", ret)); if (region->fd >= 0) { @@ -1040,6 +1041,7 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm, region->backing_src_type = src_type; region->unused_phy_pages = sparsebit_alloc(); + region->protected_phy_pages = sparsebit_alloc(); sparsebit_set_num(region->unused_phy_pages, guest_paddr >> vm->page_shift, npages); region->region.slot = slot; @@ -1829,6 +1831,10 @@ void vm_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent) region->host_mem); fprintf(stream, "%*sunused_phy_pages: ", indent + 2, ""); sparsebit_dump(stream, region->unused_phy_pages, 0); + if (vm->protected) { + fprintf(stream, "%*sprotected_phy_pages: ", indent + 2, ""); + sparsebit_dump(stream, region->protected_phy_pages, 0); + } } fprintf(stream, "%*sMapped Virtual Pages:\n", indent, ""); sparsebit_dump(stream, vm->vpages_mapped, indent + 2); @@ -1941,8 +1947,9 @@ const char *exit_reason_str(unsigned int exit_reason) * and their base address is returned. 
A TEST_ASSERT failure occurs if
 * not enough pages are available at or above paddr_min.
 */
-vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
-			      vm_paddr_t paddr_min, uint32_t memslot)
+vm_paddr_t __vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
+				vm_paddr_t paddr_min, uint32_t memslot,
+				bool protected)
 {
 	struct userspace_mem_region *region;
 	sparsebit_idx_t pg, base;
@@ -1975,8 +1982,11 @@ vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
 		abort();
 	}

-	for (pg = base; pg < base + num; ++pg)
+	for (pg = base; pg < base + num; ++pg) {
 		sparsebit_clear(region->unused_phy_pages, pg);
+		if (protected)
+			sparsebit_set(region->protected_phy_pages, pg);
+	}

 	return base * vm->page_size;
 }
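For callers, the split looks like the following sketch: the vm_phy_pages_alloc() wrapper inherits the VM-wide default, while the double-underscore variant lets a test force an unprotected allocation (memslot 0 here is arbitrary):

/* Sketch: one explicitly unprotected page, one page using the VM default. */
static void alloc_example(struct kvm_vm *vm)
{
	vm_paddr_t shared_pa = __vm_phy_pages_alloc(vm, 1,
						    KVM_UTIL_MIN_PFN * vm->page_size,
						    0, false);
	vm_paddr_t default_pa = vm_phy_pages_alloc(vm, 1,
						   KVM_UTIL_MIN_PFN * vm->page_size, 0);

	pr_debug("shared=0x%lx default=0x%lx\n", shared_pa, default_pa);
}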
From patchwork Mon Dec 18 16:11:42 2023
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 13497247
Date: Mon, 18 Dec 2023 08:11:42 -0800
In-Reply-To: <20231218161146.3554657-1-pgonda@google.com>
Message-Id: <20231218161146.3554657-5-pgonda@google.com>
References: <20231218161146.3554657-1-pgonda@google.com>
Subject: [PATCH V7 4/8] KVM: selftests: Allow tagging protected memory in guest page tables
From: Peter Gonda
To: kvm@vger.kernel.org
Cc: Peter Gonda, Paolo Bonzini, Sean Christopherson, Vishal Annapurve,
    Ackerley Tng, Andrew Jones, Tom Lendacky, Michael Roth

SEV guests rely on an encryption bit (C-bit) which resides within the
physical address range, i.e. is stolen from the guest's GPA space.
Guest code in selftests will expect the C-bit to be set appropriately
in the guest's page tables, whereas the rest of the kvm_util functions
will generally expect these bits to not be present. Introduce a GPA tag
mask and struct kvm_vm_arch to allow for arch-specific address tagging.
Currently this just adds x86 c_bit and s_bit support for SEV and TDX.

Cc: Paolo Bonzini
Cc: Sean Christopherson
Cc: Vishal Annapurve
Cc: Ackerley Tng
Cc: Andrew Jones
Cc: Tom Lendacky
Cc: Michael Roth
Originally-by: Michael Roth
Signed-off-by: Peter Gonda
---
 tools/arch/arm64/include/asm/kvm_host.h       |  7 +++++
 tools/arch/riscv/include/asm/kvm_host.h       |  7 +++++
 tools/arch/s390/include/asm/kvm_host.h        |  7 +++++
 tools/arch/x86/include/asm/kvm_host.h         | 13 +++++++++
 .../selftests/kvm/include/kvm_util_base.h     | 13 +++++++++
 tools/testing/selftests/kvm/lib/kvm_util.c    | 27 ++++++++++++++++---
 .../selftests/kvm/lib/x86_64/processor.c      | 15 ++++++++++-
 7 files changed, 85 insertions(+), 4 deletions(-)
 create mode 100644 tools/arch/arm64/include/asm/kvm_host.h
 create mode 100644 tools/arch/riscv/include/asm/kvm_host.h
 create mode 100644 tools/arch/s390/include/asm/kvm_host.h
 create mode 100644 tools/arch/x86/include/asm/kvm_host.h

diff --git a/tools/arch/arm64/include/asm/kvm_host.h b/tools/arch/arm64/include/asm/kvm_host.h
new file mode 100644
index 000000000000..218f5cdf0d86
--- /dev/null
+++ b/tools/arch/arm64/include/asm/kvm_host.h
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef _TOOLS_LINUX_ASM_ARM64_KVM_HOST_H
+#define _TOOLS_LINUX_ASM_ARM64_KVM_HOST_H
+
+struct kvm_vm_arch {};
+
+#endif // _TOOLS_LINUX_ASM_ARM64_KVM_HOST_H
diff --git a/tools/arch/riscv/include/asm/kvm_host.h b/tools/arch/riscv/include/asm/kvm_host.h
new file mode 100644
index 000000000000..c8280d5659ce
--- /dev/null
+++ b/tools/arch/riscv/include/asm/kvm_host.h
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef _TOOLS_LINUX_ASM_RISCV_KVM_HOST_H
+#define _TOOLS_LINUX_ASM_RISCV_KVM_HOST_H
+
+struct kvm_vm_arch {};
+
+#endif // _TOOLS_LINUX_ASM_RISCV_KVM_HOST_H
diff --git a/tools/arch/s390/include/asm/kvm_host.h b/tools/arch/s390/include/asm/kvm_host.h
new file mode 100644
index 000000000000..4c4c1c1e4bf8
--- /dev/null
+++ b/tools/arch/s390/include/asm/kvm_host.h
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef _TOOLS_LINUX_ASM_S390_KVM_HOST_H
+#define _TOOLS_LINUX_ASM_S390_KVM_HOST_H
+
+struct kvm_vm_arch {};
+
+#endif // _TOOLS_LINUX_ASM_S390_KVM_HOST_H
diff --git a/tools/arch/x86/include/asm/kvm_host.h b/tools/arch/x86/include/asm/kvm_host.h
new file mode 100644
index 000000000000..d8f48fe835fb
--- /dev/null
+++ b/tools/arch/x86/include/asm/kvm_host.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef _TOOLS_LINUX_ASM_X86_KVM_HOST_H +#define _TOOLS_LINUX_ASM_X86_KVM_HOST_H + +#include +#include + +struct kvm_vm_arch { + uint64_t c_bit; + uint64_t s_bit; +}; + +#endif // _TOOLS_LINUX_ASM_X86_KVM_HOST_H diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h index 71c0ed6a1197..8267476c76df 100644 --- a/tools/testing/selftests/kvm/include/kvm_util_base.h +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h @@ -18,6 +18,8 @@ #include #include +#include +#include #include @@ -155,6 +157,9 @@ struct kvm_vm { vm_vaddr_t idt; vm_vaddr_t handlers; uint32_t dirty_ring_size; + uint64_t gpa_tag_mask; + + struct kvm_vm_arch arch; /* VM protection enabled: SEV, etc*/ bool protected; @@ -489,6 +494,12 @@ void *addr_gva2hva(struct kvm_vm *vm, vm_vaddr_t gva); vm_paddr_t addr_hva2gpa(struct kvm_vm *vm, void *hva); void *addr_gpa2alias(struct kvm_vm *vm, vm_paddr_t gpa); + +static inline vm_paddr_t vm_untag_gpa(struct kvm_vm *vm, vm_paddr_t gpa) +{ + return gpa & ~vm->gpa_tag_mask; +} + void vcpu_run(struct kvm_vcpu *vcpu); int _vcpu_run(struct kvm_vcpu *vcpu); @@ -967,4 +978,6 @@ void kvm_selftest_arch_init(void); void kvm_arch_vm_post_create(struct kvm_vm *vm); +bool vm_is_gpa_protected(struct kvm_vm *vm, vm_paddr_t paddr); + #endif /* SELFTEST_KVM_UTIL_BASE_H */ diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index 6b94b84ce2e0..3ab0fb0b6136 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -165,10 +165,11 @@ const char *vm_guest_mode_string(uint32_t i) }; _Static_assert(sizeof(strings)/sizeof(char *) == NUM_VM_MODES, "Missing new mode strings?"); + const uint32_t mode = i & VM_MODE_PRIMARY_MASK; - TEST_ASSERT(i < NUM_VM_MODES, "Guest mode ID %d too big", i); + TEST_ASSERT(mode < NUM_VM_MODES, "Guest mode ID %d too big", mode); - return strings[i]; + return strings[mode]; } const struct vm_guest_mode_params vm_guest_mode_params[] = { @@ -315,7 +316,7 @@ static uint64_t vm_nr_pages_required(uint32_t mode, uint32_t nr_runnable_vcpus, uint64_t extra_mem_pages) { - uint64_t page_size = vm_guest_mode_params[mode].page_size; + uint64_t page_size = vm_guest_mode_params[mode & VM_MODE_PRIMARY_MASK].page_size; uint64_t nr_pages; TEST_ASSERT(nr_runnable_vcpus, @@ -1485,6 +1486,8 @@ void *addr_gpa2hva(struct kvm_vm *vm, vm_paddr_t gpa) { struct userspace_mem_region *region; + gpa = vm_untag_gpa(vm, gpa); + region = userspace_mem_region_find(vm, gpa, gpa); if (!region) { TEST_FAIL("No vm physical memory at 0x%lx", gpa); @@ -2190,3 +2193,21 @@ void __attribute((constructor)) kvm_selftest_init(void) kvm_selftest_arch_init(); } + +bool vm_is_gpa_protected(struct kvm_vm *vm, vm_paddr_t paddr) +{ + sparsebit_idx_t pg = 0; + struct userspace_mem_region *region; + + if (!vm->protected) + return false; + + region = userspace_mem_region_find(vm, paddr, paddr); + if (!region) { + TEST_FAIL("No vm physical memory at 0x%lx", paddr); + return false; + } + + pg = paddr >> vm->page_shift; + return sparsebit_is_set(region->protected_phy_pages, pg); +} diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c index d8288374078e..c18e2e9d3d75 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c @@ -157,6 +157,8 @@ static uint64_t *virt_create_upper_pte(struct kvm_vm *vm, { uint64_t *pte = virt_get_pte(vm, parent_pte, 
					      vaddr, current_level);

+	paddr = vm_untag_gpa(vm, paddr);
+
 	if (!(*pte & PTE_PRESENT_MASK)) {
 		*pte = PTE_PRESENT_MASK | PTE_WRITABLE_MASK;
 		if (current_level == target_level)
@@ -200,6 +202,8 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level)
 		    "Physical address beyond maximum supported,\n"
 		    "  paddr: 0x%lx vm->max_gfn: 0x%lx vm->page_size: 0x%x",
 		    paddr, vm->max_gfn, vm->page_size);
+	TEST_ASSERT(vm_untag_gpa(vm, paddr) == paddr,
+		    "Unexpected bits in paddr: %lx", paddr);

 	/*
 	 * Allocate upper level page tables, if not already present.  Return
@@ -222,6 +226,15 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level)
 	TEST_ASSERT(!(*pte & PTE_PRESENT_MASK),
 		    "PTE already present for 4k page at vaddr: 0x%lx\n", vaddr);
 	*pte = PTE_PRESENT_MASK | PTE_WRITABLE_MASK | (paddr & PHYSICAL_PAGE_MASK);
+
+	/*
+	 * Neither SEV nor TDX supports shared page tables, so only the final
+	 * leaf PTE needs to have the C/S-bit set manually.
+	 */
+	if (vm_is_gpa_protected(vm, paddr))
+		*pte |= vm->arch.c_bit;
+	else
+		*pte |= vm->arch.s_bit;
 }

 void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
@@ -496,7 +509,7 @@ vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
 	 * No need for a hugepage mask on the PTE, x86-64 requires the "unused"
 	 * address bits to be zero.
 	 */
-	return PTE_GET_PA(*pte) | (gva & ~HUGEPAGE_MASK(level));
+	return vm_untag_gpa(vm, PTE_GET_PA(*pte)) | (gva & ~HUGEPAGE_MASK(level));
 }

 static void kvm_setup_gdt(struct kvm_vm *vm, struct kvm_dtable *dt)
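Taken together, the mapping and translation paths should round-trip, as in this sketch (the helper is hypothetical; it assumes @gpa was handed in untagged, which __virt_pg_map() now asserts):

/* Sketch: the C-bit set in the leaf PTE must not leak back out of a walk. */
static void check_tag_round_trip(struct kvm_vm *vm, vm_vaddr_t gva,
				 vm_paddr_t gpa)
{
	TEST_ASSERT(vm_untag_gpa(vm, gpa) == gpa, "GPA must be untagged");
	TEST_ASSERT(addr_arch_gva2gpa(vm, gva) == gpa,
		    "gva2gpa must strip vm->gpa_tag_mask bits");
}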
From patchwork Mon Dec 18 16:11:43 2023
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 13497248
Date: Mon, 18 Dec 2023 08:11:43 -0800
In-Reply-To: <20231218161146.3554657-1-pgonda@google.com>
Message-Id: <20231218161146.3554657-6-pgonda@google.com>
References: <20231218161146.3554657-1-pgonda@google.com>
Subject: [PATCH V7 5/8] KVM: selftests: add support for protected vm_vaddr_* allocations
From: Peter Gonda
To: kvm@vger.kernel.org
Cc: Michael Roth, Paolo Bonzini, Sean Christopherson, Vishal Annapurve,
    Ackerley Tng, Andrew Jones, Tom Lendacky, Peter Gonda

From: Michael Roth

Test programs may wish to allocate shared vaddrs for things like
sharing memory with the guest. Since protected VMs will have their
memory encrypted by default, an interface is needed to explicitly
request shared pages. Implement this by splitting the common code out
of vm_vaddr_alloc() and introducing a new vm_vaddr_alloc_shared().
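A usage sketch for the new allocator; the region type and constants below come from the existing kvm_util API and are illustrative choices, not part of this patch:

/* Sketch: one page the host can always access, even in a protected VM. */
static vm_vaddr_t alloc_host_shared_page(struct kvm_vm *vm)
{
	vm_vaddr_t gva = vm_vaddr_alloc_shared(vm, vm->page_size,
					       KVM_UTIL_MIN_VADDR,
					       MEM_REGION_TEST_DATA);

	memset(addr_gva2hva(vm, gva), 0, vm->page_size);
	return gva;
}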
Cc: Paolo Bonzini
Cc: Sean Christopherson
Cc: Vishal Annapurve
Cc: Ackerley Tng
Cc: Andrew Jones
Cc: Tom Lendacky
Cc: Michael Roth
Signed-off-by: Michael Roth
Signed-off-by: Peter Gonda
---
 .../selftests/kvm/include/kvm_util_base.h     |  3 +++
 tools/testing/selftests/kvm/lib/kvm_util.c    | 25 +++++++++++++++----
 2 files changed, 23 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 8267476c76df..1b1a29ff035e 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -482,6 +482,9 @@ vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_mi
 vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
 vm_vaddr_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
 			    enum kvm_mem_region_type type);
+vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz,
+				 vm_vaddr_t vaddr_min,
+				 enum kvm_mem_region_type type);
 vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages);
 vm_vaddr_t __vm_vaddr_alloc_page(struct kvm_vm *vm,
 				 enum kvm_mem_region_type type);
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 3ab0fb0b6136..4a4ee1afd738 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1336,15 +1336,17 @@ vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz,
 	return pgidx_start * vm->page_size;
 }

-vm_vaddr_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
-			    enum kvm_mem_region_type type)
+static vm_vaddr_t ____vm_vaddr_alloc(struct kvm_vm *vm, size_t sz,
+				     vm_vaddr_t vaddr_min,
+				     enum kvm_mem_region_type type,
+				     bool protected)
 {
 	uint64_t pages = (sz >> vm->page_shift) + ((sz % vm->page_size) != 0);

 	virt_pgd_alloc(vm);
-	vm_paddr_t paddr = vm_phy_pages_alloc(vm, pages,
-					      KVM_UTIL_MIN_PFN * vm->page_size,
-					      vm->memslots[type]);
+	vm_paddr_t paddr = __vm_phy_pages_alloc(vm, pages,
+						KVM_UTIL_MIN_PFN * vm->page_size,
+						vm->memslots[type], protected);

 	/*
 	 * Find an unused range of virtual page addresses of at least
@@ -1364,6 +1366,19 @@ vm_vaddr_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
 	return vaddr_start;
 }

+vm_vaddr_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
+			    enum kvm_mem_region_type type)
+{
+	return ____vm_vaddr_alloc(vm, sz, vaddr_min, type, vm->protected);
+}
+
+vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz,
+				 vm_vaddr_t vaddr_min,
+				 enum kvm_mem_region_type type)
+{
+	return ____vm_vaddr_alloc(vm, sz, vaddr_min, type, false);
+}
+
 /*
  * VM Virtual Address Allocate
  *
From patchwork Mon Dec 18 16:11:44 2023
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 13497249
Date: Mon, 18 Dec 2023 08:11:44 -0800
In-Reply-To: <20231218161146.3554657-1-pgonda@google.com>
Message-Id: <20231218161146.3554657-7-pgonda@google.com>
References: <20231218161146.3554657-1-pgonda@google.com>
Subject: [PATCH V7 6/8] KVM: selftests: add library for creating/interacting with SEV guests
From: Peter Gonda
To: kvm@vger.kernel.org
Cc: Peter Gonda, Paolo Bonzini, Sean Christopherson, Vishal Annapurve,
    Ackerley Tng, Andrew Jones, Tom Lendacky, Michael Roth

Add interfaces to allow tests to create SEV guests. The additional
requirements for SEV guests' page tables and other state are
encapsulated by the new vm_sev_create_with_one_vcpu() function. This
can be generalized to more vCPUs in the future, but the first set of
SEV selftests in this series only uses a single vCPU.
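A caller-side sketch using only the declarations added below in sev.h (guest_code is a placeholder body, and GUEST_DONE() comes from the existing ucall infrastructure):

static void guest_code(void)
{
	GUEST_DONE();
}

static void test_sev_launch(void)
{
	struct kvm_vcpu *vcpu;
	struct kvm_vm *vm;

	if (!is_kvm_sev_supported())
		return;

	vm = vm_sev_create_with_one_vcpu(SEV_POLICY_NO_DBG, guest_code, &vcpu);
	vcpu_run(vcpu);
	kvm_vm_free(vm);
}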
---
 include/uapi/linux/kvm.h                      |   2 +-
 tools/arch/x86/include/asm/kvm_host.h         |   2 +
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../testing/selftests/kvm/include/sparsebit.h |  22 ++
 .../selftests/kvm/include/x86_64/processor.h  |   2 +
 .../selftests/kvm/include/x86_64/sev.h        |  27 +++
 tools/testing/selftests/kvm/lib/kvm_util.c    |   1 +
 .../selftests/kvm/lib/x86_64/processor.c      |  16 ++
 tools/testing/selftests/kvm/lib/x86_64/sev.c  | 202 ++++++++++++++++++
 9 files changed, 274 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/kvm/include/x86_64/sev.h
 create mode 100644 tools/testing/selftests/kvm/lib/x86_64/sev.c

diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 13065dd96132..251f422bcaa7 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1660,7 +1660,7 @@ struct kvm_s390_ucas_mapping {
 #define KVM_S390_GET_CMMA_BITS _IOWR(KVMIO, 0xb8, struct kvm_s390_cmma_log)
 #define KVM_S390_SET_CMMA_BITS _IOW(KVMIO, 0xb9, struct kvm_s390_cmma_log)
 /* Memory Encryption Commands */
-#define KVM_MEMORY_ENCRYPT_OP _IOWR(KVMIO, 0xba, unsigned long)
+#define KVM_MEMORY_ENCRYPT_OP _IOWR(KVMIO, 0xba, struct kvm_sev_cmd)
 
 struct kvm_enc_region {
 	__u64 addr;

diff --git a/tools/arch/x86/include/asm/kvm_host.h b/tools/arch/x86/include/asm/kvm_host.h
index d8f48fe835fb..12a7902216be 100644
--- a/tools/arch/x86/include/asm/kvm_host.h
+++ b/tools/arch/x86/include/asm/kvm_host.h
@@ -8,6 +8,8 @@ struct kvm_vm_arch {
 	uint64_t c_bit;
 	uint64_t s_bit;
+	int sev_fd;
+	bool is_pt_protected;
 };
 
 #endif // _TOOLS_LINUX_ASM_X86_KVM_HOST_H

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index a3bb36fb3cfc..c932bcea4198 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -37,6 +37,7 @@ LIBKVM_x86_64 += lib/x86_64/handlers.S
 LIBKVM_x86_64 += lib/x86_64/hyperv.c
 LIBKVM_x86_64 += lib/x86_64/memstress.c
 LIBKVM_x86_64 += lib/x86_64/processor.c
+LIBKVM_x86_64 += lib/x86_64/sev.c
 LIBKVM_x86_64 += lib/x86_64/svm.c
 LIBKVM_x86_64 += lib/x86_64/ucall.c
 LIBKVM_x86_64 += lib/x86_64/vmx.c

diff --git a/tools/testing/selftests/kvm/include/sparsebit.h b/tools/testing/selftests/kvm/include/sparsebit.h
index fb5170d57fcb..a63577e53919 100644
--- a/tools/testing/selftests/kvm/include/sparsebit.h
+++ b/tools/testing/selftests/kvm/include/sparsebit.h
@@ -66,6 +66,28 @@ void sparsebit_dump(FILE *stream, const struct sparsebit *sbit,
 		    unsigned int indent);
 void sparsebit_validate_internal(const struct sparsebit *sbit);
 
+/*
+ * Iterate over set ranges within sparsebit @s. In each iteration,
+ * @range_begin and @range_end will take the beginning and end of the set
+ * range, which are of type sparsebit_idx_t.
+ *
+ * For example, if the range [3, 7] (inclusive) is set, within the
+ * iteration, @range_begin will take the value 3 and @range_end will take
+ * the value 7.
+ *
+ * Ensure that there is at least one bit set before using this macro, e.g.
+ * by checking with sparsebit_any_set(), because sparsebit_first_set()
+ * will abort if none are set.
+ */
+#define sparsebit_for_each_set_range(s, range_begin, range_end)	\
+	for (range_begin = sparsebit_first_set(s),			\
+	     range_end = sparsebit_next_clear(s, range_begin) - 1;	\
+	     range_begin && range_end;					\
+	     range_begin = sparsebit_next_set(s, range_end),		\
+	     range_end = sparsebit_next_clear(s, range_begin) - 1)
+
 #ifdef __cplusplus
 }
 #endif

diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 4fd042112526..67cc32b1a29a 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -266,6 +266,7 @@ struct kvm_x86_cpu_property {
 #define X86_PROPERTY_MAX_PHY_ADDR		KVM_X86_CPU_PROPERTY(0x80000008, 0, EAX, 0, 7)
 #define X86_PROPERTY_MAX_VIRT_ADDR		KVM_X86_CPU_PROPERTY(0x80000008, 0, EAX, 8, 15)
 #define X86_PROPERTY_PHYS_ADDR_REDUCTION	KVM_X86_CPU_PROPERTY(0x8000001F, 0, EBX, 6, 11)
+#define X86_PROPERTY_SEV_C_BIT			KVM_X86_CPU_PROPERTY(0x8000001F, 0, EBX, 0, 5)
 
 #define X86_PROPERTY_MAX_CENTAUR_LEAF		KVM_X86_CPU_PROPERTY(0xC0000000, 0, EAX, 0, 31)
 
@@ -1035,6 +1036,7 @@ do {									\
 } while (0)
 
 void kvm_get_cpu_address_width(unsigned int *pa_bits, unsigned int *va_bits);
+void kvm_init_vm_address_properties(struct kvm_vm *vm);
 bool vm_is_unrestricted_guest(struct kvm_vm *vm);
 
 struct ex_regs {

diff --git a/tools/testing/selftests/kvm/include/x86_64/sev.h b/tools/testing/selftests/kvm/include/x86_64/sev.h
new file mode 100644
index 000000000000..e212b032cd77
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/x86_64/sev.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Helpers used for SEV guests
+ *
+ */
+#ifndef SELFTEST_KVM_SEV_H
+#define SELFTEST_KVM_SEV_H
+
+#include <stdint.h>
+#include <stdbool.h>
+
+#include "kvm_util.h"
+
+#define CPUID_MEM_ENC_LEAF 0x8000001f
+#define CPUID_EBX_CBIT_MASK 0x3f
+
+#define SEV_POLICY_NO_DBG	(1UL << 0)
+#define SEV_POLICY_ES		(1UL << 2)
+
+bool is_kvm_sev_supported(void);
+
+void sev_vm_init(struct kvm_vm *vm);
+
+struct kvm_vm *vm_sev_create_with_one_vcpu(uint32_t policy, void *guest_code,
+					   struct kvm_vcpu **cpu);
+
+#endif /* SELFTEST_KVM_SEV_H */

diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 4a4ee1afd738..b758cc6497c7 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -266,6 +266,7 @@ struct kvm_vm *____vm_create(uint32_t mode)
 	case VM_MODE_PXXV48_4K:
 #ifdef __x86_64__
 		kvm_get_cpu_address_width(&vm->pa_bits, &vm->va_bits);
+		kvm_init_vm_address_properties(vm);
 		/*
 		 * Ignore KVM support for 5-level paging (vm->va_bits == 57),
 		 * it doesn't take effect unless a CR4.LA57 is set, which it

diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index c18e2e9d3d75..4a3ce181a19f 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -9,6 +9,7 @@
 #include "test_util.h"
 #include "kvm_util.h"
 #include "processor.h"
+#include "sev.h"
 
 #ifndef NUM_INTERRUPTS
 #define NUM_INTERRUPTS 256
@@ -278,6 +279,9 @@ uint64_t *__vm_get_page_table_entry(struct kvm_vm *vm, uint64_t vaddr,
 {
 	uint64_t *pml4e, *pdpe, *pde;
 
+	TEST_ASSERT(!vm->arch.is_pt_protected,
+		    "Walking page tables of protected guests is impossible");
+
 	TEST_ASSERT(*level >= PG_LEVEL_NONE && *level < PG_LEVEL_NUM,
 		    "Invalid PG_LEVEL_* '%d'", *level);
 
@@ -573,6 +577,9 @@ void kvm_arch_vm_post_create(struct kvm_vm *vm)
 	vm_create_irqchip(vm);
 	sync_global_to_guest(vm, host_cpu_is_intel);
 	sync_global_to_guest(vm, host_cpu_is_amd);
+
+	if (vm->subtype == VM_SUBTYPE_SEV)
+		sev_vm_init(vm);
 }
 
 struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id,
@@ -1054,6 +1061,15 @@ void kvm_get_cpu_address_width(unsigned int *pa_bits, unsigned int *va_bits)
 	}
 }
 
+void kvm_init_vm_address_properties(struct kvm_vm *vm)
+{
+	if (vm->subtype == VM_SUBTYPE_SEV) {
+		vm->protected = true;
+		vm->arch.c_bit = 1ULL << this_cpu_property(X86_PROPERTY_SEV_C_BIT);
+		vm->gpa_tag_mask = vm->arch.c_bit;
+	}
+}
+
 static void set_idt_entry(struct kvm_vm *vm, int vector, unsigned long addr,
 			  int dpl, unsigned short selector)
 {

diff --git a/tools/testing/selftests/kvm/lib/x86_64/sev.c b/tools/testing/selftests/kvm/lib/x86_64/sev.c
new file mode 100644
index 000000000000..f2bac717cac1
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/x86_64/sev.c
@@ -0,0 +1,202 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include <stdint.h>
+#include <stdbool.h>
+
+#include "kvm_util.h"
+#include "svm_util.h"
+#include "linux/psp-sev.h"
+#include "processor.h"
+#include "sev.h"
+
+#define SEV_FW_REQ_VER_MAJOR 0
+#define SEV_FW_REQ_VER_MINOR 17
+
+enum sev_guest_state {
+	SEV_GSTATE_UNINIT = 0,
+	SEV_GSTATE_LUPDATE,
+	SEV_GSTATE_LSECRET,
+	SEV_GSTATE_RUNNING,
+};
+
+static void sev_ioctl(int cmd, void *data)
+{
+	int sev_fd = open_sev_dev_path_or_exit();
+	struct sev_issue_cmd arg = {
+		.cmd = cmd,
+		.data = (unsigned long)data,
+	};
+
+	kvm_ioctl(sev_fd, SEV_ISSUE_CMD, &arg);
+	close(sev_fd);
+}
+
+static void kvm_sev_ioctl(struct kvm_vm *vm, int cmd, void *data)
+{
+	struct kvm_sev_cmd sev_cmd = {
+		.id = cmd,
+		.sev_fd = vm->arch.sev_fd,
+		.data = (unsigned long)data,
+	};
+
+	vm_ioctl(vm, KVM_MEMORY_ENCRYPT_OP, &sev_cmd);
+}
+
+static void sev_register_encrypted_memory(struct kvm_vm *vm,
+					  struct userspace_mem_region *region)
+{
+	struct kvm_enc_region range = {
+		.addr = region->region.userspace_addr,
+		.size = region->region.memory_size,
+	};
+
+	vm_ioctl(vm, KVM_MEMORY_ENCRYPT_REG_REGION, &range);
+}
+
+static void sev_launch_update_data(struct kvm_vm *vm, vm_paddr_t gpa,
+				   uint64_t size)
+{
+	struct kvm_sev_launch_update_data update_data = {
+		.uaddr = (unsigned long)addr_gpa2hva(vm, gpa),
+		.len = size,
+	};
+
+	kvm_sev_ioctl(vm, KVM_SEV_LAUNCH_UPDATE_DATA, &update_data);
+}
+
+/*
+ * sparsebit_next_clear() can return 0 if [x, 2**64-1] are all set, and the
+ * -1 would then cause an underflow back to 2**64 - 1. This is expected and
+ * correct.
+ *
+ * If the last range in the sparsebit is [x, y] and we try to iterate,
+ * sparsebit_next_set() will return 0, and sparsebit_next_clear() will try
+ * to find the first range, but that's correct because the condition
+ * expression would cause us to quit the loop.
+ */
+static void encrypt_region(struct kvm_vm *vm, struct userspace_mem_region *region)
+{
+	const struct sparsebit *protected_phy_pages = region->protected_phy_pages;
+	const vm_paddr_t gpa_base = region->region.guest_phys_addr;
+	const sparsebit_idx_t lowest_page_in_region = gpa_base >> vm->page_shift;
+	sparsebit_idx_t i, j;
+
+	if (!sparsebit_any_set(protected_phy_pages))
+		return;
+
+	sev_register_encrypted_memory(vm, region);
+
+	sparsebit_for_each_set_range(protected_phy_pages, i, j) {
+		const uint64_t size = (j - i + 1) * vm->page_size;
+		const uint64_t offset = (i - lowest_page_in_region) * vm->page_size;
+
+		sev_launch_update_data(vm, gpa_base + offset, size);
+	}
+}
+
+bool is_kvm_sev_supported(void)
+{
+	struct sev_user_data_status sev_status;
+
+	sev_ioctl(SEV_PLATFORM_STATUS, &sev_status);
+
+	return sev_status.api_major > SEV_FW_REQ_VER_MAJOR ||
+	       (sev_status.api_major == SEV_FW_REQ_VER_MAJOR &&
+		sev_status.api_minor >= SEV_FW_REQ_VER_MINOR);
+}
+
+static void sev_vm_launch(struct kvm_vm *vm, uint32_t policy)
+{
+	struct kvm_sev_launch_start launch_start = {
+		.policy = policy,
+	};
+	struct userspace_mem_region *region;
+	struct kvm_sev_guest_status status;
+	int ctr;
+
+	kvm_sev_ioctl(vm, KVM_SEV_LAUNCH_START, &launch_start);
+	kvm_sev_ioctl(vm, KVM_SEV_GUEST_STATUS, &status);
+
+	TEST_ASSERT(status.policy == policy, "Expected policy %d, got %d",
+		    policy, status.policy);
+	TEST_ASSERT(status.state == SEV_GSTATE_LUPDATE,
+		    "Expected guest state %d, got %d",
+		    SEV_GSTATE_LUPDATE, status.state);
+
+	hash_for_each(vm->regions.slot_hash, ctr, region, slot_node)
+		encrypt_region(vm, region);
+
+	vm->arch.is_pt_protected = true;
+}
+
+static void sev_vm_launch_measure(struct kvm_vm *vm, uint8_t *measurement)
+{
+	struct kvm_sev_launch_measure launch_measure;
+	struct kvm_sev_guest_status guest_status;
+
+	launch_measure.len = 256;
+	launch_measure.uaddr = (__u64)measurement;
+	kvm_sev_ioctl(vm, KVM_SEV_LAUNCH_MEASURE, &launch_measure);
+
+	kvm_sev_ioctl(vm, KVM_SEV_GUEST_STATUS, &guest_status);
+	TEST_ASSERT(guest_status.state == SEV_GSTATE_LSECRET,
+		    "Unexpected guest state: %d", guest_status.state);
+}
+
+static void sev_vm_launch_finish(struct kvm_vm *vm)
+{
+	struct kvm_sev_guest_status status;
+
+	kvm_sev_ioctl(vm, KVM_SEV_GUEST_STATUS, &status);
+	TEST_ASSERT(status.state == SEV_GSTATE_LUPDATE ||
+		    status.state == SEV_GSTATE_LSECRET,
+		    "Unexpected guest state: %d", status.state);
+
+	kvm_sev_ioctl(vm, KVM_SEV_LAUNCH_FINISH, NULL);
+
+	kvm_sev_ioctl(vm, KVM_SEV_GUEST_STATUS, &status);
+	TEST_ASSERT(status.state == SEV_GSTATE_RUNNING,
+		    "Unexpected guest state: %d", status.state);
+}
+
+static void sev_vm_measure(struct kvm_vm *vm)
+{
+	uint8_t measurement[512];
+	int i;
+
+	sev_vm_launch_measure(vm, measurement);
+
+	/* TODO: Validate the measurement is as expected.
+	 */
+	pr_debug("guest measurement: ");
+	for (i = 0; i < 32; ++i)
+		pr_debug("%02x", measurement[i]);
+	pr_debug("\n");
+}
+
+void sev_vm_init(struct kvm_vm *vm)
+{
+	vm->arch.sev_fd = open_sev_dev_path_or_exit();
+
+	kvm_sev_ioctl(vm, KVM_SEV_INIT, NULL);
+}
+
+struct kvm_vm *vm_sev_create_with_one_vcpu(uint32_t policy, void *guest_code,
+					   struct kvm_vcpu **cpu)
+{
+	uint32_t mode = VM_MODE_PXXV48_4K | (VM_SUBTYPE_SEV << VM_MODE_SUBTYPE_SHIFT);
+	struct kvm_vm *vm;
+	struct kvm_vcpu *cpus[1];
+
+	vm = __vm_create_with_vcpus(mode, 1, 0, guest_code, cpus);
+	*cpu = cpus[0];
+
+	sev_vm_launch(vm, policy);
+
+	sev_vm_measure(vm);
+
+	sev_vm_launch_finish(vm);
+
+	pr_debug("SEV guest created, policy: 0x%x\n", policy);
+
+	return vm;
+}
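To make encrypt_region()'s walk above concrete, a toy example of sparsebit_for_each_set_range(); the demo function and the bit indices are made up for illustration, the sparsebit API calls are the selftest library's own:

static void sparsebit_range_demo(void)
{
	struct sparsebit *s = sparsebit_alloc();
	sparsebit_idx_t begin, end;

	sparsebit_set_num(s, 3, 5);	/* sets bits [3, 7] */
	sparsebit_set(s, 10);		/* sets bit 10 */

	/*
	 * Visits [3, 7], then [10, 10]. Per the macro's documentation, at
	 * least one bit must be set, which encrypt_region() guarantees by
	 * checking sparsebit_any_set() before iterating.
	 */
	sparsebit_for_each_set_range(s, begin, end)
		pr_info("range [%lu, %lu]\n",
			(unsigned long)begin, (unsigned long)end);

	sparsebit_free(&s);
}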
From patchwork Mon Dec 18 16:11:45 2023
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 13497250
Date: Mon, 18 Dec 2023 08:11:45 -0800
In-Reply-To: <20231218161146.3554657-1-pgonda@google.com>
Message-Id: <20231218161146.3554657-8-pgonda@google.com>
X-Mailing-List: kvm@vger.kernel.org
Subject: [PATCH V7 7/8] KVM: selftests: Update ucall pool to allocate from shared memory
From: Peter Gonda
To: kvm@vger.kernel.org
Cc: Peter Gonda, Paolo Bonzini, Sean Christopherson, Vishal Annapurve, Ackerley Tng, Andrew Jones, Tom Lendacky, Michael Roth

Update the per-VM ucall_header allocation from vm_vaddr_alloc() to
vm_vaddr_alloc_shared(). This allows encrypted guests to use ucall pools
by placing their shared ucall structures in unencrypted (shared) memory.
No behavior change for unencrypted guests.

Cc: Paolo Bonzini
Cc: Sean Christopherson
Cc: Vishal Annapurve
Cc: Ackerley Tng
Cc: Andrew Jones
Cc: Tom Lendacky
Cc: Michael Roth
Signed-off-by: Peter Gonda
---
 tools/testing/selftests/kvm/lib/ucall_common.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/lib/ucall_common.c b/tools/testing/selftests/kvm/lib/ucall_common.c
index 816a3fa109bf..f5af65a41c29 100644
--- a/tools/testing/selftests/kvm/lib/ucall_common.c
+++ b/tools/testing/selftests/kvm/lib/ucall_common.c
@@ -29,7 +29,8 @@ void ucall_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa)
 	vm_vaddr_t vaddr;
 	int i;
 
-	vaddr = __vm_vaddr_alloc(vm, sizeof(*hdr), KVM_UTIL_MIN_VADDR, MEM_REGION_DATA);
+	vaddr = vm_vaddr_alloc_shared(vm, sizeof(*hdr), KVM_UTIL_MIN_VADDR,
+				      MEM_REGION_DATA);
 	hdr = (struct ucall_header *)addr_gva2hva(vm, vaddr);
 	memset(hdr, 0, sizeof(*hdr));
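The reason shared memory is required: get_ucall() on the host reads the guest's struct ucall through addr_gva2hva(), and if the pool lived in encrypted memory the host would only ever see ciphertext. The guest-side flow is unchanged; a minimal sketch, assuming the standard selftest ucall macros:

static void guest_code(void)
{
	/* Written into the shared pool, readable by the host as plaintext. */
	GUEST_SYNC(1);
	GUEST_DONE();
}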
From patchwork Mon Dec 18 16:11:46 2023
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 13497251
Date: Mon, 18 Dec 2023 08:11:46 -0800
In-Reply-To: <20231218161146.3554657-1-pgonda@google.com>
Message-Id: <20231218161146.3554657-9-pgonda@google.com>
X-Mailing-List: kvm@vger.kernel.org
Subject: [PATCH V7 8/8] KVM: selftests: Add simple SEV VM testing
From: Peter Gonda
To: kvm@vger.kernel.org
Cc: Peter Gonda, Paolo Bonzini, Sean Christopherson, Vishal Annapurve, Ackerley Tng, Andrew Jones, Tom Lendacky, Michael Roth

A very simple test of booting SEV guests that checks SEV-related CPUID
bits and the SEV MSR.
Cc: Paolo Bonzini
Cc: Sean Christopherson
Cc: Vishal Annapurve
Cc: Ackerley Tng
Cc: Andrew Jones
Cc: Tom Lendacky
Cc: Michael Roth
Suggested-by: Michael Roth
Signed-off-by: Peter Gonda
---
 tools/testing/selftests/kvm/Makefile          |  1 +
 .../selftests/kvm/x86_64/sev_all_boot_test.c  | 59 +++++++++++++++++++
 2 files changed, 60 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86_64/sev_all_boot_test.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index c932bcea4198..320d7907ed4f 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -115,6 +115,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/tsc_msrs_test
 TEST_GEN_PROGS_x86_64 += x86_64/vmx_pmu_caps_test
 TEST_GEN_PROGS_x86_64 += x86_64/xen_shinfo_test
 TEST_GEN_PROGS_x86_64 += x86_64/xen_vmcall_test
+TEST_GEN_PROGS_x86_64 += x86_64/sev_all_boot_test
 TEST_GEN_PROGS_x86_64 += x86_64/sev_migrate_tests
 TEST_GEN_PROGS_x86_64 += x86_64/amx_test
 TEST_GEN_PROGS_x86_64 += x86_64/max_vcpuid_cap_test

diff --git a/tools/testing/selftests/kvm/x86_64/sev_all_boot_test.c b/tools/testing/selftests/kvm/x86_64/sev_all_boot_test.c
new file mode 100644
index 000000000000..b4139cf90c8e
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/sev_all_boot_test.c
@@ -0,0 +1,59 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#include
+#include
+#include
+#include
+#include
+
+#include "test_util.h"
+#include "kvm_util.h"
+#include "processor.h"
+#include "svm_util.h"
+#include "linux/psp-sev.h"
+#include "sev.h"
+
+static void guest_sev_code(void)
+{
+	GUEST_ASSERT(this_cpu_has(X86_FEATURE_SEV));
+	GUEST_ASSERT(rdmsr(MSR_AMD64_SEV) & MSR_AMD64_SEV_ENABLED);
+
+	GUEST_DONE();
+}
+
+static void test_sev(void *guest_code, uint64_t policy)
+{
+	struct kvm_vcpu *vcpu;
+	struct kvm_vm *vm;
+	struct ucall uc;
+
+	vm = vm_sev_create_with_one_vcpu(policy, guest_code, &vcpu);
+
+	for (;;) {
+		vcpu_run(vcpu);
+
+		switch (get_ucall(vcpu, &uc)) {
+		case UCALL_SYNC:
+			continue;
+		case UCALL_DONE:
+			/* Exit the run loop so the VM is actually freed. */
+			goto done;
+		case UCALL_ABORT:
+			REPORT_GUEST_ASSERT(uc);
+		default:
+			TEST_FAIL("Unexpected exit: %s",
+				  exit_reason_str(vcpu->run->exit_reason));
+		}
+	}
+
+done:
+	kvm_vm_free(vm);
+}
+
+int main(int argc, char *argv[])
+{
+	TEST_REQUIRE(is_kvm_sev_supported());
+
+	test_sev(guest_sev_code, SEV_POLICY_NO_DBG);
+	test_sev(guest_sev_code, 0);
+
+	return 0;
+}