From patchwork Thu Aug 29 02:21:14 2019
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11120207
From: Peter Xu
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Vitaly Kuznetsov, Radim Krčmář, Thomas Huth,
    Andrew Jones, peterx@redhat.com
Subject: [PATCH v2 1/4] KVM: selftests: Move vm type into _vm_create() internally
Date: Thu, 29 Aug 2019 10:21:14 +0800
Message-Id: <20190829022117.10191-2-peterx@redhat.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190829022117.10191-1-peterx@redhat.com>
References: <20190829022117.10191-1-peterx@redhat.com>

Rather than passing the vm type from the top level all the way down to
the end of vm creation, simply keep it as an internal field of the
kvm_vm struct and decide the type inside _vm_create().

There are several reasons for doing this:

- The vm type is decided only by the physical address width and is
  currently used only on aarch64, so we already have enough information
  as long as vm_guest_mode is passed into _vm_create().

- It removes a circular dependency between vm->type and VM creation,
  which is why vm_guest_mode currently has to be parsed twice on some
  paths: once in run_test() and again in _vm_create().  Follow-up
  patches will clean that up further so that there is a single place
  that decides the guest machine type.

Note that this patch slightly changes the behavior of the aarch64
tests: previously most vm_create() callers passed type==0 directly into
_vm_create(), while now the type depends on vm_guest_mode.  This should
not affect any user, because all aarch64 vm_create() users run with the
VM_MODE_DEFAULT guest mode (which is VM_MODE_P40V48_4K), so the type
still ends up as zero.

Signed-off-by: Peter Xu
Reviewed-by: Andrew Jones
---
 tools/testing/selftests/kvm/dirty_log_test.c  | 13 +++---------
 .../testing/selftests/kvm/include/kvm_util.h  |  3 +--
 tools/testing/selftests/kvm/lib/kvm_util.c    | 21 ++++++++++++-------
 3 files changed, 17 insertions(+), 20 deletions(-)

diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index ceb52b952637..135cba5c6d0d 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -216,14 +216,12 @@ static void vm_dirty_log_verify(unsigned long *bmap)
 }
 
 static struct kvm_vm *create_vm(enum vm_guest_mode mode, uint32_t vcpuid,
-                                uint64_t extra_mem_pages, void *guest_code,
-                                unsigned long type)
+                                uint64_t extra_mem_pages, void *guest_code)
 {
         struct kvm_vm *vm;
         uint64_t extra_pg_pages = extra_mem_pages / 512 * 2;
 
-        vm = _vm_create(mode, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages,
-                        O_RDWR, type);
+        vm = _vm_create(mode, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
         kvm_vm_elf_load(vm, program_invocation_name, 0, 0);
 #ifdef __x86_64__
         vm_create_irqchip(vm);
@@ -240,7 +238,6 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
         struct kvm_vm *vm;
         uint64_t max_gfn;
         unsigned long *bmap;
-        unsigned long type = 0;
 
         switch (mode) {
         case VM_MODE_P52V48_4K:
@@ -281,10 +278,6 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
          * bits we can change to 39.
          */
         guest_pa_bits = 39;
-#endif
-#ifdef __aarch64__
-        if (guest_pa_bits != 40)
-                type = KVM_VM_TYPE_ARM_IPA_SIZE(guest_pa_bits);
 #endif
         max_gfn = (1ul << (guest_pa_bits - guest_page_shift)) - 1;
         guest_page_size = (1ul << guest_page_shift);
@@ -309,7 +302,7 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
         bmap = bitmap_alloc(host_num_pages);
         host_bmap_track = bitmap_alloc(host_num_pages);
 
-        vm = create_vm(mode, VCPU_ID, guest_num_pages, guest_code, type);
+        vm = create_vm(mode, VCPU_ID, guest_num_pages, guest_code);
 
 #ifdef USE_CLEAR_DIRTY_LOG
         struct kvm_enable_cap cap = {};
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index e0e66b115ef2..c78faa2ff7f3 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -60,8 +60,7 @@ int kvm_check_cap(long cap);
 int vm_enable_cap(struct kvm_vm *vm, struct kvm_enable_cap *cap);
 
 struct kvm_vm *vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm);
-struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages,
-                          int perm, unsigned long type);
+struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm);
 void kvm_vm_free(struct kvm_vm *vmp);
 void kvm_vm_restart(struct kvm_vm *vmp, int perm);
 void kvm_vm_release(struct kvm_vm *vmp);
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 6e49bb039376..34a8a6572c7c 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -84,7 +84,7 @@ int vm_enable_cap(struct kvm_vm *vm, struct kvm_enable_cap *cap)
         return ret;
 }
 
-static void vm_open(struct kvm_vm *vm, int perm, unsigned long type)
+static void vm_open(struct kvm_vm *vm, int perm)
 {
         vm->kvm_fd = open(KVM_DEV_PATH, perm);
         if (vm->kvm_fd < 0)
@@ -95,7 +95,7 @@ static void vm_open(struct kvm_vm *vm, int perm, unsigned long type)
                 exit(KSFT_SKIP);
         }
 
-        vm->fd = ioctl(vm->kvm_fd, KVM_CREATE_VM, type);
+        vm->fd = ioctl(vm->kvm_fd, KVM_CREATE_VM, vm->type);
         TEST_ASSERT(vm->fd >= 0, "KVM_CREATE_VM ioctl failed, "
                 "rc: %i errno: %i", vm->fd, errno);
 }
@@ -130,8 +130,7 @@ _Static_assert(sizeof(vm_guest_mode_string)/sizeof(char *) == NUM_VM_MODES,
  * descriptor to control the created VM is created with the permissions
  * given by perm (e.g. O_RDWR).
  */
-struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages,
-                          int perm, unsigned long type)
+struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
 {
         struct kvm_vm *vm;
 
@@ -139,8 +138,7 @@ struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages,
         TEST_ASSERT(vm != NULL, "Insufficient Memory");
 
         vm->mode = mode;
-        vm->type = type;
-        vm_open(vm, perm, type);
+        vm->type = 0;
 
         /* Setup mode specific traits. */
         switch (vm->mode) {
@@ -190,6 +188,13 @@ struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages,
                 TEST_ASSERT(false, "Unknown guest mode, mode: 0x%x", mode);
         }
 
+#ifdef __aarch64__
+        if (vm->pa_bits != 40)
+                vm->type = KVM_VM_TYPE_ARM_IPA_SIZE(vm->pa_bits);
+#endif
+
+        vm_open(vm, perm);
+
         /* Limit to VA-bit canonical virtual addresses. */
         vm->vpages_valid = sparsebit_alloc();
         sparsebit_set_num(vm->vpages_valid,
@@ -212,7 +217,7 @@ struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages,
 
 struct kvm_vm *vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
 {
-        return _vm_create(mode, phy_pages, perm, 0);
+        return _vm_create(mode, phy_pages, perm);
 }
 
 /*
@@ -232,7 +237,7 @@ void kvm_vm_restart(struct kvm_vm *vmp, int perm)
 {
         struct userspace_mem_region *region;
 
-        vm_open(vmp, perm, vmp->type);
+        vm_open(vmp, perm);
 
         if (vmp->has_irqchip)
                 vm_create_irqchip(vmp);
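In case it helps readers of the diff above: the kernel-facing detail this
patch leans on is that the last argument of the KVM_CREATE_VM ioctl carries
the machine type, and on arm64 the guest IPA size is encoded into that type
with KVM_VM_TYPE_ARM_IPA_SIZE().  The standalone sketch below is not part of
the patch; the 48-bit IPA value is only an example, and a non-zero type needs
a host that actually supports configurable IPA sizes.  It issues the same
call that vm_open() makes once _vm_create() has filled in vm->type:

    /* sketch: mirrors vm_open()'s KVM_CREATE_VM call, outside the selftest lib */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    int main(void)
    {
            unsigned long type = 0;
            int kvm_fd = open("/dev/kvm", O_RDWR);

            if (kvm_fd < 0) {
                    perror("open(/dev/kvm)");
                    return 1;
            }

    #ifdef __aarch64__
            /* What _vm_create() now derives from vm->pa_bits (48 is just an example). */
            type = KVM_VM_TYPE_ARM_IPA_SIZE(48);
    #endif

            int vm_fd = ioctl(kvm_fd, KVM_CREATE_VM, type);
            if (vm_fd < 0)
                    perror("KVM_CREATE_VM");
            else
                    printf("created VM fd %d, type 0x%lx\n", vm_fd, type);
            return 0;
    }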
From patchwork Thu Aug 29 02:21:15 2019
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11120209
From: Peter Xu
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Vitaly Kuznetsov, Radim Krčmář, Thomas Huth,
    Andrew Jones, peterx@redhat.com
Subject: [PATCH v2 2/4] KVM: selftests: Create VM earlier for dirty log test
Date: Thu, 29 Aug 2019 10:21:15 +0800
Message-Id: <20190829022117.10191-3-peterx@redhat.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190829022117.10191-1-peterx@redhat.com>
References: <20190829022117.10191-1-peterx@redhat.com>

Since the previous patch removed the dependency on the vm type, we can
now create the vm much earlier in run_test().  Note that moving the
creation earlier means we have to approximate the number of extra pages
needed, but the approximation is more than sufficient.

This prepares for the follow-up patches that finally remove the
duplicated guest mode parsing.

Reviewed-by: Andrew Jones
Signed-off-by: Peter Xu
---
 tools/testing/selftests/kvm/dirty_log_test.c | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index 135cba5c6d0d..efb7746a7e99 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -230,6 +230,9 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, uint32_t vcpuid,
         return vm;
 }
 
+#define DIRTY_MEM_BITS 30 /* 1G */
+#define PAGE_SHIFT_4K  12
+
 static void run_test(enum vm_guest_mode mode, unsigned long iterations,
                      unsigned long interval, uint64_t phys_offset)
 {
@@ -239,6 +242,18 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
         uint64_t max_gfn;
         unsigned long *bmap;
 
+        /*
+         * We reserve page table for 2 times of extra dirty mem which
+         * will definitely cover the original (1G+) test range.  Here
+         * we do the calculation with 4K page size which is the
+         * smallest so the page number will be enough for all archs
+         * (e.g., 64K page size guest will need even less memory for
+         * page tables).
+         */
+        vm = create_vm(mode, VCPU_ID,
+                       2ul << (DIRTY_MEM_BITS - PAGE_SHIFT_4K),
+                       guest_code);
+
         switch (mode) {
         case VM_MODE_P52V48_4K:
                 guest_pa_bits = 52;
@@ -285,7 +300,7 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
          * A little more than 1G of guest page sized pages. Cover the
          * case where the size is not aligned to 64 pages.
          */
-        guest_num_pages = (1ul << (30 - guest_page_shift)) + 16;
+        guest_num_pages = (1ul << (DIRTY_MEM_BITS - guest_page_shift)) + 16;
         host_page_size = getpagesize();
         host_num_pages = (guest_num_pages * guest_page_size) / host_page_size +
                          !!((guest_num_pages * guest_page_size) % host_page_size);
@@ -302,8 +317,6 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
         bmap = bitmap_alloc(host_num_pages);
         host_bmap_track = bitmap_alloc(host_num_pages);
 
-        vm = create_vm(mode, VCPU_ID, guest_num_pages, guest_code);
-
 #ifdef USE_CLEAR_DIRTY_LOG
         struct kvm_enable_cap cap = {};
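For anyone double-checking the approximation mentioned above, the arithmetic
works out as follows.  This throwaway sketch only re-derives the numbers with
the same constants the test uses; it is not part of the test itself:

    /* sketch: re-derives the page-table reservation used by the dirty log test */
    #include <stdio.h>

    #define DIRTY_MEM_BITS 30      /* 1G test region */
    #define PAGE_SHIFT_4K  12

    int main(void)
    {
            /* extra_mem_pages passed to create_vm(): twice the 1G region, in 4K pages */
            unsigned long extra_mem_pages = 2ul << (DIRTY_MEM_BITS - PAGE_SHIFT_4K);
            /* create_vm() then reserves extra_mem_pages / 512 * 2 pages for page tables */
            unsigned long extra_pg_pages = extra_mem_pages / 512 * 2;

            printf("extra_mem_pages = %lu 4K pages (2 * 1G)\n", extra_mem_pages);
            printf("extra_pg_pages  = %lu 4K pages = %lu KiB of page tables\n",
                   extra_pg_pages, extra_pg_pages * 4);
            return 0;
    }

That is 524288 extra memory pages and 2048 pages (8 MiB) reserved for page
tables.  Since 4K is the smallest supported guest page size, this is an upper
bound, which is why the approximation safely covers the 1G+ test range on
every architecture.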
From patchwork Thu Aug 29 02:21:16 2019
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11120211
From: Peter Xu
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Vitaly Kuznetsov, Radim Krčmář, Thomas Huth,
    Andrew Jones, peterx@redhat.com
Subject: [PATCH v2 3/4] KVM: selftests: Introduce VM_MODE_PXXV48_4K
Date: Thu, 29 Aug 2019 10:21:16 +0800
Message-Id: <20190829022117.10191-4-peterx@redhat.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190829022117.10191-1-peterx@redhat.com>
References: <20190829022117.10191-1-peterx@redhat.com>

The name VM_MODE_P52V48_4K is explicit but misleading when used on
x86_64 machines, because x86_64 machines have varying physical address
widths rather than one fixed value.  Some examples:

- Intel Xeon E3-1220: 36 bits
- Intel Core i7-8650: 39 bits
- AMD EPYC 7251:      48 bits

All of them use a 48-bit linear address width, but their physical
address widths differ widely (and most older machines support fewer
than 52 bits).

Create a new guest mode called VM_MODE_PXXV48_4K for the current x86_64
tests and make it the default, replacing VM_MODE_P52V48_4K, because the
new name makes it clear that the PA width is not a constant.  Meanwhile,
stop assuming that every x86 machine has a 52-bit PA width; instead,
fetch the real vm->pa_bits from CPUID 0x80000008 at runtime.  The new
mode is currently used only by x86_64 and no other arch.

As a slight touch-up, move the DEBUG macro from dirty_log_test.c to
kvm_util.h so the library can use it too.

Signed-off-by: Peter Xu
---
 tools/testing/selftests/kvm/dirty_log_test.c   |  5 ++--
 .../testing/selftests/kvm/include/kvm_util.h   |  9 +++++-
 .../selftests/kvm/include/x86_64/processor.h   |  3 ++
 .../selftests/kvm/lib/aarch64/processor.c      |  3 ++
 tools/testing/selftests/kvm/lib/kvm_util.c     | 29 ++++++++++++++----
 .../selftests/kvm/lib/x86_64/processor.c       | 30 ++++++++++++++++---
 6 files changed, 65 insertions(+), 14 deletions(-)

diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index efb7746a7e99..c86f83cb33e5 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -19,8 +19,6 @@
 #include "kvm_util.h"
 #include "processor.h"
 
-#define DEBUG printf
-
 #define VCPU_ID 1
 
 /* The memory slot index to track dirty pages */
@@ -256,6 +254,7 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
 
         switch (mode) {
         case VM_MODE_P52V48_4K:
+        case VM_MODE_PXXV48_4K:
                 guest_pa_bits = 52;
                 guest_page_shift = 12;
                 break;
@@ -446,7 +445,7 @@ int main(int argc, char *argv[])
 #endif
 
 #ifdef __x86_64__
-        vm_guest_mode_params_init(VM_MODE_P52V48_4K, true, true);
+        vm_guest_mode_params_init(VM_MODE_PXXV48_4K, true, true);
 #endif
 #ifdef __aarch64__
         vm_guest_mode_params_init(VM_MODE_P40V48_4K, true, true);
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index c78faa2ff7f3..430edbacb9b2 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -24,6 +24,10 @@ struct kvm_vm;
 typedef uint64_t vm_paddr_t; /* Virtual Machine (Guest) physical address */
 typedef uint64_t vm_vaddr_t; /* Virtual Machine (Guest) virtual address */
 
+#ifndef DEBUG
+#define DEBUG printf
+#endif
+
 /* Minimum allocated guest virtual and physical addresses */
 #define KVM_UTIL_MIN_VADDR 0x2000
 
@@ -38,11 +42,14 @@ enum vm_guest_mode {
         VM_MODE_P48V48_64K,
         VM_MODE_P40V48_4K,
         VM_MODE_P40V48_64K,
+        VM_MODE_PXXV48_4K,      /* For 48bits VA but ANY bits PA */
         NUM_VM_MODES,
 };
 
-#ifdef __aarch64__
+#if defined(__aarch64__)
 #define VM_MODE_DEFAULT VM_MODE_P40V48_4K
+#elif defined(__x86_64__)
+#define VM_MODE_DEFAULT VM_MODE_PXXV48_4K
 #else
 #define VM_MODE_DEFAULT VM_MODE_P52V48_4K
 #endif
diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 80d19740d2dc..0c17f2ee685e 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -325,6 +325,9 @@ uint64_t vcpu_get_msr(struct kvm_vm *vm, uint32_t vcpuid, uint64_t msr_index);
 void vcpu_set_msr(struct kvm_vm *vm, uint32_t vcpuid, uint64_t msr_index,
                   uint64_t msr_value);
 
+uint32_t kvm_get_cpuid_max(void);
+void kvm_get_cpu_address_width(unsigned int *pa_bits, unsigned int *va_bits);
+
 /*
  * Basic CPU control in CR0
  */
diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
index 486400a97374..86036a59a668 100644
--- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
+++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
@@ -264,6 +264,9 @@ void aarch64_vcpu_setup(struct kvm_vm *vm, int vcpuid, struct kvm_vcpu_init *ini
         case VM_MODE_P52V48_4K:
                 TEST_ASSERT(false, "AArch64 does not support 4K sized pages "
                             "with 52-bit physical address ranges");
+        case VM_MODE_PXXV48_4K:
+                TEST_ASSERT(false, "AArch64 does not support 4K sized pages "
+                            "with ANY-bit physical address ranges");
         case VM_MODE_P52V48_64K:
                 tcr_el1 |= 1ul << 14; /* TG0 = 64KB */
                 tcr_el1 |= 6ul << 32; /* IPS = 52 bits */
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 34a8a6572c7c..bb8f993b25fb 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -8,6 +8,7 @@
 #include "test_util.h"
 #include "kvm_util.h"
 #include "kvm_util_internal.h"
+#include "processor.h"
 
 #include <assert.h>
 #include <sys/mman.h>
@@ -101,12 +102,13 @@ static void vm_open(struct kvm_vm *vm, int perm)
 }
 
 const char * const vm_guest_mode_string[] = {
-        "PA-bits:52, VA-bits:48, 4K pages",
-        "PA-bits:52, VA-bits:48, 64K pages",
-        "PA-bits:48, VA-bits:48, 4K pages",
-        "PA-bits:48, VA-bits:48, 64K pages",
-        "PA-bits:40, VA-bits:48, 4K pages",
-        "PA-bits:40, VA-bits:48, 64K pages",
+        "PA-bits:52,  VA-bits:48,  4K pages",
+        "PA-bits:52,  VA-bits:48, 64K pages",
+        "PA-bits:48,  VA-bits:48,  4K pages",
+        "PA-bits:48,  VA-bits:48, 64K pages",
+        "PA-bits:40,  VA-bits:48,  4K pages",
+        "PA-bits:40,  VA-bits:48, 64K pages",
+        "PA-bits:ANY, VA-bits:48,  4K pages",
 };
 _Static_assert(sizeof(vm_guest_mode_string)/sizeof(char *) == NUM_VM_MODES,
                "Missing new mode strings?");
@@ -184,6 +186,21 @@ struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
                 vm->page_size = 0x10000;
                 vm->page_shift = 16;
                 break;
+        case VM_MODE_PXXV48_4K:
+#ifdef __x86_64__
+                kvm_get_cpu_address_width(&vm->pa_bits, &vm->va_bits);
+                TEST_ASSERT(vm->va_bits == 48, "Linear address width "
+                            "(%d bits) not supported", vm->va_bits);
+                vm->pgtable_levels = 4;
+                vm->page_size = 0x1000;
+                vm->page_shift = 12;
+                DEBUG("Guest physical address width detected: %d\n",
+                      vm->pa_bits);
+#else
+                TEST_ASSERT(false, "VM_MODE_PXXV48_4K not supported on "
+                            "non-x86 platforms");
+#endif
+                break;
         default:
                 TEST_ASSERT(false, "Unknown guest mode, mode: 0x%x", mode);
         }
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 6cb34a0fa200..48467210ccfc 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -228,7 +228,7 @@ void sregs_dump(FILE *stream, struct kvm_sregs *sregs,
 
 void virt_pgd_alloc(struct kvm_vm *vm, uint32_t pgd_memslot)
 {
-        TEST_ASSERT(vm->mode == VM_MODE_P52V48_4K, "Attempt to use "
+        TEST_ASSERT(vm->mode == VM_MODE_PXXV48_4K, "Attempt to use "
                 "unknown or unsupported guest mode, mode: 0x%x", vm->mode);
 
         /* If needed, create page map l4 table. */
@@ -261,7 +261,7 @@ void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
         uint16_t index[4];
         struct pageMapL4Entry *pml4e;
 
-        TEST_ASSERT(vm->mode == VM_MODE_P52V48_4K, "Attempt to use "
+        TEST_ASSERT(vm->mode == VM_MODE_PXXV48_4K, "Attempt to use "
                 "unknown or unsupported guest mode, mode: 0x%x", vm->mode);
 
         TEST_ASSERT((vaddr % vm->page_size) == 0,
@@ -547,7 +547,7 @@ vm_paddr_t addr_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
         struct pageDirectoryEntry *pde;
         struct pageTableEntry *pte;
 
-        TEST_ASSERT(vm->mode == VM_MODE_P52V48_4K, "Attempt to use "
+        TEST_ASSERT(vm->mode == VM_MODE_PXXV48_4K, "Attempt to use "
                 "unknown or unsupported guest mode, mode: 0x%x", vm->mode);
 
         index[0] = (gva >> 12) & 0x1ffu;
@@ -621,7 +621,7 @@ static void vcpu_setup(struct kvm_vm *vm, int vcpuid, int pgd_memslot, int gdt_m
         kvm_setup_gdt(vm, &sregs.gdt, gdt_memslot, pgd_memslot);
 
         switch (vm->mode) {
-        case VM_MODE_P52V48_4K:
+        case VM_MODE_PXXV48_4K:
                 sregs.cr0 = X86_CR0_PE | X86_CR0_NE | X86_CR0_PG;
                 sregs.cr4 |= X86_CR4_PAE | X86_CR4_OSFXSR;
                 sregs.efer |= (EFER_LME | EFER_LMA | EFER_NX);
@@ -1153,3 +1153,25 @@ bool is_intel_cpu(void)
         chunk = (const uint32_t *)("GenuineIntel");
         return (ebx == chunk[0] && edx == chunk[1] && ecx == chunk[2]);
 }
+
+uint32_t kvm_get_cpuid_max(void)
+{
+        return kvm_get_supported_cpuid_entry(0x80000000)->eax;
+}
+
+void kvm_get_cpu_address_width(unsigned int *pa_bits, unsigned int *va_bits)
+{
+        struct kvm_cpuid_entry2 *entry;
+        bool pae;
+
+        /* SDM 4.1.4 */
+        if (kvm_get_cpuid_max() < 0x80000008) {
+                pae = kvm_get_supported_cpuid_entry(1)->edx & (1 << 6);
+                *pa_bits = pae ? 36 : 32;
+                *va_bits = 32;
+        } else {
+                entry = kvm_get_supported_cpuid_entry(0x80000008);
+                *pa_bits = entry->eax & 0xff;
+                *va_bits = (entry->eax >> 8) & 0xff;
+        }
+}
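A quick way to see what kvm_get_cpu_address_width() will report on a given
host is to query the same CPUID leaves directly.  The sketch below is not the
selftest code: it uses the CPUID instruction via <cpuid.h> rather than KVM's
KVM_GET_SUPPORTED_CPUID, so the result can in principle differ from what KVM
exposes to guests, but the decoding is the same (SDM 4.1.4: leaf 0x80000008
EAX[7:0] is the PA width, EAX[15:8] the linear-address width, with a
PAE-based 36/32-bit fallback when the leaf is missing):

    /* sketch: host-side equivalent of kvm_get_cpu_address_width(), x86 only */
    #include <stdio.h>
    #include <cpuid.h>

    int main(void)
    {
            unsigned int eax, ebx, ecx, edx;
            unsigned int pa_bits, va_bits;

            __cpuid(0x80000000, eax, ebx, ecx, edx);
            if (eax < 0x80000008) {
                    /* Fallback used by the patch: PAE present => 36 bits, else 32. */
                    __cpuid(1, eax, ebx, ecx, edx);
                    pa_bits = (edx & (1u << 6)) ? 36 : 32;
                    va_bits = 32;
            } else {
                    __cpuid(0x80000008, eax, ebx, ecx, edx);
                    pa_bits = eax & 0xff;
                    va_bits = (eax >> 8) & 0xff;
            }

            printf("PA bits: %u, VA bits: %u\n", pa_bits, va_bits);
            return 0;
    }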
From patchwork Thu Aug 29 02:21:17 2019
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11120213
From: Peter Xu
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Vitaly Kuznetsov, Radim Krčmář, Thomas Huth,
    Andrew Jones, peterx@redhat.com
Subject: [PATCH v2 4/4] KVM: selftests: Remove duplicate guest mode handling
Date: Thu, 29 Aug 2019 10:21:17 +0800
Message-Id: <20190829022117.10191-5-peterx@redhat.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190829022117.10191-1-peterx@redhat.com>
References: <20190829022117.10191-1-peterx@redhat.com>

Remove the duplicated guest mode handling in run_test() of
dirty_log_test: after the reordering in the previous patches, we can
directly use the outcome of vm_create().

Meanwhile, with the new VM_MODE_PXXV48_4K, we can also safely revert
commit b442324b58, which pinned the x86_64 PA width to 39 bits for
dirty_log_test.

Reviewed-by: Andrew Jones
Signed-off-by: Peter Xu
---
 tools/testing/selftests/kvm/dirty_log_test.c  | 52 ++-----------------
 .../testing/selftests/kvm/include/kvm_util.h  |  4 ++
 tools/testing/selftests/kvm/lib/kvm_util.c    | 17 ++++++
 3 files changed, 26 insertions(+), 47 deletions(-)

diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index c86f83cb33e5..89fac11733a5 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -234,10 +234,8 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, uint32_t vcpuid,
 static void run_test(enum vm_guest_mode mode, unsigned long iterations,
                      unsigned long interval, uint64_t phys_offset)
 {
-        unsigned int guest_pa_bits, guest_page_shift;
         pthread_t vcpu_thread;
         struct kvm_vm *vm;
-        uint64_t max_gfn;
         unsigned long *bmap;
 
         /*
@@ -252,60 +250,20 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
                        2ul << (DIRTY_MEM_BITS - PAGE_SHIFT_4K),
                        guest_code);
 
-        switch (mode) {
-        case VM_MODE_P52V48_4K:
-        case VM_MODE_PXXV48_4K:
-                guest_pa_bits = 52;
-                guest_page_shift = 12;
-                break;
-        case VM_MODE_P52V48_64K:
-                guest_pa_bits = 52;
-                guest_page_shift = 16;
-                break;
-        case VM_MODE_P48V48_4K:
-                guest_pa_bits = 48;
-                guest_page_shift = 12;
-                break;
-        case VM_MODE_P48V48_64K:
-                guest_pa_bits = 48;
-                guest_page_shift = 16;
-                break;
-        case VM_MODE_P40V48_4K:
-                guest_pa_bits = 40;
-                guest_page_shift = 12;
-                break;
-        case VM_MODE_P40V48_64K:
-                guest_pa_bits = 40;
-                guest_page_shift = 16;
-                break;
-        default:
-                TEST_ASSERT(false, "Unknown guest mode, mode: 0x%x", mode);
-        }
-
-        DEBUG("Testing guest mode: %s\n", vm_guest_mode_string(mode));
-
-#ifdef __x86_64__
-        /*
-         * FIXME
-         * The x86_64 kvm selftests framework currently only supports a
-         * single PML4 which restricts the number of physical address
-         * bits we can change to 39.
-         */
-        guest_pa_bits = 39;
-#endif
-        max_gfn = (1ul << (guest_pa_bits - guest_page_shift)) - 1;
-        guest_page_size = (1ul << guest_page_shift);
+        guest_page_size = vm_get_page_size(vm);
         /*
          * A little more than 1G of guest page sized pages. Cover the
          * case where the size is not aligned to 64 pages.
          */
-        guest_num_pages = (1ul << (DIRTY_MEM_BITS - guest_page_shift)) + 16;
+        guest_num_pages = (1ul << (DIRTY_MEM_BITS -
+                                   vm_get_page_shift(vm))) + 16;
         host_page_size = getpagesize();
         host_num_pages = (guest_num_pages * guest_page_size) / host_page_size +
                          !!((guest_num_pages * guest_page_size) % host_page_size);
 
         if (!phys_offset) {
-                guest_test_phys_mem = (max_gfn - guest_num_pages) * guest_page_size;
+                guest_test_phys_mem = (vm_get_max_gfn(vm) -
+                                       guest_num_pages) * guest_page_size;
                 guest_test_phys_mem &= ~(host_page_size - 1);
         } else {
                 guest_test_phys_mem = phys_offset;
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 430edbacb9b2..070e3ba193a6 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -152,6 +152,10 @@ void vm_vcpu_add_default(struct kvm_vm *vm, uint32_t vcpuid, void *guest_code);
 
 bool vm_is_unrestricted_guest(struct kvm_vm *vm);
 
+unsigned int vm_get_page_size(struct kvm_vm *vm);
+unsigned int vm_get_page_shift(struct kvm_vm *vm);
+unsigned int vm_get_max_gfn(struct kvm_vm *vm);
+
 struct kvm_userspace_memory_region *
 kvm_userspace_memory_region_find(struct kvm_vm *vm, uint64_t start,
                                  uint64_t end);
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index bb8f993b25fb..80a338b5403c 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -136,6 +136,8 @@ struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
 {
         struct kvm_vm *vm;
 
+        DEBUG("Testing guest mode: %s\n", vm_guest_mode_string(mode));
+
         vm = calloc(1, sizeof(*vm));
         TEST_ASSERT(vm != NULL, "Insufficient Memory");
 
@@ -1650,3 +1652,18 @@ bool vm_is_unrestricted_guest(struct kvm_vm *vm)
 
         return val == 'Y';
 }
+
+unsigned int vm_get_page_size(struct kvm_vm *vm)
+{
+        return vm->page_size;
+}
+
+unsigned int vm_get_page_shift(struct kvm_vm *vm)
+{
+        return vm->page_shift;
+}
+
+unsigned int vm_get_max_gfn(struct kvm_vm *vm)
+{
+        return vm->max_gfn;
+}
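As a usage note for test writers, the new accessors mean a test can derive
its geometry from the created VM instead of switching on vm_guest_mode.  The
fragment below is a hypothetical caller, not part of the series; it only
assumes the helpers added above plus the DEBUG macro that patch 3 moved into
kvm_util.h:

    /* hypothetical caller: derive test geometry from the VM, not from the mode enum */
    #include <stdint.h>
    #include "kvm_util.h"

    static void report_geometry(struct kvm_vm *vm)
    {
            unsigned int page_size  = vm_get_page_size(vm);
            unsigned int page_shift = vm_get_page_shift(vm);
            uint64_t max_gfn = vm_get_max_gfn(vm);

            /* Highest addressable guest physical address for this mode. */
            uint64_t max_gpa = ((max_gfn + 1) << page_shift) - 1;

            DEBUG("page size: %u (shift %u), max GPA: 0x%lx\n",
                  page_size, page_shift, (unsigned long)max_gpa);
    }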