From patchwork Sun Aug 29 18:26:40 2021
X-Patchwork-Submitter: Mingwei Zhang
X-Patchwork-Id: 12464243
Reply-To: Mingwei Zhang
Date: Sun, 29 Aug 2021 18:26:40 +0000
In-Reply-To: <20210829182641.2505220-1-mizhang@google.com>
Message-Id: <20210829182641.2505220-2-mizhang@google.com>
References: <20210829182641.2505220-1-mizhang@google.com>
X-Mailer: git-send-email 2.33.0.259.gc128427fd7-goog
Subject: [PATCH v2 1/2] selftests: KVM: align guest physical memory base address to 1GB
From: Mingwei Zhang
To: Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Sean Christopherson, David Matlack, Jing Zhang, Peter Xu,
    Ben Gardon, Mingwei Zhang
X-Mailing-List: kvm@vger.kernel.org

The existing selftest library function always allocates a GPA range that
is aligned to the end of the GPA address space, i.e., the allocated range
is guaranteed to end at the last available GPA. As a result, selftest
programs cannot control the alignment of the base GPA: depending on the
size of the allocation, the base GPA may be aligned only on a 4K boundary.
This sometimes creates problems for the dirty logging selftest, where a
2MB-aligned or 1GB-aligned base GPA is needed to create NPT/EPT mappings
for hugepages. So fix this issue and ensure every GPA allocation starts on
a 1GB boundary on all architectures.

Cc: Sean Christopherson
Cc: David Matlack
Cc: Jing Zhang
Cc: Peter Xu
Suggested-by: Ben Gardon
Signed-off-by: Mingwei Zhang
---
 tools/testing/selftests/kvm/lib/perf_test_util.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
index 0ef80dbdc116..96c30b8d6593 100644
--- a/tools/testing/selftests/kvm/lib/perf_test_util.c
+++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
@@ -93,10 +93,10 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
 	guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) *
 			      perf_test_args.guest_page_size;
 	guest_test_phys_mem &= ~(perf_test_args.host_page_size - 1);
-#ifdef __s390x__
-	/* Align to 1M (segment size) */
-	guest_test_phys_mem &= ~((1 << 20) - 1);
-#endif
+
+	/* Align to 1G for all architectures */
+	guest_test_phys_mem &= ~((1 << 30) - 1);
+
 	pr_info("guest physical test memory offset: 0x%lx\n", guest_test_phys_mem);
 
 	/* Add extra memory slots for testing */
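For reference, the new alignment above is a plain power-of-two round-down. A minimal stand-alone sketch of the same arithmetic, using a made-up base address rather than anything computed by the selftest library, would look like this:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Made-up, unaligned base GPA, purely for illustration. */
	uint64_t base = 0x1fedcb000ULL;

	/*
	 * Clearing the low 30 bits rounds the base down to a 1GB boundary,
	 * mirroring "guest_test_phys_mem &= ~((1 << 30) - 1)" in the hunk
	 * above.
	 */
	uint64_t aligned = base & ~(((uint64_t)1 << 30) - 1);

	printf("base    = 0x%" PRIx64 "\n", base);     /* 0x1fedcb000 */
	printf("aligned = 0x%" PRIx64 "\n", aligned);  /* 0x1c0000000 */
	return 0;
}

Because the base is rounded down rather than up, the allocated range still ends at or below the last available GFN computed a few lines earlier.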
From patchwork Sun Aug 29 18:26:41 2021
X-Patchwork-Submitter: Mingwei Zhang
X-Patchwork-Id: 12464245
Reply-To: Mingwei Zhang
Date: Sun, 29 Aug 2021 18:26:41 +0000
In-Reply-To: <20210829182641.2505220-1-mizhang@google.com>
Message-Id: <20210829182641.2505220-3-mizhang@google.com>
References: <20210829182641.2505220-1-mizhang@google.com>
X-Mailer: git-send-email 2.33.0.259.gc128427fd7-goog
Subject: [PATCH v2 2/2] selftests: KVM: use dirty logging to check if page stats work correctly
From: Mingwei Zhang
To: Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Sean Christopherson, David Matlack, Jing Zhang, Peter Xu,
    Ben Gardon, Mingwei Zhang
X-Mailing-List: kvm@vger.kernel.org

When dirty logging is enabled, KVM splits the hugepage mappings in NPT/EPT
into the smallest (4K) size after the guest VM writes to them. This
property can be used to check whether the page stats metrics work properly
in the KVM x86 MMU. The logic can also be used the other way around: page
stats can verify that dirty logging really splits all huge pages once the
guest VM has written to all memory.

So add page stats checks to the dirty logging performance selftest. In
particular, add checks in three locations:
 - just after the VM is created;
 - after populating memory into the VM, before dirty logging is enabled;
 - after the guest VM writes to all memory again with dirty logging
   enabled.
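The checks rely on the KVM MMU page counters exposed in debugfs, the same files the patch teaches test_util.c to read. Outside the selftest they can be inspected by hand with a small reader like the sketch below; it reuses the paths from the patch but is only illustrative and is not the selftest's get_stats_from_file() helper. It assumes debugfs is mounted at /sys/kernel/debug and that the caller has root privileges.

#include <stdio.h>

int main(void)
{
	/* Same debugfs paths as pagestat_filepaths[] in the patch. */
	static const char * const paths[] = {
		"/sys/kernel/debug/kvm/pages_4k",
		"/sys/kernel/debug/kvm/pages_2m",
		"/sys/kernel/debug/kvm/pages_1g",
	};
	unsigned int i;

	for (i = 0; i < sizeof(paths) / sizeof(paths[0]); i++) {
		FILE *f = fopen(paths[i], "r");
		long value;

		if (!f) {
			perror(paths[i]);
			continue;
		}
		if (fscanf(f, "%ld", &value) == 1)
			printf("%s: %ld\n", paths[i], value);
		fclose(f);
	}
	return 0;
}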
Tested using commands:
 - ./dirty_log_perf_test -s anonymous_hugetlb_1gb
 - ./dirty_log_perf_test -s anonymous_hugetlb_2mb
 - ./dirty_log_perf_test -s anonymous_thp

Cc: Sean Christopherson
Cc: David Matlack
Cc: Jing Zhang
Cc: Peter Xu
Suggested-by: Ben Gardon
Signed-off-by: Mingwei Zhang
Reported-by: kernel test robot
---
 .../selftests/kvm/dirty_log_perf_test.c       | 44 +++++++++++++++++++
 .../testing/selftests/kvm/include/test_util.h |  1 +
 .../selftests/kvm/include/x86_64/processor.h  |  7 +++
 tools/testing/selftests/kvm/lib/test_util.c   | 29 ++++++++++++
 4 files changed, 81 insertions(+)

diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c
index 3c30d0045d8d..1fc63ad55cf3 100644
--- a/tools/testing/selftests/kvm/dirty_log_perf_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c
@@ -19,6 +19,10 @@
 #include "perf_test_util.h"
 #include "guest_modes.h"
 
+#ifdef __x86_64__
+#include "processor.h"
+#endif
+
 /* How many host loops to run by default (one KVM_GET_DIRTY_LOG for each loop)*/
 #define TEST_HOST_LOOP_N		2UL
 
@@ -166,6 +170,18 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	vm = perf_test_create_vm(mode, nr_vcpus, guest_percpu_mem_size,
 				 p->slots, p->backing_src);
 
+#ifdef __x86_64__
+	/*
+	 * No vCPUs have been started yet, so KVM should not have created any
+	 * mapping at this moment.
+	 */
+	TEST_ASSERT(get_page_stats(X86_PAGE_SIZE_4K) == 0,
+		    "4K page is non zero");
+	TEST_ASSERT(get_page_stats(X86_PAGE_SIZE_2M) == 0,
+		    "2M page is non zero");
+	TEST_ASSERT(get_page_stats(X86_PAGE_SIZE_1G) == 0,
+		    "1G page is non zero");
+#endif
 	perf_test_args.wr_fract = p->wr_fract;
 
 	guest_num_pages = (nr_vcpus * guest_percpu_mem_size) >> vm_get_page_shift(vm);
@@ -211,6 +227,22 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	pr_info("Populate memory time: %ld.%.9lds\n",
 		ts_diff.tv_sec, ts_diff.tv_nsec);
 
+#ifdef __x86_64__
+	TEST_ASSERT(get_page_stats(X86_PAGE_SIZE_4K) != 0,
+		    "4K page is zero");
+	/* Ensure THP page stats are non-zero to minimize flakiness. */
+	if (p->backing_src == VM_MEM_SRC_ANONYMOUS_THP)
+		TEST_ASSERT(get_page_stats(X86_PAGE_SIZE_2M) > 0,
+			    "2M page number is zero");
+	else if (p->backing_src == VM_MEM_SRC_ANONYMOUS_HUGETLB_2MB)
+		TEST_ASSERT(get_page_stats(X86_PAGE_SIZE_2M) ==
+			    (guest_percpu_mem_size * nr_vcpus) >> X86_PAGE_2M_SHIFT,
+			    "2M page number does not match");
+	else if (p->backing_src == VM_MEM_SRC_ANONYMOUS_HUGETLB_1GB)
+		TEST_ASSERT(get_page_stats(X86_PAGE_SIZE_1G) ==
+			    (guest_percpu_mem_size * nr_vcpus) >> X86_PAGE_1G_SHIFT,
+			    "1G page number does not match");
+#endif
 	/* Enable dirty logging */
 	clock_gettime(CLOCK_MONOTONIC, &start);
 	enable_dirty_logging(vm, p->slots);
@@ -256,6 +288,18 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 				iteration, ts_diff.tv_sec, ts_diff.tv_nsec);
 		}
 	}
+#ifdef __x86_64__
+	/*
+	 * When vCPUs write to all memory again with dirty logging enabled,
+	 * we should see only 4K page mappings exist in the KVM MMU.
+	 */
+	TEST_ASSERT(get_page_stats(X86_PAGE_SIZE_4K) != 0,
+		    "4K page is zero after dirtying memory");
+	TEST_ASSERT(get_page_stats(X86_PAGE_SIZE_2M) == 0,
+		    "2M page is non-zero after dirtying memory");
+	TEST_ASSERT(get_page_stats(X86_PAGE_SIZE_1G) == 0,
+		    "1G page is non-zero after dirtying memory");
+#endif
 
 	/* Disable dirty logging */
 	clock_gettime(CLOCK_MONOTONIC, &start);
diff --git a/tools/testing/selftests/kvm/include/test_util.h b/tools/testing/selftests/kvm/include/test_util.h
index d79be15dd3d2..dca5fcf7aa87 100644
--- a/tools/testing/selftests/kvm/include/test_util.h
+++ b/tools/testing/selftests/kvm/include/test_util.h
@@ -102,6 +102,7 @@ const struct vm_mem_backing_src_alias *vm_mem_backing_src_alias(uint32_t i);
 size_t get_backing_src_pagesz(uint32_t i);
 void backing_src_help(void);
 enum vm_mem_backing_src_type parse_backing_src_type(const char *type_name);
+size_t get_page_stats(uint32_t page_level);
 
 /*
  * Whether or not the given source type is shared memory (as opposed to
diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 242ae8e09a65..9749319821a3 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -39,6 +39,13 @@
 #define X86_CR4_SMAP		(1ul << 21)
 #define X86_CR4_PKE		(1ul << 22)
 
+#define X86_PAGE_4K_SHIFT	12
+#define X86_PAGE_4K		(1ul << X86_PAGE_4K_SHIFT)
+#define X86_PAGE_2M_SHIFT	21
+#define X86_PAGE_2M		(1ul << X86_PAGE_2M_SHIFT)
+#define X86_PAGE_1G_SHIFT	30
+#define X86_PAGE_1G		(1ul << X86_PAGE_1G_SHIFT)
+
 /* CPUID.1.ECX */
 #define CPUID_VMX		(1ul << 5)
 #define CPUID_SMX		(1ul << 6)
diff --git a/tools/testing/selftests/kvm/lib/test_util.c b/tools/testing/selftests/kvm/lib/test_util.c
index af1031fed97f..07eb6b5c125e 100644
--- a/tools/testing/selftests/kvm/lib/test_util.c
+++ b/tools/testing/selftests/kvm/lib/test_util.c
@@ -15,6 +15,13 @@
 #include "linux/kernel.h"
 
 #include "test_util.h"
+#include "processor.h"
+
+static const char * const pagestat_filepaths[] = {
+	"/sys/kernel/debug/kvm/pages_4k",
+	"/sys/kernel/debug/kvm/pages_2m",
+	"/sys/kernel/debug/kvm/pages_1g",
+};
 
 /*
  * Parses "[0-9]+[kmgt]?".
@@ -141,6 +148,28 @@ size_t get_trans_hugepagesz(void)
 	return size;
 }
 
+#ifdef __x86_64__
+size_t get_stats_from_file(const char *path)
+{
+	size_t value;
+	FILE *f;
+
+	f = fopen(path, "r");
+	TEST_ASSERT(f != NULL, "Error in opening file: %s\n", path);
+
+	fscanf(f, "%ld", &value);
+	fclose(f);
+
+	return value;
+}
+
+size_t get_page_stats(uint32_t page_level)
+{
+	TEST_ASSERT(page_level <= X86_PAGE_SIZE_1G, "page type error.");
+	return get_stats_from_file(pagestat_filepaths[page_level]);
+}
+#endif
+
 size_t get_def_hugetlb_pagesz(void)
 {
 	char buf[64];