From patchwork Wed Oct 19 22:13:19 2022
X-Patchwork-Submitter: Colton Lewis
X-Patchwork-Id: 13012379
Date: Wed, 19 Oct 2022 22:13:19 +0000
In-Reply-To: <20221019221321.3033920-1-coltonlewis@google.com>
References: <20221019221321.3033920-1-coltonlewis@google.com>
Message-ID: <20221019221321.3033920-2-coltonlewis@google.com>
Subject: [PATCH v7 1/3] KVM: selftests: implement random number generation for guest code
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, maz@kernel.org, dmatlack@google.com, seanjc@google.com, oupton@google.com,
    ricarkol@google.com, Colton Lewis

Implement random number generation for guest code to randomize parts
of the test, making it less predictable and a more accurate reflection
of reality.

Create a -r argument to specify a random seed. If no argument is
provided, the seed defaults to 1. The random seed is set with
perf_test_set_random_seed() and must be set before guest_code runs to
take effect.

The random number generator chosen is the Park-Miller Linear
Congruential Generator, a fancy name for a basic and well-understood
random number generator entirely sufficient for this purpose. Each
vCPU calculates its own seed by adding its index to the seed provided.

Signed-off-by: Colton Lewis
Reviewed-by: Ricardo Koller
Reviewed-by: David Matlack
---
 .../testing/selftests/kvm/dirty_log_perf_test.c  | 12 ++++++++++--
 .../selftests/kvm/include/perf_test_util.h       |  2 ++
 tools/testing/selftests/kvm/include/test_util.h  |  7 +++++++
 .../testing/selftests/kvm/lib/perf_test_util.c   |  7 +++++++
 tools/testing/selftests/kvm/lib/test_util.c      | 17 +++++++++++++++++
 5 files changed, 43 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c
index f99e39a672d3..c97a5e455699 100644
--- a/tools/testing/selftests/kvm/dirty_log_perf_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c
@@ -132,6 +132,7 @@ struct test_params {
 	bool partition_vcpu_memory_access;
 	enum vm_mem_backing_src_type backing_src;
 	int slots;
+	uint32_t random_seed;
 };
 
 static void toggle_dirty_logging(struct kvm_vm *vm, int slots, bool enable)
@@ -225,6 +226,9 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 				 p->slots, p->backing_src,
 				 p->partition_vcpu_memory_access);
 
+	/* If no argument provided, random seed will be 1. */
+	pr_info("Random seed: %u\n", p->random_seed);
+	perf_test_set_random_seed(vm, p->random_seed ? p->random_seed : 1);
 	perf_test_set_wr_fract(vm, p->wr_fract);
 
 	guest_num_pages = (nr_vcpus * guest_percpu_mem_size) >> vm->page_shift;
@@ -352,7 +356,7 @@ static void help(char *name)
 {
 	puts("");
 	printf("usage: %s [-h] [-i iterations] [-p offset] [-g] "
-	       "[-m mode] [-n] [-b vcpu bytes] [-v vcpus] [-o] [-s mem type]"
+	       "[-m mode] [-n] [-b vcpu bytes] [-v vcpus] [-o] [-r random seed ] [-s mem type]"
 	       "[-x memslots]\n", name);
 	puts("");
 	printf(" -i: specify iteration counts (default: %"PRIu64")\n",
@@ -380,6 +384,7 @@ static void help(char *name)
 	printf(" -v: specify the number of vCPUs to run.\n");
 	printf(" -o: Overlap guest memory accesses instead of partitioning\n"
 	       "     them into a separate region of memory for each vCPU.\n");
+	printf(" -r: specify the starting random seed.\n");
 	backing_src_help("-s");
 	printf(" -x: Split the memory region into this number of memslots.\n"
 	       "     (default: 1)\n");
@@ -406,7 +411,7 @@ int main(int argc, char *argv[])
 
 	guest_modes_append_default();
 
-	while ((opt = getopt(argc, argv, "eghi:p:m:nb:f:v:os:x:")) != -1) {
+	while ((opt = getopt(argc, argv, "eghi:p:m:nb:f:v:or:s:x:")) != -1) {
 		switch (opt) {
 		case 'e':
 			/* 'e' is for evil. */
@@ -442,6 +447,9 @@ int main(int argc, char *argv[])
 		case 'o':
 			p.partition_vcpu_memory_access = false;
 			break;
+		case 'r':
+			p.random_seed = atoi(optarg);
+			break;
 		case 's':
 			p.backing_src = parse_backing_src_type(optarg);
 			break;
diff --git a/tools/testing/selftests/kvm/include/perf_test_util.h b/tools/testing/selftests/kvm/include/perf_test_util.h
index eaa88df0555a..f1050fd42d10 100644
--- a/tools/testing/selftests/kvm/include/perf_test_util.h
+++ b/tools/testing/selftests/kvm/include/perf_test_util.h
@@ -35,6 +35,7 @@ struct perf_test_args {
 	uint64_t gpa;
 	uint64_t size;
 	uint64_t guest_page_size;
+	uint32_t random_seed;
 	int wr_fract;
 
 	/* Run vCPUs in L2 instead of L1, if the architecture supports it. */
@@ -52,6 +53,7 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int nr_vcpus,
 void perf_test_destroy_vm(struct kvm_vm *vm);
 
 void perf_test_set_wr_fract(struct kvm_vm *vm, int wr_fract);
+void perf_test_set_random_seed(struct kvm_vm *vm, uint32_t random_seed);
 void perf_test_start_vcpu_threads(int vcpus,
 				  void (*vcpu_fn)(struct perf_test_vcpu_args *));
 void perf_test_join_vcpu_threads(int vcpus);
diff --git a/tools/testing/selftests/kvm/include/test_util.h b/tools/testing/selftests/kvm/include/test_util.h
index befc754ce9b3..9e4f36a1a8b0 100644
--- a/tools/testing/selftests/kvm/include/test_util.h
+++ b/tools/testing/selftests/kvm/include/test_util.h
@@ -152,4 +152,11 @@ static inline void *align_ptr_up(void *x, size_t size)
 	return (void *)align_up((unsigned long)x, size);
 }
 
+struct guest_random_state {
+	uint32_t seed;
+};
+
+struct guest_random_state new_guest_random_state(uint32_t seed);
+uint32_t guest_random_u32(struct guest_random_state *state);
+
 #endif /* SELFTEST_KVM_TEST_UTIL_H */
diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
index 9618b37c66f7..5f0eebb626b5 100644
--- a/tools/testing/selftests/kvm/lib/perf_test_util.c
+++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
@@ -49,6 +49,7 @@ void perf_test_guest_code(uint32_t vcpu_idx)
 	uint64_t gva;
 	uint64_t pages;
 	int i;
+	struct guest_random_state rand_state = new_guest_random_state(pta->random_seed + vcpu_idx);
 
 	gva = vcpu_args->gva;
 	pages = vcpu_args->pages;
@@ -229,6 +230,12 @@ void perf_test_set_wr_fract(struct kvm_vm *vm, int wr_fract)
 	sync_global_to_guest(vm, perf_test_args);
 }
 
+void perf_test_set_random_seed(struct kvm_vm *vm, uint32_t random_seed)
+{
+	perf_test_args.random_seed = random_seed;
+	sync_global_to_guest(vm, perf_test_args.random_seed);
+}
+
 uint64_t __weak perf_test_nested_pages(int nr_vcpus)
 {
 	return 0;
diff --git a/tools/testing/selftests/kvm/lib/test_util.c b/tools/testing/selftests/kvm/lib/test_util.c
index 6d23878bbfe1..c4d2749fb2c3 100644
--- a/tools/testing/selftests/kvm/lib/test_util.c
+++ b/tools/testing/selftests/kvm/lib/test_util.c
@@ -17,6 +17,23 @@
 
 #include "test_util.h"
 
+/*
+ * Random number generator that is usable from guest code. This is the
+ * Park-Miller LCG using standard constants.
+ */
+
+struct guest_random_state new_guest_random_state(uint32_t seed)
+{
+	struct guest_random_state s = {.seed = seed};
+	return s;
+}
+
+uint32_t guest_random_u32(struct guest_random_state *state)
+{
+	state->seed = (uint64_t)state->seed * 48271 % ((uint32_t)(1 << 31) - 1);
+	return state->seed;
+}
+
 /*
  * Parses "[0-9]+[kmgt]?".
 */
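
For readers unfamiliar with the generator named in the commit message: the
recurrence implemented by guest_random_u32() above is
seed = seed * 48271 mod (2^31 - 1), the Park-Miller "minimal standard" LCG.
The standalone sketch below is illustrative only; lcg_next() and the test
program are stand-ins written for this note, not part of the selftests.

#include <stdint.h>
#include <stdio.h>

/* Park-Miller LCG step: next = prev * 48271 mod (2^31 - 1). */
static uint32_t lcg_next(uint32_t *seed)
{
	*seed = (uint64_t)*seed * 48271 % 0x7fffffffu;	/* 0x7fffffff == 2^31 - 1 */
	return *seed;
}

int main(void)
{
	uint32_t random_seed = 1;	/* what -r would supply; defaults to 1 */
	uint32_t vcpu_idx = 0;
	/* Per-vCPU seed, mirroring new_guest_random_state(pta->random_seed + vcpu_idx). */
	uint32_t seed = random_seed + vcpu_idx;
	int i;

	for (i = 0; i < 5; i++)
		printf("%u\n", lcg_next(&seed));
	return 0;
}

Because each vCPU offsets the shared seed by its own index, two vCPUs given
the same -r value still walk different sequences.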

From patchwork Wed Oct 19 22:13:20 2022
X-Patchwork-Submitter: Colton Lewis
X-Patchwork-Id: 13012380
Date: Wed, 19 Oct 2022 22:13:20 +0000
In-Reply-To: <20221019221321.3033920-1-coltonlewis@google.com>
References: <20221019221321.3033920-1-coltonlewis@google.com>
Message-ID: <20221019221321.3033920-3-coltonlewis@google.com>
Subject: [PATCH v7 2/3] KVM: selftests: randomize which pages are written vs read
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, maz@kernel.org, dmatlack@google.com, seanjc@google.com, oupton@google.com,
    ricarkol@google.com, Colton Lewis

Randomize which pages are written vs read using the random number
generator. Replace the wr_fract variable and its associated function
calls with write_percent, which operates as a percentage from 0 to 100
where -w X means each page has an X% chance of being written. Change
the -f argument to -w to reflect the new semantics. Keep the same
default of 100% writes.

Population always uses 100% writes to ensure all memory is actually
populated and not just mapped to the zero page. This prevents expensive
copy-on-write faults from occurring during the dirty memory iterations,
which would pollute the performance results.

Signed-off-by: Colton Lewis
Reviewed-by: Ricardo Koller
Reviewed-by: David Matlack
---
 .../selftests/kvm/access_tracking_perf_test.c |  2 +-
 .../selftests/kvm/dirty_log_perf_test.c       | 38 ++++++++++++-------
 .../selftests/kvm/include/perf_test_util.h    |  4 +-
 .../selftests/kvm/lib/perf_test_util.c        | 10 ++---
 4 files changed, 33 insertions(+), 21 deletions(-)

diff --git a/tools/testing/selftests/kvm/access_tracking_perf_test.c b/tools/testing/selftests/kvm/access_tracking_perf_test.c
index 76c583a07ea2..3e16d5bd7856 100644
--- a/tools/testing/selftests/kvm/access_tracking_perf_test.c
+++ b/tools/testing/selftests/kvm/access_tracking_perf_test.c
@@ -279,7 +279,7 @@ static void run_iteration(struct kvm_vm *vm, int nr_vcpus, const char *descripti
 static void access_memory(struct kvm_vm *vm, int nr_vcpus,
 			  enum access_type access, const char *description)
 {
-	perf_test_set_wr_fract(vm, (access == ACCESS_READ) ? INT_MAX : 1);
+	perf_test_set_write_percent(vm, (access == ACCESS_READ) ? 0 : 100);
 	iteration_work = ITERATION_ACCESS_MEMORY;
 	run_iteration(vm, nr_vcpus, description);
 }
diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c
index c97a5e455699..0d0240041acf 100644
--- a/tools/testing/selftests/kvm/dirty_log_perf_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c
@@ -128,10 +128,10 @@ static void vcpu_worker(struct perf_test_vcpu_args *vcpu_args)
 struct test_params {
 	unsigned long iterations;
 	uint64_t phys_offset;
-	int wr_fract;
 	bool partition_vcpu_memory_access;
 	enum vm_mem_backing_src_type backing_src;
 	int slots;
+	uint32_t write_percent;
 	uint32_t random_seed;
 };
 
@@ -229,7 +229,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	/* If no argument provided, random seed will be 1. */
 	pr_info("Random seed: %u\n", p->random_seed);
 	perf_test_set_random_seed(vm, p->random_seed ? p->random_seed : 1);
-	perf_test_set_wr_fract(vm, p->wr_fract);
+	perf_test_set_write_percent(vm, p->write_percent);
 
 	guest_num_pages = (nr_vcpus * guest_percpu_mem_size) >> vm->page_shift;
 	guest_num_pages = vm_adjust_num_guest_pages(mode, guest_num_pages);
@@ -252,6 +252,14 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	for (i = 0; i < nr_vcpus; i++)
 		vcpu_last_completed_iteration[i] = -1;
 
+	/*
+	 * Use 100% writes during the population phase to ensure all
+	 * memory is actually populated and not just mapped to the zero
+	 * page. This prevents expensive copy-on-write faults from
+	 * occurring during the dirty memory iterations below, which
+	 * would pollute the performance results.
+	 */
+	perf_test_set_write_percent(vm, 100);
 	perf_test_start_vcpu_threads(nr_vcpus, vcpu_worker);
 
 	/* Allow the vCPUs to populate memory */
@@ -273,6 +281,8 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	pr_info("Enabling dirty logging time: %ld.%.9lds\n\n",
 		ts_diff.tv_sec, ts_diff.tv_nsec);
 
+	perf_test_set_write_percent(vm, p->write_percent);
+
 	while (iteration < p->iterations) {
 		/*
 		 * Incrementing the iteration number will start the vCPUs
@@ -357,7 +367,7 @@ static void help(char *name)
 	puts("");
 	printf("usage: %s [-h] [-i iterations] [-p offset] [-g] "
 	       "[-m mode] [-n] [-b vcpu bytes] [-v vcpus] [-o] [-r random seed ] [-s mem type]"
-	       "[-x memslots]\n", name);
+	       "[-x memslots] [-w percentage]\n", name);
 	puts("");
 	printf(" -i: specify iteration counts (default: %"PRIu64")\n",
 	       TEST_HOST_LOOP_N);
@@ -377,10 +387,6 @@ static void help(char *name)
 	printf(" -b: specify the size of the memory region which should be\n"
 	       "     dirtied by each vCPU. e.g. 10M or 3G.\n"
 	       "     (default: 1G)\n");
-	printf(" -f: specify the fraction of pages which should be written to\n"
-	       "     as opposed to simply read, in the form\n"
-	       "     1/.\n"
-	       "     (default: 1 i.e. all pages are written to.)\n");
 	printf(" -v: specify the number of vCPUs to run.\n");
 	printf(" -o: Overlap guest memory accesses instead of partitioning\n"
 	       "     them into a separate region of memory for each vCPU.\n");
@@ -388,6 +394,11 @@ static void help(char *name)
 	backing_src_help("-s");
 	printf(" -x: Split the memory region into this number of memslots.\n"
 	       "     (default: 1)\n");
+	printf(" -w: specify the percentage of pages which should be written to\n"
+	       "     as an integer from 0-100 inclusive. This is probabilistic,\n"
+	       "     so -w X means each page has an X%% chance of writing\n"
+	       "     and a (100-X)%% chance of reading.\n"
+	       "     (default: 100 i.e. all pages are written to.)\n");
 	puts("");
 	exit(0);
 }
@@ -397,10 +408,10 @@ int main(int argc, char *argv[])
 	int max_vcpus = kvm_check_cap(KVM_CAP_MAX_VCPUS);
 	struct test_params p = {
 		.iterations = TEST_HOST_LOOP_N,
-		.wr_fract = 1,
 		.partition_vcpu_memory_access = true,
 		.backing_src = DEFAULT_VM_MEM_SRC,
 		.slots = 1,
+		.write_percent = 100,
 	};
 	int opt;
 
@@ -411,7 +422,7 @@ int main(int argc, char *argv[])
 
 	guest_modes_append_default();
 
-	while ((opt = getopt(argc, argv, "eghi:p:m:nb:f:v:or:s:x:")) != -1) {
+	while ((opt = getopt(argc, argv, "eghi:p:m:nb:v:or:s:x:w:")) != -1) {
 		switch (opt) {
 		case 'e':
 			/* 'e' is for evil. */
@@ -434,10 +445,11 @@ int main(int argc, char *argv[])
 		case 'b':
 			guest_percpu_mem_size = parse_size(optarg);
 			break;
-		case 'f':
-			p.wr_fract = atoi(optarg);
-			TEST_ASSERT(p.wr_fract >= 1,
-				    "Write fraction cannot be less than one");
+		case 'w':
+			p.write_percent = atoi(optarg);
+			TEST_ASSERT(p.write_percent >= 0
+				    && p.write_percent <= 100,
+				    "Write percentage must be between 0 and 100");
 			break;
 		case 'v':
 			nr_vcpus = atoi(optarg);
diff --git a/tools/testing/selftests/kvm/include/perf_test_util.h b/tools/testing/selftests/kvm/include/perf_test_util.h
index f1050fd42d10..845165001ec8 100644
--- a/tools/testing/selftests/kvm/include/perf_test_util.h
+++ b/tools/testing/selftests/kvm/include/perf_test_util.h
@@ -36,7 +36,7 @@ struct perf_test_args {
 	uint64_t size;
 	uint64_t guest_page_size;
 	uint32_t random_seed;
-	int wr_fract;
+	uint32_t write_percent;
 
 	/* Run vCPUs in L2 instead of L1, if the architecture supports it. */
 	bool nested;
 
@@ -52,7 +52,7 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int nr_vcpus,
 				  bool partition_vcpu_memory_access);
 void perf_test_destroy_vm(struct kvm_vm *vm);
 
-void perf_test_set_wr_fract(struct kvm_vm *vm, int wr_fract);
+void perf_test_set_write_percent(struct kvm_vm *vm, uint32_t write_percent);
 void perf_test_set_random_seed(struct kvm_vm *vm, uint32_t random_seed);
 void perf_test_start_vcpu_threads(int vcpus,
 				  void (*vcpu_fn)(struct perf_test_vcpu_args *));
diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
index 5f0eebb626b5..97a402f5ed23 100644
--- a/tools/testing/selftests/kvm/lib/perf_test_util.c
+++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
@@ -61,7 +61,7 @@ void perf_test_guest_code(uint32_t vcpu_idx)
 		for (i = 0; i < pages; i++) {
 			uint64_t addr = gva + (i * pta->guest_page_size);
 
-			if (i % pta->wr_fract == 0)
+			if (guest_random_u32(&rand_state) % 100 < pta->write_percent)
 				*(uint64_t *)addr = 0x0123456789ABCDEF;
 			else
 				READ_ONCE(*(uint64_t *)addr);
@@ -122,7 +122,7 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int nr_vcpus,
 	pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));
 
 	/* By default vCPUs will write to memory. */
-	pta->wr_fract = 1;
+	pta->write_percent = 100;
 
 	/*
 	 * Snapshot the non-huge page size. This is used by the guest code to
@@ -224,10 +224,10 @@ void perf_test_destroy_vm(struct kvm_vm *vm)
 	kvm_vm_free(vm);
 }
 
-void perf_test_set_wr_fract(struct kvm_vm *vm, int wr_fract)
+void perf_test_set_write_percent(struct kvm_vm *vm, uint32_t write_percent)
 {
-	perf_test_args.wr_fract = wr_fract;
-	sync_global_to_guest(vm, perf_test_args);
+	perf_test_args.write_percent = write_percent;
+	sync_global_to_guest(vm, perf_test_args.write_percent);
 }
 
 void perf_test_set_random_seed(struct kvm_vm *vm, uint32_t random_seed)
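
To make the new -w semantics concrete: on every page access the guest draws a
value from the generator and writes when value % 100 falls below
write_percent, so -w X gives each access an X% chance of being a write and a
(100-X)% chance of being a read. The host-side sketch below is illustrative
only; prng_next() and the counters are stand-ins written for this note, not
selftest code.

#include <stdint.h>
#include <stdio.h>

/* Same recurrence as guest_random_u32() in test_util.c. */
static uint32_t prng_next(uint32_t *seed)
{
	*seed = (uint64_t)*seed * 48271 % 0x7fffffffu;
	return *seed;
}

int main(void)
{
	uint32_t seed = 1;
	uint32_t write_percent = 75;	/* e.g. what -w 75 would request */
	long writes = 0, reads = 0;
	long i;

	for (i = 0; i < 1000000; i++) {
		if (prng_next(&seed) % 100 < write_percent)
			writes++;	/* the guest would store 0x0123456789ABCDEF here */
		else
			reads++;	/* the guest would READ_ONCE() the page here */
	}
	printf("writes=%ld reads=%ld (expected roughly %u%% writes)\n",
	       writes, reads, write_percent);
	return 0;
}

The population pass deliberately overrides this to 100% writes, as the
comment added to run_test() explains.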

From patchwork Wed Oct 19 22:13:21 2022
X-Patchwork-Submitter: Colton Lewis
X-Patchwork-Id: 13012381
Date: Wed, 19 Oct 2022 22:13:21 +0000
In-Reply-To: <20221019221321.3033920-1-coltonlewis@google.com>
References: <20221019221321.3033920-1-coltonlewis@google.com>
Message-ID: <20221019221321.3033920-4-coltonlewis@google.com>
Subject: [PATCH v7 3/3] KVM: selftests: randomize page access order
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, maz@kernel.org, dmatlack@google.com, seanjc@google.com, oupton@google.com,
    ricarkol@google.com, Colton Lewis

Create the ability to randomize page access order with the -a argument.
This includes the possibility that the same pages may be hit multiple
times during an iteration or not at all.

Population always uses sequential access so that every page is touched
during population, avoiding page faults in later dirty memory
iterations that would pollute the test results.

Signed-off-by: Colton Lewis
Reviewed-by: Ricardo Koller
Reviewed-by: David Matlack
---
 tools/testing/selftests/kvm/dirty_log_perf_test.c | 12 ++++++++++--
 .../selftests/kvm/include/perf_test_util.h        |  2 ++
 tools/testing/selftests/kvm/lib/perf_test_util.c  | 15 ++++++++++++++-
 3 files changed, 26 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c
index 0d0240041acf..e8855bd6f023 100644
--- a/tools/testing/selftests/kvm/dirty_log_perf_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c
@@ -133,6 +133,7 @@ struct test_params {
 	int slots;
 	uint32_t write_percent;
 	uint32_t random_seed;
+	bool random_access;
 };
 
 static void toggle_dirty_logging(struct kvm_vm *vm, int slots, bool enable)
@@ -260,6 +261,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	 * would pollute the performance results.
 	 */
 	perf_test_set_write_percent(vm, 100);
+	perf_test_set_random_access(vm, false);
 	perf_test_start_vcpu_threads(nr_vcpus, vcpu_worker);
 
 	/* Allow the vCPUs to populate memory */
@@ -282,6 +284,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 		ts_diff.tv_sec, ts_diff.tv_nsec);
 
 	perf_test_set_write_percent(vm, p->write_percent);
+	perf_test_set_random_access(vm, p->random_access);
 
 	while (iteration < p->iterations) {
 		/*
@@ -365,10 +368,11 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 static void help(char *name)
 {
 	puts("");
-	printf("usage: %s [-h] [-i iterations] [-p offset] [-g] "
+	printf("usage: %s [-h] [-a] [-i iterations] [-p offset] [-g] "
 	       "[-m mode] [-n] [-b vcpu bytes] [-v vcpus] [-o] [-r random seed ] [-s mem type]"
 	       "[-x memslots] [-w percentage]\n", name);
 	puts("");
+	printf(" -a: access memory randomly rather than in order.\n");
 	printf(" -i: specify iteration counts (default: %"PRIu64")\n",
 	       TEST_HOST_LOOP_N);
 	printf(" -g: Do not enable KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2. This\n"
@@ -422,11 +426,15 @@ int main(int argc, char *argv[])
 
 	guest_modes_append_default();
 
-	while ((opt = getopt(argc, argv, "eghi:p:m:nb:v:or:s:x:w:")) != -1) {
+	while ((opt = getopt(argc, argv, "aeghi:p:m:nb:v:or:s:x:w:")) != -1) {
 		switch (opt) {
+		case 'a':
+			p.random_access = true;
+			break;
 		case 'e':
 			/* 'e' is for evil. */
 			run_vcpus_while_disabling_dirty_logging = true;
+			break;
 		case 'g':
 			dirty_log_manual_caps = 0;
 			break;
diff --git a/tools/testing/selftests/kvm/include/perf_test_util.h b/tools/testing/selftests/kvm/include/perf_test_util.h
index 845165001ec8..3d0b75ea866a 100644
--- a/tools/testing/selftests/kvm/include/perf_test_util.h
+++ b/tools/testing/selftests/kvm/include/perf_test_util.h
@@ -40,6 +40,7 @@ struct perf_test_args {
 
 	/* Run vCPUs in L2 instead of L1, if the architecture supports it. */
 	bool nested;
+	bool random_access;
 
 	struct perf_test_vcpu_args vcpu_args[KVM_MAX_VCPUS];
 };
@@ -54,6 +55,7 @@ void perf_test_destroy_vm(struct kvm_vm *vm);
 
 void perf_test_set_write_percent(struct kvm_vm *vm, uint32_t write_percent);
 void perf_test_set_random_seed(struct kvm_vm *vm, uint32_t random_seed);
+void perf_test_set_random_access(struct kvm_vm *vm, bool random_access);
 void perf_test_start_vcpu_threads(int vcpus,
 				  void (*vcpu_fn)(struct perf_test_vcpu_args *));
 void perf_test_join_vcpu_threads(int vcpus);
diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
index 97a402f5ed23..a27405a590ba 100644
--- a/tools/testing/selftests/kvm/lib/perf_test_util.c
+++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
@@ -48,6 +48,8 @@ void perf_test_guest_code(uint32_t vcpu_idx)
 	struct perf_test_vcpu_args *vcpu_args = &pta->vcpu_args[vcpu_idx];
 	uint64_t gva;
 	uint64_t pages;
+	uint64_t addr;
+	uint64_t page;
 	int i;
 	struct guest_random_state rand_state = new_guest_random_state(pta->random_seed + vcpu_idx);
 
@@ -59,7 +61,12 @@ void perf_test_guest_code(uint32_t vcpu_idx)
 
 	while (true) {
 		for (i = 0; i < pages; i++) {
-			uint64_t addr = gva + (i * pta->guest_page_size);
+			if (pta->random_access)
+				page = guest_random_u32(&rand_state) % pages;
+			else
+				page = i;
+
+			addr = gva + (page * pta->guest_page_size);
 
 			if (guest_random_u32(&rand_state) % 100 < pta->write_percent)
 				*(uint64_t *)addr = 0x0123456789ABCDEF;
@@ -236,6 +243,12 @@ void perf_test_set_random_seed(struct kvm_vm *vm, uint32_t random_seed)
 	sync_global_to_guest(vm, perf_test_args.random_seed);
 }
 
+void perf_test_set_random_access(struct kvm_vm *vm, bool random_access)
+{
+	perf_test_args.random_access = random_access;
+	sync_global_to_guest(vm, perf_test_args.random_access);
+}
+
 uint64_t __weak perf_test_nested_pages(int nr_vcpus)
 {
 	return 0;
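
Taken together, the series makes the guest access loop parameterized by seed,
write percentage, and access order. The sketch below models that loop on the
host; it is illustrative only, and touch_region(), REGION_PAGE_SIZE, and the
fixed 16-page region are placeholders rather than selftest code. With
random_access enabled, the page index is drawn from the generator on every
iteration, so a page may be touched several times or not at all, exactly as
the commit message describes.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define REGION_PAGE_SIZE 4096ULL	/* stand-in for pta->guest_page_size */

static uint32_t prng_next(uint32_t *seed)
{
	*seed = (uint64_t)*seed * 48271 % 0x7fffffffu;
	return *seed;
}

/* Models one pass of perf_test_guest_code() over a small fake region. */
static void touch_region(uint8_t *region, uint64_t pages, uint32_t *seed,
			 uint32_t write_percent, bool random_access)
{
	uint64_t i, page;

	for (i = 0; i < pages; i++) {
		/* Sequential by default; random page selection when -a is given. */
		page = random_access ? prng_next(seed) % pages : i;

		if (prng_next(seed) % 100 < write_percent)
			region[page * REGION_PAGE_SIZE] = 0xAB;	/* write access */
		else
			(void)region[page * REGION_PAGE_SIZE];	/* read access */
	}
}

int main(void)
{
	static uint8_t region[16 * REGION_PAGE_SIZE];
	uint32_t seed = 1;

	/* Population pass: sequential order, 100% writes, mirroring run_test(). */
	touch_region(region, 16, &seed, 100, false);
	/* Measured pass: random order and the requested write percentage. */
	touch_region(region, 16, &seed, 50, true);
	return 0;
}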