From patchwork Fri Jun 2 16:08:59 2023
Date: Fri, 2 Jun 2023 09:08:59 -0700
In-Reply-To: <20230602160914.4011728-1-vipinsh@google.com>
References: <20230602160914.4011728-1-vipinsh@google.com>
Message-ID: <20230602160914.4011728-2-vipinsh@google.com>
Subject: [PATCH v2 01/16] KVM: selftests: Clear dirty logs in user defined chunks sizes in dirty_log_perf_test
From: Vipin Sharma
To: maz@kernel.org, oliver.upton@linux.dev, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org, chenhuacai@kernel.org, aleksandar.qemu.devel@gmail.com, tsbogend@alpha.franken.de, anup@brainfault.org, atishp@atishpatra.org, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com, pbonzini@redhat.com, dmatlack@google.com, ricarkol@google.com
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kselftest@vger.kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Vipin Sharma

In dirty_log_perf_test, provide a new option, 'k', to specify the chunk size and clear dirty memory in chunks in each iteration. If the option is not provided, fall back to the old way of clearing the whole memslot in one call per iteration.

In a production environment the whole memslot is rarely cleared in a single call; instead, the clearing operation is split across multiple calls to reduce the time between clearing memory and sending it to a remote host. This change mimics that production use case and allows performance numbers to be collected for it.
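For reference, a minimal sketch of the chunked-clear loop this patch introduces (illustration only: the wrapper name clear_slot_in_chunks() is made up here, but kvm_vm_clear_dirty_log() is the selftest helper used in the diff below, and pages_per_clear is the -k size divided by the host page size):

static void clear_slot_in_chunks(struct kvm_vm *vm, int slot,
				 unsigned long *bitmap,
				 uint64_t pages_per_slot,
				 uint64_t pages_per_clear)
{
	uint64_t from = 0;

	while (from < pages_per_slot) {
		uint64_t count = pages_per_clear;

		/* Trim the final chunk so it does not run past the slot. */
		if (from + count > pages_per_slot)
			count = pages_per_slot - from;
		kvm_vm_clear_dirty_log(vm, slot, bitmap, from, count);
		from += count;
	}
}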
Signed-off-by: Vipin Sharma --- .../selftests/kvm/dirty_log_perf_test.c | 42 +++++++++++++++---- 1 file changed, 34 insertions(+), 8 deletions(-) diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c index e9d6d1aecf89..119ddfc7306e 100644 --- a/tools/testing/selftests/kvm/dirty_log_perf_test.c +++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c @@ -134,6 +134,7 @@ struct test_params { uint32_t write_percent; uint32_t random_seed; bool random_access; + uint64_t clear_chunk_size; }; static void toggle_dirty_logging(struct kvm_vm *vm, int slots, bool enable) @@ -169,16 +170,28 @@ static void get_dirty_log(struct kvm_vm *vm, unsigned long *bitmaps[], int slots } } -static void clear_dirty_log(struct kvm_vm *vm, unsigned long *bitmaps[], - int slots, uint64_t pages_per_slot) +static void clear_dirty_log_in_chunks(struct kvm_vm *vm, + unsigned long *bitmaps[], int slots, + uint64_t pages_per_slot, + uint64_t pages_per_clear) { - int i; + uint64_t from, clear_pages_count; + int i, slot; for (i = 0; i < slots; i++) { - int slot = MEMSTRESS_MEM_SLOT_INDEX + i; - - kvm_vm_clear_dirty_log(vm, slot, bitmaps[i], 0, pages_per_slot); + slot = MEMSTRESS_MEM_SLOT_INDEX + i; + from = 0; + clear_pages_count = pages_per_clear; + + while (from < pages_per_slot) { + if (from + clear_pages_count > pages_per_slot) + clear_pages_count = pages_per_slot - from; + kvm_vm_clear_dirty_log(vm, slot, bitmaps[i], from, + clear_pages_count); + from += clear_pages_count; + } } + } static unsigned long **alloc_bitmaps(int slots, uint64_t pages_per_slot) @@ -215,6 +228,7 @@ static void run_test(enum vm_guest_mode mode, void *arg) uint64_t guest_num_pages; uint64_t host_num_pages; uint64_t pages_per_slot; + uint64_t pages_per_clear; struct timespec start; struct timespec ts_diff; struct timespec get_dirty_log_total = (struct timespec){0}; @@ -235,6 +249,7 @@ static void run_test(enum vm_guest_mode mode, void *arg) guest_num_pages = vm_adjust_num_guest_pages(mode, guest_num_pages); host_num_pages = vm_num_host_pages(mode, guest_num_pages); pages_per_slot = host_num_pages / p->slots; + pages_per_clear = p->clear_chunk_size / getpagesize(); bitmaps = alloc_bitmaps(p->slots, pages_per_slot); @@ -315,7 +330,9 @@ static void run_test(enum vm_guest_mode mode, void *arg) if (dirty_log_manual_caps) { clock_gettime(CLOCK_MONOTONIC, &start); - clear_dirty_log(vm, bitmaps, p->slots, pages_per_slot); + clear_dirty_log_in_chunks(vm, bitmaps, p->slots, + pages_per_slot, + pages_per_clear); ts_diff = timespec_elapsed(start); clear_dirty_log_total = timespec_add(clear_dirty_log_total, ts_diff); @@ -413,6 +430,11 @@ static void help(char *name) " To leave the application task unpinned, drop the final entry:\n\n" " ./dirty_log_perf_test -v 3 -c 22,23,24\n\n" " (default: no pinning)\n"); + printf(" -k: Specify the chunk size in which dirty memory gets cleared\n" + " in memslots in each iteration. If the size is bigger than\n" + " the memslot size then whole memslot is cleared in one call.\n" + " Size must be aligned to the host page size. e.g. 
10M or 3G\n" " (default: UINT64_MAX, clears whole memslot in one call)\n"); puts(""); exit(0); } @@ -428,6 +450,7 @@ int main(int argc, char *argv[]) { .slots = 1, .random_seed = 1, .write_percent = 100, + .clear_chunk_size = UINT64_MAX, }; int opt; @@ -438,7 +461,7 @@ int main(int argc, char *argv[]) guest_modes_append_default(); - while ((opt = getopt(argc, argv, "ab:c:eghi:m:nop:r:s:v:x:w:")) != -1) { + while ((opt = getopt(argc, argv, "ab:c:eghi:k:m:nop:r:s:v:x:w:")) != -1) { switch (opt) { case 'a': p.random_access = true; break; @@ -462,6 +485,9 @@ int main(int argc, char *argv[]) case 'i': p.iterations = atoi_positive("Number of iterations", optarg); break; + case 'k': + p.clear_chunk_size = parse_size(optarg); + break; case 'm': guest_modes_cmdline(optarg); break;
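Worked example for -k (a sketch of the math, assuming the size suffixes are parsed as binary units and the host uses 4 KiB pages): -k 10M gives pages_per_clear = 10 * 1024 * 1024 / 4096 = 2560, so each iteration issues kvm_vm_clear_dirty_log() calls covering 2560 pages at a time, plus one shorter final call when the memslot size is not a multiple of the chunk size.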
From patchwork Fri Jun 2 16:09:00 2023
Date: Fri, 2 Jun 2023 09:09:00 -0700
In-Reply-To: <20230602160914.4011728-1-vipinsh@google.com>
References: <20230602160914.4011728-1-vipinsh@google.com>
Message-ID: <20230602160914.4011728-3-vipinsh@google.com>
Subject: [PATCH v2 02/16] KVM: selftests: Add optional delay between consecutive clear-dirty-log calls
From: Vipin Sharma

In dirty_log_perf_test, add an option, "-l", to wait between consecutive clear-dirty-log calls. The delay is accepted from the user in milliseconds. If the option is not provided, fall back to no wait between clear calls.

This allows dirty_log_perf_test to mimic real-world use where, after clearing dirty memory, some time is spent transferring memory before making a subsequent clear-dirty-log call.
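As a rough sketch (not the patch itself; names mirror the diff below), the millisecond delay is converted to a struct timespec once and applied after each chunked clear call, while only the clear call itself is timed:

	struct timespec wait = {
		.tv_sec = wait_ms / 1000,
		.tv_nsec = (wait_ms % 1000) * 1000000ull,
	};

	clock_gettime(CLOCK_MONOTONIC, &start);
	kvm_vm_clear_dirty_log(vm, slot, bitmap, from, count);
	*time_taken = timespec_add(*time_taken, timespec_elapsed(start));
	if (wait_ms)
		nanosleep(&wait, NULL);	/* stands in for sending memory to the destination */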
Signed-off-by: Vipin Sharma --- .../selftests/kvm/dirty_log_perf_test.c | 35 +++++++++++++++---- 1 file changed, 29 insertions(+), 6 deletions(-) diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c index 119ddfc7306e..2e31f13aaba6 100644 --- a/tools/testing/selftests/kvm/dirty_log_perf_test.c +++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c @@ -135,6 +135,7 @@ struct test_params { uint32_t random_seed; bool random_access; uint64_t clear_chunk_size; + int clear_chunk_wait_time_ms; }; static void toggle_dirty_logging(struct kvm_vm *vm, int slots, bool enable) @@ -173,8 +174,14 @@ static void get_dirty_log(struct kvm_vm *vm, unsigned long *bitmaps[], int slots static void clear_dirty_log_in_chunks(struct kvm_vm *vm, unsigned long *bitmaps[], int slots, uint64_t pages_per_slot, - uint64_t pages_per_clear) + uint64_t pages_per_clear, int wait_ms, + struct timespec *time_taken) { + struct timespec wait = { + .tv_sec = wait_ms / 1000, + .tv_nsec = (wait_ms % 1000) * 1000000ull, + }; + struct timespec start, end; uint64_t from, clear_pages_count; int i, slot; @@ -186,12 +193,17 @@ static void clear_dirty_log_in_chunks(struct kvm_vm *vm, while (from < pages_per_slot) { if (from + clear_pages_count > pages_per_slot) clear_pages_count = pages_per_slot - from; + clock_gettime(CLOCK_MONOTONIC, &start); kvm_vm_clear_dirty_log(vm, slot, bitmaps[i], from, clear_pages_count); + end = timespec_elapsed(start); + *time_taken = timespec_add(*time_taken, end); from += clear_pages_count; + if (wait_ms) + nanosleep(&wait, NULL); + } } - } static unsigned long **alloc_bitmaps(int slots, uint64_t pages_per_slot) @@ -329,11 +341,11 @@ static void run_test(enum vm_guest_mode mode, void *arg) iteration, ts_diff.tv_sec, ts_diff.tv_nsec); if (dirty_log_manual_caps) { - clock_gettime(CLOCK_MONOTONIC, &start); clear_dirty_log_in_chunks(vm, bitmaps, p->slots, pages_per_slot, - pages_per_clear); - ts_diff = timespec_elapsed(start); + pages_per_clear, + p->clear_chunk_wait_time_ms, + &ts_diff); clear_dirty_log_total = timespec_add(clear_dirty_log_total, ts_diff); pr_info("Iteration %d clear dirty log time: %ld.%.9lds\n", @@ -435,6 +447,11 @@ static void help(char *name) " the memslot size then whole memslot is cleared in one call.\n" " Size must be aligned to the host page size. e.g. 10M or 3G\n" " (default: UINT64_MAX, clears whole memslot in one call)\n"); + printf(" -l: Specify time in milliseconds to wait after Clear-Dirty-Log\n" + " call. 
This allows to mimic use cases where flow is to get\n" " dirty log followed by multiple clear dirty log calls and\n" " sending corresponding memory to destination (in this test\n" " sending will be just idle waiting)\n"); puts(""); exit(0); } @@ -451,6 +468,7 @@ int main(int argc, char *argv[]) .random_seed = 1, .write_percent = 100, .clear_chunk_size = UINT64_MAX, + .clear_chunk_wait_time_ms = 0, }; int opt; @@ -461,7 +479,7 @@ int main(int argc, char *argv[]) guest_modes_append_default(); - while ((opt = getopt(argc, argv, "ab:c:eghi:k:m:nop:r:s:v:x:w:")) != -1) { + while ((opt = getopt(argc, argv, "ab:c:eghi:k:l:m:nop:r:s:v:x:w:")) != -1) { switch (opt) { case 'a': p.random_access = true; break; @@ -488,6 +506,11 @@ int main(int argc, char *argv[]) case 'k': p.clear_chunk_size = parse_size(optarg); break; + case 'l': + p.clear_chunk_wait_time_ms = + atoi_non_negative("Clear dirty log chunks wait time", + optarg); + break; case 'm': guest_modes_cmdline(optarg); break;
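Worked example for -l: -l 5 builds wait = { .tv_sec = 0, .tv_nsec = 5000000 }, i.e. a 5 ms nanosleep() after every chunked clear call; the sleep is not counted in the reported clear-dirty-log time, since only the kvm_vm_clear_dirty_log() call itself is wrapped in the clock_gettime() measurements.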
From patchwork Fri Jun 2 16:09:01 2023
Date: Fri, 2 Jun 2023 09:09:01 -0700
In-Reply-To: <20230602160914.4011728-1-vipinsh@google.com>
References: <20230602160914.4011728-1-vipinsh@google.com>
Message-ID: <20230602160914.4011728-4-vipinsh@google.com>
Subject: [PATCH v2 03/16] KVM: selftests: Pass the count of read and write accesses from guest to host
From: Vipin Sharma

Pass the number of read and write accesses done by the memstress guest code to userspace. These counts provide a way to measure vCPU performance during memstress and dirty-logging related tests. For example, in dirty_log_perf_test they can be used to measure how much progress vCPUs are able to make while the VMM is getting and clearing dirty logs.

In dirty_log_perf_test, each vCPU currently runs once and then waits until the iteration value is incremented by the main thread, so these access counts do not provide much useful information beyond the read vs. write split. However, in future commits, dirty_log_perf_test behavior will be changed to allow vCPUs to execute independently of userspace iterations.
This will mimic a real-world workload where the guest keeps executing while the VMM collects and clears dirty logs separately. With the read and write accesses known for each vCPU, the impact of the get and clear dirty-log APIs can be quantified.

Note that access counts are not a 100% reliable indicator of vCPU performance. A few things can affect vCPU progress: 1. vCPUs are scheduled less by the host. 2. Userspace operations run for a longer time, which ends up giving vCPUs more time to execute.

Signed-off-by: Vipin Sharma --- tools/testing/selftests/kvm/lib/memstress.c | 13 ++++++++++--- 1 file changed, 10 insertions(+), 3 deletions(-) diff --git a/tools/testing/selftests/kvm/lib/memstress.c b/tools/testing/selftests/kvm/lib/memstress.c index 5f1d3173c238..ac53cc6e36d7 100644 --- a/tools/testing/selftests/kvm/lib/memstress.c +++ b/tools/testing/selftests/kvm/lib/memstress.c @@ -49,6 +49,8 @@ void memstress_guest_code(uint32_t vcpu_idx) struct memstress_args *args = &memstress_args; struct memstress_vcpu_args *vcpu_args = &args->vcpu_args[vcpu_idx]; struct guest_random_state rand_state; + uint64_t write_access; + uint64_t read_access; uint64_t gva; uint64_t pages; uint64_t addr; @@ -64,6 +66,8 @@ void memstress_guest_code(uint32_t vcpu_idx) GUEST_ASSERT(vcpu_args->vcpu_idx == vcpu_idx); while (true) { + write_access = 0; + read_access = 0; for (i = 0; i < pages; i++) { if (args->random_access) page = guest_random_u32(&rand_state) % pages; @@ -72,13 +76,16 @@ void memstress_guest_code(uint32_t vcpu_idx) addr = gva + (page * args->guest_page_size); - if (guest_random_u32(&rand_state) % 100 < args->write_percent) + if (guest_random_u32(&rand_state) % 100 < args->write_percent) { *(uint64_t *)addr = 0x0123456789ABCDEF; - else + write_access++; + } else { READ_ONCE(*(uint64_t *)addr); + read_access++; + } } - GUEST_SYNC(1); + GUEST_SYNC_ARGS(1, read_access, write_access, 0, 0); } }
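For context, GUEST_SYNC_ARGS(1, read_access, write_access, 0, 0) reaches the host as a UCALL_SYNC whose extra arguments land in uc.args[2] and uc.args[3]; a sketch of the host-side consumption (this is what the next patch in this series does in vcpu_worker()):

	struct ucall uc;

	TEST_ASSERT(get_ucall(vcpu, &uc) == UCALL_SYNC, "expected UCALL_SYNC");
	reads += uc.args[2];	/* read_access reported by the guest */
	writes += uc.args[3];	/* write_access reported by the guest */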
([2607:f8b0:4864:20::54a]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1q57LE-007LUd-1b for linux-riscv@lists.infradead.org; Fri, 02 Jun 2023 16:09:31 +0000 Received: by mail-pg1-x54a.google.com with SMTP id 41be03b00d2f7-5340957a1f1so975247a12.1 for ; Fri, 02 Jun 2023 09:09:28 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1685722167; x=1688314167; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=vX8ZCjC9l+xt5cqUnwzlqCkk/7EvJIEZcs67EwYM95Y=; b=ZpYUhbbZC6uUYrReM+ixNdEUwp22WUU7l8j2aahWsJUYqP381L7DaYeopKye6+Q9sn w6jGO0nX6WnAtH7Q3sGlOdc68vECR5PacYtF8uFrv3gPYqUB1KYLtsQIi3wA+701ktoo VcYLyZENUnFYI88cWmJbgLosheXItBqWsjUj1JOOFK7yQMQ1EVYBrKEM74EAZ9G1JJes +Fy3XP3gLD3JszJ1CTZiXeoILpXFoZBpgnn/vdDYBMvk+Dqwv0kvHMMpaOH7tt3QFj+s ke1ao1kRwftxIusnCaBSonu2zz1KVzCGGfUgW7HZsGqYe4xCh5lCKxmVw7qkF3LGd5ny Gpkw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1685722167; x=1688314167; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=vX8ZCjC9l+xt5cqUnwzlqCkk/7EvJIEZcs67EwYM95Y=; b=LC01k3X3vCmiQtJdsy2u4fRW+Ku9TYzlD7jDD2hyJOKb0SgjuCDddtSXfJLrCDDEne iQlmnaL7Ia03UbnnxOQlE3YIk4twOtFcb0FPGwTSOcH4w1JSGvAenjFaqeBjATMNzxul ZOiMmgNRmaGr1K9KxObnFUyUF+XE7wyzE6tUHMQU0lCXEFs9jlq0SrRh3+JLaofuEal+ NWoNzn5HwgbxWEnG/xCrwZrnfdkjRqhttqMKcwZ69ujU54RTMOT+cIEXSPMrnu+9W3wP DkSL5UKrX/UwKXUH4eeLarHTdmucZowlgnNoYQ5Vg+U+j8ACDrPnXByDMsfaaD8aNiLR 5APQ== X-Gm-Message-State: AC+VfDyz2XgQPm/1uRohEVRwgHxfdscvEj2mjKur6Ihwntd2xXfUmz34 Tf4KfnfsvKbQfP2fMODk4dyLuhubFFbB X-Google-Smtp-Source: ACHHUZ7w+3zMwOJ66cN0pwH2useIGj7AgTlfLjnw5VFZgi7VE2BNKptqAZGtU12TE7bl7/aJEHHayi6hzCyj X-Received: from vipin.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:479f]) (user=vipinsh job=sendgmr) by 2002:a63:685:0:b0:530:866e:c3c1 with SMTP id 127-20020a630685000000b00530866ec3c1mr2520235pgg.11.1685722167618; Fri, 02 Jun 2023 09:09:27 -0700 (PDT) Date: Fri, 2 Jun 2023 09:09:02 -0700 In-Reply-To: <20230602160914.4011728-1-vipinsh@google.com> Mime-Version: 1.0 References: <20230602160914.4011728-1-vipinsh@google.com> X-Mailer: git-send-email 2.41.0.rc0.172.g3f132b7071-goog Message-ID: <20230602160914.4011728-5-vipinsh@google.com> Subject: [PATCH v2 04/16] KVM: selftests: Print read-write progress by vCPUs in dirty_log_perf_test From: Vipin Sharma To: maz@kernel.org, oliver.upton@linux.dev, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org, chenhuacai@kernel.org, aleksandar.qemu.devel@gmail.com, tsbogend@alpha.franken.de, anup@brainfault.org, atishp@atishpatra.org, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com, pbonzini@redhat.com, dmatlack@google.com, ricarkol@google.com Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kselftest@vger.kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Vipin Sharma X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230602_090928_540744_C9703687 X-CRM114-Status: GOOD ( 12.07 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: 
Fetch the count of read and write accesses from the guest code and print the sum of these values across all vCPUs in dirty_log_perf_test. This data shows the progress made by vCPUs during dirty logging operations. Since vCPUs execute in lockstep with the userspace dirty-log iterations, this metric is not very interesting yet. However, in future commits, when dirty_log_perf_test can execute vCPUs independently of the dirty-log iterations, this metric will give a good measure of vCPU performance during dirty logging.

Signed-off-by: Vipin Sharma --- .../selftests/kvm/dirty_log_perf_test.c | 18 +++++++++++++++++- 1 file changed, 17 insertions(+), 1 deletion(-) diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c index 2e31f13aaba6..14b012a0dcb1 100644 --- a/tools/testing/selftests/kvm/dirty_log_perf_test.c +++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c @@ -12,6 +12,7 @@ #include #include #include +#include #include #include "kvm_util.h" @@ -66,17 +67,22 @@ static u64 dirty_log_manual_caps; static bool host_quit; static int iteration; static int vcpu_last_completed_iteration[KVM_MAX_VCPUS]; +static atomic_ullong total_reads; +static atomic_ullong total_writes; static void vcpu_worker(struct memstress_vcpu_args *vcpu_args) { struct kvm_vcpu *vcpu = vcpu_args->vcpu; int vcpu_idx = vcpu_args->vcpu_idx; uint64_t pages_count = 0; + uint64_t reads = 0; + uint64_t writes = 0; struct kvm_run *run; struct timespec start; struct timespec ts_diff; struct timespec total = (struct timespec){0}; struct timespec avg; + struct ucall uc = {}; int ret; run = vcpu->run; @@ -89,7 +95,7 @@ static void vcpu_worker(struct memstress_vcpu_args *vcpu_args) ts_diff = timespec_elapsed(start); TEST_ASSERT(ret == 0, "vcpu_run failed: %d\n", ret); - TEST_ASSERT(get_ucall(vcpu, NULL) == UCALL_SYNC, + TEST_ASSERT(get_ucall(vcpu, &uc) == UCALL_SYNC, "Invalid guest sync status: exit_reason=%s\n", exit_reason_str(run->exit_reason)); @@ -101,6 +107,8 @@ static void vcpu_worker(struct memstress_vcpu_args *vcpu_args) if (current_iteration) { pages_count += vcpu_args->pages; total = timespec_add(total, ts_diff); + reads += uc.args[2]; + writes += uc.args[3]; pr_debug("vCPU %d iteration %d dirty memory time: %ld.%.9lds\n", vcpu_idx, current_iteration, ts_diff.tv_sec, ts_diff.tv_nsec); @@ -123,6 +131,8 @@ static void vcpu_worker(struct memstress_vcpu_args *vcpu_args) pr_debug("\nvCPU %d dirtied 0x%lx pages over %d iterations in %ld.%.9lds.
(Avg %ld.%.9lds/iteration)\n", vcpu_idx, pages_count, vcpu_last_completed_iteration[vcpu_idx], total.tv_sec, total.tv_nsec, avg.tv_sec, avg.tv_nsec); + atomic_fetch_add(&total_reads, reads); + atomic_fetch_add(&total_writes, writes); } struct test_params { @@ -270,6 +280,8 @@ static void run_test(enum vm_guest_mode mode, void *arg) dirty_log_manual_caps); arch_setup_vm(vm, nr_vcpus); + atomic_store(&total_reads, 0); + atomic_store(&total_writes, 0); /* Start the iterations */ iteration = 0; @@ -388,6 +400,10 @@ static void run_test(enum vm_guest_mode mode, void *arg) clear_dirty_log_total.tv_nsec, avg.tv_sec, avg.tv_nsec); } + pr_info("Total pages touched: %llu (Reads: %llu, Writes: %llu)\n", + atomic_load(&total_reads) + atomic_load(&total_writes), + atomic_load(&total_reads), atomic_load(&total_writes)); + free_bitmaps(bitmaps, p->slots); arch_cleanup_vm(vm); memstress_destroy_vm(vm);
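The per-vCPU totals are aggregated with C11 atomics; a small standalone illustration of the same pattern (not taken from the patch), where each worker counts locally and publishes its total once at the end:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NR_WORKERS 4

static atomic_ullong total_writes;

static void *worker(void *arg)
{
	unsigned long long local_writes = 0;

	(void)arg;
	for (int i = 0; i < 1000; i++)
		local_writes++;		/* stand-in for per-vCPU guest writes */
	/* Publish once at the end, as vcpu_worker() does with total_reads/total_writes. */
	atomic_fetch_add(&total_writes, local_writes);
	return NULL;
}

int main(void)
{
	pthread_t threads[NR_WORKERS];

	for (int i = 0; i < NR_WORKERS; i++)
		pthread_create(&threads[i], NULL, worker, NULL);
	for (int i = 0; i < NR_WORKERS; i++)
		pthread_join(threads[i], NULL);
	printf("Total writes: %llu\n", (unsigned long long)atomic_load(&total_writes));
	return 0;
}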
From patchwork Fri Jun 2 16:09:03 2023
Date: Fri, 2 Jun 2023 09:09:03 -0700
In-Reply-To: <20230602160914.4011728-1-vipinsh@google.com>
References: <20230602160914.4011728-1-vipinsh@google.com>
Message-ID: <20230602160914.4011728-6-vipinsh@google.com>
Subject: [PATCH v2 05/16] KVM: selftests: Allow independent execution of vCPUs in dirty_log_perf_test
From: Vipin Sharma

Give users a command line option (-j) to execute vCPUs independently of dirty-log iterations after initialization is complete. This change makes dirty_log_perf_test behave like real-world workflows where guest vCPUs keep executing while the VMM collects and clears dirty logs. The total pages touched during the test then gives a good estimate of how vCPUs perform while dirty logging is enabled.
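A hypothetical invocation combining the options added in this series (the values are made-up examples): run 4 vCPUs that dirty memory independently of the iterations, clearing in 256M chunks with a 5 ms pause between clear calls:

	./dirty_log_perf_test -v 4 -j -k 256M -l 5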
Signed-off-by: Vipin Sharma --- .../selftests/kvm/dirty_log_perf_test.c | 64 +++++++++++++------ 1 file changed, 44 insertions(+), 20 deletions(-) diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c index 14b012a0dcb1..fbf973d6cc66 100644 --- a/tools/testing/selftests/kvm/dirty_log_perf_test.c +++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c @@ -69,6 +69,7 @@ static int iteration; static int vcpu_last_completed_iteration[KVM_MAX_VCPUS]; static atomic_ullong total_reads; static atomic_ullong total_writes; +static bool lockstep_iterations; static void vcpu_worker(struct memstress_vcpu_args *vcpu_args) { @@ -83,12 +84,16 @@ static void vcpu_worker(struct memstress_vcpu_args *vcpu_args) struct timespec total = (struct timespec){0}; struct timespec avg; struct ucall uc = {}; + int current_iteration = -1; int ret; run = vcpu->run; while (!READ_ONCE(host_quit)) { - int current_iteration = READ_ONCE(iteration); + if (lockstep_iterations) + current_iteration = READ_ONCE(iteration); + else + current_iteration++; clock_gettime(CLOCK_MONOTONIC, &start); ret = _vcpu_run(vcpu); @@ -118,13 +123,19 @@ static void vcpu_worker(struct memstress_vcpu_args *vcpu_args) ts_diff.tv_nsec); } - /* - * Keep running the guest while dirty logging is being disabled - * (iteration is negative) so that vCPUs are accessing memory - * for the entire duration of zapping collapsible SPTEs. - */ - while (current_iteration == READ_ONCE(iteration) && - READ_ONCE(iteration) >= 0 && !READ_ONCE(host_quit)) {} + if (lockstep_iterations) { + /* + * Keep running the guest while dirty logging is being disabled + * (iteration is negative) so that vCPUs are accessing memory + * for the entire duration of zapping collapsible SPTEs. + */ + while (current_iteration == READ_ONCE(iteration) && + READ_ONCE(iteration) >= 0 && !READ_ONCE(host_quit)) + ; + } else { + while (!READ_ONCE(iteration) && !READ_ONCE(host_quit)) + ; + } } avg = timespec_div(total, vcpu_last_completed_iteration[vcpu_idx]); @@ -332,18 +343,20 @@ static void run_test(enum vm_guest_mode mode, void *arg) clock_gettime(CLOCK_MONOTONIC, &start); iteration++; - pr_debug("Starting iteration %d\n", iteration); - for (i = 0; i < nr_vcpus; i++) { - while (READ_ONCE(vcpu_last_completed_iteration[i]) - != iteration) - ; + if (lockstep_iterations) { + pr_debug("Starting iteration %d\n", iteration); + for (i = 0; i < nr_vcpus; i++) { + while (READ_ONCE(vcpu_last_completed_iteration[i]) + != iteration) + ; + } + + ts_diff = timespec_elapsed(start); + vcpu_dirty_total = timespec_add(vcpu_dirty_total, ts_diff); + pr_info("Iteration %d dirty memory time: %ld.%.9lds\n", + iteration, ts_diff.tv_sec, ts_diff.tv_nsec); } - ts_diff = timespec_elapsed(start); - vcpu_dirty_total = timespec_add(vcpu_dirty_total, ts_diff); - pr_info("Iteration %d dirty memory time: %ld.%.9lds\n", - iteration, ts_diff.tv_sec, ts_diff.tv_nsec); - clock_gettime(CLOCK_MONOTONIC, &start); get_dirty_log(vm, bitmaps, p->slots); ts_diff = timespec_elapsed(start); @@ -365,6 +378,10 @@ static void run_test(enum vm_guest_mode mode, void *arg) } } + /* Block further vCPUs execution */ + if (!lockstep_iterations) + WRITE_ONCE(iteration, 0); + /* * Run vCPUs while dirty logging is being disabled to stress disabling * in terms of both performance and correctness. 
Opt-in via command @@ -458,6 +475,10 @@ static void help(char *name) " To leave the application task unpinned, drop the final entry:\n\n" " ./dirty_log_perf_test -v 3 -c 22,23,24\n\n" " (default: no pinning)\n"); + printf(" -j: Execute vCPUs independent of dirty log iterations\n" + " Independent vCPUs execution will allow them to continuously\n" + " dirty memory while main thread is collecting and clearing\n" + " dirty logs in each iteration.\n"); printf(" -k: Specify the chunk size in which dirty memory gets cleared\n" " in memslots in each iteration. If the size is bigger than\n" " the memslot size then whole memslot is cleared in one call.\n" @@ -492,10 +513,10 @@ int main(int argc, char *argv[]) kvm_check_cap(KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2); dirty_log_manual_caps &= (KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE | KVM_DIRTY_LOG_INITIALLY_SET); - + lockstep_iterations = true; guest_modes_append_default(); - while ((opt = getopt(argc, argv, "ab:c:eghi:k:l:m:nop:r:s:v:x:w:")) != -1) { + while ((opt = getopt(argc, argv, "ab:c:eghi:jk:l:m:nop:r:s:v:x:w:")) != -1) { switch (opt) { case 'a': p.random_access = true; break; @@ -519,6 +540,9 @@ int main(int argc, char *argv[]) case 'i': p.iterations = atoi_positive("Number of iterations", optarg); break; + case 'j': + lockstep_iterations = false; + break; case 'k': p.clear_chunk_size = parse_size(optarg); break;
From patchwork Fri Jun 2 16:09:04 2023
Date: Fri, 2 Jun 2023 09:09:04 -0700
In-Reply-To: <20230602160914.4011728-1-vipinsh@google.com>
References: <20230602160914.4011728-1-vipinsh@google.com>
Message-ID: <20230602160914.4011728-7-vipinsh@google.com>
Subject: [PATCH v2 06/16] KVM: arm64: Correct the kvm_pgtable_stage2_flush() documentation
From: Vipin Sharma
Remove the _range suffix from kvm_pgtable_stage2_flush_range, which is used in the documentation comment of kvm_pgtable_stage2_flush(). There is no function named kvm_pgtable_stage2_flush_range().

Fixes: 93c66b40d728 ("KVM: arm64: Add support for stage-2 cache flushing in generic page-table")
Signed-off-by: Vipin Sharma --- arch/arm64/include/asm/kvm_pgtable.h | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h index 850d65f705fa..d542a671c564 100644 --- a/arch/arm64/include/asm/kvm_pgtable.h +++ b/arch/arm64/include/asm/kvm_pgtable.h @@ -657,9 +657,8 @@ int kvm_pgtable_stage2_relax_perms(struct kvm_pgtable *pgt, u64 addr, bool kvm_pgtable_stage2_is_young(struct kvm_pgtable *pgt, u64 addr); /** - * kvm_pgtable_stage2_flush_range() - Clean and invalidate data cache to Point - * of Coherency for guest stage-2 address - * range. + * kvm_pgtable_stage2_flush() - Clean and invalidate data cache to Point of + * Coherency for guest stage-2 address range. * @pgt: Page-table structure initialised by kvm_pgtable_stage2_init*(). * @addr: Intermediate physical address from which to flush. * @size: Size of the range.
From patchwork Fri Jun 2 16:09:05 2023
Date: Fri, 2 Jun 2023 09:09:05 -0700
In-Reply-To: <20230602160914.4011728-1-vipinsh@google.com>
References: <20230602160914.4011728-1-vipinsh@google.com>
Message-ID: <20230602160914.4011728-8-vipinsh@google.com>
Subject: [PATCH v2 07/16] KVM: mmu: Move mmu lock/unlock to arch code for clear dirty log
From: Vipin Sharma
dmatlack@google.com, ricarkol@google.com Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kselftest@vger.kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Vipin Sharma X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230602_170935_837740_4E7B1C5D X-CRM114-Status: GOOD ( 14.83 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Move mmu_lock lock and unlock calls from common code in kvm_clear_dirty_log_protect() to arch specific code in kvm_arch_mmu_enable_log_dirty_pt_masked(). None of the other code inside the for loop of kvm_arch_mmu_enable_log_dirty_pt_masked() needs mmu_lock exclusivity apart from the arch specific API call. Future commits will change clear dirty log operations under mmu read lock instead of write lock for ARM and, potentially, x86 architectures. No functional changes intended. Signed-off-by: Vipin Sharma --- arch/arm64/kvm/mmu.c | 2 ++ arch/mips/kvm/mmu.c | 2 ++ arch/riscv/kvm/mmu.c | 2 ++ arch/x86/kvm/mmu/mmu.c | 3 +++ virt/kvm/dirty_ring.c | 2 -- virt/kvm/kvm_main.c | 4 ---- 6 files changed, 9 insertions(+), 6 deletions(-) diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index 6db9ef288ec3..0c2c2c0846f1 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -1125,6 +1125,7 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm, phys_addr_t start = (base_gfn + __ffs(mask)) << PAGE_SHIFT; phys_addr_t end = (base_gfn + __fls(mask) + 1) << PAGE_SHIFT; + write_lock(&kvm->mmu_lock); lockdep_assert_held_write(&kvm->mmu_lock); stage2_wp_range(&kvm->arch.mmu, start, end); @@ -1139,6 +1140,7 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm, */ if (kvm_dirty_log_manual_protect_and_init_set(kvm)) kvm_mmu_split_huge_pages(kvm, start, end); + write_unlock(&kvm->mmu_lock); } static void kvm_send_hwpoison_signal(unsigned long address, short lsb) diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c index e8c08988ed37..33c5af333ff9 100644 --- a/arch/mips/kvm/mmu.c +++ b/arch/mips/kvm/mmu.c @@ -419,7 +419,9 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm, gfn_t start = base_gfn + __ffs(mask); gfn_t end = base_gfn + __fls(mask); + spin_lock(&kvm->mmu_lock); kvm_mips_mkclean_gpa_pt(kvm, start, end); + spin_unlock(&kvm->mmu_lock); } /* diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c index f2eb47925806..fe026ff5eb65 100644 --- a/arch/riscv/kvm/mmu.c +++ b/arch/riscv/kvm/mmu.c @@ -399,7 +399,9 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm, phys_addr_t start = (base_gfn + __ffs(mask)) << PAGE_SHIFT; phys_addr_t end = (base_gfn + __fls(mask) + 1) << PAGE_SHIFT; + spin_lock(&kvm->mmu_lock); gstage_wp_range(kvm, start, end); + spin_unlock(&kvm->mmu_lock); } void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index c8961f45e3b1..6fff4228e31c 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -1382,6 +1382,7 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm, struct kvm_memory_slot *slot, gfn_t gfn_offset, unsigned long mask) { + write_lock(&kvm->mmu_lock); /* * Huge pages are NOT write protected 
when we start dirty logging in * initially-all-set mode; must write protect them here so that they @@ -1412,6 +1413,8 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm, kvm_mmu_clear_dirty_pt_masked(kvm, slot, gfn_offset, mask); else kvm_mmu_write_protect_pt_masked(kvm, slot, gfn_offset, mask); + + write_unlock(&kvm->mmu_lock); } int kvm_cpu_dirty_log_size(void) diff --git a/virt/kvm/dirty_ring.c b/virt/kvm/dirty_ring.c index c1cd7dfe4a90..d894c58d2152 100644 --- a/virt/kvm/dirty_ring.c +++ b/virt/kvm/dirty_ring.c @@ -66,9 +66,7 @@ static void kvm_reset_dirty_gfn(struct kvm *kvm, u32 slot, u64 offset, u64 mask) if (!memslot || (offset + __fls(mask)) >= memslot->npages) return; - KVM_MMU_LOCK(kvm); kvm_arch_mmu_enable_log_dirty_pt_masked(kvm, memslot, offset, mask); - KVM_MMU_UNLOCK(kvm); } int kvm_dirty_ring_alloc(struct kvm_dirty_ring *ring, int index, u32 size) diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 13aed654111a..747bfa2f1dd3 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -2160,7 +2160,6 @@ static int kvm_get_dirty_log_protect(struct kvm *kvm, struct kvm_dirty_log *log) dirty_bitmap_buffer = kvm_second_dirty_bitmap(memslot); memset(dirty_bitmap_buffer, 0, n); - KVM_MMU_LOCK(kvm); for (i = 0; i < n / sizeof(long); i++) { unsigned long mask; gfn_t offset; @@ -2176,7 +2175,6 @@ static int kvm_get_dirty_log_protect(struct kvm *kvm, struct kvm_dirty_log *log) kvm_arch_mmu_enable_log_dirty_pt_masked(kvm, memslot, offset, mask); } - KVM_MMU_UNLOCK(kvm); } if (flush) @@ -2271,7 +2269,6 @@ static int kvm_clear_dirty_log_protect(struct kvm *kvm, if (copy_from_user(dirty_bitmap_buffer, log->dirty_bitmap, n)) return -EFAULT; - KVM_MMU_LOCK(kvm); for (offset = log->first_page, i = offset / BITS_PER_LONG, n = DIV_ROUND_UP(log->num_pages, BITS_PER_LONG); n--; i++, offset += BITS_PER_LONG) { @@ -2294,7 +2291,6 @@ static int kvm_clear_dirty_log_protect(struct kvm *kvm, offset, mask); } } - KVM_MMU_UNLOCK(kvm); if (flush) kvm_arch_flush_remote_tlbs_memslot(kvm, memslot); From patchwork Fri Jun 2 16:09:06 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vipin Sharma X-Patchwork-Id: 13265699 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 2C7E9C7EE24 for ; Fri, 2 Jun 2023 16:41:30 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=zYFqvEbNIrlwoLpibGGCVjLDrXBI/Ym+exOHVTNN/4s=; b=HuSCb3Eos3hIpCw/XiIDGDSvih Q28SUYwhd+gadWF0m4CqEJgtFzGJDi+6fhFwP36eNvpIXrjQVnUXyH3RmKC8lyQNPrUseFLvH7FAv XCrTHDj1y6NS7YWok0cmusH7pxONlLSgL68TNn8xkvcT7xGIuue5UiXBMcmp08vssrSVCzsHZw6ah 2ylVo9W2LP+I60zCPy73oLxerTJJ/re+7YjIebSoOF7rOX2DGzbvAvPDuQcnpH/munmPTgag7WB0C 2FF+3wcZN5SnKVVf6BnJlgm2rI6RZx+UiShE45SrESfoPRx0kI+6oTYjQE6eZ3ZsqH8RDZmiSqIru t31MVqoA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by 
bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1q57q9-007R7M-31; Fri, 02 Jun 2023 16:41:25 +0000 Date: Fri, 2 Jun 2023 09:09:06 -0700 In-Reply-To: <20230602160914.4011728-1-vipinsh@google.com> Mime-Version: 1.0 References: <20230602160914.4011728-1-vipinsh@google.com> X-Mailer: git-send-email 2.41.0.rc0.172.g3f132b7071-goog Message-ID: <20230602160914.4011728-9-vipinsh@google.com> Subject: [PATCH v2 08/16] KVM: arm64: Pass page table walker flags to stage2_apply_range_*() From: Vipin Sharma To:
maz@kernel.org, oliver.upton@linux.dev, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org, chenhuacai@kernel.org, aleksandar.qemu.devel@gmail.com, tsbogend@alpha.franken.de, anup@brainfault.org, atishp@atishpatra.org, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com, pbonzini@redhat.com, dmatlack@google.com, ricarkol@google.com Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kselftest@vger.kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Vipin Sharma X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230602_170938_483098_9AF4DA83 X-CRM114-Status: GOOD ( 16.60 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Allow stage2_apply_range_*() to accept enum kvm_pgtable_walk_flags{} for stage 2 walkers. Pass 0 as the flag value from all of its caller effectively making it a no-op. Page table walker flags will be used in future commits to enable clear-dirty-log operation under MMU read lock. Current users of stage2_apply_range_*() API runs under assumption of holding MMU write lock. Stage2 page table walkers then run under the same assumption. In future commits, when clear-dirty-log is modified to run under MMU read lock then this flag will be used to pass shared page walk intent. No functional changes intended. Signed-off-by: Vipin Sharma --- arch/arm64/include/asm/kvm_pgtable.h | 12 +++++++++--- arch/arm64/kvm/hyp/nvhe/mem_protect.c | 4 ++-- arch/arm64/kvm/hyp/pgtable.c | 16 ++++++++++------ arch/arm64/kvm/mmu.c | 26 ++++++++++++++++---------- 4 files changed, 37 insertions(+), 21 deletions(-) diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h index d542a671c564..8ef7e8f3f054 100644 --- a/arch/arm64/include/asm/kvm_pgtable.h +++ b/arch/arm64/include/asm/kvm_pgtable.h @@ -560,6 +560,7 @@ int kvm_pgtable_stage2_set_owner(struct kvm_pgtable *pgt, u64 addr, u64 size, * @pgt: Page-table structure initialised by kvm_pgtable_stage2_init*(). * @addr: Intermediate physical address from which to remove the mapping. * @size: Size of the mapping. + * @flags: Page-table walker flags. * * The offset of @addr within a page is ignored and @size is rounded-up to * the next page boundary. @@ -572,7 +573,8 @@ int kvm_pgtable_stage2_set_owner(struct kvm_pgtable *pgt, u64 addr, u64 size, * * Return: 0 on success, negative error code on failure. */ -int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size); +int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size, + enum kvm_pgtable_walk_flags flags); /** * kvm_pgtable_stage2_wrprotect() - Write-protect guest stage-2 address range @@ -580,6 +582,7 @@ int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size); * @pgt: Page-table structure initialised by kvm_pgtable_stage2_init*(). * @addr: Intermediate physical address from which to write-protect, * @size: Size of the range. + * @flags: Page-table walker flags. * * The offset of @addr within a page is ignored and @size is rounded-up to * the next page boundary. 
@@ -590,7 +593,8 @@ int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size); * * Return: 0 on success, negative error code on failure. */ -int kvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size); +int kvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size, + enum kvm_pgtable_walk_flags flags); /** * kvm_pgtable_stage2_mkyoung() - Set the access flag in a page-table entry. @@ -662,13 +666,15 @@ bool kvm_pgtable_stage2_is_young(struct kvm_pgtable *pgt, u64 addr); * @pgt: Page-table structure initialised by kvm_pgtable_stage2_init*(). * @addr: Intermediate physical address from which to flush. * @size: Size of the range. + * @flags: Page-table walker flags. * * The offset of @addr within a page is ignored and @size is rounded-up to * the next page boundary. * * Return: 0 on success, negative error code on failure. */ -int kvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size); +int kvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size, + enum kvm_pgtable_walk_flags flags); /** * kvm_pgtable_stage2_split() - Split a range of huge pages into leaf PTEs pointing diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c index d35e75b13ffe..13f5cf5f87c3 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -333,11 +333,11 @@ static int host_stage2_unmap_dev_all(void) /* Unmap all non-memory regions to recycle the pages */ for (i = 0; i < hyp_memblock_nr; i++, addr = reg->base + reg->size) { reg = &hyp_memory[i]; - ret = kvm_pgtable_stage2_unmap(pgt, addr, reg->base - addr); + ret = kvm_pgtable_stage2_unmap(pgt, addr, reg->base - addr, 0); if (ret) return ret; } - return kvm_pgtable_stage2_unmap(pgt, addr, BIT(pgt->ia_bits) - addr); + return kvm_pgtable_stage2_unmap(pgt, addr, BIT(pgt->ia_bits) - addr, 0); } struct kvm_mem_range { diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c index 364b68013038..a3a0812b2301 100644 --- a/arch/arm64/kvm/hyp/pgtable.c +++ b/arch/arm64/kvm/hyp/pgtable.c @@ -1044,12 +1044,14 @@ static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx, return 0; } -int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size) +int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size, + enum kvm_pgtable_walk_flags flags) { struct kvm_pgtable_walker walker = { .cb = stage2_unmap_walker, .arg = pgt, - .flags = KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST, + .flags = flags | KVM_PGTABLE_WALK_LEAF | + KVM_PGTABLE_WALK_TABLE_POST, }; return kvm_pgtable_walk(pgt, addr, size, &walker); @@ -1128,11 +1130,12 @@ static int stage2_update_leaf_attrs(struct kvm_pgtable *pgt, u64 addr, return 0; } -int kvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size) +int kvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size, + enum kvm_pgtable_walk_flags flags) { return stage2_update_leaf_attrs(pgt, addr, size, 0, KVM_PTE_LEAF_ATTR_LO_S2_S2AP_W, - NULL, NULL, 0); + NULL, NULL, flags); } kvm_pte_t kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr) @@ -1213,11 +1216,12 @@ static int stage2_flush_walker(const struct kvm_pgtable_visit_ctx *ctx, return 0; } -int kvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size) +int kvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size, + enum kvm_pgtable_walk_flags flags) { struct kvm_pgtable_walker walker = { .cb = stage2_flush_walker, - .flags = 
KVM_PGTABLE_WALK_LEAF, + .flags = flags | KVM_PGTABLE_WALK_LEAF, .arg = pgt, }; diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index 0c2c2c0846f1..1030921d89f8 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -55,7 +55,9 @@ static phys_addr_t stage2_range_addr_end(phys_addr_t addr, phys_addr_t end) */ static int stage2_apply_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end, - int (*fn)(struct kvm_pgtable *, u64, u64), + enum kvm_pgtable_walk_flags flags, + int (*fn)(struct kvm_pgtable *, u64, u64, + enum kvm_pgtable_walk_flags), bool resched) { struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu); @@ -68,7 +70,7 @@ static int stage2_apply_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, return -EINVAL; next = stage2_range_addr_end(addr, end); - ret = fn(pgt, addr, next - addr); + ret = fn(pgt, addr, next - addr, flags); if (ret) break; @@ -79,8 +81,8 @@ static int stage2_apply_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, return ret; } -#define stage2_apply_range_resched(mmu, addr, end, fn) \ - stage2_apply_range(mmu, addr, end, fn, true) +#define stage2_apply_range_resched(mmu, addr, end, flags, fn) \ + stage2_apply_range(mmu, addr, end, flags, fn, true) /* * Get the maximum number of page-tables pages needed to split a range @@ -316,7 +318,7 @@ static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 lockdep_assert_held_write(&kvm->mmu_lock); WARN_ON(size & ~PAGE_MASK); - WARN_ON(stage2_apply_range(mmu, start, end, kvm_pgtable_stage2_unmap, + WARN_ON(stage2_apply_range(mmu, start, end, 0, kvm_pgtable_stage2_unmap, may_block)); } @@ -331,7 +333,8 @@ static void stage2_flush_memslot(struct kvm *kvm, phys_addr_t addr = memslot->base_gfn << PAGE_SHIFT; phys_addr_t end = addr + PAGE_SIZE * memslot->npages; - stage2_apply_range_resched(&kvm->arch.mmu, addr, end, kvm_pgtable_stage2_flush); + stage2_apply_range_resched(&kvm->arch.mmu, addr, end, 0, + kvm_pgtable_stage2_flush); } /** @@ -1041,10 +1044,13 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa, * @mmu: The KVM stage-2 MMU pointer * @addr: Start address of range * @end: End address of range + * @flags: Page-table walker flags. */ -static void stage2_wp_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end) +static void stage2_wp_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end, + enum kvm_pgtable_walk_flags flags) { - stage2_apply_range_resched(mmu, addr, end, kvm_pgtable_stage2_wrprotect); + stage2_apply_range_resched(mmu, addr, end, flags, + kvm_pgtable_stage2_wrprotect); } /** @@ -1073,7 +1079,7 @@ static void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot) end = (memslot->base_gfn + memslot->npages) << PAGE_SHIFT; write_lock(&kvm->mmu_lock); - stage2_wp_range(&kvm->arch.mmu, start, end); + stage2_wp_range(&kvm->arch.mmu, start, end, 0); write_unlock(&kvm->mmu_lock); kvm_flush_remote_tlbs(kvm); } @@ -1128,7 +1134,7 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm, write_lock(&kvm->mmu_lock); lockdep_assert_held_write(&kvm->mmu_lock); - stage2_wp_range(&kvm->arch.mmu, start, end); + stage2_wp_range(&kvm->arch.mmu, start, end, 0); /* * Eager-splitting is done when manual-protect is set. 
We From patchwork Fri Jun 2 16:09:07 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vipin Sharma X-Patchwork-Id: 13265698 Date: Fri, 2 Jun 2023 09:09:07 -0700 In-Reply-To: <20230602160914.4011728-1-vipinsh@google.com> Mime-Version: 1.0 References: <20230602160914.4011728-1-vipinsh@google.com> X-Mailer: git-send-email 2.41.0.rc0.172.g3f132b7071-goog Message-ID: <20230602160914.4011728-10-vipinsh@google.com> Subject: [PATCH v2 09/16] KVM: arm64: Document the page table walker actions based on the callback's return value From: Vipin Sharma To: maz@kernel.org, oliver.upton@linux.dev, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org, chenhuacai@kernel.org, aleksandar.qemu.devel@gmail.com, tsbogend@alpha.franken.de, anup@brainfault.org, atishp@atishpatra.org, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com, pbonzini@redhat.com, dmatlack@google.com, ricarkol@google.com Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kselftest@vger.kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Vipin Sharma Document what the page table walker does when a walker callback function returns a value. The current documentation is incorrect: a negative error of -EAGAIN from a non-shared page table walker does not terminate the walk; the walker continues to the next step. There may be a better place to keep this information; for now, this documentation serves as a reference until a better home is found.
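As a rough illustration of the table above, a minimal standalone sketch of the decision the walker makes per callback return value might look like the following. This is not the kernel's kvm_pgtable_walk_continue(); the helper and the WALK_SHARED flag are made-up stand-ins.

/*
 * Illustrative sketch only: a simplified decision helper mirroring the
 * table documented above. The flag and helper names are stand-ins, not
 * the actual arch/arm64/kvm/hyp/pgtable.c implementation.
 */
#include <errno.h>
#include <stdbool.h>

#define WALK_SHARED (1u << 0)	/* hypothetical "shared walker" flag */

static bool walk_should_continue(unsigned int flags, int cb_ret)
{
	if (cb_ret == 0)
		return true;			/* every walker type continues */

	if (cb_ret == -EAGAIN)
		return !(flags & WALK_SHARED);	/* non-shared walkers keep walking */

	return false;				/* any other error: exit the walk */
}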
Signed-off-by: Vipin Sharma --- arch/arm64/include/asm/kvm_pgtable.h | 15 +++++++++++++-- 1 file changed, 13 insertions(+), 2 deletions(-) diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h index 8ef7e8f3f054..957bc20dab00 100644 --- a/arch/arm64/include/asm/kvm_pgtable.h +++ b/arch/arm64/include/asm/kvm_pgtable.h @@ -711,8 +711,19 @@ int kvm_pgtable_stage2_split(struct kvm_pgtable *pgt, u64 addr, u64 size, * after invoking the walker callback, allowing the walker to descend into * a newly installed table. * - * Returning a negative error code from the walker callback function will - * terminate the walk immediately with the same error code. + * Depending on the return value from the walker callback function, the page + * table walk will continue or exit the walk. This is also dependent on the + * type of the walker, i.e. shared walker (vCPU fault handlers) or non-shared + * walker. + * + * Walker Type | Callback | Walker action + * -------------|------------------|-------------- + * Non-Shared | 0 | Continue + * Non-Shared | -EAGAIN | Continue + * Non-Shared | Any other | Exit + * -------------|------------------|-------------- + * Shared | 0 | Continue + * Shared | Any other | Exit * * Return: 0 on success, negative error code on failure. */ From patchwork Fri Jun 2 16:09:08 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vipin Sharma X-Patchwork-Id: 13265696 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id F26BFC7EE24 for ; Fri, 2 Jun 2023 16:41:00 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=kM9b5D0TGJR7zXaObeBnlfZxmZ5BE+n0HHJP5S6uTqQ=; b=y6Dqm34yHwPy0RVyWQmqsXh91B X9wsLq4qutoUYd0+nxJmSe1iY/4XZKs23nCWv1NhdjazY78X0A6Xg4zUZtsK4Ie4bOsK50vKAMXdo JGlQmCvwWopHoKfhRmP+FO+UYdLE0rfUj5mL3AUc8THSRs2VCPmUrRM92Ovuhqlvu93IpSivaMbyU TfXazrTjZUqo9DlyzgIIK0sKfMJ6oC113zcoVLBC3Ced0QQC2GtYEUwdSGqx7kXVtausD1USOgFNq /qqaYNbro1DIcbapATypgnDMQlm3mc3wfRFSOz1wq4d6HA79cUMUB+61gWQ23f6OSEp0iXUqCJykM EcDxjkYg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1q57ph-007QzK-0w; Fri, 02 Jun 2023 16:40:57 +0000 Received: from desiato.infradead.org ([2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1q57pf-007QyQ-2i for linux-riscv@bombadil.infradead.org; Fri, 02 Jun 2023 16:40:55 +0000 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Type:Cc:To:From:Subject: Message-ID:References:Mime-Version:In-Reply-To:Date:Sender:Reply-To: Content-Transfer-Encoding:Content-ID:Content-Description; bh=i16qvWrTxXmM44zqvAfMO+VQ2ZgLrONxQcP35eFduXw=; b=N39ifwT6cq+ncpU1JeZggVz2FN 
OzMf6ERE7wz5L6HQVMyMOlVnqwCljse8peO8NwJCvvmnqxHYRtajHhAvxujywKmXVb88KRYJbiTpJ /UrM9Fubyo3RXY3ct0x8Mur3eUXPYQzi54WiTo064ax62lQTuzWRobAwicNDcE5+Z+w/mBn6wnsRu 3fArgOl/NiED2IgUlFQRt4qslAw14MSc813R5idXQlgDo5bRer1vCgwWOpbW0x3RoINFINUapW/H+ ytXlKV1KcicePJQ13Mb3ZzlVtWQ2yeaMpcyYjXXQjJHRyV/ycDjc7tT/MevMkq8BuwZ/ss8tFdoDR QWA2aUNg==; Received: from mail-pg1-x54a.google.com ([2607:f8b0:4864:20::54a]) by desiato.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1q57LS-001NAk-0Y for linux-riscv@lists.infradead.org; Fri, 02 Jun 2023 16:09:46 +0000 Received: by mail-pg1-x54a.google.com with SMTP id 41be03b00d2f7-51f7638a56fso2227841a12.3 for ; Fri, 02 Jun 2023 09:09:41 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1685722180; x=1688314180; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=i16qvWrTxXmM44zqvAfMO+VQ2ZgLrONxQcP35eFduXw=; b=7epqj+D7b/Vs8MJCaB/Yaig4pdk6Xsa6+PhHha/KiUX7vkzJuwNVRvh/QKSBOYWOoF 9Kl1CGagQi1DXxlGyYvTod7C6g15Jc/lvseccbcVTnZGlpUMuN85YotpNb24SiVNDe/4 jJWg+BewppQ9iRi+5TbG/EFvPtgidVbx51XdS+ZgnSZfKoodj1WI4tk7ctB/2mpU7ujt 5GR5L1lxyi6Yc/dQP61OizppKV4BxJWb97C2tfKTvu6BWLjF7QcXhyPnKGED9xXxbRV0 rkKjEao0Yem9n3mVyvzo3PMl7x2joHF/KT0tNezMLtd5RxYZ9imnBCUvokvJr8Ssuvgw mKhg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1685722180; x=1688314180; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=i16qvWrTxXmM44zqvAfMO+VQ2ZgLrONxQcP35eFduXw=; b=Cvv811i2xF4+ohJbeYVkFIdTWCI9J9CyveCQtoJrZu+6AhW8ugL3zqJDpgcFJ/ob5C Lw6YNNuoPEUg8Pcw20IljwGNnHaSjVmCHSHQmQSgNLtJQTZJOF06VyWesTM+PPu+RTy1 sf85RVhR4/+MybLtieGinWRV+LIWPl19uXLChYybmg/hIyKrdTTlFGA63oZzgYpxRBVz /WwsCV+Veppkbj2L6V7EaYHbhRq7tv7ni0m4O/gAn37/XDp5LjFTjGdkxlolGdaPv+3V bQ9ZPtKhdD1YAwlR04FW911jMU8wjp6xrb1+3vv7cCIdFix8B0IlSnk+nBdwD6i0StYL R0Nw== X-Gm-Message-State: AC+VfDyHa0PNDetvqOB8vq+rYOhiB0VfyjCnys9waeI8IQQBOjpE2Iu4 Yw/bEKD1pVN0mxHShHul7r0miO5ubayw X-Google-Smtp-Source: ACHHUZ50eFmYtmjd+7phpNHJVAKKYKF0b2Pf24cv0bbiOkOy3oeWBFiQvv39uMwyDgbA4q0xlnPSa5qY2vtt X-Received: from vipin.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:479f]) (user=vipinsh job=sendgmr) by 2002:a17:903:3251:b0:1b0:6a10:1ba1 with SMTP id ji17-20020a170903325100b001b06a101ba1mr117384plb.13.1685722180001; Fri, 02 Jun 2023 09:09:40 -0700 (PDT) Date: Fri, 2 Jun 2023 09:09:08 -0700 In-Reply-To: <20230602160914.4011728-1-vipinsh@google.com> Mime-Version: 1.0 References: <20230602160914.4011728-1-vipinsh@google.com> X-Mailer: git-send-email 2.41.0.rc0.172.g3f132b7071-goog Message-ID: <20230602160914.4011728-11-vipinsh@google.com> Subject: [PATCH v2 10/16] KVM: arm64: Return -ENOENT if PTE is not valid in stage2_attr_walker From: Vipin Sharma To: maz@kernel.org, oliver.upton@linux.dev, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org, chenhuacai@kernel.org, aleksandar.qemu.devel@gmail.com, tsbogend@alpha.franken.de, anup@brainfault.org, atishp@atishpatra.org, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com, pbonzini@redhat.com, dmatlack@google.com, ricarkol@google.com Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kselftest@vger.kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Vipin Sharma X-CRM114-Version: 
20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230602_170942_301179_60607C7B X-CRM114-Status: GOOD ( 20.19 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Return -ENOENT from stage2_attr_walker() for an invalid PTE. Outside of the fault handler path, continue the page table walk if the walker callback returns -ENOENT, else terminate the walk. In the fault handler path, retry guest execution, as is already done for -EAGAIN in user_mem_abort(). stage2_attr_walker() is used from multiple places, such as write protection, MMU notifier callbacks, and relaxing permissions during vCPU faults. This function returns -EAGAIN in two different cases: 1. When the PTE is not valid. 2. When cmpxchg() fails while setting the new SPTE. For non-shared walkers, like write protection and the MMU notifier, both cases are simply ignored by the walker, which moves on to the next SPTE. Case #2 never happens for non-shared walkers as they don't use cmpxchg() for updating SPTEs. For shared walkers, like the vCPU fault handler, both cases result in walk termination. In future commits, the clear-dirty-log walker will write-protect SPTEs under the MMU read lock and use a shared page table walker. This will result in two types of shared page table walkers, the vCPU fault handler and clear-dirty-log, competing with each other and sometimes causing cmpxchg() failures. So, -EAGAIN in the clear-dirty-log walker due to a cmpxchg() failure must be retried, whereas -EAGAIN in clear-dirty-log due to an invalid SPTE must be ignored instead of exiting, as the current shared page table walker logic would do. This is not needed for the vCPU fault handler, which also runs via a shared page table walker and terminates the walk on getting -EAGAIN due to an invalid SPTE. To handle all these scenarios, stage2_attr_walker() must return different error codes for invalid SPTEs and cmpxchg() failures. -ENOENT is chosen for an invalid SPTE because it is not used by any other shared walker. When clear-dirty-log is changed to use a shared page table walker, it will then be possible to differentiate between retrying, continuing, or terminating the walk for the shared fault handler and shared clear-dirty-log.
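A toy, userspace-only illustration of the error-code split described above is sketched below. This is not the kernel's stage2_attr_walker(); the function name and the valid bit are assumptions made purely for the example.

/*
 * Toy illustration, not KVM code: a PTE-update callback that reports an
 * invalid entry (-ENOENT) and a lost update race (-EAGAIN) with distinct
 * error codes, so the caller can skip, retry, or bail out as appropriate.
 */
#include <errno.h>
#include <stdatomic.h>
#include <stdint.h>

#define TOY_PTE_VALID	(1ull << 0)	/* assumed valid bit for this sketch */

static int toy_update_pte_attrs(_Atomic uint64_t *ptep, uint64_t clear_bits)
{
	uint64_t old = atomic_load(ptep);

	if (!(old & TOY_PTE_VALID))
		return -ENOENT;		/* nothing mapped: a non-fault walker skips this */

	/* Another (shared) walker may have changed the PTE since we read it. */
	if (!atomic_compare_exchange_strong(ptep, &old, old & ~clear_bits))
		return -EAGAIN;		/* lost the race: caller decides whether to retry */

	return 0;
}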
Signed-off-by: Vipin Sharma --- arch/arm64/include/asm/kvm_pgtable.h | 1 + arch/arm64/kvm/hyp/pgtable.c | 19 ++++++++++++------- arch/arm64/kvm/mmu.c | 2 +- 3 files changed, 14 insertions(+), 8 deletions(-) diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h index 957bc20dab00..23e7e7851f1d 100644 --- a/arch/arm64/include/asm/kvm_pgtable.h +++ b/arch/arm64/include/asm/kvm_pgtable.h @@ -720,6 +720,7 @@ int kvm_pgtable_stage2_split(struct kvm_pgtable *pgt, u64 addr, u64 size, * -------------|------------------|-------------- * Non-Shared | 0 | Continue * Non-Shared | -EAGAIN | Continue + * Non-Shared | -ENOENT | Continue * Non-Shared | Any other | Exit * -------------|------------------|-------------- * Shared | 0 | Continue diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c index a3a0812b2301..bc8c5c4ac1cf 100644 --- a/arch/arm64/kvm/hyp/pgtable.c +++ b/arch/arm64/kvm/hyp/pgtable.c @@ -186,14 +186,19 @@ static bool kvm_pgtable_walk_continue(const struct kvm_pgtable_walker *walker, /* * Visitor callbacks return EAGAIN when the conditions that led to a * fault are no longer reflected in the page tables due to a race to - * update a PTE. In the context of a fault handler this is interpreted - * as a signal to retry guest execution. + * update a PTE. * - * Ignore the return code altogether for walkers outside a fault handler - * (e.g. write protecting a range of memory) and chug along with the - * page table walk. + * Callbacks can also return ENOENT when PTE which is visited is not + * valid. + * + * In the context of a fault handler interpret these as a signal + * to retry guest execution. + * + * Ignore these return codes altogether for walkers outside a fault + * handler (e.g. write protecting a range of memory) and chug along + * with the page table walk. */ - if (r == -EAGAIN) + if (r == -EAGAIN || r == -ENOENT) return !(walker->flags & KVM_PGTABLE_WALK_HANDLE_FAULT); return !r; @@ -1072,7 +1077,7 @@ static int stage2_attr_walker(const struct kvm_pgtable_visit_ctx *ctx, struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops; if (!kvm_pte_valid(ctx->old)) - return -EAGAIN; + return -ENOENT; data->level = ctx->level; data->pte = pte; diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index 1030921d89f8..356dc4131023 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -1551,7 +1551,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa, read_unlock(&kvm->mmu_lock); kvm_set_pfn_accessed(pfn); kvm_release_pfn_clean(pfn); - return ret != -EAGAIN ? ret : 0; + return (ret != -EAGAIN && ret != -ENOENT) ? ret : 0; } /* Resolve the access fault by making the page young again. 
*/ From patchwork Fri Jun 2 16:09:09 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vipin Sharma X-Patchwork-Id: 13265701 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id D1231C7EE24 for ; Fri, 2 Jun 2023 16:41:37 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=QiwA9h+qrd4KzI+lWNKTe53kwRqt+164NbizJH782H4=; b=h2BBQ5X441kMbQUBVtVG6BsONS ZZCaYCbZGg9GFul3zM2/8hjad5IvmBgUDPQIOjKG5U8aa1qwvHXOXQldD+R0N7JRwYi5HLhQHFcZP /iSHJhRwHNEZ6b3vUPhcGhwpYlmWZ5ApNstSSQsuIjsK8+I0vJryVvnZUHuFBAkmwR8n7o39x7G2b VuqqVyQruML+3SgLnRPPPh9HzFq4LTVQ2iq6FSy9tj8WkZjEdOVsu9+7Cdz/QSOCDuwhtV9P5nZbq YGFtcY57UP6UFbOoVFENvamKqFw5DVAvawKeIPZuPLJrKeQC//gPdh3FD/nWNxP8vNwCfzreV/+uE SZ0sfRHw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1q57qI-007REq-0O; Fri, 02 Jun 2023 16:41:34 +0000 Received: from desiato.infradead.org ([2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1q57qG-007RBb-2X for linux-riscv@bombadil.infradead.org; Fri, 02 Jun 2023 16:41:32 +0000 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Type:Cc:To:From:Subject: Message-ID:References:Mime-Version:In-Reply-To:Date:Sender:Reply-To: Content-Transfer-Encoding:Content-ID:Content-Description; bh=rivHDhk0ioxMpP1XynwGaSZFsNZDBBbqgVAGbR7E/X8=; b=dyecyeCEd59sMLHaoda9jZN1nU FDOkOhfKgqmqbCGQRmrKVyQ6+j07mK3875igB3inDMKxiFgoeyx9l+GWcbnqhesilzyMnFUPw+yNY XEmEhnbWem9LWFE7i1yNZuU6PFzz/AvtGMvcOr1p+2EMEop0+p1Hj38vsFel/tNAVgNduYJCqbKFr 16n7DO61zT37ZxFuuc2LeiG99x0seas+bMvd9POKFQcxMhINtjM2SUV8bz8ulrxjeB2GvMQddM28b PNBJU4OQMxUgjaJulSAKG1mIdlaVQ1U+vHrK3Fg2qJb7t+iN0zZjX6Nes1sBTtxw8sTPHee01V7Hk ZYSnYgkg==; Received: from mail-pl1-x64a.google.com ([2607:f8b0:4864:20::64a]) by desiato.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1q57LT-001NB7-2q for linux-riscv@lists.infradead.org; Fri, 02 Jun 2023 16:09:50 +0000 Received: by mail-pl1-x64a.google.com with SMTP id d9443c01a7336-1afba64045aso22825445ad.0 for ; Fri, 02 Jun 2023 09:09:42 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1685722181; x=1688314181; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=rivHDhk0ioxMpP1XynwGaSZFsNZDBBbqgVAGbR7E/X8=; b=IxpZG4TkYuPd8UtpcDQplKJDDIpAs6lGKWZe95duADLUk5Xly6SRvjrVvCEGB/Fk1a E9X3IJECzkdQ25Bkjejj4IbXek6JqPY58HRA50EOI/YFTzUpFaA/bzGSlAOm+5PEvo+T 6jKSmBS0KSmdPni70wpBlwTPrvyYGZvMtxSC3Fbri9jwwzePk7OY91lpj01vmu9g7PZn a5J+YN0h1Qu/z/w7ubpNpCMDwcQqYWttGGXgoLeySgwUxIZDyVDmv/3Bk/4OJqAcfD0+ gHcpPqAo0vTPvSXpzWrIqDWuTjrALALhGQwaLZCyItTHvmVctEjS/JfpfTvctKcX+1gM QFLQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; 
c=relaxed/relaxed; d=1e100.net; s=20221208; t=1685722181; x=1688314181; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=rivHDhk0ioxMpP1XynwGaSZFsNZDBBbqgVAGbR7E/X8=; b=dgIdJxlikLlQJ25hTajJrg+Jkq2urXvvPZqRYQfgDPtJu0bsdmICq5auja/mXsLasY V9aGLPWVAO1//KacO1/Rnv+UtUAzxj3CMbH1n9ZuRoQ5JG4IRtyFzzN3g3oVXgAV21MH jGD7VNZ3xll14ASKNVE2NjxLcCqLBiMS9X1h6rTO1XfSAO3xilzc4ggCFyRtnErX3BG5 +xH0GNlvn8ksZrRYCBRerZT1b6nop3yglkrYhsibu430hxOan+kzWi53rlyJb8hJLobF 85QwbmAFQSM2+7BTXWf7o5/2B4iNr38c2N67iR5ilCd5gH30QAeMovvTopkG4YJ3em6t e5oQ== X-Gm-Message-State: AC+VfDwu0P9/HgarPLswCWQqBtExbyarCrBNUu8Sy700kGAuomRL0UrJ al0Y4pY2Nzv8F9f3CKEip/MkJry8hRLf X-Google-Smtp-Source: ACHHUZ4crxpGe15BxcOcaGOZO2WjPbibX9jRmjl7rSrwdD8Z6/aya1uE9CJM9AhjU7HHArKs+z3PEQeSOj8s X-Received: from vipin.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:479f]) (user=vipinsh job=sendgmr) by 2002:a17:902:e3c5:b0:1b1:c90e:b7aa with SMTP id r5-20020a170902e3c500b001b1c90eb7aamr56727ple.4.1685722181662; Fri, 02 Jun 2023 09:09:41 -0700 (PDT) Date: Fri, 2 Jun 2023 09:09:09 -0700 In-Reply-To: <20230602160914.4011728-1-vipinsh@google.com> Mime-Version: 1.0 References: <20230602160914.4011728-1-vipinsh@google.com> X-Mailer: git-send-email 2.41.0.rc0.172.g3f132b7071-goog Message-ID: <20230602160914.4011728-12-vipinsh@google.com> Subject: [PATCH v2 11/16] KVM: arm64: Use KVM_PGTABLE_WALK_SHARED flag instead of KVM_PGTABLE_WALK_HANDLE_FAULT From: Vipin Sharma To: maz@kernel.org, oliver.upton@linux.dev, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org, chenhuacai@kernel.org, aleksandar.qemu.devel@gmail.com, tsbogend@alpha.franken.de, anup@brainfault.org, atishp@atishpatra.org, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com, pbonzini@redhat.com, dmatlack@google.com, ricarkol@google.com Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kselftest@vger.kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Vipin Sharma X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230602_170944_233669_F9062542 X-CRM114-Status: GOOD ( 11.41 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Check against shared page table walker flag instead of fault handler flag when determining if walk should continue or not. vCPU page fault handlers uses shared page walker and there are no other shared page walkers in Arm. This will change in future commit when clear-dirty-log will use shared page walker and continue, retry or terminate logic for a walk will change between shared page walkers. Signed-off-by: Vipin Sharma --- arch/arm64/kvm/hyp/pgtable.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c index bc8c5c4ac1cf..7f80e953b502 100644 --- a/arch/arm64/kvm/hyp/pgtable.c +++ b/arch/arm64/kvm/hyp/pgtable.c @@ -191,7 +191,7 @@ static bool kvm_pgtable_walk_continue(const struct kvm_pgtable_walker *walker, * Callbacks can also return ENOENT when PTE which is visited is not * valid. 
* - * In the context of a fault handler interpret these as a signal + * In the context of a shared walker interpret these as a signal * to retry guest execution. * * Ignore these return codes altogether for walkers outside a fault @@ -199,7 +199,7 @@ static bool kvm_pgtable_walk_continue(const struct kvm_pgtable_walker *walker, * with the page table walk. */ if (r == -EAGAIN || r == -ENOENT) - return !(walker->flags & KVM_PGTABLE_WALK_HANDLE_FAULT); + return !(walker->flags & KVM_PGTABLE_WALK_SHARED); return !r; } From patchwork Fri Jun 2 16:09:10 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vipin Sharma X-Patchwork-Id: 13265687 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id AA2E0C7EE2A for ; Fri, 2 Jun 2023 16:34:46 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=dBG4EsZyVRFWYpLyeexKOHuOyiAGW8m9++Bq0XBiask=; b=PNEfLe2enljPCgGqXD4SanPxN9 1CsYCSwxG28KoxPz4KagoCGFgQpatmstJiwlWWaAWVFn4Y6eONBceVIv/w8zj1CWoXkodqLYUwudR S22YA9S2OBBzFda1C2/xMjf113QpNYwCwUUc80geyyw2/7OB/lWSd6SiFtyvui3wXvvni8bl41RsU aTW/vFMvf5LNKGAScQ2vhBK/gKiKKpM9RKO7CU0QSVDwGNF2ngWrYjy1FHpXV96NCLOE6SUSpNgub 1c37ZHBetYvAHmp2yj0cMrZ9HM7FsCJcfGhydchsK4WNigXgmz4gAMEsg+KCjYE5IMP0mMHueNAgC 2PgOEE0Q==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1q57jf-007PkQ-05; Fri, 02 Jun 2023 16:34:43 +0000 Received: from casper.infradead.org ([2001:8b0:10b:1236::1]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1q57jb-007Pfm-1J for linux-riscv@bombadil.infradead.org; Fri, 02 Jun 2023 16:34:39 +0000 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Type:Cc:To:From:Subject: Message-ID:References:Mime-Version:In-Reply-To:Date:Sender:Reply-To: Content-Transfer-Encoding:Content-ID:Content-Description; bh=dYi36v/YAwaQRjgMVH+8EY8FA7aBBtQ+BKuLP6eDkvQ=; b=enY0ql5YpqvK2gFI1ngsZG5v6y aCseBn6HJQOBmXFLFFRyyY9uh+kPcgpEDENwcUz80/lbHCpSIhwYgEH+wSUIpf17hEWgYrhaIdp3o sAvCMNV6oM7tFl+QXstlPrP+JBfQ4DXpvBTn76OP4vU6CGKlzWaqUb+0xDzC96cuU9+uGvqboqn4Y HNwUoOV/qAln1tEVXv4FM+uIwPykZ3IaAgQtUo8BuVRD8V3kw+MEN1Dd2EWwoLMwHAJGWDkS4U7PD AFyXrQ4d5fBJLNHomnEStrGR/sK9vjdHu+zBzOf1SQoBeXnKyluYPWTW9hRjcBQ1bQAEDNmvd069M 9+lRiFkA==; Received: from mail-pj1-x1049.google.com ([2607:f8b0:4864:20::1049]) by casper.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1q57LX-009LkM-RZ for linux-riscv@lists.infradead.org; Fri, 02 Jun 2023 16:09:51 +0000 Received: by mail-pj1-x1049.google.com with SMTP id 98e67ed59e1d1-256647a6fadso845395a91.3 for ; Fri, 02 Jun 2023 09:09:46 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1685722184; x=1688314184; 
h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=dYi36v/YAwaQRjgMVH+8EY8FA7aBBtQ+BKuLP6eDkvQ=; b=Dw12ocPSe9vJUzYvSsSf9AU3CWoiwTk/XWOJWSQ28tLpoS2cFvmMFQbSf/qpjpxy7v oO7YPmr7slJeCTpuInsqqCsWP0mQqS3ua3oN9mzqlTzskeKCsUc+Eng3akxDWWH0DIw4 jhV0Z/aBx0lDztLRsncoYBKeU0QZUKmq2gyk1zRIGBfBu0xTC7gF0pRwGDV7061++Ztg o5Q/lPpRn/0BOjk3RHl0WBfcrcdb3R1NScbQhD6quOJddTL53tPtBy4zB10NE0mDM/7t Ap9iqqT0iz8St+1DIUrYSZSYDvTKdSKGgplDSGDJzXhKejoEUVmNl+mnTfSIjic7IX8O rhDA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1685722184; x=1688314184; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=dYi36v/YAwaQRjgMVH+8EY8FA7aBBtQ+BKuLP6eDkvQ=; b=MHlMaL/mynOq2v933t+g7phzLtTY6sf0zIeoaFNfx/VYdHOYGdnGUyqZfPKBmdobDv D1Yhhf+bBUc3L6Xm9Qj0JtFREPEs898FhKiPz91xWSnr+6Wjmhfiw2pfreu4a2AOrQuw 7+ABzEUhJv0G34MNQbJ+P4IXqyA/ueOQLxD3eC0rI5b/Ysk6QuiCiAp7MDtWhzriCYXz K26N/2zaZuXfbS5iEr43kGmlLSQeJkrWoxHAcqmKvjnL5KfpC703Zz2LGjH+b+Qky81u qvGKc/3dy9k2+BiYJhhd8YO5RUcWR+b7oqNsiwdm2hWpfSpfAk3ReJyhRHM+MDUpFbYi +tuw== X-Gm-Message-State: AC+VfDwprTpOHwT8GbrMAgOFoN1V6mGCHgOrnDTfRBumU/2oqqRAYO6A oERi192RHdWdkfnRhBANWCuRUf+MwU15 X-Google-Smtp-Source: ACHHUZ4+c4f5xcGK/eLSU3xHgu8tOZbN6Cv39+hh+7NNc/cJ0dF1+UQE1gqaCbeHO9GT4u+sOrr9aoJh/Nwo X-Received: from vipin.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:479f]) (user=vipinsh job=sendgmr) by 2002:a17:90a:c08e:b0:252:733d:15dc with SMTP id o14-20020a17090ac08e00b00252733d15dcmr97816pjs.2.1685722183850; Fri, 02 Jun 2023 09:09:43 -0700 (PDT) Date: Fri, 2 Jun 2023 09:09:10 -0700 In-Reply-To: <20230602160914.4011728-1-vipinsh@google.com> Mime-Version: 1.0 References: <20230602160914.4011728-1-vipinsh@google.com> X-Mailer: git-send-email 2.41.0.rc0.172.g3f132b7071-goog Message-ID: <20230602160914.4011728-13-vipinsh@google.com> Subject: [PATCH v2 12/16] KVM: arm64: Retry shared page table walks outside of fault handler From: Vipin Sharma To: maz@kernel.org, oliver.upton@linux.dev, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org, chenhuacai@kernel.org, aleksandar.qemu.devel@gmail.com, tsbogend@alpha.franken.de, anup@brainfault.org, atishp@atishpatra.org, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com, pbonzini@redhat.com, dmatlack@google.com, ricarkol@google.com Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kselftest@vger.kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Vipin Sharma X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230602_170947_927710_5DEC967D X-CRM114-Status: GOOD ( 16.19 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org For a shared page walker which is not fault handler, retry the walk if walker callback function returns -EAGAIN, or continue to the next SPTE if callback function return -ENOENT. Update the kvm_pgtable_walk documentation. For fault handler logic remains same, i.e. exit the walk and resume the guest when getting -EAGAIN and -ENOENT errors from walker callback function. 
Currently, there is no page walker which is shared and not a fault handler, but this will change in future patches when clear-dirty-log walker will use MMU read lock and run via shared walker. Signed-off-by: Vipin Sharma --- arch/arm64/include/asm/kvm_pgtable.h | 23 ++++++++++------- arch/arm64/kvm/hyp/pgtable.c | 38 +++++++++++++++++++++++----- 2 files changed, 46 insertions(+), 15 deletions(-) diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h index 23e7e7851f1d..145be12a5fc2 100644 --- a/arch/arm64/include/asm/kvm_pgtable.h +++ b/arch/arm64/include/asm/kvm_pgtable.h @@ -716,15 +716,20 @@ int kvm_pgtable_stage2_split(struct kvm_pgtable *pgt, u64 addr, u64 size, * type of the walker, i.e. shared walker (vCPU fault handlers) or non-shared * walker. * - * Walker Type | Callback | Walker action - * -------------|------------------|-------------- - * Non-Shared | 0 | Continue - * Non-Shared | -EAGAIN | Continue - * Non-Shared | -ENOENT | Continue - * Non-Shared | Any other | Exit - * -------------|------------------|-------------- - * Shared | 0 | Continue - * Shared | Any other | Exit + * Walker Type | Callback | Walker action + * -----------------------|------------------|-------------- + * Non-Shared | 0 | Continue + * Non-Shared | -EAGAIN | Continue + * Non-Shared | -ENOENT | Continue + * Non-Shared | Any other | Exit + * -----------------------|------------------|-------------- + * Shared | 0 | Continue + * Shared | -EAGAIN | Retry + * Shared | -ENOENT | Continue + * Shared | Any other | Exit + * -----------------------|------------------|-------------- + * Shared (Fault Handler) | 0 | Continue + * Shared (Fault Handler) | Any other | Exit * * Return: 0 on success, negative error code on failure. */ diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c index 7f80e953b502..23cda3de2dd4 100644 --- a/arch/arm64/kvm/hyp/pgtable.c +++ b/arch/arm64/kvm/hyp/pgtable.c @@ -191,15 +191,21 @@ static bool kvm_pgtable_walk_continue(const struct kvm_pgtable_walker *walker, * Callbacks can also return ENOENT when PTE which is visited is not * valid. * - * In the context of a shared walker interpret these as a signal + * In the context of a fault handler interpret these as a signal * to retry guest execution. * - * Ignore these return codes altogether for walkers outside a fault - * handler (e.g. write protecting a range of memory) and chug along + * In the context of a shared walker which is not fault handler + * interpret: + * 1. EAGAIN - A signal to retry walk again. + * 2. ENOENT - A signal to ignore and move on to next SPTE. + * + * Ignore these return codes altogether for other walkers and chug along * with the page table walk. */ - if (r == -EAGAIN || r == -ENOENT) + if (r == -EAGAIN) return !(walker->flags & KVM_PGTABLE_WALK_SHARED); + if (r == -ENOENT) + return !(walker->flags & KVM_PGTABLE_WALK_HANDLE_FAULT); return !r; } @@ -260,24 +266,44 @@ static inline int __kvm_pgtable_visit(struct kvm_pgtable_walk_data *data, return ret; } +static bool kvm_pgtable_walk_retry(const struct kvm_pgtable_walker *walker, + int r) +{ + /* + * All shared page table walks where visitor callbacks return -EAGAIN + * should be retried with the exception of fault handler. In case of + * fault handler retry is achieved by resuming the guest. 
+	 */
+	if (r == -EAGAIN)
+		return (walker->flags & KVM_PGTABLE_WALK_SHARED) &&
+		       !(walker->flags & KVM_PGTABLE_WALK_HANDLE_FAULT);
+
+	return !r;
+}
+
 static int __kvm_pgtable_walk(struct kvm_pgtable_walk_data *data,
 			      struct kvm_pgtable_mm_ops *mm_ops,
 			      kvm_pteref_t pgtable, u32 level)
 {
 	u32 idx;
 	int ret = 0;
+	kvm_pteref_t pteref;
 
 	if (WARN_ON_ONCE(level >= KVM_PGTABLE_MAX_LEVELS))
 		return -EINVAL;
 
 	for (idx = kvm_pgtable_idx(data, level); idx < PTRS_PER_PTE; ++idx) {
-		kvm_pteref_t pteref = &pgtable[idx];
+retry:
+		pteref = &pgtable[idx];
 
 		if (data->addr >= data->end)
 			break;
 
 		ret = __kvm_pgtable_visit(data, mm_ops, pteref, level);
-		if (ret)
+		if (ret) {
+			if (kvm_pgtable_walk_retry(data->walker, ret))
+				goto retry;
 			break;
+		}
 	}
 
 	return ret;
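The walker-action table documented above reduces to a single decision function. The following is a minimal, self-contained user-space sketch of that policy; the flag values, enum and function names are invented stand-ins for the real KVM_PGTABLE_WALK_SHARED and KVM_PGTABLE_WALK_HANDLE_FAULT flags, so treat it as an illustration of the intended behaviour under those assumptions, not as kernel code.

/*
 * Sketch only: a user-space model of the walker-action table documented in
 * kvm_pgtable_walk(). The flag values, enum and function below are invented
 * stand-ins, not the kernel's definitions.
 */
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

#define WALK_SHARED       (1U << 0) /* stand-in for KVM_PGTABLE_WALK_SHARED */
#define WALK_HANDLE_FAULT (1U << 1) /* stand-in for KVM_PGTABLE_WALK_HANDLE_FAULT */

enum action { CONTINUE, RETRY, EXIT };

/* Decide what the walker does with a visitor callback return value r. */
static enum action walker_action(unsigned int flags, int r)
{
	bool shared = flags & WALK_SHARED;
	bool fault = flags & WALK_HANDLE_FAULT;

	if (r == 0)
		return CONTINUE;

	if (r == -EAGAIN) {
		if (!shared)
			return CONTINUE;	/* non-shared: ignore and move on */
		if (fault)
			return EXIT;		/* fault handler: exit, guest retries */
		return RETRY;			/* other shared walkers: retry the walk */
	}

	if (r == -ENOENT)
		return fault ? EXIT : CONTINUE;	/* skip invalid SPTEs unless faulting */

	return EXIT;				/* any other error aborts the walk */
}

int main(void)
{
	/* Spot-check the rows of the documentation table. */
	assert(walker_action(0, -EAGAIN) == CONTINUE);
	assert(walker_action(0, -ENOENT) == CONTINUE);
	assert(walker_action(WALK_SHARED, -EAGAIN) == RETRY);
	assert(walker_action(WALK_SHARED, -ENOENT) == CONTINUE);
	assert(walker_action(WALK_SHARED | WALK_HANDLE_FAULT, -EAGAIN) == EXIT);
	assert(walker_action(WALK_SHARED, -EINVAL) == EXIT);
	printf("walker-action table behaves as documented\n");
	return 0;
}

The asserts encode the table rows; building and running this with any C compiler is a quick way to sanity-check a reading of the table.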
From patchwork Fri Jun 2 16:09:11 2023
X-Patchwork-Submitter: Vipin Sharma
X-Patchwork-Id: 13265686
Date: Fri, 2 Jun 2023 09:09:11 -0700
In-Reply-To: <20230602160914.4011728-1-vipinsh@google.com>
Mime-Version: 1.0
References: <20230602160914.4011728-1-vipinsh@google.com>
X-Mailer: git-send-email 2.41.0.rc0.172.g3f132b7071-goog
Message-ID: <20230602160914.4011728-14-vipinsh@google.com>
Subject: [PATCH v2 13/16] KVM: arm64: Run clear-dirty-log under MMU read lock
From: Vipin Sharma
To: maz@kernel.org, oliver.upton@linux.dev, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org, chenhuacai@kernel.org, aleksandar.qemu.devel@gmail.com, tsbogend@alpha.franken.de, anup@brainfault.org, atishp@atishpatra.org, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com, pbonzini@redhat.com, dmatlack@google.com, ricarkol@google.com
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kselftest@vger.kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Vipin Sharma

Take the MMU read lock for clearing dirty logs and use a shared page
table walker. Dirty logs are currently cleared under the MMU write lock.
This means vCPU page faults, which take the MMU read lock, are blocked
while dirty logs are being cleared. This degrades the guest and is
especially noticeable on VMs with many vCPUs. Clearing dirty logs under
the MMU read lock instead allows vCPUs to execute in parallel and
reduces the impact on vCPU performance.

Signed-off-by: Vipin Sharma
---
 arch/arm64/kvm/mmu.c | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 356dc4131023..7c966f6f1a41 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -74,8 +74,12 @@ static int stage2_apply_range(struct kvm_s2_mmu *mmu, phys_addr_t addr,
 		if (ret)
 			break;
 
-		if (resched && next != end)
-			cond_resched_rwlock_write(&kvm->mmu_lock);
+		if (resched && next != end) {
+			if (flags & KVM_PGTABLE_WALK_SHARED)
+				cond_resched_rwlock_read(&kvm->mmu_lock);
+			else
+				cond_resched_rwlock_write(&kvm->mmu_lock);
+		}
 	} while (addr = next, addr != end);
 
 	return ret;
@@ -1131,11 +1135,11 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 	phys_addr_t start = (base_gfn + __ffs(mask)) << PAGE_SHIFT;
 	phys_addr_t end = (base_gfn + __fls(mask) + 1) << PAGE_SHIFT;
 
-	write_lock(&kvm->mmu_lock);
-	lockdep_assert_held_write(&kvm->mmu_lock);
-
-	stage2_wp_range(&kvm->arch.mmu, start, end, 0);
+	read_lock(&kvm->mmu_lock);
+	stage2_wp_range(&kvm->arch.mmu, start, end, KVM_PGTABLE_WALK_SHARED);
+	read_unlock(&kvm->mmu_lock);
+	write_lock(&kvm->mmu_lock);
 
 	/*
 	 * Eager-splitting is done when manual-protect is set. We
 	 * also check for initially-all-set because we can avoid
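The locking claim in the commit message can be modelled outside the kernel. The sketch below uses a pthread rwlock as a stand-in for kvm->mmu_lock, with invented thread functions, to show why readers (simulated vCPU faults) are no longer serialized behind the clear-dirty-log path once that path also takes the read side; it is an illustration under those assumptions, not kernel code.

/*
 * Sketch only: a pthread rwlock stands in for kvm->mmu_lock, and the thread
 * functions are invented. It models why moving clear-dirty-log to the read
 * side lets simulated vCPU faults (also readers) proceed concurrently.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_rwlock_t mmu_lock = PTHREAD_RWLOCK_INITIALIZER;

static void *vcpu_fault(void *arg)
{
	long id = (long)arg;

	pthread_rwlock_rdlock(&mmu_lock);	/* vCPU page faults take the read lock */
	printf("vCPU %ld: handling stage-2 fault\n", id);
	usleep(1000);
	pthread_rwlock_unlock(&mmu_lock);
	return NULL;
}

static void *clear_dirty_log(void *arg)
{
	(void)arg;
	/* Previously a write lock: every faulting vCPU would block behind this. */
	pthread_rwlock_rdlock(&mmu_lock);
	printf("clear-dirty-log: write-protecting the range under the read lock\n");
	usleep(5000);				/* pretend to walk the page table */
	pthread_rwlock_unlock(&mmu_lock);
	return NULL;
}

int main(void)
{
	pthread_t vcpus[4], logger;
	long i;

	pthread_create(&logger, NULL, clear_dirty_log, NULL);
	for (i = 0; i < 4; i++)
		pthread_create(&vcpus[i], NULL, vcpu_fault, (void *)i);

	for (i = 0; i < 4; i++)
		pthread_join(vcpus[i], NULL);
	pthread_join(logger, NULL);
	return 0;
}

Built with -lpthread, the reader threads are admitted while clear_dirty_log() holds the lock; switching its rdlock to a wrlock makes every fault wait, which is the behaviour this patch removes.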
From patchwork Fri Jun 2 16:09:12 2023
X-Patchwork-Submitter: Vipin Sharma
X-Patchwork-Id: 13265720
Date: Fri, 2 Jun 2023 09:09:12 -0700
In-Reply-To: <20230602160914.4011728-1-vipinsh@google.com>
Mime-Version: 1.0
References: <20230602160914.4011728-1-vipinsh@google.com>
X-Mailer: git-send-email 2.41.0.rc0.172.g3f132b7071-goog
Message-ID: <20230602160914.4011728-15-vipinsh@google.com>
Subject: [PATCH v2 14/16] KVM: arm64: Pass page walker flags from callers of stage 2 split walker
From: Vipin Sharma
To: maz@kernel.org, oliver.upton@linux.dev, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org, chenhuacai@kernel.org, aleksandar.qemu.devel@gmail.com, tsbogend@alpha.franken.de, anup@brainfault.org, atishp@atishpatra.org, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com, pbonzini@redhat.com, dmatlack@google.com, ricarkol@google.com
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kselftest@vger.kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Vipin Sharma

Pass enum kvm_pgtable_walk_flags{} to the kvm_pgtable_stage2_split()
walker from its caller. This allows users of the split walker to specify
whether they want to run the split logic via a shared or a non-shared
walker.
Signed-off-by: Vipin Sharma
---
 arch/arm64/include/asm/kvm_pgtable.h | 4 +++-
 arch/arm64/kvm/hyp/pgtable.c         | 5 +++--
 arch/arm64/kvm/mmu.c                 | 2 +-
 3 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 145be12a5fc2..fbf5c6c509fb 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -684,6 +684,7 @@ int kvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size,
  * @size:	Size of the range.
  * @mc:		Cache of pre-allocated and zeroed memory from which to allocate
  *		page-table pages.
+ * @flags:	Page walker flags
  *
  * The function tries to split any level 1 or 2 entry that overlaps
  * with the input range (given by @addr and @size).
@@ -693,7 +694,8 @@ int kvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size,
  * blocks in the input range as allowed by @mc_capacity.
  */
 int kvm_pgtable_stage2_split(struct kvm_pgtable *pgt, u64 addr, u64 size,
-			     struct kvm_mmu_memory_cache *mc);
+			     struct kvm_mmu_memory_cache *mc,
+			     enum kvm_pgtable_walk_flags flags);
 
 /**
  * kvm_pgtable_walk() - Walk a page-table.
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 23cda3de2dd4..7e84be13d76d 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -1408,11 +1408,12 @@ static int stage2_split_walker(const struct kvm_pgtable_visit_ctx *ctx,
 }
 
 int kvm_pgtable_stage2_split(struct kvm_pgtable *pgt, u64 addr, u64 size,
-			     struct kvm_mmu_memory_cache *mc)
+			     struct kvm_mmu_memory_cache *mc,
+			     enum kvm_pgtable_walk_flags flags)
 {
 	struct kvm_pgtable_walker walker = {
 		.cb	= stage2_split_walker,
-		.flags	= KVM_PGTABLE_WALK_LEAF,
+		.flags	= flags | KVM_PGTABLE_WALK_LEAF,
 		.arg	= mc,
 	};
 
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 7c966f6f1a41..34d2bd03cf5f 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -153,7 +153,7 @@ static int kvm_mmu_split_huge_pages(struct kvm *kvm, phys_addr_t addr,
 			return -EINVAL;
 
 		next = __stage2_range_addr_end(addr, end, chunk_size);
-		ret = kvm_pgtable_stage2_split(pgt, addr, next - addr, cache);
+		ret = kvm_pgtable_stage2_split(pgt, addr, next - addr, cache, 0);
 		if (ret)
 			break;
 	} while (addr = next, addr != end);
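The ".flags = flags | KVM_PGTABLE_WALK_LEAF" line above is the whole point of the plumbing: the caller only contributes the sharing mode, while the walker keeps its mandatory leaf-visitation flag. A minimal user-space sketch of that composition follows; the enum values and helper are invented stand-ins, not the kernel's definitions.

/*
 * Sketch only: models the ".flags = flags | KVM_PGTABLE_WALK_LEAF" pattern in
 * kvm_pgtable_stage2_split() above. The enum values and helper are invented
 * stand-ins, not the kernel's definitions.
 */
#include <assert.h>
#include <stdio.h>

enum walk_flags {
	WALK_LEAF   = 1U << 0,	/* stand-in for KVM_PGTABLE_WALK_LEAF */
	WALK_SHARED = 1U << 1,	/* stand-in for KVM_PGTABLE_WALK_SHARED */
};

struct walker {
	unsigned int flags;
};

/*
 * The split walker always visits leaf entries; the caller only adds the
 * sharing mode, OR-ed on top of the mandatory flag.
 */
static struct walker make_split_walker(unsigned int caller_flags)
{
	struct walker w = { .flags = caller_flags | WALK_LEAF };

	return w;
}

int main(void)
{
	struct walker exclusive = make_split_walker(0);
	struct walker shared = make_split_walker(WALK_SHARED);

	assert(exclusive.flags == WALK_LEAF);
	assert(shared.flags == (WALK_LEAF | WALK_SHARED));
	printf("exclusive walker flags %#x, shared walker flags %#x\n",
	       exclusive.flags, shared.flags);
	return 0;
}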
From patchwork Fri Jun 2 16:09:13 2023
X-Patchwork-Submitter: Vipin Sharma
X-Patchwork-Id: 13265684
Date: Fri, 2 Jun 2023 09:09:13 -0700
In-Reply-To: <20230602160914.4011728-1-vipinsh@google.com>
Mime-Version: 1.0
References: <20230602160914.4011728-1-vipinsh@google.com>
X-Mailer: git-send-email 2.41.0.rc0.172.g3f132b7071-goog
Message-ID: <20230602160914.4011728-16-vipinsh@google.com>
Subject: [PATCH v2 15/16] KVM: arm64: Provide option to pass page walker flag for huge page splits
From: Vipin Sharma
To: maz@kernel.org, oliver.upton@linux.dev, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org, chenhuacai@kernel.org, aleksandar.qemu.devel@gmail.com, tsbogend@alpha.franken.de, anup@brainfault.org, atishp@atishpatra.org, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com, pbonzini@redhat.com, dmatlack@google.com, ricarkol@google.com
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kselftest@vger.kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Vipin Sharma

Pass enum kvm_pgtable_walk_flags{} to kvm_mmu_split_huge_pages(). Use 0
as the flag value to make it a no-op. In a future commit
kvm_mmu_split_huge_pages() will be used under both the MMU read lock and
the MMU write lock. The flag allows callers to pass their intent to use
a shared or a non-shared page walker to split the huge pages.

Signed-off-by: Vipin Sharma
---
 arch/arm64/kvm/mmu.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 34d2bd03cf5f..6dd964e3682c 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -118,7 +118,8 @@ static bool need_split_memcache_topup_or_resched(struct kvm *kvm)
 }
 
 static int kvm_mmu_split_huge_pages(struct kvm *kvm, phys_addr_t addr,
-				    phys_addr_t end)
+				    phys_addr_t end,
+				    enum kvm_pgtable_walk_flags flags)
 {
 	struct kvm_mmu_memory_cache *cache;
 	struct kvm_pgtable *pgt;
@@ -153,7 +154,8 @@ static int kvm_mmu_split_huge_pages(struct kvm *kvm, phys_addr_t addr,
 			return -EINVAL;
 
 		next = __stage2_range_addr_end(addr, end, chunk_size);
-		ret = kvm_pgtable_stage2_split(pgt, addr, next - addr, cache, 0);
+		ret = kvm_pgtable_stage2_split(pgt, addr, next - addr, cache,
+					       flags);
 		if (ret)
 			break;
 	} while (addr = next, addr != end);
@@ -1112,7 +1114,7 @@ static void kvm_mmu_split_memory_region(struct kvm *kvm, int slot)
 	end = (memslot->base_gfn + memslot->npages) << PAGE_SHIFT;
 
 	write_lock(&kvm->mmu_lock);
-	kvm_mmu_split_huge_pages(kvm, start, end);
+	kvm_mmu_split_huge_pages(kvm, start, end, 0);
 	write_unlock(&kvm->mmu_lock);
 }
 
@@ -1149,7 +1151,7 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 	 * again.
 	 */
 	if (kvm_dirty_log_manual_protect_and_init_set(kvm))
-		kvm_mmu_split_huge_pages(kvm, start, end);
+		kvm_mmu_split_huge_pages(kvm, start, end, 0);
 	write_unlock(&kvm->mmu_lock);
 }
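Taken together with the previous patch, this change only threads the caller's intent down the call chain, from kvm_arch_mmu_enable_log_dirty_pt_masked() through kvm_mmu_split_huge_pages() to kvm_pgtable_stage2_split(), so a single split path can serve both lock modes. The user-space sketch below models that plumbing with invented names and flag values; it is an illustration under those assumptions, not kernel code.

/*
 * Sketch only: user-space model of threading a walk-flags argument from the
 * top-level caller down through the split path, so one code path serves both
 * lock modes. All names are invented stand-ins for the kernel functions
 * mentioned above.
 */
#include <stdio.h>

enum walk_flags { WALK_NONE = 0, WALK_SHARED = 1 << 0 };

/* Models kvm_pgtable_stage2_split(): acts on the mode chosen by the caller. */
static void stage2_split(unsigned long addr, unsigned long size,
			 enum walk_flags flags)
{
	printf("split [%#lx, %#lx) with a %s walker\n", addr, addr + size,
	       (flags & WALK_SHARED) ? "shared" : "non-shared");
}

/* Models kvm_mmu_split_huge_pages(): no decision here, intent passes through. */
static void split_huge_pages(unsigned long addr, unsigned long size,
			     enum walk_flags flags)
{
	stage2_split(addr, size, flags);
}

int main(void)
{
	/* Memslot path: still under the MMU write lock, so a non-shared walk. */
	split_huge_pages(0x40000000UL, 0x200000UL, WALK_NONE);

	/* Clear-dirty-log path (next patch): MMU read lock, so a shared walk. */
	split_huge_pages(0x40200000UL, 0x200000UL, WALK_SHARED);
	return 0;
}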
From patchwork Fri Jun 2 16:09:14 2023
X-Patchwork-Submitter: Vipin Sharma
X-Patchwork-Id: 13265685
Date: Fri, 2 Jun 2023 09:09:14 -0700
In-Reply-To: <20230602160914.4011728-1-vipinsh@google.com>
Mime-Version: 1.0
References: <20230602160914.4011728-1-vipinsh@google.com>
X-Mailer: git-send-email 2.41.0.rc0.172.g3f132b7071-goog
Message-ID: <20230602160914.4011728-17-vipinsh@google.com>
Subject: [PATCH v2 16/16] KVM: arm64: Split huge pages during clear-dirty-log under MMU read lock
From: Vipin Sharma
To: maz@kernel.org, oliver.upton@linux.dev, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org, chenhuacai@kernel.org, aleksandar.qemu.devel@gmail.com, tsbogend@alpha.franken.de, anup@brainfault.org, atishp@atishpatra.org, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com, pbonzini@redhat.com, dmatlack@google.com, ricarkol@google.com
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kselftest@vger.kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Vipin Sharma

Split huge pages under the MMU read lock instead of the write lock when
clearing the dirty log. Running the huge page split under the read lock
unblocks vCPU execution and allows the whole clear-dirty-log operation
to run in parallel with vCPUs.

Note that splitting huge pages involves two walkers. The first walker
calls the stage2_split_walker() callback on each huge page. This
callback in turn runs another walker which creates an unlinked page
table. This commit makes the first walker a shared page walker, which
means -EAGAIN will be retried. Before this patch, -EAGAIN would have
been ignored and the walker would have moved on to the next huge page.
In practice this would not happen, as the first walker held the MMU
write lock.
The inner walker is unchanged, as it works on an unlinked page table
which no other thread can access.

Correctness was tested via dirty_log_test, and the performance
improvement was measured via dirty_log_perf_test.

Set up:
-------
Host: ARM Ampere Altra host (64 CPUs, 256 GB memory and single NUMA node)

Test VM: 48 vCPU, 192 GB total memory.

Ran dirty_log_perf_test for 400 iterations.

./dirty_log_perf_test -k 192G -v 48 -b 4G -m 2 -i 4000 -s anonymous_hugetlb_2mb -j

Observation:
------------
+==================+=============================+===================+
| Clear Chunk size | Clear dirty log time change | vCPUs improvement |
+==================+=============================+===================+
| 192GB            | 56%                         | 152%              |
+------------------+-----------------------------+-------------------+
| 1GB              | -81%                        | 72%               |
+------------------+-----------------------------+-------------------+

When larger chunks are used, the clear-dirty-log time increases because
of the large number of cmpxchg() operations, but vCPUs are able to
execute in parallel, which gives better guest performance. When the
chunk size is small, the read lock clears dirty logs very quickly, as it
is not waiting for the MMU write lock, and vCPUs are still able to run
in parallel.

Signed-off-by: Vipin Sharma
---
 arch/arm64/kvm/mmu.c | 21 ++++++++++++++-------
 1 file changed, 14 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 6dd964e3682c..aa278f5d27a2 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -126,7 +126,10 @@ static int kvm_mmu_split_huge_pages(struct kvm *kvm, phys_addr_t addr,
 	int ret, cache_capacity;
 	u64 next, chunk_size;
 
-	lockdep_assert_held_write(&kvm->mmu_lock);
+	if (flags & KVM_PGTABLE_WALK_SHARED)
+		lockdep_assert_held_read(&kvm->mmu_lock);
+	else
+		lockdep_assert_held_write(&kvm->mmu_lock);
 
 	chunk_size = kvm->arch.mmu.split_page_chunk_size;
 	cache_capacity = kvm_mmu_split_nr_page_tables(chunk_size);
@@ -138,13 +141,19 @@ static int kvm_mmu_split_huge_pages(struct kvm *kvm, phys_addr_t addr,
 
 	do {
 		if (need_split_memcache_topup_or_resched(kvm)) {
-			write_unlock(&kvm->mmu_lock);
+			if (flags & KVM_PGTABLE_WALK_SHARED)
+				read_unlock(&kvm->mmu_lock);
+			else
+				write_unlock(&kvm->mmu_lock);
 			cond_resched();
 			/* Eager page splitting is best-effort. */
 			ret = __kvm_mmu_topup_memory_cache(cache,
							   cache_capacity,
							   cache_capacity);
-			write_lock(&kvm->mmu_lock);
+			if (flags & KVM_PGTABLE_WALK_SHARED)
+				read_lock(&kvm->mmu_lock);
+			else
+				write_lock(&kvm->mmu_lock);
 			if (ret)
 				break;
 		}
@@ -1139,9 +1148,7 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 
 	read_lock(&kvm->mmu_lock);
 	stage2_wp_range(&kvm->arch.mmu, start, end, KVM_PGTABLE_WALK_SHARED);
-	read_unlock(&kvm->mmu_lock);
-	write_lock(&kvm->mmu_lock);
 
 	/*
 	 * Eager-splitting is done when manual-protect is set. We
 	 * also check for initially-all-set because we can avoid
@@ -1151,8 +1158,8 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 	 * again.
 	 */
 	if (kvm_dirty_log_manual_protect_and_init_set(kvm))
-		kvm_mmu_split_huge_pages(kvm, start, end, 0);
-	write_unlock(&kvm->mmu_lock);
+		kvm_mmu_split_huge_pages(kvm, start, end, KVM_PGTABLE_WALK_SHARED);
+	read_unlock(&kvm->mmu_lock);
 }
 
 static void kvm_send_hwpoison_signal(unsigned long address, short lsb)
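The repeated "if (flags & KVM_PGTABLE_WALK_SHARED)" branches around the lock, unlock and reschedule points suggest a small helper as one alternative shape for the same logic. The sketch below models that idea in user space, with a pthread rwlock standing in for kvm->mmu_lock and invented helper names; it illustrates the trade-off rather than proposing a kernel change.

/*
 * Sketch only: user-space model of kvm_mmu_split_huge_pages()'s lock
 * handling. A pthread rwlock stands in for kvm->mmu_lock and the helper
 * names are invented; this is an illustration of the pattern, not kernel
 * code.
 */
#include <pthread.h>
#include <stdio.h>

enum walk_flags { WALK_NONE = 0, WALK_SHARED = 1 << 0 };

static pthread_rwlock_t mmu_lock = PTHREAD_RWLOCK_INITIALIZER;

static void mmu_lock_acquire(enum walk_flags flags)
{
	if (flags & WALK_SHARED)
		pthread_rwlock_rdlock(&mmu_lock);	/* clear-dirty-log path */
	else
		pthread_rwlock_wrlock(&mmu_lock);	/* memslot / write-lock path */
}

static void mmu_lock_release(enum walk_flags flags)
{
	(void)flags;					/* same unlock either way */
	pthread_rwlock_unlock(&mmu_lock);
}

static void split_huge_pages(enum walk_flags flags)
{
	mmu_lock_acquire(flags);
	printf("splitting under the %s lock\n",
	       (flags & WALK_SHARED) ? "read" : "write");

	/*
	 * Drop and re-take the lock around a reschedule point, keeping the
	 * same flavour, as the patch does around cond_resched().
	 */
	mmu_lock_release(flags);
	mmu_lock_acquire(flags);

	mmu_lock_release(flags);
}

int main(void)
{
	split_huge_pages(WALK_NONE);
	split_huge_pages(WALK_SHARED);
	return 0;
}

Whether the open-coded branches in the patch or a helper of this shape reads better is a style call for the maintainers; the sketch only makes the control flow easier to follow.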