From patchwork Thu Jun 8 03:24:20 2023
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 13271571
From: Nicholas Piggin
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Nicholas Piggin, linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v3 1/6] KVM: selftests: Move pgd_created check into virt_pgd_alloc
Date: Thu, 8 Jun 2023 13:24:20 +1000
Message-Id: <20230608032425.59796-2-npiggin@gmail.com>
In-Reply-To: <20230608032425.59796-1-npiggin@gmail.com>
References: <20230608032425.59796-1-npiggin@gmail.com>

The virt_arch_pgd_alloc() implementations all do the same test-and-set of pgd_created. Move this into common code.

Signed-off-by: Nicholas Piggin
---
tools/testing/selftests/kvm/include/kvm_util_base.h | 5 +++++ tools/testing/selftests/kvm/lib/aarch64/processor.c | 4 ---- tools/testing/selftests/kvm/lib/riscv/processor.c | 4 ---- tools/testing/selftests/kvm/lib/s390x/processor.c | 4 ---- tools/testing/selftests/kvm/lib/x86_64/processor.c | 7 ++----- 5 files changed, 7 insertions(+), 17 deletions(-) diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h index a089c356f354..d630a0a1877c 100644 --- a/tools/testing/selftests/kvm/include/kvm_util_base.h +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h @@ -822,7 +822,12 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm); static inline void virt_pgd_alloc(struct kvm_vm *vm) { + if (vm->pgd_created) + return; + virt_arch_pgd_alloc(vm); + + vm->pgd_created = true; } /* diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c index 3a0259e25335..3da3ec7c5b23 100644 --- a/tools/testing/selftests/kvm/lib/aarch64/processor.c +++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c @@ -96,13 +96,9 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm) { size_t nr_pages = page_align(vm, ptrs_per_pgd(vm) * 8) / vm->page_size; - if (vm->pgd_created) - return; - vm->pgd = vm_phy_pages_alloc(vm, nr_pages, KVM_GUEST_PAGE_TABLE_MIN_PADDR, vm->memslots[MEM_REGION_PT]); - vm->pgd_created = true; } static void _virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, diff --git a/tools/testing/selftests/kvm/lib/riscv/processor.c b/tools/testing/selftests/kvm/lib/riscv/processor.c index d146ca71e0c0..7695ba2cd369 100644 --- a/tools/testing/selftests/kvm/lib/riscv/processor.c +++ b/tools/testing/selftests/kvm/lib/riscv/processor.c @@ -57,13 +57,9 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm) { size_t nr_pages = page_align(vm, ptrs_per_pte(vm) * 8) / vm->page_size; - if (vm->pgd_created) - return; - vm->pgd = vm_phy_pages_alloc(vm, nr_pages, KVM_GUEST_PAGE_TABLE_MIN_PADDR, vm->memslots[MEM_REGION_PT]); - vm->pgd_created = true; } void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr) diff --git a/tools/testing/selftests/kvm/lib/s390x/processor.c b/tools/testing/selftests/kvm/lib/s390x/processor.c index 15945121daf1..358e03f09c7a 100644 --- a/tools/testing/selftests/kvm/lib/s390x/processor.c +++ b/tools/testing/selftests/kvm/lib/s390x/processor.c @@ -17,16 +17,12 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm) TEST_ASSERT(vm->page_size == 4096, "Unsupported page size: 0x%x", vm->page_size); - if (vm->pgd_created) - return; - paddr = vm_phy_pages_alloc(vm, PAGES_PER_REGION, KVM_GUEST_PAGE_TABLE_MIN_PADDR,
vm->memslots[MEM_REGION_PT]); memset(addr_gpa2hva(vm, paddr), 0xff, PAGES_PER_REGION * vm->page_size); vm->pgd = paddr; - vm->pgd_created = true; } /* diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c index d4a0b504b1e0..d4deb2718e86 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c @@ -127,11 +127,8 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm) TEST_ASSERT(vm->mode == VM_MODE_PXXV48_4K, "Attempt to use " "unknown or unsupported guest mode, mode: 0x%x", vm->mode); - /* If needed, create page map l4 table. */ - if (!vm->pgd_created) { - vm->pgd = vm_alloc_page_table(vm); - vm->pgd_created = true; - } + /* Create page map l4 table. */ + vm->pgd = vm_alloc_page_table(vm); } static void *virt_get_pte(struct kvm_vm *vm, uint64_t *parent_pte,
From patchwork Thu Jun 8 03:24:21 2023
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 13271572
ACHHUZ52jpFSNcSJqFuHZeU6QrpkZcVwzp1YQoGV2ph/MtJ+gQ0dqW58LzXNOK9Zb3AWfX1DlRKnmg== X-Received: by 2002:a05:6808:496:b0:394:45ad:3ea7 with SMTP id z22-20020a056808049600b0039445ad3ea7mr7593080oid.5.1686194681878; Wed, 07 Jun 2023 20:24:41 -0700 (PDT) Received: from wheely.local0.net (58-6-224-112.tpgi.com.au. [58.6.224.112]) by smtp.gmail.com with ESMTPSA id s12-20020a17090a5d0c00b0025930e50e28sm2015629pji.41.2023.06.07.20.24.39 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 07 Jun 2023 20:24:41 -0700 (PDT) From: Nicholas Piggin To: kvm@vger.kernel.org, Paolo Bonzini Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v3 2/6] KVM: selftests: Add aligned guest physical page allocator Date: Thu, 8 Jun 2023 13:24:21 +1000 Message-Id: <20230608032425.59796-3-npiggin@gmail.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230608032425.59796-1-npiggin@gmail.com> References: <20230608032425.59796-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org powerpc will require this to allocate MMU tables in guest memory that are larger than guest base page size. Signed-off-by: Nicholas Piggin --- .../selftests/kvm/include/kvm_util_base.h | 2 + tools/testing/selftests/kvm/lib/kvm_util.c | 44 ++++++++++++------- 2 files changed, 29 insertions(+), 17 deletions(-) diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h index d630a0a1877c..42d03ae08ecb 100644 --- a/tools/testing/selftests/kvm/include/kvm_util_base.h +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h @@ -680,6 +680,8 @@ const char *exit_reason_str(unsigned int exit_reason); vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min, uint32_t memslot); +vm_paddr_t vm_phy_pages_alloc_align(struct kvm_vm *vm, size_t num, size_t align, + vm_paddr_t paddr_min, uint32_t memslot); vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num, vm_paddr_t paddr_min, uint32_t memslot); vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm); diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index 298c4372fb1a..68558d60f949 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -1903,6 +1903,7 @@ const char *exit_reason_str(unsigned int exit_reason) * Input Args: * vm - Virtual Machine * num - number of pages + * align - pages alignment * paddr_min - Physical address minimum * memslot - Memory region to allocate page from * @@ -1916,7 +1917,7 @@ const char *exit_reason_str(unsigned int exit_reason) * and their base address is returned. A TEST_ASSERT failure occurs if * not enough pages are available at or above paddr_min. 
*/ -vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num, +vm_paddr_t vm_phy_pages_alloc_align(struct kvm_vm *vm, size_t num, size_t align, vm_paddr_t paddr_min, uint32_t memslot) { struct userspace_mem_region *region; @@ -1930,24 +1931,27 @@ vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num, paddr_min, vm->page_size); region = memslot2region(vm, memslot); - base = pg = paddr_min >> vm->page_shift; - - do { - for (; pg < base + num; ++pg) { - if (!sparsebit_is_set(region->unused_phy_pages, pg)) { - base = pg = sparsebit_next_set(region->unused_phy_pages, pg); - break; + base = paddr_min >> vm->page_shift; + +again: + base = (base + align - 1) & ~(align - 1); + for (pg = base; pg < base + num; ++pg) { + if (!sparsebit_is_set(region->unused_phy_pages, pg)) { + base = sparsebit_next_set(region->unused_phy_pages, pg); + if (!base) { + fprintf(stderr, "No guest physical pages " + "available, paddr_min: 0x%lx " + "page_size: 0x%x memslot: %u " + "num_pages: %lu align: %lu\n", + paddr_min, vm->page_size, memslot, + num, align); + fputs("---- vm dump ----\n", stderr); + vm_dump(stderr, vm, 2); + TEST_ASSERT(false, "false"); + abort(); } + goto again; } - } while (pg && pg != base + num); - - if (pg == 0) { - fprintf(stderr, "No guest physical page available, " - "paddr_min: 0x%lx page_size: 0x%x memslot: %u\n", - paddr_min, vm->page_size, memslot); - fputs("---- vm dump ----\n", stderr); - vm_dump(stderr, vm, 2); - abort(); } for (pg = base; pg < base + num; ++pg) @@ -1956,6 +1960,12 @@ vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num, return base * vm->page_size; } +vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num, + vm_paddr_t paddr_min, uint32_t memslot) +{ + return vm_phy_pages_alloc_align(vm, num, 1, paddr_min, memslot); +} + vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min, uint32_t memslot) { From patchwork Thu Jun 8 03:24:22 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 13271573 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4CB02C7EE2F for ; Thu, 8 Jun 2023 03:25:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234048AbjFHDZK (ORCPT ); Wed, 7 Jun 2023 23:25:10 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34508 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234049AbjFHDYv (ORCPT ); Wed, 7 Jun 2023 23:24:51 -0400 Received: from mail-pj1-x1036.google.com (mail-pj1-x1036.google.com [IPv6:2607:f8b0:4864:20::1036]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 88A6D2696 for ; Wed, 7 Jun 2023 20:24:48 -0700 (PDT) Received: by mail-pj1-x1036.google.com with SMTP id 98e67ed59e1d1-256e1d87a46so64253a91.0 for ; Wed, 07 Jun 2023 20:24:48 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1686194687; x=1688786687; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=qzrE8+27NaNaaox86Qro4QbdWzCKXZ43f5sQBQmnneg=; b=I/2+bvMIk3RrR7Xv2sUX96hPJB1FX/9VRzOAaR/IiBMKQvSEiagn42nggWv2k6wQum baIqL1TTjlRhseLmXJNDfyWtEy7Vj5zMXkQyy6s4XC5CnxCD/qz/KgYasD8JnHc01Eag NFU316S/tEuSOV3nROLeSE214NnlctOficoEfhpf0S8gxgyhzCeaSW+iukGCOvVlpRSv 
From: Nicholas Piggin
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Nicholas Piggin, linuxppc-dev@lists.ozlabs.org, Michael Ellerman
Subject: [PATCH v3 3/6] KVM: PPC: selftests: add support for powerpc
Date: Thu, 8 Jun 2023 13:24:22 +1000
Message-Id: <20230608032425.59796-4-npiggin@gmail.com>
In-Reply-To: <20230608032425.59796-1-npiggin@gmail.com>
References: <20230608032425.59796-1-npiggin@gmail.com>

Implement KVM selftests support for powerpc (Book3S-64). ucalls are implemented with an unsupported PAPR hcall number, which always causes KVM to exit to userspace. Virtual memory is implemented for the radix MMU, and only a base page size is supported (both 4K and 64K base pages work). Guest interrupts are taken in real mode, so a page must be allocated at gRA 0x0. Interrupt entry is complicated because gVA:gRA is not 1:1 mapped (as it is for the kernel), so the MMU cannot simply be switched on and off.
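As an illustration of the ucall flow described above (not taken from this patch), here is a minimal sketch of a hypothetical sanity test, assuming the hcall helpers (hcall.h) and the generic VM APIs used elsewhere in this series: the guest issues the unimplemented H_UCALL token via sc 1, and host userspace sees a KVM_EXIT_PAPR_HCALL exit carrying that token.

/* Hypothetical sanity check sketch, not part of this patch. */
#include "kvm_util.h"
#include "hcall.h"

static void guest_code(void)
{
	/* H_UCALL is not a real PAPR hcall, so KVM exits to userspace. */
	hcall1(H_UCALL, UCALL_R4_SIMPLE);
}

int main(void)
{
	struct kvm_vcpu *vcpu;
	struct kvm_vm *vm;

	vm = vm_create_with_one_vcpu(&vcpu, guest_code);
	vcpu_run(vcpu);

	/* The unhandled hcall surfaces as a PAPR hcall exit with nr == H_UCALL. */
	TEST_ASSERT(vcpu->run->exit_reason == KVM_EXIT_PAPR_HCALL,
		    "Expected PAPR_HCALL exit, got %s",
		    exit_reason_str(vcpu->run->exit_reason));
	TEST_ASSERT(vcpu->run->papr_hcall.nr == H_UCALL,
		    "Expected H_UCALL, got %lld", vcpu->run->papr_hcall.nr);

	kvm_vm_free(vm);
	return 0;
}

The sanity tests added later in the series follow the same pattern through the handle_ucall()/host_sync() helpers.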
Acked-by: Michael Ellerman (powerpc) Signed-off-by: Nicholas Piggin --- MAINTAINERS | 2 + tools/testing/selftests/kvm/Makefile | 19 + .../selftests/kvm/include/kvm_util_base.h | 22 + .../selftests/kvm/include/powerpc/hcall.h | 19 + .../selftests/kvm/include/powerpc/ppc_asm.h | 32 ++ .../selftests/kvm/include/powerpc/processor.h | 39 ++ tools/testing/selftests/kvm/lib/guest_modes.c | 27 +- tools/testing/selftests/kvm/lib/kvm_util.c | 12 + .../selftests/kvm/lib/powerpc/handlers.S | 93 ++++ .../testing/selftests/kvm/lib/powerpc/hcall.c | 45 ++ .../selftests/kvm/lib/powerpc/processor.c | 439 ++++++++++++++++++ .../testing/selftests/kvm/lib/powerpc/ucall.c | 30 ++ tools/testing/selftests/kvm/powerpc/helpers.h | 46 ++ 13 files changed, 821 insertions(+), 4 deletions(-) create mode 100644 tools/testing/selftests/kvm/include/powerpc/hcall.h create mode 100644 tools/testing/selftests/kvm/include/powerpc/ppc_asm.h create mode 100644 tools/testing/selftests/kvm/include/powerpc/processor.h create mode 100644 tools/testing/selftests/kvm/lib/powerpc/handlers.S create mode 100644 tools/testing/selftests/kvm/lib/powerpc/hcall.c create mode 100644 tools/testing/selftests/kvm/lib/powerpc/processor.c create mode 100644 tools/testing/selftests/kvm/lib/powerpc/ucall.c create mode 100644 tools/testing/selftests/kvm/powerpc/helpers.h diff --git a/MAINTAINERS b/MAINTAINERS index 44417acd2936..39afb356369e 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -11391,6 +11391,8 @@ F: arch/powerpc/include/asm/kvm* F: arch/powerpc/include/uapi/asm/kvm* F: arch/powerpc/kernel/kvm* F: arch/powerpc/kvm/ +F: tools/testing/selftests/kvm/*/powerpc/ +F: tools/testing/selftests/kvm/powerpc/ KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv) M: Anup Patel diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index 4761b768b773..53cd3ce63dec 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -55,6 +55,11 @@ LIBKVM_s390x += lib/s390x/ucall.c LIBKVM_riscv += lib/riscv/processor.c LIBKVM_riscv += lib/riscv/ucall.c +LIBKVM_powerpc += lib/powerpc/handlers.S +LIBKVM_powerpc += lib/powerpc/processor.c +LIBKVM_powerpc += lib/powerpc/ucall.c +LIBKVM_powerpc += lib/powerpc/hcall.c + # Non-compiled test targets TEST_PROGS_x86_64 += x86_64/nx_huge_pages_test.sh @@ -179,6 +184,20 @@ TEST_GEN_PROGS_riscv += kvm_page_table_test TEST_GEN_PROGS_riscv += set_memory_region_test TEST_GEN_PROGS_riscv += kvm_binary_stats_test +TEST_GEN_PROGS_powerpc += access_tracking_perf_test +TEST_GEN_PROGS_powerpc += demand_paging_test +TEST_GEN_PROGS_powerpc += dirty_log_test +TEST_GEN_PROGS_powerpc += dirty_log_perf_test +TEST_GEN_PROGS_powerpc += hardware_disable_test +TEST_GEN_PROGS_powerpc += kvm_create_max_vcpus +TEST_GEN_PROGS_powerpc += kvm_page_table_test +TEST_GEN_PROGS_powerpc += max_guest_memory_test +TEST_GEN_PROGS_powerpc += memslot_modification_stress_test +TEST_GEN_PROGS_powerpc += memslot_perf_test +TEST_GEN_PROGS_powerpc += rseq_test +TEST_GEN_PROGS_powerpc += set_memory_region_test +TEST_GEN_PROGS_powerpc += kvm_binary_stats_test + TEST_PROGS += $(TEST_PROGS_$(ARCH_DIR)) TEST_GEN_PROGS += $(TEST_GEN_PROGS_$(ARCH_DIR)) TEST_GEN_PROGS_EXTENDED += $(TEST_GEN_PROGS_EXTENDED_$(ARCH_DIR)) diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h index 42d03ae08ecb..17b80709b894 100644 --- a/tools/testing/selftests/kvm/include/kvm_util_base.h +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h @@ -105,6 +105,7 @@ 
struct kvm_vm { bool pgd_created; vm_paddr_t ucall_mmio_addr; vm_paddr_t pgd; + vm_paddr_t prtb; // powerpc process table vm_vaddr_t gdt; vm_vaddr_t tss; vm_vaddr_t idt; @@ -160,6 +161,8 @@ enum vm_guest_mode { VM_MODE_PXXV48_4K, /* For 48bits VA but ANY bits PA */ VM_MODE_P47V64_4K, VM_MODE_P44V64_4K, + VM_MODE_P52V52_4K, + VM_MODE_P52V52_64K, VM_MODE_P36V48_4K, VM_MODE_P36V48_16K, VM_MODE_P36V48_64K, @@ -197,6 +200,25 @@ extern enum vm_guest_mode vm_mode_default; #define MIN_PAGE_SHIFT 12U #define ptes_per_page(page_size) ((page_size) / 8) +#elif defined(__powerpc64__) + +extern enum vm_guest_mode vm_mode_default; + +#define VM_MODE_DEFAULT vm_mode_default + +/* + * XXX: This is a hack to allocate more memory for page tables because we + * don't pack "fragments" well with 64K page sizes. Should rework generic + * code to allow more flexible page table memory estimation (and fix our + * page table allocation). + */ +#define MIN_PAGE_SHIFT 12U +#define ptes_per_page(page_size) ((page_size) / 8) + +#else + +#error "KVM selftests not implemented for architecture" + #endif #define MIN_PAGE_SIZE (1U << MIN_PAGE_SHIFT) diff --git a/tools/testing/selftests/kvm/include/powerpc/hcall.h b/tools/testing/selftests/kvm/include/powerpc/hcall.h new file mode 100644 index 000000000000..ba119f5a3fef --- /dev/null +++ b/tools/testing/selftests/kvm/include/powerpc/hcall.h @@ -0,0 +1,19 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * powerpc hcall defines + */ +#ifndef SELFTEST_KVM_HCALL_H +#define SELFTEST_KVM_HCALL_H + +#include + +/* Ucalls use unimplemented PAPR hcall 0 which exits KVM */ +#define H_UCALL 0 +#define UCALL_R4_UCALL 0x5715 // regular ucall, r5 contains ucall pointer +#define UCALL_R4_SIMPLE 0x0000 // simple exit usable by asm with no ucall data + +int64_t hcall0(uint64_t token); +int64_t hcall1(uint64_t token, uint64_t arg1); +int64_t hcall2(uint64_t token, uint64_t arg1, uint64_t arg2); + +#endif diff --git a/tools/testing/selftests/kvm/include/powerpc/ppc_asm.h b/tools/testing/selftests/kvm/include/powerpc/ppc_asm.h new file mode 100644 index 000000000000..b9df64659792 --- /dev/null +++ b/tools/testing/selftests/kvm/include/powerpc/ppc_asm.h @@ -0,0 +1,32 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * powerpc asm specific defines + */ +#ifndef SELFTEST_KVM_PPC_ASM_H +#define SELFTEST_KVM_PPC_ASM_H + +#define STACK_FRAME_MIN_SIZE 112 /* Could be 32 on ELFv2 */ +#define STACK_REDZONE_SIZE 512 + +#define INT_FRAME_SIZE (STACK_FRAME_MIN_SIZE + STACK_REDZONE_SIZE) + +#define SPR_SRR0 0x01a +#define SPR_SRR1 0x01b +#define SPR_CFAR 0x01c + +#define MSR_SF 0x8000000000000000ULL +#define MSR_HV 0x1000000000000000ULL +#define MSR_VEC 0x0000000002000000ULL +#define MSR_VSX 0x0000000000800000ULL +#define MSR_EE 0x0000000000008000ULL +#define MSR_PR 0x0000000000004000ULL +#define MSR_FP 0x0000000000002000ULL +#define MSR_ME 0x0000000000001000ULL +#define MSR_IR 0x0000000000000020ULL +#define MSR_DR 0x0000000000000010ULL +#define MSR_RI 0x0000000000000002ULL +#define MSR_LE 0x0000000000000001ULL + +#define LPCR_ILE 0x0000000002000000ULL + +#endif diff --git a/tools/testing/selftests/kvm/include/powerpc/processor.h b/tools/testing/selftests/kvm/include/powerpc/processor.h new file mode 100644 index 000000000000..ce5a23525dbd --- /dev/null +++ b/tools/testing/selftests/kvm/include/powerpc/processor.h @@ -0,0 +1,39 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * powerpc processor specific defines + */ +#ifndef SELFTEST_KVM_PROCESSOR_H +#define SELFTEST_KVM_PROCESSOR_H + 
+#include +#include "ppc_asm.h" + +extern unsigned char __interrupts_start[]; +extern unsigned char __interrupts_end[]; + +struct kvm_vm; +struct kvm_vcpu; +extern bool (*interrupt_handler)(struct kvm_vcpu *vcpu, unsigned trap); + +struct ex_regs { + uint64_t gprs[32]; + uint64_t nia; + uint64_t msr; + uint64_t cfar; + uint64_t lr; + uint64_t ctr; + uint64_t xer; + uint32_t cr; + uint32_t trap; + uint64_t vaddr; /* vaddr of this struct */ +}; + +void vm_install_exception_handler(struct kvm_vm *vm, int vector, + void (*handler)(struct ex_regs *)); + +static inline void cpu_relax(void) +{ + asm volatile("" ::: "memory"); +} + +#endif diff --git a/tools/testing/selftests/kvm/lib/guest_modes.c b/tools/testing/selftests/kvm/lib/guest_modes.c index 1df3ce4b16fd..4dfaed1706d9 100644 --- a/tools/testing/selftests/kvm/lib/guest_modes.c +++ b/tools/testing/selftests/kvm/lib/guest_modes.c @@ -4,7 +4,11 @@ */ #include "guest_modes.h" -#ifdef __aarch64__ +#if defined(__powerpc__) +#include +#endif + +#if defined(__aarch64__) || defined(__powerpc__) #include "processor.h" enum vm_guest_mode vm_mode_default; #endif @@ -13,9 +17,7 @@ struct guest_mode guest_modes[NUM_VM_MODES]; void guest_modes_append_default(void) { -#ifndef __aarch64__ - guest_mode_append(VM_MODE_DEFAULT, true, true); -#else +#ifdef __aarch64__ { unsigned int limit = kvm_check_cap(KVM_CAP_ARM_VM_IPA_SIZE); bool ps4k, ps16k, ps64k; @@ -70,6 +72,8 @@ void guest_modes_append_default(void) KVM_S390_VM_CPU_PROCESSOR, &info); close(vm_fd); close(kvm_fd); + + guest_mode_append(VM_MODE_DEFAULT, true, true); /* Starting with z13 we have 47bits of physical address */ if (info.ibc >= 0x30) guest_mode_append(VM_MODE_P47V64_4K, true, true); @@ -79,12 +83,27 @@ void guest_modes_append_default(void) { unsigned int sz = kvm_check_cap(KVM_CAP_VM_GPA_BITS); + guest_mode_append(VM_MODE_DEFAULT, true, true); if (sz >= 52) guest_mode_append(VM_MODE_P52V48_4K, true, true); if (sz >= 48) guest_mode_append(VM_MODE_P48V48_4K, true, true); } #endif +#ifdef __powerpc__ + { + TEST_ASSERT(kvm_check_cap(KVM_CAP_PPC_MMU_RADIX), + "Radix MMU not available, KVM selftests " + "does not support Hash MMU!"); + /* Radix guest EA and RA are 52-bit on POWER9 and POWER10 */ + if (sysconf(_SC_PAGESIZE) == 4096) + vm_mode_default = VM_MODE_P52V52_4K; + else + vm_mode_default = VM_MODE_P52V52_64K; + guest_mode_append(VM_MODE_P52V52_4K, true, true); + guest_mode_append(VM_MODE_P52V52_64K, true, true); + } +#endif } void for_each_guest_mode(void (*func)(enum vm_guest_mode, void *), void *arg) diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index 68558d60f949..696989a22c5a 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -158,6 +158,8 @@ const char *vm_guest_mode_string(uint32_t i) [VM_MODE_PXXV48_4K] = "PA-bits:ANY, VA-bits:48, 4K pages", [VM_MODE_P47V64_4K] = "PA-bits:47, VA-bits:64, 4K pages", [VM_MODE_P44V64_4K] = "PA-bits:44, VA-bits:64, 4K pages", + [VM_MODE_P52V52_4K] = "PA-bits:52, VA-bits:52, 4K pages", + [VM_MODE_P52V52_64K] = "PA-bits:52, VA-bits:52, 64K pages", [VM_MODE_P36V48_4K] = "PA-bits:36, VA-bits:48, 4K pages", [VM_MODE_P36V48_16K] = "PA-bits:36, VA-bits:48, 16K pages", [VM_MODE_P36V48_64K] = "PA-bits:36, VA-bits:48, 64K pages", @@ -183,6 +185,8 @@ const struct vm_guest_mode_params vm_guest_mode_params[] = { [VM_MODE_PXXV48_4K] = { 0, 0, 0x1000, 12 }, [VM_MODE_P47V64_4K] = { 47, 64, 0x1000, 12 }, [VM_MODE_P44V64_4K] = { 44, 64, 0x1000, 12 }, + 
[VM_MODE_P52V52_4K] = { 52, 52, 0x1000, 12 }, + [VM_MODE_P52V52_64K] = { 52, 52, 0x10000, 16 }, [VM_MODE_P36V48_4K] = { 36, 48, 0x1000, 12 }, [VM_MODE_P36V48_16K] = { 36, 48, 0x4000, 14 }, [VM_MODE_P36V48_64K] = { 36, 48, 0x10000, 16 }, @@ -284,6 +288,14 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode) case VM_MODE_P44V64_4K: vm->pgtable_levels = 5; break; +#ifdef __powerpc__ + case VM_MODE_P52V52_64K: + vm->pgtable_levels = 4; + break; + case VM_MODE_P52V52_4K: + vm->pgtable_levels = 4; + break; +#endif default: TEST_FAIL("Unknown guest mode, mode: 0x%x", mode); } diff --git a/tools/testing/selftests/kvm/lib/powerpc/handlers.S b/tools/testing/selftests/kvm/lib/powerpc/handlers.S new file mode 100644 index 000000000000..a68c187b835f --- /dev/null +++ b/tools/testing/selftests/kvm/lib/powerpc/handlers.S @@ -0,0 +1,93 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#include + +.macro INTERRUPT vec +. = __interrupts_start + \vec + std %r0,(0*8)(%r13) + std %r3,(3*8)(%r13) + mfspr %r0,SPR_CFAR + li %r3,\vec + b handle_interrupt +.endm + +.balign 0x1000 +.global __interrupts_start +__interrupts_start: +INTERRUPT 0x100 +INTERRUPT 0x200 +INTERRUPT 0x300 +INTERRUPT 0x380 +INTERRUPT 0x400 +INTERRUPT 0x480 +INTERRUPT 0x500 +INTERRUPT 0x600 +INTERRUPT 0x700 +INTERRUPT 0x800 +INTERRUPT 0x900 +INTERRUPT 0xa00 +INTERRUPT 0xc00 +INTERRUPT 0xd00 +INTERRUPT 0xf00 +INTERRUPT 0xf20 +INTERRUPT 0xf40 +INTERRUPT 0xf60 + +virt_handle_interrupt: + stdu %r1,-INT_FRAME_SIZE(%r1) + mr %r3,%r31 + bl route_interrupt + ld %r4,(32*8)(%r31) /* NIA */ + ld %r5,(33*8)(%r31) /* MSR */ + ld %r6,(35*8)(%r31) /* LR */ + ld %r7,(36*8)(%r31) /* CTR */ + ld %r8,(37*8)(%r31) /* XER */ + lwz %r9,(38*8)(%r31) /* CR */ + mtspr SPR_SRR0,%r4 + mtspr SPR_SRR1,%r5 + mtlr %r6 + mtctr %r7 + mtxer %r8 + mtcr %r9 +reg=4 + ld %r0,(0*8)(%r31) + ld %r3,(3*8)(%r31) +.rept 28 + ld reg,(reg*8)(%r31) + reg=reg+1 +.endr + addi %r1,%r1,INT_FRAME_SIZE + rfid + +virt_handle_interrupt_p: + .llong virt_handle_interrupt + +handle_interrupt: +reg=4 +.rept 28 + std reg,(reg*8)(%r13) + reg=reg+1 +.endr + mfspr %r4,SPR_SRR0 + mfspr %r5,SPR_SRR1 + mflr %r6 + mfctr %r7 + mfxer %r8 + mfcr %r9 + std %r4,(32*8)(%r13) /* NIA */ + std %r5,(33*8)(%r13) /* MSR */ + std %r0,(34*8)(%r13) /* CFAR */ + std %r6,(35*8)(%r13) /* LR */ + std %r7,(36*8)(%r13) /* CTR */ + std %r8,(37*8)(%r13) /* XER */ + stw %r9,(38*8)(%r13) /* CR */ + stw %r3,(38*8 + 4)(%r13) /* TRAP */ + + ld %r31,(39*8)(%r13) /* vaddr */ + ld %r4,virt_handle_interrupt_p - __interrupts_start(0) + mtspr SPR_SRR0,%r4 + /* Reuse SRR1 */ + + rfid +.global __interrupts_end +__interrupts_end: diff --git a/tools/testing/selftests/kvm/lib/powerpc/hcall.c b/tools/testing/selftests/kvm/lib/powerpc/hcall.c new file mode 100644 index 000000000000..23a56aabad42 --- /dev/null +++ b/tools/testing/selftests/kvm/lib/powerpc/hcall.c @@ -0,0 +1,45 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * PAPR (pseries) hcall support. 
+ */ +#include "kvm_util.h" +#include "hcall.h" + +int64_t hcall0(uint64_t token) +{ + register uintptr_t r3 asm ("r3") = token; + + asm volatile("sc 1" : "+r"(r3) : + : "r0", "r4", "r5", "r6", "r7", "r8", "r9", + "r10","r11", "r12", "ctr", "xer", + "memory"); + + return r3; +} + +int64_t hcall1(uint64_t token, uint64_t arg1) +{ + register uintptr_t r3 asm ("r3") = token; + register uintptr_t r4 asm ("r4") = arg1; + + asm volatile("sc 1" : "+r"(r3), "+r"(r4) : + : "r0", "r5", "r6", "r7", "r8", "r9", + "r10","r11", "r12", "ctr", "xer", + "memory"); + + return r3; +} + +int64_t hcall2(uint64_t token, uint64_t arg1, uint64_t arg2) +{ + register uintptr_t r3 asm ("r3") = token; + register uintptr_t r4 asm ("r4") = arg1; + register uintptr_t r5 asm ("r5") = arg2; + + asm volatile("sc 1" : "+r"(r3), "+r"(r4), "+r"(r5) : + : "r0", "r6", "r7", "r8", "r9", + "r10","r11", "r12", "ctr", "xer", + "memory"); + + return r3; +} diff --git a/tools/testing/selftests/kvm/lib/powerpc/processor.c b/tools/testing/selftests/kvm/lib/powerpc/processor.c new file mode 100644 index 000000000000..02db2ff86da8 --- /dev/null +++ b/tools/testing/selftests/kvm/lib/powerpc/processor.c @@ -0,0 +1,439 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * KVM selftest powerpc library code - CPU-related functions (page tables...) + */ + +#include + +#include "processor.h" +#include "kvm_util.h" +#include "kvm_util_base.h" +#include "guest_modes.h" +#include "hcall.h" + +#define RADIX_TREE_SIZE ((0x2UL << 61) | (0x5UL << 5)) // 52-bits +#define RADIX_PGD_INDEX_SIZE 13 + +static void set_proc_table(struct kvm_vm *vm, int pid, uint64_t dw0, uint64_t dw1) +{ + uint64_t *proc_table; + + proc_table = addr_gpa2hva(vm, vm->prtb); + proc_table[pid * 2 + 0] = cpu_to_be64(dw0); + proc_table[pid * 2 + 1] = cpu_to_be64(dw1); +} + +static void set_radix_proc_table(struct kvm_vm *vm, int pid, vm_paddr_t pgd) +{ + set_proc_table(vm, pid, pgd | RADIX_TREE_SIZE | RADIX_PGD_INDEX_SIZE, 0); +} + +void virt_arch_pgd_alloc(struct kvm_vm *vm) +{ + struct kvm_ppc_mmuv3_cfg mmu_cfg; + vm_paddr_t prtb, pgtb; + size_t pgd_pages; + + TEST_ASSERT((vm->mode == VM_MODE_P52V52_4K) || + (vm->mode == VM_MODE_P52V52_64K), + "Unsupported guest mode, mode: 0x%x", vm->mode); + + prtb = vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR, + vm->memslots[MEM_REGION_PT]); + vm->prtb = prtb; + + pgd_pages = (1UL << (RADIX_PGD_INDEX_SIZE + 3)) >> vm->page_shift; + if (!pgd_pages) + pgd_pages = 1; + pgtb = vm_phy_pages_alloc_align(vm, pgd_pages, pgd_pages, + KVM_GUEST_PAGE_TABLE_MIN_PADDR, + vm->memslots[MEM_REGION_PT]); + vm->pgd = pgtb; + + /* Set the base page directory in the proc table */ + set_radix_proc_table(vm, 0, pgtb); + + if (vm->mode == VM_MODE_P52V52_4K) + mmu_cfg.process_table = prtb | 0x8000000000000000UL | 0x0; // 4K size + else /* vm->mode == VM_MODE_P52V52_64K */ + mmu_cfg.process_table = prtb | 0x8000000000000000UL | 0x4; // 64K size + mmu_cfg.flags = KVM_PPC_MMUV3_RADIX | KVM_PPC_MMUV3_GTSE; + + vm_ioctl(vm, KVM_PPC_CONFIGURE_V3_MMU, &mmu_cfg); +} + +static int pt_shift(struct kvm_vm *vm, int level) +{ + switch (level) { + case 1: + return 13; + case 2: + case 3: + return 9; + case 4: + if (vm->mode == VM_MODE_P52V52_4K) + return 9; + else /* vm->mode == VM_MODE_P52V52_64K */ + return 5; + default: + TEST_ASSERT(false, "Invalid page table level %d\n", level); + return 0; + } +} + +static uint64_t pt_entry_coverage(struct kvm_vm *vm, int level) +{ + uint64_t size = vm->page_size; + + if (level == 4) + return size; + size <<= pt_shift(vm, 4); + if 
(level == 3) + return size; + size <<= pt_shift(vm, 3); + if (level == 2) + return size; + size <<= pt_shift(vm, 2); + return size; +} + +static int pt_idx(struct kvm_vm *vm, uint64_t vaddr, int level, uint64_t *nls) +{ + switch (level) { + case 1: + *nls = 0x9; + return (vaddr >> 39) & 0x1fff; + case 2: + *nls = 0x9; + return (vaddr >> 30) & 0x1ff; + case 3: + if (vm->mode == VM_MODE_P52V52_4K) + *nls = 0x9; + else /* vm->mode == VM_MODE_P52V52_64K */ + *nls = 0x5; + return (vaddr >> 21) & 0x1ff; + case 4: + if (vm->mode == VM_MODE_P52V52_4K) + return (vaddr >> 12) & 0x1ff; + else /* vm->mode == VM_MODE_P52V52_64K */ + return (vaddr >> 16) & 0x1f; + default: + TEST_ASSERT(false, "Invalid page table level %d\n", level); + return 0; + } +} + +static uint64_t *virt_get_pte(struct kvm_vm *vm, vm_paddr_t pt, + uint64_t vaddr, int level, uint64_t *nls) +{ + int idx = pt_idx(vm, vaddr, level, nls); + uint64_t *ptep = addr_gpa2hva(vm, pt + idx*8); + + return ptep; +} + +#define PTE_VALID 0x8000000000000000ull +#define PTE_LEAF 0x4000000000000000ull +#define PTE_REFERENCED 0x0000000000000100ull +#define PTE_CHANGED 0x0000000000000080ull +#define PTE_PRIV 0x0000000000000008ull +#define PTE_READ 0x0000000000000004ull +#define PTE_RW 0x0000000000000002ull +#define PTE_EXEC 0x0000000000000001ull +#define PTE_PAGE_MASK 0x01fffffffffff000ull + +#define PDE_VALID PTE_VALID +#define PDE_NLS 0x0000000000000011ull +#define PDE_PT_MASK 0x0fffffffffffff00ull + +void virt_arch_pg_map(struct kvm_vm *vm, uint64_t gva, uint64_t gpa) +{ + vm_paddr_t pt = vm->pgd; + uint64_t *ptep, pte; + int level; + + for (level = 1; level <= 3; level++) { + uint64_t nls; + uint64_t *pdep = virt_get_pte(vm, pt, gva, level, &nls); + uint64_t pde = be64_to_cpu(*pdep); + size_t pt_pages; + + if (pde) { + TEST_ASSERT((pde & PDE_VALID) && !(pde & PTE_LEAF), + "Invalid PDE at level: %u gva: 0x%lx pde:0x%lx\n", + level, gva, pde); + pt = pde & PDE_PT_MASK; + continue; + } + + pt_pages = (1ULL << (nls + 3)) >> vm->page_shift; + if (!pt_pages) + pt_pages = 1; + pt = vm_phy_pages_alloc_align(vm, pt_pages, pt_pages, + KVM_GUEST_PAGE_TABLE_MIN_PADDR, + vm->memslots[MEM_REGION_PT]); + pde = PDE_VALID | nls | pt; + *pdep = cpu_to_be64(pde); + } + + ptep = virt_get_pte(vm, pt, gva, level, NULL); + pte = be64_to_cpu(*ptep); + + TEST_ASSERT(!pte, "PTE already present at level: %u gva: 0x%lx pte:0x%lx\n", + level, gva, pte); + + pte = PTE_VALID | PTE_LEAF | PTE_REFERENCED | PTE_CHANGED |PTE_PRIV | + PTE_READ | PTE_RW | PTE_EXEC | (gpa & PTE_PAGE_MASK); + *ptep = cpu_to_be64(pte); +} + +vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva) +{ + vm_paddr_t pt = vm->pgd; + uint64_t *ptep, pte; + int level; + + for (level = 1; level <= 3; level++) { + uint64_t nls; + uint64_t *pdep = virt_get_pte(vm, pt, gva, level, &nls); + uint64_t pde = be64_to_cpu(*pdep); + + TEST_ASSERT((pde & PDE_VALID) && !(pde & PTE_LEAF), + "PDE not present at level: %u gva: 0x%lx pde:0x%lx\n", + level, gva, pde); + pt = pde & PDE_PT_MASK; + } + + ptep = virt_get_pte(vm, pt, gva, level, NULL); + pte = be64_to_cpu(*ptep); + + TEST_ASSERT(pte, + "PTE not present at level: %u gva: 0x%lx pte:0x%lx\n", + level, gva, pte); + + TEST_ASSERT((pte & PTE_VALID) && (pte & PTE_LEAF) && + (pte & PTE_READ) && (pte & PTE_RW) && (pte & PTE_EXEC), + "PTE not valid at level: %u gva: 0x%lx pte:0x%lx\n", + level, gva, pte); + + return (pte & PTE_PAGE_MASK) + (gva & (vm->page_size - 1)); +} + +static void virt_dump_pt(FILE *stream, struct kvm_vm *vm, vm_paddr_t pt, + vm_vaddr_t va, int 
level, uint8_t indent) +{ + int size, idx; + + size = 1U << (pt_shift(vm, level) + 3); + + for (idx = 0; idx < size; idx += 8, va += pt_entry_coverage(vm, level)) { + uint64_t *page_table = addr_gpa2hva(vm, pt + idx); + uint64_t pte = be64_to_cpu(*page_table); + + if (!(pte & PTE_VALID)) + continue; + + if (pte & PTE_LEAF) { + fprintf(stream, + "%*s PTE[%d] gVA:0x%016lx -> gRA:0x%016llx\n", + indent, "", idx/8, va, pte & PTE_PAGE_MASK); + } else { + fprintf(stream, "%*sPDE%d[%d] gVA:0x%016lx\n", + indent, "", level, idx/8, va); + virt_dump_pt(stream, vm, pte & PDE_PT_MASK, va, + level + 1, indent + 2); + } + } + +} + +void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent) +{ + vm_paddr_t pt = vm->pgd; + + if (!vm->pgd_created) + return; + + virt_dump_pt(stream, vm, pt, 0, 1, indent); +} + +static unsigned long get_r2(void) +{ + unsigned long r2; + + asm("mr %0,%%r2" : "=r"(r2)); + + return r2; +} + +struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id, + void *guest_code) +{ + const size_t stack_size = SZ_64K; + vm_vaddr_t stack_vaddr, ex_regs_vaddr; + vm_paddr_t ex_regs_paddr; + struct ex_regs *ex_regs; + struct kvm_regs regs; + struct kvm_vcpu *vcpu; + uint64_t lpcr; + + stack_vaddr = __vm_vaddr_alloc(vm, stack_size, + DEFAULT_GUEST_STACK_VADDR_MIN, + MEM_REGION_DATA); + + ex_regs_vaddr = __vm_vaddr_alloc(vm, stack_size, + DEFAULT_GUEST_STACK_VADDR_MIN, + MEM_REGION_DATA); + ex_regs_paddr = addr_gva2gpa(vm, ex_regs_vaddr); + ex_regs = addr_gpa2hva(vm, ex_regs_paddr); + ex_regs->vaddr = ex_regs_vaddr; + + vcpu = __vm_vcpu_add(vm, vcpu_id); + + vcpu_enable_cap(vcpu, KVM_CAP_PPC_PAPR, 1); + + /* Setup guest registers */ + vcpu_regs_get(vcpu, ®s); + vcpu_get_reg(vcpu, KVM_REG_PPC_LPCR_64, &lpcr); + + regs.pc = (uintptr_t)guest_code; + regs.gpr[1] = stack_vaddr + stack_size - 256; + regs.gpr[2] = (uintptr_t)get_r2(); + regs.gpr[12] = (uintptr_t)guest_code; + regs.gpr[13] = (uintptr_t)ex_regs_paddr; + + regs.msr = MSR_SF | MSR_VEC | MSR_VSX | MSR_FP | + MSR_ME | MSR_IR | MSR_DR | MSR_RI; + + if (BYTE_ORDER == LITTLE_ENDIAN) { + regs.msr |= MSR_LE; + lpcr |= LPCR_ILE; + } else { + lpcr &= ~LPCR_ILE; + } + + vcpu_regs_set(vcpu, ®s); + vcpu_set_reg(vcpu, KVM_REG_PPC_LPCR_64, lpcr); + + return vcpu; +} + +void vcpu_args_set(struct kvm_vcpu *vcpu, unsigned int num, ...) 
+{ + va_list ap; + struct kvm_regs regs; + int i; + + TEST_ASSERT(num >= 1 && num <= 5, "Unsupported number of args: %u\n", + num); + + va_start(ap, num); + vcpu_regs_get(vcpu, ®s); + + for (i = 0; i < num; i++) + regs.gpr[i + 3] = va_arg(ap, uint64_t); + + vcpu_regs_set(vcpu, ®s); + va_end(ap); +} + +void vcpu_arch_dump(FILE *stream, struct kvm_vcpu *vcpu, uint8_t indent) +{ + struct kvm_regs regs; + + vcpu_regs_get(vcpu, ®s); + + fprintf(stream, "%*sNIA: 0x%016llx MSR: 0x%016llx\n", + indent, "", regs.pc, regs.msr); + fprintf(stream, "%*sLR: 0x%016llx CTR :0x%016llx\n", + indent, "", regs.lr, regs.ctr); + fprintf(stream, "%*sCR: 0x%08llx XER :0x%016llx\n", + indent, "", regs.cr, regs.xer); +} + +void vm_init_descriptor_tables(struct kvm_vm *vm) +{ +} + +void kvm_arch_vm_post_create(struct kvm_vm *vm) +{ + vm_paddr_t excp_paddr; + void *mem; + + excp_paddr = vm_phy_page_alloc(vm, 0, vm->memslots[MEM_REGION_DATA]); + + TEST_ASSERT(excp_paddr == 0, + "Interrupt vectors not allocated at gPA address 0: (0x%lx)", + excp_paddr); + + mem = addr_gpa2hva(vm, excp_paddr); + memcpy(mem, __interrupts_start, __interrupts_end - __interrupts_start); +} + +void assert_on_unhandled_exception(struct kvm_vcpu *vcpu) +{ + struct ucall uc; + + if (get_ucall(vcpu, &uc) == UCALL_UNHANDLED) { + vm_paddr_t ex_regs_paddr; + struct ex_regs *ex_regs; + struct kvm_regs regs; + + vcpu_regs_get(vcpu, ®s); + ex_regs_paddr = (vm_paddr_t)regs.gpr[13]; + ex_regs = addr_gpa2hva(vcpu->vm, ex_regs_paddr); + + TEST_FAIL("Unexpected interrupt in guest NIA:0x%016lx MSR:0x%016lx TRAP:0x%04x", + ex_regs->nia, ex_regs->msr, ex_regs->trap); + } +} + +struct handler { + void (*fn)(struct ex_regs *regs); + int trap; +}; + +#define NR_HANDLERS 10 +static struct handler handlers[NR_HANDLERS]; + +void route_interrupt(struct ex_regs *regs) +{ + int i; + + for (i = 0; i < NR_HANDLERS; i++) { + if (handlers[i].trap == regs->trap) { + handlers[i].fn(regs); + return; + } + } + + ucall(UCALL_UNHANDLED, 0); +} + +void vm_install_exception_handler(struct kvm_vm *vm, int trap, + void (*fn)(struct ex_regs *)) +{ + int i; + + for (i = 0; i < NR_HANDLERS; i++) { + if (!handlers[i].trap || handlers[i].trap == trap) { + if (fn == NULL) + trap = 0; /* Clear handler */ + handlers[i].trap = trap; + handlers[i].fn = fn; + sync_global_to_guest(vm, handlers[i]); + return; + } + } + + TEST_FAIL("Out of exception handlers"); +} + +void kvm_selftest_arch_init(void) +{ + /* + * powerpc default mode is set by host page size and not static, + * so start by computing that early. + */ + guest_modes_append_default(); +} diff --git a/tools/testing/selftests/kvm/lib/powerpc/ucall.c b/tools/testing/selftests/kvm/lib/powerpc/ucall.c new file mode 100644 index 000000000000..ce0ddde45fef --- /dev/null +++ b/tools/testing/selftests/kvm/lib/powerpc/ucall.c @@ -0,0 +1,30 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * ucall support. A ucall is a "hypercall to host userspace". 
+ */ +#include "kvm_util.h" +#include "hcall.h" + +void ucall_arch_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa) +{ +} + +void ucall_arch_do_ucall(vm_vaddr_t uc) +{ + hcall2(H_UCALL, UCALL_R4_UCALL, (uintptr_t)(uc)); +} + +void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu) +{ + struct kvm_run *run = vcpu->run; + + if (run->exit_reason == KVM_EXIT_PAPR_HCALL && + run->papr_hcall.nr == H_UCALL) { + struct kvm_regs regs; + + vcpu_regs_get(vcpu, ®s); + if (regs.gpr[4] == UCALL_R4_UCALL) + return (void *)regs.gpr[5]; + } + return NULL; +} diff --git a/tools/testing/selftests/kvm/powerpc/helpers.h b/tools/testing/selftests/kvm/powerpc/helpers.h new file mode 100644 index 000000000000..8f60bb826830 --- /dev/null +++ b/tools/testing/selftests/kvm/powerpc/helpers.h @@ -0,0 +1,46 @@ +// SPDX-License-Identifier: GPL-2.0-only + +#ifndef SELFTEST_KVM_HELPERS_H +#define SELFTEST_KVM_HELPERS_H + +#include "kvm_util.h" +#include "processor.h" + +static inline void __handle_ucall(struct kvm_vcpu *vcpu, uint64_t expect, struct ucall *uc) +{ + uint64_t ret; + struct kvm_regs regs; + + ret = get_ucall(vcpu, uc); + if (ret == expect) + return; + + vcpu_regs_get(vcpu, ®s); + fprintf(stderr, "Guest failure at NIA:0x%016llx MSR:0x%016llx\n", regs.pc, regs.msr); + fprintf(stderr, "Expected ucall: %lu\n", expect); + + if (ret == UCALL_ABORT) + REPORT_GUEST_ASSERT(*uc); + else + TEST_FAIL("Unexpected ucall: %lu exit_reason=%s", + ret, exit_reason_str(vcpu->run->exit_reason)); +} + +static inline void handle_ucall(struct kvm_vcpu *vcpu, uint64_t expect) +{ + struct ucall uc; + + __handle_ucall(vcpu, expect, &uc); +} + +static inline void host_sync(struct kvm_vcpu *vcpu, uint64_t sync) +{ + struct ucall uc; + + __handle_ucall(vcpu, UCALL_SYNC, &uc); + + TEST_ASSERT(uc.args[1] == (sync), "Sync failed host:%ld guest:%ld", + (long)sync, (long)uc.args[1]); +} + +#endif From patchwork Thu Jun 8 03:24:23 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 13271574 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1424EC7EE25 for ; Thu, 8 Jun 2023 03:25:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234055AbjFHDZM (ORCPT ); Wed, 7 Jun 2023 23:25:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34572 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234074AbjFHDYx (ORCPT ); Wed, 7 Jun 2023 23:24:53 -0400 Received: from mail-oo1-xc29.google.com (mail-oo1-xc29.google.com [IPv6:2607:f8b0:4864:20::c29]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 590032696 for ; Wed, 7 Jun 2023 20:24:52 -0700 (PDT) Received: by mail-oo1-xc29.google.com with SMTP id 006d021491bc7-559b0ddcd4aso105351eaf.0 for ; Wed, 07 Jun 2023 20:24:52 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1686194691; x=1688786691; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=3v5N2Gk/tK5Dqm+6ABW1i0aRZErobVuh2y73QQLA8DA=; b=IZ+1xuk2Et+yIaWvje2+vuvg7PBWGy5k/xKqQk5mltdMS4ldReJ7L/8OQDFov3M6gq i/bOCkKzV+LVT0oG4u9Fa7oqxh4P+fio8c9cF1+blPoWf92m58X4+uj32vMBb7gpaGQs vl2OubVnUtr4KOW+X+S82mWkjcnxFNnF30kvduQp5w2UlMt3NB9Rrws7/cxjcZETdVbx 
From: Nicholas Piggin
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Nicholas Piggin, linuxppc-dev@lists.ozlabs.org, Michael Ellerman
Subject: [PATCH v3 4/6] KVM: PPC: selftests: add selftests sanity tests
Date: Thu, 8 Jun 2023 13:24:23 +1000
Message-Id: <20230608032425.59796-5-npiggin@gmail.com>
In-Reply-To: <20230608032425.59796-1-npiggin@gmail.com>
References: <20230608032425.59796-1-npiggin@gmail.com>

Add tests that exercise very basic functions of the KVM selftests framework: guest creation, ucalls, hcalls, copying data between guest and host, interrupts, and page faults. These do not stress KVM so much as they are useful when developing support for powerpc.
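To make the guest/host handshake these sanity tests are built on concrete, here is a minimal hypothetical skeleton (not part of the patch), assuming the GUEST_SYNC()/GUEST_DONE() ucall macros from the generic framework and the host_sync()/handle_ucall() helpers from powerpc/helpers.h added earlier in the series:

/* Hypothetical test skeleton, not part of this patch. */
#include "kvm_util.h"
#include "helpers.h"

static void guest_code(void)
{
	GUEST_SYNC(0);		/* exit to the host with sync value 0 */
	GUEST_SYNC(1);		/* resumed by the host, sync again */
	GUEST_DONE();		/* final exit */
}

int main(void)
{
	struct kvm_vcpu *vcpu;
	struct kvm_vm *vm = vm_create_with_one_vcpu(&vcpu, guest_code);

	vcpu_run(vcpu);
	host_sync(vcpu, 0);	/* asserts the guest synced with value 0 */
	vcpu_run(vcpu);
	host_sync(vcpu, 1);
	vcpu_run(vcpu);
	handle_ucall(vcpu, UCALL_DONE);

	kvm_vm_free(vm);
	return 0;
}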
Acked-by: Michael Ellerman (powerpc) Signed-off-by: Nicholas Piggin --- tools/testing/selftests/kvm/Makefile | 2 + .../selftests/kvm/include/powerpc/hcall.h | 2 + .../testing/selftests/kvm/powerpc/null_test.c | 166 ++++++++++++++++++ .../selftests/kvm/powerpc/rtas_hcall.c | 136 ++++++++++++++ 4 files changed, 306 insertions(+) create mode 100644 tools/testing/selftests/kvm/powerpc/null_test.c create mode 100644 tools/testing/selftests/kvm/powerpc/rtas_hcall.c diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index 53cd3ce63dec..efb8700b9752 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -184,6 +184,8 @@ TEST_GEN_PROGS_riscv += kvm_page_table_test TEST_GEN_PROGS_riscv += set_memory_region_test TEST_GEN_PROGS_riscv += kvm_binary_stats_test +TEST_GEN_PROGS_powerpc += powerpc/null_test +TEST_GEN_PROGS_powerpc += powerpc/rtas_hcall TEST_GEN_PROGS_powerpc += access_tracking_perf_test TEST_GEN_PROGS_powerpc += demand_paging_test TEST_GEN_PROGS_powerpc += dirty_log_test diff --git a/tools/testing/selftests/kvm/include/powerpc/hcall.h b/tools/testing/selftests/kvm/include/powerpc/hcall.h index ba119f5a3fef..04c7d2d13020 100644 --- a/tools/testing/selftests/kvm/include/powerpc/hcall.h +++ b/tools/testing/selftests/kvm/include/powerpc/hcall.h @@ -12,6 +12,8 @@ #define UCALL_R4_UCALL 0x5715 // regular ucall, r5 contains ucall pointer #define UCALL_R4_SIMPLE 0x0000 // simple exit usable by asm with no ucall data +#define H_RTAS 0xf000 + int64_t hcall0(uint64_t token); int64_t hcall1(uint64_t token, uint64_t arg1); int64_t hcall2(uint64_t token, uint64_t arg1, uint64_t arg2); diff --git a/tools/testing/selftests/kvm/powerpc/null_test.c b/tools/testing/selftests/kvm/powerpc/null_test.c new file mode 100644 index 000000000000..31db0b6becd6 --- /dev/null +++ b/tools/testing/selftests/kvm/powerpc/null_test.c @@ -0,0 +1,166 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Tests for guest creation, run, ucall, interrupt, and vm dumping. 
+ */ + +#define _GNU_SOURCE /* for program_invocation_short_name */ +#include +#include +#include +#include +#include + +#include "test_util.h" +#include "kvm_util.h" +#include "kselftest.h" +#include "processor.h" +#include "helpers.h" + +extern void guest_code_asm(void); +asm(".global guest_code_asm"); +asm(".balign 4"); +asm("guest_code_asm:"); +asm("li 3,0"); // H_UCALL +asm("li 4,0"); // UCALL_R4_SIMPLE +asm("sc 1"); + +static void test_asm(void) +{ + struct kvm_vcpu *vcpu; + struct kvm_vm *vm; + + vm = vm_create_with_one_vcpu(&vcpu, guest_code_asm); + + vcpu_run(vcpu); + handle_ucall(vcpu, UCALL_NONE); + + kvm_vm_free(vm); +} + +static void guest_code_ucall(void) +{ + GUEST_DONE(); +} + +static void test_ucall(void) +{ + struct kvm_vcpu *vcpu; + struct kvm_vm *vm; + + vm = vm_create_with_one_vcpu(&vcpu, guest_code_ucall); + + vcpu_run(vcpu); + handle_ucall(vcpu, UCALL_DONE); + + kvm_vm_free(vm); +} + +static void trap_handler(struct ex_regs *regs) +{ + GUEST_SYNC(1); + regs->nia += 4; +} + +static void guest_code_trap(void) +{ + GUEST_SYNC(0); + asm volatile("trap"); + GUEST_DONE(); +} + +static void test_trap(void) +{ + struct kvm_vcpu *vcpu; + struct kvm_vm *vm; + + vm = vm_create_with_one_vcpu(&vcpu, guest_code_trap); + vm_install_exception_handler(vm, 0x700, trap_handler); + + vcpu_run(vcpu); + host_sync(vcpu, 0); + vcpu_run(vcpu); + host_sync(vcpu, 1); + vcpu_run(vcpu); + handle_ucall(vcpu, UCALL_DONE); + + vm_install_exception_handler(vm, 0x700, NULL); + + kvm_vm_free(vm); +} + +static void dsi_handler(struct ex_regs *regs) +{ + GUEST_SYNC(1); + regs->nia += 4; +} + +static void guest_code_dsi(void) +{ + GUEST_SYNC(0); + asm volatile("stb %r0,0(0)"); + GUEST_DONE(); +} + +static void test_dsi(void) +{ + struct kvm_vcpu *vcpu; + struct kvm_vm *vm; + + vm = vm_create_with_one_vcpu(&vcpu, guest_code_dsi); + vm_install_exception_handler(vm, 0x300, dsi_handler); + + vcpu_run(vcpu); + host_sync(vcpu, 0); + vcpu_run(vcpu); + host_sync(vcpu, 1); + vcpu_run(vcpu); + handle_ucall(vcpu, UCALL_DONE); + + vm_install_exception_handler(vm, 0x300, NULL); + + kvm_vm_free(vm); +} + +static void test_dump(void) +{ + struct kvm_vcpu *vcpu; + struct kvm_vm *vm; + + vm = vm_create_with_one_vcpu(&vcpu, guest_code_ucall); + + vcpu_run(vcpu); + handle_ucall(vcpu, UCALL_DONE); + + printf("Testing vm_dump...\n"); + vm_dump(stderr, vm, 2); + + kvm_vm_free(vm); +} + + +struct testdef { + const char *name; + void (*test)(void); +} testlist[] = { + { "null asm test", test_asm}, + { "null ucall test", test_ucall}, + { "trap test", test_trap}, + { "page fault test", test_dsi}, + { "vm dump test", test_dump}, +}; + +int main(int argc, char *argv[]) +{ + int idx; + + ksft_print_header(); + + ksft_set_plan(ARRAY_SIZE(testlist)); + + for (idx = 0; idx < ARRAY_SIZE(testlist); idx++) { + testlist[idx].test(); + ksft_test_result_pass("%s\n", testlist[idx].name); + } + + ksft_finished(); /* Print results and exit() accordingly */ +} diff --git a/tools/testing/selftests/kvm/powerpc/rtas_hcall.c b/tools/testing/selftests/kvm/powerpc/rtas_hcall.c new file mode 100644 index 000000000000..05af22c711cb --- /dev/null +++ b/tools/testing/selftests/kvm/powerpc/rtas_hcall.c @@ -0,0 +1,136 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Test the KVM H_RTAS hcall and copying buffers between guest and host. 
+ */ + +#define _GNU_SOURCE /* for program_invocation_short_name */ +#include +#include +#include +#include +#include + +#include "test_util.h" +#include "kvm_util.h" +#include "kselftest.h" +#include "hcall.h" + +struct rtas_args { + __be32 token; + __be32 nargs; + __be32 nret; + __be32 args[16]; + __be32 *rets; /* Pointer to return values in args[]. */ +}; + +static void guest_code(void) +{ + struct rtas_args r; + int64_t rc; + + r.token = cpu_to_be32(0xdeadbeef); + r.nargs = cpu_to_be32(3); + r.nret = cpu_to_be32(2); + r.rets = &r.args[3]; + r.args[0] = cpu_to_be32(0x1000); + r.args[1] = cpu_to_be32(0x1001); + r.args[2] = cpu_to_be32(0x1002); + rc = hcall1(H_RTAS, (uint64_t)&r); + GUEST_ASSERT(rc == 0); + GUEST_ASSERT_1(be32_to_cpu(r.rets[0]) == 0xabc, be32_to_cpu(r.rets[0])); + GUEST_ASSERT_1(be32_to_cpu(r.rets[1]) == 0x123, be32_to_cpu(r.rets[1])); + + GUEST_DONE(); +} + +int main(int argc, char *argv[]) +{ + struct kvm_regs regs; + struct rtas_args *r; + vm_vaddr_t rtas_vaddr; + struct ucall uc; + struct kvm_vcpu *vcpu; + struct kvm_vm *vm; + uint64_t tmp; + int ret; + + ksft_print_header(); + + ksft_set_plan(1); + + /* Create VM */ + vm = vm_create_with_one_vcpu(&vcpu, guest_code); + + ret = _vcpu_run(vcpu); + TEST_ASSERT(ret == 0, "vcpu_run failed: %d\n", ret); + switch ((tmp = get_ucall(vcpu, &uc))) { + case UCALL_NONE: + break; // good + case UCALL_DONE: + TEST_FAIL("Unexpected final guest exit %lu\n", tmp); + break; + case UCALL_ABORT: + REPORT_GUEST_ASSERT_N(uc, "values: %lu (0x%lx)\n", + GUEST_ASSERT_ARG(uc, 0), + GUEST_ASSERT_ARG(uc, 0)); + break; + default: + TEST_FAIL("Unexpected guest exit %lu\n", tmp); + } + + TEST_ASSERT(vcpu->run->exit_reason == KVM_EXIT_PAPR_HCALL, + "Expected PAPR_HCALL exit, got %s\n", + exit_reason_str(vcpu->run->exit_reason)); + TEST_ASSERT(vcpu->run->papr_hcall.nr == H_RTAS, + "Expected H_RTAS exit, got %lld\n", + vcpu->run->papr_hcall.nr); + + vcpu_regs_get(vcpu, ®s); + rtas_vaddr = regs.gpr[4]; + + r = addr_gva2hva(vm, rtas_vaddr); + + TEST_ASSERT(r->token == cpu_to_be32(0xdeadbeef), + "Expected RTAS token 0xdeadbeef, got 0x%x\n", + be32_to_cpu(r->token)); + TEST_ASSERT(r->nargs == cpu_to_be32(3), + "Expected RTAS nargs 3, got %u\n", + be32_to_cpu(r->nargs)); + TEST_ASSERT(r->nret == cpu_to_be32(2), + "Expected RTAS nret 2, got %u\n", + be32_to_cpu(r->nret)); + TEST_ASSERT(r->args[0] == cpu_to_be32(0x1000), + "Expected args[0] to be 0x1000, got 0x%x\n", + be32_to_cpu(r->args[0])); + TEST_ASSERT(r->args[1] == cpu_to_be32(0x1001), + "Expected args[1] to be 0x1001, got 0x%x\n", + be32_to_cpu(r->args[1])); + TEST_ASSERT(r->args[2] == cpu_to_be32(0x1002), + "Expected args[2] to be 0x1002, got 0x%x\n", + be32_to_cpu(r->args[2])); + + r->args[3] = cpu_to_be32(0xabc); + r->args[4] = cpu_to_be32(0x123); + + regs.gpr[3] = 0; + vcpu_regs_set(vcpu, ®s); + + ret = _vcpu_run(vcpu); + TEST_ASSERT(ret == 0, "vcpu_run failed: %d\n", ret); + switch ((tmp = get_ucall(vcpu, &uc))) { + case UCALL_DONE: + break; + case UCALL_ABORT: + REPORT_GUEST_ASSERT_N(uc, "values: %lu (0x%lx)\n", + GUEST_ASSERT_ARG(uc, 0), + GUEST_ASSERT_ARG(uc, 0)); + break; + default: + TEST_FAIL("Unexpected guest exit %lu\n", tmp); + } + + kvm_vm_free(vm); + + ksft_test_result_pass("%s\n", "rtas buffer copy test"); + ksft_finished(); /* Print results and exit() accordingly */ +} From patchwork Thu Jun 8 03:24:24 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 13271575 Return-Path: 
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0DE89C77B7A for ; Thu, 8 Jun 2023 03:25:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233648AbjFHDZR (ORCPT ); Wed, 7 Jun 2023 23:25:17 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34622 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234117AbjFHDZB (ORCPT ); Wed, 7 Jun 2023 23:25:01 -0400 Received: from mail-pj1-x102e.google.com (mail-pj1-x102e.google.com [IPv6:2607:f8b0:4864:20::102e]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5474926A2 for ; Wed, 7 Jun 2023 20:24:56 -0700 (PDT) Received: by mail-pj1-x102e.google.com with SMTP id 98e67ed59e1d1-25669acf1b0so123722a91.0 for ; Wed, 07 Jun 2023 20:24:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1686194695; x=1688786695; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=mtkn8MG1l6d0rnDrKg50x+C49aCaIkLucrOXa80nT/k=; b=V3stYaLM6PVapxKi5aRX9P/lJi1xylmQzw3KYmDO7XEzUlPyt1stDvmmVOh/a5fNFQ z/vRjEJ/P+q+jOYfp2z7FVumostFaXImY8n9qisiuTSpXyGkarykZtwzHjSpsErE3lUL 5FfLwaKjztr8oLBJxbbE0in/s8pQm+0nx3UehpRJSIwnQChrrBeZdIgCtEQrQzx9UBuy T4neAPWs0Q+mBauXMQHi82s8HUaqNp3LTSEWetsXt7PPQea/R7sMQ+rjj+ZW9jnuFZeD 4vkTyinNiNgDSLjTjkCpri+rBhKPuL2pEFUMWuURiXB+yAQ8jn4zjRNrwL0MEID2CzDq JNcA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1686194695; x=1688786695; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=mtkn8MG1l6d0rnDrKg50x+C49aCaIkLucrOXa80nT/k=; b=Wz3GP7U2g/pmbmOXrGi9l/A0+s3ZPHa5Tw6HBkpgNFYnOEQFUGJk0UChFUGZTVQlq5 3T/4D4ptuMlA/c+MK0hQsQWbXbhyLkQIBoF1GpvjHgfKovd2kWOMXD6HX/8MEcLU+b92 R/wwP49tbCN9KvO5ew1+G0NJ0KXAY99jjDcjdFljN+nrpOApp33FoPcuAmRM+dNrSEir PxKoU0UWKe4oIRE7gXk6BfcsdpAFiHQmj5CK9M9mAYCgRcuKf+g574zcW4aYq2ZLLA+x R70Pd/AjANI5dgmHQ8LDIeTw9a6Xw1sWF7TPw5VURQK39s3d+AOigOaOgf8sSUaG5TWW uMPQ== X-Gm-Message-State: AC+VfDytdtINPhZOxc5qXeFlkxVOO/NYuvtEooIlMX9ir056KwuTY+ZJ cT/VTTntA0Qo8MD/vXF4osnUgTnZgE4= X-Google-Smtp-Source: ACHHUZ6Ux01aJrb26kNNuYZTHJe9RtcE09reTQxW8rVOcqTYksMpZ22HkHCycmfnzX2MS6V2yty44w== X-Received: by 2002:a17:90a:a42:b0:259:224a:9cf9 with SMTP id o60-20020a17090a0a4200b00259224a9cf9mr6677604pjo.36.1686194695062; Wed, 07 Jun 2023 20:24:55 -0700 (PDT) Received: from wheely.local0.net (58-6-224-112.tpgi.com.au. [58.6.224.112]) by smtp.gmail.com with ESMTPSA id s12-20020a17090a5d0c00b0025930e50e28sm2015629pji.41.2023.06.07.20.24.51 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 07 Jun 2023 20:24:54 -0700 (PDT) From: Nicholas Piggin To: kvm@vger.kernel.org, Paolo Bonzini Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v3 5/6] KVM: PPC: selftests: Add a TLBIEL virtualisation tester Date: Thu, 8 Jun 2023 13:24:24 +1000 Message-Id: <20230608032425.59796-6-npiggin@gmail.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230608032425.59796-1-npiggin@gmail.com> References: <20230608032425.59796-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org TLBIEL virtualisation has been a source of difficulty. 
The TLBIEL instruction operates on the TLB of the hardware thread which executes it, but the behaviour expected by the guest environment is that it invalidates translations cached for the vCPU which executed it. KVM must therefore ensure no stale translations remain visible when a vCPU is migrated to a different hardware thread. Add a test that creates and invalidates different kinds of translations while moving vCPUs between CPUs, checking for stale translations. Signed-off-by: Nicholas Piggin --- tools/testing/selftests/kvm/Makefile | 1 + .../selftests/kvm/include/powerpc/processor.h | 7 + .../selftests/kvm/lib/powerpc/processor.c | 108 +++- .../selftests/kvm/powerpc/tlbiel_test.c | 508 ++++++++++++++++++ 4 files changed, 621 insertions(+), 3 deletions(-) create mode 100644 tools/testing/selftests/kvm/powerpc/tlbiel_test.c diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index efb8700b9752..aa3a8ca676c2 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -186,6 +186,7 @@ TEST_GEN_PROGS_riscv += kvm_binary_stats_test TEST_GEN_PROGS_powerpc += powerpc/null_test TEST_GEN_PROGS_powerpc += powerpc/rtas_hcall +TEST_GEN_PROGS_powerpc += powerpc/tlbiel_test TEST_GEN_PROGS_powerpc += access_tracking_perf_test TEST_GEN_PROGS_powerpc += demand_paging_test TEST_GEN_PROGS_powerpc += dirty_log_test diff --git a/tools/testing/selftests/kvm/include/powerpc/processor.h b/tools/testing/selftests/kvm/include/powerpc/processor.h index ce5a23525dbd..92ef6476a9ef 100644 --- a/tools/testing/selftests/kvm/include/powerpc/processor.h +++ b/tools/testing/selftests/kvm/include/powerpc/processor.h @@ -7,6 +7,7 @@ #include #include "ppc_asm.h" +#include "kvm_util_base.h" extern unsigned char __interrupts_start[]; extern unsigned char __interrupts_end[]; @@ -31,6 +32,12 @@ struct ex_regs { void vm_install_exception_handler(struct kvm_vm *vm, int vector, void (*handler)(struct ex_regs *)); +vm_paddr_t virt_pt_duplicate(struct kvm_vm *vm); +void set_radix_proc_table(struct kvm_vm *vm, int pid, vm_paddr_t pgd); +bool virt_wrprotect_pte(struct kvm_vm *vm, uint64_t gva); +bool virt_wrenable_pte(struct kvm_vm *vm, uint64_t gva); +bool virt_remap_pte(struct kvm_vm *vm, uint64_t gva, vm_paddr_t gpa); + static inline void cpu_relax(void) { asm volatile("" ::: "memory"); diff --git a/tools/testing/selftests/kvm/lib/powerpc/processor.c b/tools/testing/selftests/kvm/lib/powerpc/processor.c index 02db2ff86da8..17ea440f9026 100644 --- a/tools/testing/selftests/kvm/lib/powerpc/processor.c +++ b/tools/testing/selftests/kvm/lib/powerpc/processor.c @@ -23,7 +23,7 @@ static void set_proc_table(struct kvm_vm *vm, int pid, uint64_t dw0, uint64_t dw proc_table[pid * 2 + 1] = cpu_to_be64(dw1); } -static void set_radix_proc_table(struct kvm_vm *vm, int pid, vm_paddr_t pgd) +void set_radix_proc_table(struct kvm_vm *vm, int pid, vm_paddr_t pgd) { set_proc_table(vm, pid, pgd | RADIX_TREE_SIZE | RADIX_PGD_INDEX_SIZE, 0); } @@ -146,9 +146,69 @@ static uint64_t *virt_get_pte(struct kvm_vm *vm, vm_paddr_t pt, #define PDE_NLS 0x0000000000000011ull #define PDE_PT_MASK 0x0fffffffffffff00ull -void virt_arch_pg_map(struct kvm_vm *vm, uint64_t gva, uint64_t gpa) +static uint64_t *virt_lookup_pte(struct kvm_vm *vm, uint64_t gva) { vm_paddr_t pt = vm->pgd; + uint64_t *ptep; + int level; + + for (level = 1; level <= 3; level++) { + uint64_t nls; + uint64_t *pdep = virt_get_pte(vm, pt, gva, level, &nls); + uint64_t pde = be64_to_cpu(*pdep); + + if (pde) { + TEST_ASSERT((pde & PDE_VALID) && !(pde & PTE_LEAF), + "Invalid PDE at level: %u gva: 0x%lx pde:0x%lx\n", + level, gva, pde); + pt = pde & PDE_PT_MASK; + continue; + } + + return NULL; + } + + ptep = virt_get_pte(vm, pt, gva, level, NULL); + + return ptep; +} + +static bool virt_modify_pte(struct kvm_vm *vm, uint64_t gva, uint64_t clr, uint64_t
set) +{ + uint64_t *ptep, pte; + + ptep = virt_lookup_pte(vm, gva); + if (!ptep) + return false; + + pte = be64_to_cpu(*ptep); + if (!(pte & PTE_VALID)) + return false; + + pte = (pte & ~clr) | set; + *ptep = cpu_to_be64(pte); + + return true; +} + +bool virt_remap_pte(struct kvm_vm *vm, uint64_t gva, vm_paddr_t gpa) +{ + return virt_modify_pte(vm, gva, PTE_PAGE_MASK, (gpa & PTE_PAGE_MASK)); +} + +bool virt_wrprotect_pte(struct kvm_vm *vm, uint64_t gva) +{ + return virt_modify_pte(vm, gva, PTE_RW, 0); +} + +bool virt_wrenable_pte(struct kvm_vm *vm, uint64_t gva) +{ + return virt_modify_pte(vm, gva, 0, PTE_RW); +} + +static void __virt_arch_pg_map(struct kvm_vm *vm, vm_paddr_t pgd, uint64_t gva, uint64_t gpa) +{ + vm_paddr_t pt = pgd; uint64_t *ptep, pte; int level; @@ -187,6 +247,49 @@ void virt_arch_pg_map(struct kvm_vm *vm, uint64_t gva, uint64_t gpa) *ptep = cpu_to_be64(pte); } +void virt_arch_pg_map(struct kvm_vm *vm, uint64_t gva, uint64_t gpa) +{ + __virt_arch_pg_map(vm, vm->pgd, gva, gpa); +} + +static void __virt_pt_duplicate(struct kvm_vm *vm, vm_paddr_t pgd, vm_paddr_t pt, vm_vaddr_t va, int level) +{ + uint64_t *page_table; + int size, idx; + + page_table = addr_gpa2hva(vm, pt); + size = 1U << pt_shift(vm, level); + for (idx = 0; idx < size; idx++) { + uint64_t pte = be64_to_cpu(page_table[idx]); + if (pte & PTE_VALID) { + if (pte & PTE_LEAF) { + __virt_arch_pg_map(vm, pgd, va, pte & PTE_PAGE_MASK); + } else { + __virt_pt_duplicate(vm, pgd, pte & PDE_PT_MASK, va, level + 1); + } + } + va += pt_entry_coverage(vm, level); + } +} + +vm_paddr_t virt_pt_duplicate(struct kvm_vm *vm) +{ + vm_paddr_t pgtb; + uint64_t *page_table; + size_t pgd_pages; + + pgd_pages = 1UL << ((RADIX_PGD_INDEX_SIZE + 3) >> vm->page_shift); + TEST_ASSERT(pgd_pages == 1, "PGD allocation must be single page"); + pgtb = vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR, + vm->memslots[MEM_REGION_PT]); + page_table = addr_gpa2hva(vm, pgtb); + memset(page_table, 0, vm->page_size * pgd_pages); + + __virt_pt_duplicate(vm, pgtb, vm->pgd, 0, 1); + + return pgtb; +} + vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva) { vm_paddr_t pt = vm->pgd; @@ -244,7 +347,6 @@ static void virt_dump_pt(FILE *stream, struct kvm_vm *vm, vm_paddr_t pt, level + 1, indent + 2); } } - } void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent) diff --git a/tools/testing/selftests/kvm/powerpc/tlbiel_test.c b/tools/testing/selftests/kvm/powerpc/tlbiel_test.c new file mode 100644 index 000000000000..63ffcff15617 --- /dev/null +++ b/tools/testing/selftests/kvm/powerpc/tlbiel_test.c @@ -0,0 +1,508 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Test TLBIEL virtualisation. The TLBIEL instruction operates on cached + * translations of the hardware thread and/or core which executes it, but the + * behaviour required of the guest is that it should invalidate cached + * translations visible to the vCPU that executed it. The instruction can + * not be trapped by the hypervisor. + * + * This requires that when a vCPU is migrated to a different hardware thread, + * KVM must ensure that no potentially stale translations be visible on + * the new hardware thread. Implementing this has been a source of + * difficulty. + * + * This test tries to create and invalidate different kinds oftranslations + * while moving vCPUs between CPUs, and checking for stale translations. 
+ */ + +#define _GNU_SOURCE /* for program_invocation_short_name */ +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "test_util.h" +#include "kvm_util.h" +#include "kselftest.h" +#include "processor.h" +#include "helpers.h" + +static int nr_cpus; +static int *cpu_array; + +static void set_cpu(int cpu) +{ + cpu_set_t set; + + CPU_ZERO(&set); + CPU_SET(cpu, &set); + + if (sched_setaffinity(0, sizeof(set), &set) == -1) { + perror("sched_setaffinity"); + exit(1); + } +} + +static void set_random_cpu(void) +{ + set_cpu(cpu_array[random() % nr_cpus]); +} + +static void init_sched_cpu(void) +{ + cpu_set_t possible_mask; + int i, cnt, nproc; + + nproc = get_nprocs_conf(); + + TEST_ASSERT(!sched_getaffinity(0, sizeof(possible_mask), &possible_mask), + "sched_getaffinity failed, errno = %d (%s)", errno, strerror(errno)); + + nr_cpus = CPU_COUNT(&possible_mask); + cpu_array = malloc(nr_cpus * sizeof(int)); + + cnt = 0; + for (i = 0; i < nproc; i++) { + if (CPU_ISSET(i, &possible_mask)) { + cpu_array[cnt] = i; + cnt++; + } + } +} + +static volatile bool timeout; + +static void set_timer(int sec) +{ + struct itimerval timer; + + timeout = false; + + timer.it_value.tv_sec = sec; + timer.it_value.tv_usec = 0; + timer.it_interval = timer.it_value; + TEST_ASSERT(setitimer(ITIMER_REAL, &timer, NULL) == 0, + "setitimer failed %s", strerror(errno)); +} + +static void sigalrm_handler(int sig) +{ + timeout = true; +} + +static void init_timers(void) +{ + TEST_ASSERT(signal(SIGALRM, sigalrm_handler) != SIG_ERR, + "Failed to register SIGALRM handler, errno = %d (%s)", + errno, strerror(errno)); +} + +static inline void virt_invalidate_tlb(uint64_t gva) +{ + unsigned long rb, rs; + unsigned long is = 2, ric = 0, prs = 1, r = 1; + + rb = is << 10; + rs = 0; + + asm volatile("ptesync ; .machine push ; .machine power9 ; tlbiel %0,%1,%2,%3,%4 ; .machine pop ; ptesync" + :: "r"(rb), "r"(rs), "i"(ric), "i"(prs), "i"(r) + : "memory"); +} + +static inline void virt_invalidate_pwc(uint64_t gva) +{ + unsigned long rb, rs; + unsigned long is = 2, ric = 1, prs = 1, r = 1; + + rb = is << 10; + rs = 0; + + asm volatile("ptesync ; .machine push ; .machine power9 ; tlbiel %0,%1,%2,%3,%4 ; .machine pop ; ptesync" + :: "r"(rb), "r"(rs), "i"(ric), "i"(prs), "i"(r) + : "memory"); +} + +static inline void virt_invalidate_all(uint64_t gva) +{ + unsigned long rb, rs; + unsigned long is = 2, ric = 2, prs = 1, r = 1; + + rb = is << 10; + rs = 0; + + asm volatile("ptesync ; .machine push ; .machine power9 ; tlbiel %0,%1,%2,%3,%4 ; .machine pop ; ptesync" + :: "r"(rb), "r"(rs), "i"(ric), "i"(prs), "i"(r) + : "memory"); +} + +static inline void virt_invalidate_page(uint64_t gva) +{ + unsigned long rb, rs; + unsigned long is = 0, ric = 0, prs = 1, r = 1; + unsigned long ap = 0x5; + unsigned long epn = gva & ~0xffffUL; + unsigned long pid = 0; + + rb = epn | (is << 10) | (ap << 5); + rs = pid << 32; + + asm volatile("ptesync ; .machine push ; .machine power9 ; tlbiel %0,%1,%2,%3,%4 ; .machine pop ; ptesync" + :: "r"(rb), "r"(rs), "i"(ric), "i"(prs), "i"(r) + : "memory"); +} + +enum { + SYNC_BEFORE_LOAD1, + SYNC_BEFORE_LOAD2, + SYNC_BEFORE_STORE, + SYNC_BEFORE_INVALIDATE, + SYNC_DSI, +}; + +static void remap_dsi_handler(struct ex_regs *regs) +{ + GUEST_ASSERT(0); +} + +#define PAGE1_VAL 0x1234567890abcdef +#define PAGE2_VAL 0x5c5c5c5c5c5c5c5c + +static void remap_guest_code(vm_vaddr_t page) +{ + unsigned long *mem = (void *)page; + + for (;;) { + unsigned long tmp; + + 
GUEST_SYNC(SYNC_BEFORE_LOAD1); + asm volatile("ld %0,%1" : "=r"(tmp) : "m"(*mem)); + GUEST_ASSERT(tmp == PAGE1_VAL); + GUEST_SYNC(SYNC_BEFORE_INVALIDATE); + virt_invalidate_page(page); + GUEST_SYNC(SYNC_BEFORE_LOAD2); + asm volatile("ld %0,%1" : "=r"(tmp) : "m"(*mem)); + GUEST_ASSERT(tmp == PAGE2_VAL); + GUEST_SYNC(SYNC_BEFORE_INVALIDATE); + virt_invalidate_page(page); + } +} + +static void remap_test(void) +{ + struct kvm_vcpu *vcpu; + struct kvm_vm *vm; + vm_vaddr_t vaddr; + vm_paddr_t pages[2]; + uint64_t *hostptr; + + /* Create VM */ + vm = vm_create_with_one_vcpu(&vcpu, remap_guest_code); + vm_install_exception_handler(vm, 0x300, remap_dsi_handler); + + vaddr = vm_vaddr_alloc_page(vm); + pages[0] = addr_gva2gpa(vm, vaddr); + pages[1] = vm_phy_page_alloc(vm, 0, vm->memslots[MEM_REGION_DATA]); + + hostptr = addr_gpa2hva(vm, pages[0]); + *hostptr = PAGE1_VAL; + + hostptr = addr_gpa2hva(vm, pages[1]); + *hostptr = PAGE2_VAL; + + vcpu_args_set(vcpu, 1, vaddr); + + set_random_cpu(); + set_timer(10); + + while (!timeout) { + vcpu_run(vcpu); + + host_sync(vcpu, SYNC_BEFORE_LOAD1); + set_random_cpu(); + vcpu_run(vcpu); + + host_sync(vcpu, SYNC_BEFORE_INVALIDATE); + set_random_cpu(); + TEST_ASSERT(virt_remap_pte(vm, vaddr, pages[1]), "Remap page1 failed"); + vcpu_run(vcpu); + + host_sync(vcpu, SYNC_BEFORE_LOAD2); + set_random_cpu(); + vcpu_run(vcpu); + + host_sync(vcpu, SYNC_BEFORE_INVALIDATE); + TEST_ASSERT(virt_remap_pte(vm, vaddr, pages[0]), "Remap page0 failed"); + set_random_cpu(); + } + + vm_install_exception_handler(vm, 0x300, NULL); + + kvm_vm_free(vm); +} + +static void wrprotect_dsi_handler(struct ex_regs *regs) +{ + GUEST_SYNC(SYNC_DSI); + regs->nia += 4; +} + +static void wrprotect_guest_code(vm_vaddr_t page) +{ + volatile char *mem = (void *)page; + + for (;;) { + GUEST_SYNC(SYNC_BEFORE_STORE); + asm volatile("stb %1,%0" : "=m"(*mem) : "r"(1)); + GUEST_SYNC(SYNC_BEFORE_INVALIDATE); + virt_invalidate_page(page); + } +} + +static void wrprotect_test(void) +{ + struct kvm_vcpu *vcpu; + struct kvm_vm *vm; + vm_vaddr_t page; + void *hostptr; + + /* Create VM */ + vm = vm_create_with_one_vcpu(&vcpu, wrprotect_guest_code); + vm_install_exception_handler(vm, 0x300, wrprotect_dsi_handler); + + page = vm_vaddr_alloc_page(vm); + hostptr = addr_gva2hva(vm, page); + memset(hostptr, 0, vm->page_size); + + vcpu_args_set(vcpu, 1, page); + + set_random_cpu(); + set_timer(10); + + while (!timeout) { + vcpu_run(vcpu); + host_sync(vcpu, SYNC_BEFORE_STORE); + + vcpu_run(vcpu); + host_sync(vcpu, SYNC_BEFORE_INVALIDATE); + + TEST_ASSERT(virt_wrprotect_pte(vm, page), "Wrprotect page failed"); + /* Invalidate on different CPU */ + set_random_cpu(); + vcpu_run(vcpu); + host_sync(vcpu, SYNC_BEFORE_STORE); + + /* Store on different CPU */ + set_random_cpu(); + vcpu_run(vcpu); + host_sync(vcpu, SYNC_DSI); + vcpu_run(vcpu); + host_sync(vcpu, SYNC_BEFORE_INVALIDATE); + + TEST_ASSERT(virt_wrenable_pte(vm, page), "Wrenable page failed"); + + /* Invalidate on different CPU when we go around */ + set_random_cpu(); + } + + vm_install_exception_handler(vm, 0x300, NULL); + + kvm_vm_free(vm); +} + +static void wrp_mt_dsi_handler(struct ex_regs *regs) +{ + GUEST_SYNC(SYNC_DSI); + regs->nia += 4; +} + +static void wrp_mt_guest_code(vm_vaddr_t page, bool invalidates) +{ + volatile char *mem = (void *)page; + + for (;;) { + GUEST_SYNC(SYNC_BEFORE_STORE); + asm volatile("stb %1,%0" : "=m"(*mem) : "r"(1)); + if (invalidates) { + GUEST_SYNC(SYNC_BEFORE_INVALIDATE); + virt_invalidate_page(page); + } + } +} + +static void 
wrp_mt_test(void) +{ + struct kvm_vcpu *vcpu[2]; + struct kvm_vm *vm; + vm_vaddr_t page; + void *hostptr; + + /* Create VM */ + vm = vm_create_with_vcpus(2, wrp_mt_guest_code, vcpu); + vm_install_exception_handler(vm, 0x300, wrp_mt_dsi_handler); + + page = vm_vaddr_alloc_page(vm); + hostptr = addr_gva2hva(vm, page); + memset(hostptr, 0, vm->page_size); + + vcpu_args_set(vcpu[0], 2, page, 1); + vcpu_args_set(vcpu[1], 2, page, 0); + + set_random_cpu(); + set_timer(10); + + while (!timeout) { + /* Run vcpu[1] only when page is writable, should never fault */ + vcpu_run(vcpu[1]); + host_sync(vcpu[1], SYNC_BEFORE_STORE); + + vcpu_run(vcpu[0]); + host_sync(vcpu[0], SYNC_BEFORE_STORE); + + vcpu_run(vcpu[0]); + host_sync(vcpu[0], SYNC_BEFORE_INVALIDATE); + + TEST_ASSERT(virt_wrprotect_pte(vm, page), "Wrprotect page failed"); + /* Invalidate on different CPU */ + set_random_cpu(); + vcpu_run(vcpu[0]); + host_sync(vcpu[0], SYNC_BEFORE_STORE); + + /* Store on different CPU */ + set_random_cpu(); + vcpu_run(vcpu[0]); + host_sync(vcpu[0], SYNC_DSI); + vcpu_run(vcpu[0]); + host_sync(vcpu[0], SYNC_BEFORE_INVALIDATE); + + TEST_ASSERT(virt_wrenable_pte(vm, page), "Wrenable page failed"); + /* Invalidate on different CPU when we go around */ + set_random_cpu(); + } + + vm_install_exception_handler(vm, 0x300, NULL); + + kvm_vm_free(vm); +} + +static void proctbl_dsi_handler(struct ex_regs *regs) +{ + GUEST_SYNC(SYNC_DSI); + regs->nia += 4; +} + +static void proctbl_guest_code(vm_vaddr_t page) +{ + volatile char *mem = (void *)page; + + for (;;) { + GUEST_SYNC(SYNC_BEFORE_STORE); + asm volatile("stb %1,%0" : "=m"(*mem) : "r"(1)); + GUEST_SYNC(SYNC_BEFORE_INVALIDATE); + virt_invalidate_all(page); + } +} + +static void proctbl_test(void) +{ + struct kvm_vcpu *vcpu; + struct kvm_vm *vm; + vm_vaddr_t page; + vm_paddr_t orig_pgd; + vm_paddr_t alternate_pgd; + void *hostptr; + + /* Create VM */ + vm = vm_create_with_one_vcpu(&vcpu, proctbl_guest_code); + vm_install_exception_handler(vm, 0x300, proctbl_dsi_handler); + + page = vm_vaddr_alloc_page(vm); + hostptr = addr_gva2hva(vm, page); + memset(hostptr, 0, vm->page_size); + + orig_pgd = vm->pgd; + alternate_pgd = virt_pt_duplicate(vm); + + /* Write protect the original PTE */ + TEST_ASSERT(virt_wrprotect_pte(vm, page), "Wrprotect page failed"); + + vm->pgd = alternate_pgd; + set_radix_proc_table(vm, 0, vm->pgd); + + vcpu_args_set(vcpu, 1, page); + + set_random_cpu(); + set_timer(10); + + while (!timeout) { + vcpu_run(vcpu); + host_sync(vcpu, SYNC_BEFORE_STORE); + + vcpu_run(vcpu); + host_sync(vcpu, SYNC_BEFORE_INVALIDATE); + /* Writeable store succeeds */ + + /* Swap page tables to write protected one */ + vm->pgd = orig_pgd; + set_radix_proc_table(vm, 0, vm->pgd); + + /* Invalidate on different CPU */ + set_random_cpu(); + vcpu_run(vcpu); + host_sync(vcpu, SYNC_BEFORE_STORE); + + /* Store on different CPU */ + set_random_cpu(); + vcpu_run(vcpu); + host_sync(vcpu, SYNC_DSI); + vcpu_run(vcpu); + host_sync(vcpu, SYNC_BEFORE_INVALIDATE); + + /* Swap page tables to write enabled one */ + vm->pgd = alternate_pgd; + set_radix_proc_table(vm, 0, vm->pgd); + + /* Invalidate on different CPU when we go around */ + set_random_cpu(); + } + vm->pgd = orig_pgd; + set_radix_proc_table(vm, 0, vm->pgd); + + vm_install_exception_handler(vm, 0x300, NULL); + + kvm_vm_free(vm); +} + +struct testdef { + const char *name; + void (*test)(void); +} testlist[] = { + { "tlbiel wrprotect test", wrprotect_test}, + { "tlbiel wrprotect 2-vCPU test", wrp_mt_test}, + { "tlbiel process table update 
test", proctbl_test}, + { "tlbiel remap test", remap_test}, +}; + +int main(int argc, char *argv[]) +{ + int idx; + + ksft_print_header(); + + ksft_set_plan(ARRAY_SIZE(testlist)); + + init_sched_cpu(); + init_timers(); + + for (idx = 0; idx < ARRAY_SIZE(testlist); idx++) { + testlist[idx].test(); + ksft_test_result_pass("%s\n", testlist[idx].name); + } + + ksft_finished(); /* Print results and exit() accordingly */ +} From patchwork Thu Jun 8 03:24:25 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 13271576 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2DFB8C7EE25 for ; Thu, 8 Jun 2023 03:25:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233730AbjFHDZU (ORCPT ); Wed, 7 Jun 2023 23:25:20 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34486 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234120AbjFHDZB (ORCPT ); Wed, 7 Jun 2023 23:25:01 -0400 Received: from mail-pl1-x62e.google.com (mail-pl1-x62e.google.com [IPv6:2607:f8b0:4864:20::62e]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2C7B8210A for ; Wed, 7 Jun 2023 20:24:59 -0700 (PDT) Received: by mail-pl1-x62e.google.com with SMTP id d9443c01a7336-1b02497f4cfso170705ad.3 for ; Wed, 07 Jun 2023 20:24:59 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1686194698; x=1688786698; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=SKaHgr+HL1117S3Jb0aiH5upNruURl+KhWKkgZPggeU=; b=RXSSoF6/UG7yQ3r8E8hAePxnc2Sm1OYVC6HZJwNG7h/9pNEy0g4rhTDZORYspmc4j+ JlKG7R/MRd5qsIXjy4XTgvLjLo+4nMBz72TXeGkeEJuf2mV4SxpotNfga70I7vBkzKgE Ok15EbcgRrzT/XsJ+gCoL7oYSsrqq5iMth45B+y2RzHIvvBtPNS/tKMizfol39chJ7F5 2EoUTqjGPnOPBpCxnBwphX9kxwKf8xIPi3rtK6ycBGa0+Bk4DmMAdmeHzVDc9kuHBfFL SO/uX2LZkKld7RViInfEeHW4Sk8M5ldvrPOhWkVBSWg55tH4//wbFJPKquxzs2UjPTu4 C1tg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1686194698; x=1688786698; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=SKaHgr+HL1117S3Jb0aiH5upNruURl+KhWKkgZPggeU=; b=Aaz105unVguzX23BS9H2jEVBmMg3NTHcHU+cjiEWDDC4MIGVN9JFN3+4qFC94Oxb8L wWs901LMUU+XCXYRWnPrY2utV8xadjKXjZZXn/mhAQ/tdnwyEbIC38S3D7y3/2NGpa5S 2reXOlPCWMtmZFG0LF/jCXk63C2xB1RRtKLszwd8KshVYwYbldSexFbylQJfArjiEuuj hlDJXenXqHShPIHekTlYaUJCwt3IMctNNfIKhwU2xsW7R4c/0BZmh4bqEXUvh0PRwLHQ +oTk0D1WbH8zdVKXxtL0zy/gye3i9k7/BnydCpUhP+sDgD71YAufBX/p6DHvWty5C90S kbWQ== X-Gm-Message-State: AC+VfDztxGZ8qpKswcdDXc6pYOrhydxEnwuPqg8iUw6njnnHY+pK+PgA bUrUW+Q1a4agV11nhGVfegDUaYwzb5w= X-Google-Smtp-Source: ACHHUZ7I51jMXcRhU4L0iRuO4wTdv5VNRALpAastt2XMOL+tebny1cqSEqZmj7MsagjiwC9XLtJeEg== X-Received: by 2002:a17:90a:1901:b0:259:e75a:bdc9 with SMTP id 1-20020a17090a190100b00259e75abdc9mr827229pjg.27.1686194698170; Wed, 07 Jun 2023 20:24:58 -0700 (PDT) Received: from wheely.local0.net (58-6-224-112.tpgi.com.au. 
[58.6.224.112]) by smtp.gmail.com with ESMTPSA id s12-20020a17090a5d0c00b0025930e50e28sm2015629pji.41.2023.06.07.20.24.55 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 07 Jun 2023 20:24:57 -0700 (PDT) From: Nicholas Piggin To: kvm@vger.kernel.org, Paolo Bonzini Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v3 6/6] KVM: PPC: selftests: Add interrupt performance tester Date: Thu, 8 Jun 2023 13:24:25 +1000 Message-Id: <20230608032425.59796-7-npiggin@gmail.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230608032425.59796-1-npiggin@gmail.com> References: <20230608032425.59796-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Add a little perf tester for interrupts that go to guest, host, and userspace. Signed-off-by: Nicholas Piggin --- tools/testing/selftests/kvm/Makefile | 1 + .../selftests/kvm/powerpc/interrupt_perf.c | 199 ++++++++++++++++++ 2 files changed, 200 insertions(+) create mode 100644 tools/testing/selftests/kvm/powerpc/interrupt_perf.c diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index aa3a8ca676c2..834f98971b0c 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -184,6 +184,7 @@ TEST_GEN_PROGS_riscv += kvm_page_table_test TEST_GEN_PROGS_riscv += set_memory_region_test TEST_GEN_PROGS_riscv += kvm_binary_stats_test +TEST_GEN_PROGS_powerpc += powerpc/interrupt_perf TEST_GEN_PROGS_powerpc += powerpc/null_test TEST_GEN_PROGS_powerpc += powerpc/rtas_hcall TEST_GEN_PROGS_powerpc += powerpc/tlbiel_test diff --git a/tools/testing/selftests/kvm/powerpc/interrupt_perf.c b/tools/testing/selftests/kvm/powerpc/interrupt_perf.c new file mode 100644 index 000000000000..50d078899e22 --- /dev/null +++ b/tools/testing/selftests/kvm/powerpc/interrupt_perf.c @@ -0,0 +1,199 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Test basic guest interrupt/exit performance. 
+ */ + +#define _GNU_SOURCE /* for program_invocation_short_name */ +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "test_util.h" +#include "kvm_util.h" +#include "kselftest.h" +#include "processor.h" +#include "helpers.h" +#include "hcall.h" + +static bool timeout; +static unsigned long count; +static struct kvm_vm *kvm_vm; + +static void set_timer(int sec) +{ + struct itimerval timer; + + timeout = false; + + timer.it_value.tv_sec = sec; + timer.it_value.tv_usec = 0; + timer.it_interval = timer.it_value; + TEST_ASSERT(setitimer(ITIMER_REAL, &timer, NULL) == 0, + "setitimer failed %s", strerror(errno)); +} + +static void sigalrm_handler(int sig) +{ + timeout = true; + sync_global_to_guest(kvm_vm, timeout); +} + +static void init_timers(void) +{ + TEST_ASSERT(signal(SIGALRM, sigalrm_handler) != SIG_ERR, + "Failed to register SIGALRM handler, errno = %d (%s)", + errno, strerror(errno)); +} + +static void program_interrupt_handler(struct ex_regs *regs) +{ + regs->nia += 4; +} + +static void program_interrupt_guest_code(void) +{ + unsigned long nr = 0; + + while (!timeout) { + asm volatile("trap"); + nr++; + barrier(); + } + count = nr; + + GUEST_DONE(); +} + +static void program_interrupt_test(void) +{ + struct kvm_vcpu *vcpu; + struct kvm_vm *vm; + + /* Create VM */ + vm = vm_create_with_one_vcpu(&vcpu, program_interrupt_guest_code); + kvm_vm = vm; + vm_install_exception_handler(vm, 0x700, program_interrupt_handler); + + set_timer(1); + + while (!timeout) { + vcpu_run(vcpu); + barrier(); + } + + sync_global_from_guest(vm, count); + + kvm_vm = NULL; + vm_install_exception_handler(vm, 0x700, NULL); + + kvm_vm_free(vm); + + printf("%lu guest interrupts per second\n", count); + count = 0; +} + +static void heai_guest_code(void) +{ + unsigned long nr = 0; + + while (!timeout) { + asm volatile(".long 0"); + nr++; + barrier(); + } + count = nr; + + GUEST_DONE(); +} + +static void heai_test(void) +{ + struct kvm_vcpu *vcpu; + struct kvm_vm *vm; + + /* Create VM */ + vm = vm_create_with_one_vcpu(&vcpu, heai_guest_code); + kvm_vm = vm; + vm_install_exception_handler(vm, 0x700, program_interrupt_handler); + + set_timer(1); + + while (!timeout) { + vcpu_run(vcpu); + barrier(); + } + + sync_global_from_guest(vm, count); + + kvm_vm = NULL; + vm_install_exception_handler(vm, 0x700, NULL); + + kvm_vm_free(vm); + + printf("%lu guest exits per second\n", count); + count = 0; +} + +static void hcall_guest_code(void) +{ + for (;;) + hcall0(H_RTAS); +} + +static void hcall_test(void) +{ + struct kvm_vcpu *vcpu; + struct kvm_vm *vm; + + /* Create VM */ + vm = vm_create_with_one_vcpu(&vcpu, hcall_guest_code); + kvm_vm = vm; + + set_timer(1); + + while (!timeout) { + vcpu_run(vcpu); + count++; + barrier(); + } + + kvm_vm = NULL; + + kvm_vm_free(vm); + + printf("%lu KVM exits per second\n", count); + count = 0; +} + +struct testdef { + const char *name; + void (*test)(void); +} testlist[] = { + { "guest interrupt test", program_interrupt_test}, + { "guest exit test", heai_test}, + { "KVM exit test", hcall_test}, +}; + +int main(int argc, char *argv[]) +{ + int idx; + + ksft_print_header(); + + ksft_set_plan(ARRAY_SIZE(testlist)); + + init_timers(); + + for (idx = 0; idx < ARRAY_SIZE(testlist); idx++) { + testlist[idx].test(); + ksft_test_result_pass("%s\n", testlist[idx].name); + } + + ksft_finished(); /* Print results and exit() accordingly */ +}
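
The tests in this series drive their guests through handle_ucall() and host_sync() from helpers.h, which is introduced in an earlier patch of the series and is not reproduced here. For readers following the control flow, below is a minimal sketch of what such helpers might look like, inferred purely from the call sites above; the names, includes, and assertion messages in the real helpers.h may differ.

#include "kvm_util.h"
#include "test_util.h"

/*
 * Sketch only: inferred from how the tests call these helpers, not the
 * actual helpers.h from this series.
 */

/* Expect the last vcpu_run() to have ended in a ucall of the given type. */
static inline void handle_ucall(struct kvm_vcpu *vcpu, uint64_t expect)
{
	struct ucall uc;
	uint64_t cmd = get_ucall(vcpu, &uc);

	if (cmd == UCALL_ABORT)
		REPORT_GUEST_ASSERT_N(uc, "values: %lu (0x%lx)\n",
				      GUEST_ASSERT_ARG(uc, 0),
				      GUEST_ASSERT_ARG(uc, 0));
	TEST_ASSERT(cmd == expect, "Expected ucall %lu, got %lu\n",
		    expect, cmd);
}

/* Expect the last vcpu_run() to have ended in GUEST_SYNC(stage). */
static inline void host_sync(struct kvm_vcpu *vcpu, uint64_t stage)
{
	struct ucall uc;
	uint64_t cmd = get_ucall(vcpu, &uc);

	if (cmd == UCALL_ABORT)
		REPORT_GUEST_ASSERT_N(uc, "values: %lu (0x%lx)\n",
				      GUEST_ASSERT_ARG(uc, 0),
				      GUEST_ASSERT_ARG(uc, 0));
	TEST_ASSERT(cmd == UCALL_SYNC && uc.args[1] == stage,
		    "Expected GUEST_SYNC(%lu), got cmd %lu args[1] %lu\n",
		    stage, cmd, uc.args[1]);
}

With helpers along these lines, the pattern used throughout the tests is vcpu_run() followed by either host_sync(vcpu, stage) when the guest is expected to report a GUEST_SYNC stage, or handle_ucall(vcpu, UCALL_DONE) when it is expected to finish with GUEST_DONE.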