From patchwork Wed Apr 3 08:04:51 2024
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 13615564
From: Atish Patra
To: linux-kernel@vger.kernel.org
Cc: Atish Patra, Anup Patel, Ajay Kaher, Alexandre Ghiti, Alexey Makhalov,
	Andrew Jones, Conor Dooley, Juergen Gross, kvm-riscv@lists.infradead.org,
	kvm@vger.kernel.org, linux-kselftest@vger.kernel.org,
	linux-riscv@lists.infradead.org, Mark Rutland, Palmer Dabbelt,
	Paolo Bonzini, Paul Walmsley, Shuah Khan, virtualization@lists.linux.dev,
	VMware PV-Drivers Reviewers, Will Deacon, x86@kernel.org
Subject: [PATCH v5 22/22] KVM: riscv: selftests: Add a test for counter overflow
Date: Wed, 3 Apr 2024 01:04:51 -0700
Message-Id: <20240403080452.1007601-23-atishp@rivosinc.com>
In-Reply-To: <20240403080452.1007601-1-atishp@rivosinc.com>
References: <20240403080452.1007601-1-atishp@rivosinc.com>

Add a test for verifying the counter overflow interrupt. Currently, it
relies on overflow support for the cycle/instret events, i.e. the test
works only on platforms where the cycle/instret events support sampling
via hpmcounters. There is no ISA extension to detect whether a platform
supports that, so this test will fail on a platform that supports
virtualization but does not support overflow on these two events.
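The guest-side flow boils down to the rough standalone sketch below. It
reuses the helpers and globals already defined in sbi_pmu_test.c
(get_counter_index(), start_counter(), dummy_func_loop(), the snapshot
mapping, vcpu_shared_irq_count, etc.); pmu_overflow_sketch() itself is
only illustrative and not part of the patch:

static void pmu_overflow_sketch(unsigned long event)
{
	struct riscv_pmu_snapshot_data *snap = snapshot_gva;
	unsigned long counter = get_counter_index(0, counter_mask_available, 0, event);

	/* Prime the counter so it overflows after roughly 10000 more events */
	WRITE_ONCE(snap->ctr_values[0], ULONG_MAX - 10000);
	counter_in_use = counter;

	/* Enable the Sscofpmf overflow interrupt and start counting */
	csr_set(CSR_IE, BIT(IRQ_PMU_OVF));
	local_irq_enable();
	start_counter(counter, SBI_PMU_START_FLAG_INIT_SNAPSHOT, 0);

	/* Burn enough cycles/instructions for the counter to wrap */
	dummy_func_loop(10000);

	/*
	 * guest_irq_handler() stops the counter, checks bit 0 of
	 * snap->ctr_overflow_mask and increments vcpu_shared_irq_count.
	 */
	GUEST_ASSERT(READ_ONCE(vcpu_shared_irq_count) >= 1);
}

The actual test splits this across test_pmu_events_overflow() (interrupt
setup) and test_pmu_event_overflow() (per-event check), so the same path
is exercised for both the cycle and instret events.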
Reviewed-by: Anup Patel
Signed-off-by: Atish Patra
Reviewed-by: Andrew Jones
---
 .../selftests/kvm/riscv/sbi_pmu_test.c        | 114 ++++++++++++++++++
 1 file changed, 114 insertions(+)

diff --git a/tools/testing/selftests/kvm/riscv/sbi_pmu_test.c b/tools/testing/selftests/kvm/riscv/sbi_pmu_test.c
index 7d195be5c3d9..451db956b885 100644
--- a/tools/testing/selftests/kvm/riscv/sbi_pmu_test.c
+++ b/tools/testing/selftests/kvm/riscv/sbi_pmu_test.c
@@ -14,6 +14,7 @@
 #include "test_util.h"
 #include "processor.h"
 #include "sbi.h"
+#include "arch_timer.h"
 
 /* Maximum counters(firmware + hardware) */
 #define RISCV_MAX_PMU_COUNTERS 64
@@ -24,6 +25,9 @@ union sbi_pmu_ctr_info ctrinfo_arr[RISCV_MAX_PMU_COUNTERS];
 static void *snapshot_gva;
 static vm_paddr_t snapshot_gpa;
 
+static int vcpu_shared_irq_count;
+static int counter_in_use;
+
 /* Cache the available counters in a bitmask */
 static unsigned long counter_mask_available;
 
@@ -117,6 +121,31 @@ static void guest_illegal_exception_handler(struct ex_regs *regs)
 	regs->epc += 4;
 }
 
+static void guest_irq_handler(struct ex_regs *regs)
+{
+	unsigned int irq_num = regs->cause & ~CAUSE_IRQ_FLAG;
+	struct riscv_pmu_snapshot_data *snapshot_data = snapshot_gva;
+	unsigned long overflown_mask;
+	unsigned long counter_val = 0;
+
+	/* Validate that we are in the correct irq handler */
+	GUEST_ASSERT_EQ(irq_num, IRQ_PMU_OVF);
+
+	/* Stop all counters first to avoid further interrupts */
+	stop_counter(counter_in_use, SBI_PMU_STOP_FLAG_TAKE_SNAPSHOT);
+
+	csr_clear(CSR_SIP, BIT(IRQ_PMU_OVF));
+
+	overflown_mask = READ_ONCE(snapshot_data->ctr_overflow_mask);
+	GUEST_ASSERT(overflown_mask & 0x01);
+
+	WRITE_ONCE(vcpu_shared_irq_count, vcpu_shared_irq_count + 1);
+
+	counter_val = READ_ONCE(snapshot_data->ctr_values[0]);
+	/* Now start the counter to mimic the real driver behavior */
+	start_counter(counter_in_use, SBI_PMU_START_FLAG_SET_INIT_VALUE, counter_val);
+}
+
 static unsigned long get_counter_index(unsigned long cbase, unsigned long cmask,
 				       unsigned long cflags,
 				       unsigned long event)
@@ -276,6 +305,33 @@ static void test_pmu_event_snapshot(unsigned long event)
 	stop_reset_counter(counter, 0);
 }
 
+static void test_pmu_event_overflow(unsigned long event)
+{
+	unsigned long counter;
+	unsigned long counter_value_post;
+	unsigned long counter_init_value = ULONG_MAX - 10000;
+	struct riscv_pmu_snapshot_data *snapshot_data = snapshot_gva;
+
+	counter = get_counter_index(0, counter_mask_available, 0, event);
+	counter_in_use = counter;
+
+	/* The counter value is updated w.r.t. the relative index of cbase passed to start/stop */
+	WRITE_ONCE(snapshot_data->ctr_values[0], counter_init_value);
+	start_counter(counter, SBI_PMU_START_FLAG_INIT_SNAPSHOT, 0);
+	dummy_func_loop(10000);
+	udelay(msecs_to_usecs(2000));
+	/* irq handler should have stopped the counter */
+	stop_counter(counter, SBI_PMU_STOP_FLAG_TAKE_SNAPSHOT);
+
+	counter_value_post = READ_ONCE(snapshot_data->ctr_values[0]);
+	/* The counter value after stopping should be less than the init value due to overflow */
+	__GUEST_ASSERT(counter_value_post < counter_init_value,
+		       "counter_value_post %lx counter_init_value %lx for counter\n",
+		       counter_value_post, counter_init_value);
+
+	stop_reset_counter(counter, 0);
+}
+
 static void test_invalid_event(void)
 {
 	struct sbiret ret;
@@ -366,6 +422,34 @@ static void test_pmu_events_snaphost(void)
 	GUEST_DONE();
 }
 
+static void test_pmu_events_overflow(void)
+{
+	int num_counters = 0;
+
+	/* Verify presence of SBI PMU and minimum required SBI version */
+	verify_sbi_requirement_assert();
+
+	snapshot_set_shmem(snapshot_gpa, 0);
+	csr_set(CSR_IE, BIT(IRQ_PMU_OVF));
+	local_irq_enable();
+
+	/* Get the counter details */
+	num_counters = get_num_counters();
+	update_counter_info(num_counters);
+
+	/*
+	 * QEMU supports overflow for cycle/instruction.
+	 * This test may fail on any platform that does not support overflow
+	 * for these two events.
+	 */
+	test_pmu_event_overflow(SBI_PMU_HW_CPU_CYCLES);
+	GUEST_ASSERT_EQ(vcpu_shared_irq_count, 1);
+
+	test_pmu_event_overflow(SBI_PMU_HW_INSTRUCTIONS);
+	GUEST_ASSERT_EQ(vcpu_shared_irq_count, 2);
+
+	GUEST_DONE();
+}
+
 static void run_vcpu(struct kvm_vcpu *vcpu)
 {
 	struct ucall uc;
@@ -451,6 +535,33 @@ static void test_vm_events_snapshot_test(void *guest_code)
 	test_vm_destroy(vm);
 }
 
+static void test_vm_events_overflow(void *guest_code)
+{
+	struct kvm_vm *vm = NULL;
+	struct kvm_vcpu *vcpu;
+
+	vm = vm_create_with_one_vcpu(&vcpu, guest_code);
+	__TEST_REQUIRE(__vcpu_has_sbi_ext(vcpu, KVM_RISCV_SBI_EXT_PMU),
+		       "SBI PMU not available, skipping test");
+
+	__TEST_REQUIRE(__vcpu_has_isa_ext(vcpu, KVM_RISCV_ISA_EXT_SSCOFPMF),
+		       "Sscofpmf is not available, skipping overflow test");
+
+
+	test_vm_setup_snapshot_mem(vm, vcpu);
+	vm_init_vector_tables(vm);
+	vm_install_interrupt_handler(vm, guest_irq_handler);
+
+	vcpu_init_vector_tables(vcpu);
+	/* Initialize guest timer frequency. */
+	vcpu_get_reg(vcpu, RISCV_TIMER_REG(frequency), &timer_freq);
+	sync_global_to_guest(vm, timer_freq);
+
+	run_vcpu(vcpu);
+
+	test_vm_destroy(vm);
+}
+
 int main(void)
 {
 	pr_info("SBI PMU basic test : starting\n");
@@ -463,5 +574,8 @@ int main(void)
 	test_vm_events_snapshot_test(test_pmu_events_snaphost);
 	pr_info("SBI PMU event verification with snapshot test : PASS\n");
 
+	test_vm_events_overflow(test_pmu_events_overflow);
+	pr_info("SBI PMU event verification with overflow test : PASS\n");
+
 	return 0;
 }