From patchwork Thu Jul 27 07:20:05 2023
X-Patchwork-Submitter: Haibo Xu
X-Patchwork-Id: 13329056
From: Haibo Xu
Cc: xiaobo55x@gmail.com, haibo1.xu@intel.com, ajones@ventanamicro.com,
    Paul Walmsley, Palmer Dabbelt, Albert Ou, Paolo Bonzini, Shuah Khan,
    Anup Patel, Atish Patra, Sean Christopherson, Vipin Sharma,
    Marc Zyngier, Colton Lewis, Andrew Jones, Vishal Annapurve,
    linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org,
    kvm@vger.kernel.org, linux-kselftest@vger.kernel.org,
    kvm-riscv@lists.infradead.org
Subject: [PATCH 1/4] tools: riscv: Add header file csr.h
Date: Thu, 27 Jul 2023 15:20:05 +0800
Message-Id: <35ce2b9f7ca655eb3af13730b1ca9f05b518e08f.1690364259.git.haibo1.xu@intel.com>

Borrow some of the csr definitions and operations from kernel's
arch/riscv/include/asm/csr.h to tools/ for riscv.
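As a rough illustration of how the borrowed macros are meant to be used from
guest code (a hypothetical snippet, not part of this patch; the include path
and the helper names below are assumptions):

  /* Enable the supervisor timer interrupt and read back the trap cause. */
  #include "csr.h"	/* the tools/ copy added by this patch */

  static inline void guest_enable_timer_irq(void)
  {
  	csr_set(CSR_SIE, IE_TIE);	/* sie.STIE = 1 */
  	csr_set(CSR_SSTATUS, SR_SIE);	/* sstatus.SIE = 1: global S-mode enable */
  }

  static inline unsigned long guest_trap_cause(void)
  {
  	return csr_read(CSR_SCAUSE);	/* expands to: csrr %0, 0x142 */
  }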
Signed-off-by: Haibo Xu
---
 tools/arch/riscv/include/asm/csr.h | 127 +++++++++++++++++++++++++++++
 1 file changed, 127 insertions(+)
 create mode 100644 tools/arch/riscv/include/asm/csr.h

diff --git a/tools/arch/riscv/include/asm/csr.h b/tools/arch/riscv/include/asm/csr.h
new file mode 100644
index 000000000000..b437e6438216
--- /dev/null
+++ b/tools/arch/riscv/include/asm/csr.h
@@ -0,0 +1,127 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ * Copyright (c) 2023, Intel Corporation.
+ */
+
+#ifndef _ASM_RISCV_CSR_H
+#define _ASM_RISCV_CSR_H
+
+#ifdef __ASSEMBLY__
+#define __ASM_STR(x)	x
+#else
+#define __ASM_STR(x)	#x
+#endif
+
+/* Status register flags */
+#define SR_SIE		_AC(0x00000002, UL) /* Supervisor Interrupt Enable */
+#define SR_SPIE		_AC(0x00000020, UL) /* Previous Supervisor IE */
+#define SR_SPP		_AC(0x00000100, UL) /* Previously Supervisor */
+#define SR_SUM		_AC(0x00040000, UL) /* Supervisor User Memory Access */
+
+#define SR_FS		_AC(0x00006000, UL) /* Floating-point Status */
+#define SR_FS_OFF	_AC(0x00000000, UL)
+#define SR_FS_INITIAL	_AC(0x00002000, UL)
+#define SR_FS_CLEAN	_AC(0x00004000, UL)
+#define SR_FS_DIRTY	_AC(0x00006000, UL)
+
+#define SR_XS		_AC(0x00018000, UL) /* Extension Status */
+#define SR_XS_OFF	_AC(0x00000000, UL)
+#define SR_XS_INITIAL	_AC(0x00008000, UL)
+#define SR_XS_CLEAN	_AC(0x00010000, UL)
+#define SR_XS_DIRTY	_AC(0x00018000, UL)
+
+#define SR_SD		_AC(0x8000000000000000, UL) /* FS/XS dirty */
+
+#define SR_UXL		_AC(0x300000000, UL) /* XLEN mask for U-mode */
+#define SR_UXL_32	_AC(0x100000000, UL) /* XLEN = 32 for U-mode */
+#define SR_UXL_64	_AC(0x200000000, UL) /* XLEN = 64 for U-mode */
+
+/* Exception cause high bit - is an interrupt if set */
+#define CAUSE_IRQ_FLAG	(_AC(1, UL) << (__riscv_xlen - 1))
+
+/* Interrupt causes (minus the high bit) */
+#define IRQ_S_SOFT	1
+#define IRQ_S_TIMER	5
+#define IRQ_S_EXT	9
+
+/* Exception causes */
+#define EXC_INST_MISALIGNED	0
+#define EXC_INST_ACCESS		1
+#define EXC_INST_ILLEGAL	2
+#define EXC_BREAKPOINT		3
+#define EXC_LOAD_ACCESS		5
+#define EXC_STORE_ACCESS	7
+#define EXC_SYSCALL		8
+#define EXC_HYPERVISOR_SYSCALL	9
+#define EXC_SUPERVISOR_SYSCALL	10
+#define EXC_INST_PAGE_FAULT	12
+#define EXC_LOAD_PAGE_FAULT	13
+#define EXC_STORE_PAGE_FAULT	15
+#define EXC_INST_GUEST_PAGE_FAULT	20
+#define EXC_LOAD_GUEST_PAGE_FAULT	21
+#define EXC_VIRTUAL_INST_FAULT		22
+#define EXC_STORE_GUEST_PAGE_FAULT	23
+
+/* symbolic CSR names: */
+#define CSR_CYCLE	0xc00
+#define CSR_TIME	0xc01
+#define CSR_INSTRET	0xc02
+
+#define CSR_SSTATUS	0x100
+#define CSR_SIE		0x104
+#define CSR_STVEC	0x105
+#define CSR_SCOUNTEREN	0x106
+#define CSR_SSCRATCH	0x140
+#define CSR_SEPC	0x141
+#define CSR_SCAUSE	0x142
+#define CSR_STVAL	0x143
+#define CSR_SIP		0x144
+#define CSR_SATP	0x180
+
+#define CSR_STIMECMP	0x14D
+#define CSR_STIMECMPH	0x15D
+
+/* IE/IP (Supervisor Interrupt Enable/Pending) flags */
+#define IE_SIE	(_AC(0x1, UL) << IRQ_S_SOFT)
+#define IE_TIE	(_AC(0x1, UL) << IRQ_S_TIMER)
+#define IE_EIE	(_AC(0x1, UL) << IRQ_S_EXT)
+
+#ifndef __ASSEMBLY__
+
+#define csr_read(csr)						\
+({								\
+	register unsigned long __v;				\
+	__asm__ __volatile__ ("csrr %0, " __ASM_STR(csr)	\
+			      : "=r" (__v) :			\
+			      : "memory");			\
+	__v;							\
+})
+
+#define csr_write(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrw " __ASM_STR(csr) ", %0"	\
+			      : : "rK" (__v)			\
+			      : "memory");			\
+})
+
+#define csr_set(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrs " __ASM_STR(csr) ", %0"	\
+			      : : "rK" (__v)			\
+			      : "memory");			\
+})
+
+#define csr_clear(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrc " __ASM_STR(csr) ", %0"	\
+			      : : "rK" (__v)			\
+			      : "memory");			\
+})
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_CSR_H */

From patchwork Thu Jul 27 07:20:06 2023
X-Patchwork-Submitter: Haibo Xu
X-Patchwork-Id: 13329057
From: Haibo Xu
Cc: xiaobo55x@gmail.com, haibo1.xu@intel.com, ajones@ventanamicro.com,
    Paul Walmsley, Palmer Dabbelt, Albert Ou, Paolo Bonzini, Shuah Khan,
    Anup Patel, Atish Patra, Sean Christopherson, Marc Zyngier,
    Andrew Jones, Colton Lewis, Vipin Sharma, Vishal Annapurve,
    linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org,
    kvm@vger.kernel.org, linux-kselftest@vger.kernel.org,
    kvm-riscv@lists.infradead.org
Subject: [PATCH 2/4] KVM: riscv: selftests: Add exception handling support
Date: Thu, 27 Jul 2023 15:20:06 +0800
Message-Id: <7e9d2d5518375083f10c92a3d1acd98914f69fd8.1690364259.git.haibo1.xu@intel.com>

Add the infrastructure for exception handling in riscv selftests.
Currently, the guest_unexp_trap handler is used by default, which
aborts the test.
Customized handlers can be enabled by calling vm_install_exception_handler(vector) or vm_install_interrupt_handler(). The code is inspired from that of x86/arm64. Signed-off-by: Haibo Xu --- tools/testing/selftests/kvm/Makefile | 1 + .../selftests/kvm/include/riscv/processor.h | 49 +++++++++ .../selftests/kvm/lib/riscv/handlers.S | 101 ++++++++++++++++++ .../selftests/kvm/lib/riscv/processor.c | 57 ++++++++++ 4 files changed, 208 insertions(+) create mode 100644 tools/testing/selftests/kvm/lib/riscv/handlers.S diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index c692cc86e7da..70f3a5ba991e 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -52,6 +52,7 @@ LIBKVM_s390x += lib/s390x/diag318_test_handler.c LIBKVM_s390x += lib/s390x/processor.c LIBKVM_s390x += lib/s390x/ucall.c +LIBKVM_riscv += lib/riscv/handlers.S LIBKVM_riscv += lib/riscv/processor.c LIBKVM_riscv += lib/riscv/ucall.c diff --git a/tools/testing/selftests/kvm/include/riscv/processor.h b/tools/testing/selftests/kvm/include/riscv/processor.h index d00d213c3805..9ea6e7bedc61 100644 --- a/tools/testing/selftests/kvm/include/riscv/processor.h +++ b/tools/testing/selftests/kvm/include/riscv/processor.h @@ -9,6 +9,7 @@ #include "kvm_util.h" #include +#include static inline uint64_t __kvm_reg_id(uint64_t type, uint64_t idx, uint64_t size) @@ -38,6 +39,54 @@ static inline uint64_t __kvm_reg_id(uint64_t type, uint64_t idx, KVM_REG_RISCV_TIMER_REG(name), \ KVM_REG_SIZE_U64) +struct ex_regs { + unsigned long ra; + unsigned long sp; + unsigned long gp; + unsigned long tp; + unsigned long t0; + unsigned long t1; + unsigned long t2; + unsigned long s0; + unsigned long s1; + unsigned long a0; + unsigned long a1; + unsigned long a2; + unsigned long a3; + unsigned long a4; + unsigned long a5; + unsigned long a6; + unsigned long a7; + unsigned long s2; + unsigned long s3; + unsigned long s4; + unsigned long s5; + unsigned long s6; + unsigned long s7; + unsigned long s8; + unsigned long s9; + unsigned long s10; + unsigned long s11; + unsigned long t3; + unsigned long t4; + unsigned long t5; + unsigned long t6; + unsigned long epc; + unsigned long status; + unsigned long cause; +}; + +#define VECTOR_NUM 2 +#define EC_NUM 32 +#define EC_MASK (EC_NUM - 1) + +void vm_init_trap_vector_tables(struct kvm_vm *vm); +void vcpu_init_trap_vector_tables(struct kvm_vcpu *vcpu); + +typedef void(*handler_fn)(struct ex_regs *); +void vm_install_exception_handler(struct kvm_vm *vm, int ec, handler_fn handler); +void vm_install_interrupt_handler(struct kvm_vm *vm, handler_fn handler); + /* L3 index Bit[47:39] */ #define PGTBL_L3_INDEX_MASK 0x0000FF8000000000ULL #define PGTBL_L3_INDEX_SHIFT 39 diff --git a/tools/testing/selftests/kvm/lib/riscv/handlers.S b/tools/testing/selftests/kvm/lib/riscv/handlers.S new file mode 100644 index 000000000000..ce0b1d5415b9 --- /dev/null +++ b/tools/testing/selftests/kvm/lib/riscv/handlers.S @@ -0,0 +1,101 @@ +// SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (c) 2023 Intel Corporation + */ + +#include + +.macro save_context + addi sp, sp, (-8*34) + + sd x1, 0(sp) + sd x2, 8(sp) + sd x3, 16(sp) + sd x4, 24(sp) + sd x5, 32(sp) + sd x6, 40(sp) + sd x7, 48(sp) + sd x8, 56(sp) + sd x9, 64(sp) + sd x10, 72(sp) + sd x11, 80(sp) + sd x12, 88(sp) + sd x13, 96(sp) + sd x14, 104(sp) + sd x15, 112(sp) + sd x16, 120(sp) + sd x17, 128(sp) + sd x18, 136(sp) + sd x19, 144(sp) + sd x20, 152(sp) + sd x21, 160(sp) + sd x22, 168(sp) + sd x23, 176(sp) + sd x24, 
184(sp) + sd x25, 192(sp) + sd x26, 200(sp) + sd x27, 208(sp) + sd x28, 216(sp) + sd x29, 224(sp) + sd x30, 232(sp) + sd x31, 240(sp) + + csrr s0, CSR_SEPC + csrr s1, CSR_SSTATUS + csrr s2, CSR_SCAUSE + sd s0, 248(sp) + sd s1, 256(sp) + sd s2, 264(sp) +.endm + +.balign 4 +.global exception_vectors +exception_vectors: + save_context + move a0, sp + la ra, ret_from_exception + tail route_exception + +.global ret_from_exception +ret_from_exception: + ld s2, 264(sp) + ld s1, 256(sp) + ld s0, 248(sp) + csrw CSR_SCAUSE, s2 + csrw CSR_SSTATUS, s1 + csrw CSR_SEPC, s0 + + ld x31, 240(sp) + ld x30, 232(sp) + ld x29, 224(sp) + ld x28, 216(sp) + ld x27, 208(sp) + ld x26, 200(sp) + ld x25, 192(sp) + ld x24, 184(sp) + ld x23, 176(sp) + ld x22, 168(sp) + ld x21, 160(sp) + ld x20, 152(sp) + ld x19, 144(sp) + ld x18, 136(sp) + ld x17, 128(sp) + ld x16, 120(sp) + ld x15, 112(sp) + ld x14, 104(sp) + ld x13, 96(sp) + ld x12, 88(sp) + ld x11, 80(sp) + ld x10, 72(sp) + ld x9, 64(sp) + ld x8, 56(sp) + ld x7, 48(sp) + ld x6, 40(sp) + ld x5, 32(sp) + ld x4, 24(sp) + ld x3, 16(sp) + ld x2, 8(sp) + ld x1, 0(sp) + + addi sp, sp, (8*34) + sret diff --git a/tools/testing/selftests/kvm/lib/riscv/processor.c b/tools/testing/selftests/kvm/lib/riscv/processor.c index d146ca71e0c0..f1b0be58a5dc 100644 --- a/tools/testing/selftests/kvm/lib/riscv/processor.c +++ b/tools/testing/selftests/kvm/lib/riscv/processor.c @@ -13,6 +13,8 @@ #define DEFAULT_RISCV_GUEST_STACK_VADDR_MIN 0xac0000 +static vm_vaddr_t exception_handlers; + static uint64_t page_align(struct kvm_vm *vm, uint64_t v) { return (v + vm->page_size) & ~(vm->page_size - 1); @@ -367,3 +369,58 @@ void vcpu_args_set(struct kvm_vcpu *vcpu, unsigned int num, ...) void assert_on_unhandled_exception(struct kvm_vcpu *vcpu) { } + +struct handlers { + handler_fn exception_handlers[VECTOR_NUM][EC_NUM]; +}; + +void route_exception(struct ex_regs *regs) +{ + struct handlers *handlers = (struct handlers *)exception_handlers; + int vector = 0, ec; + + ec = regs->cause & ~CAUSE_IRQ_FLAG; + if (ec >= EC_NUM) + goto guest_unexpected_trap; + + /* Use the same handler for all the interrupts */ + if (regs->cause & CAUSE_IRQ_FLAG) { + vector = 1; + ec = 0; + } + + if (handlers && handlers->exception_handlers[vector][ec]) + return handlers->exception_handlers[vector][ec](regs); + +guest_unexpected_trap: + return guest_unexp_trap(); +} + +void vcpu_init_trap_vector_tables(struct kvm_vcpu *vcpu) +{ + extern char exception_vectors; + + vcpu_set_reg(vcpu, RISCV_CSR_REG(stvec), (unsigned long)&exception_vectors); +} + +void vm_init_trap_vector_tables(struct kvm_vm *vm) +{ + vm->handlers = __vm_vaddr_alloc(vm, sizeof(struct handlers), + vm->page_size, MEM_REGION_DATA); + + *(vm_vaddr_t *)addr_gva2hva(vm, (vm_vaddr_t)(&exception_handlers)) = vm->handlers; +} + +void vm_install_exception_handler(struct kvm_vm *vm, int ec, void (*handler)(struct ex_regs *)) +{ + struct handlers *handlers = addr_gva2hva(vm, vm->handlers); + + handlers->exception_handlers[0][ec] = handler; +} + +void vm_install_interrupt_handler(struct kvm_vm *vm, void (*handler)(struct ex_regs *)) +{ + struct handlers *handlers = addr_gva2hva(vm, vm->handlers); + + handlers->exception_handlers[1][0] = handler; +} From patchwork Thu Jul 27 07:20:07 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Haibo Xu X-Patchwork-Id: 13329062 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from 
From: Haibo Xu
Cc: xiaobo55x@gmail.com, haibo1.xu@intel.com, ajones@ventanamicro.com,
    Paul Walmsley, Palmer Dabbelt, Albert Ou, Paolo Bonzini, Shuah Khan,
    Anup Patel, Atish Patra, Sean Christopherson, Colton Lewis,
    Andrew Jones, Vipin Sharma, Marc Zyngier, Vishal Annapurve,
    linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org,
    kvm@vger.kernel.org, linux-kselftest@vger.kernel.org,
    kvm-riscv@lists.infradead.org
Subject: [PATCH 3/4] KVM: riscv: selftests: Add guest helper to get vcpu id
Date: Thu, 27 Jul 2023 15:20:07 +0800
Message-Id: <36b5837e9e94465dd2b7d7a17bb84dea082f2ffa.1690364259.git.haibo1.xu@intel.com>

Add a guest_get_vcpuid() helper to simplify access to per-cpu private
data. The sscratch CSR is used to store the vcpu id.
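As a hypothetical illustration (not part of this patch) of what the helper
enables, a multi-vCPU guest can index its per-vCPU state without any extra
plumbing; the array and constant names below are made up for the example:

  #include "kvm_util.h"
  #include "processor.h"

  #define NR_GUEST_VCPUS	4	/* assumed test-specific constant */

  static uint64_t per_vcpu_counter[NR_GUEST_VCPUS];

  static void guest_code(void)
  {
  	uint32_t cpu = guest_get_vcpuid();	/* reads sscratch, set by vm_arch_vcpu_add() */

  	per_vcpu_counter[cpu]++;	/* one slot per vCPU, so no locking needed */
  	GUEST_DONE();
  }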
Signed-off-by: Haibo Xu
---
 tools/testing/selftests/kvm/include/riscv/processor.h | 2 ++
 tools/testing/selftests/kvm/lib/riscv/processor.c     | 8 ++++++++
 2 files changed, 10 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/riscv/processor.h b/tools/testing/selftests/kvm/include/riscv/processor.h
index 9ea6e7bedc61..ca53570ce6de 100644
--- a/tools/testing/selftests/kvm/include/riscv/processor.h
+++ b/tools/testing/selftests/kvm/include/riscv/processor.h
@@ -165,4 +165,6 @@ struct sbiret sbi_ecall(int ext, int fid, unsigned long arg0,
 			unsigned long arg3, unsigned long arg4,
 			unsigned long arg5);
 
+uint32_t guest_get_vcpuid(void);
+
 #endif /* SELFTEST_KVM_PROCESSOR_H */
diff --git a/tools/testing/selftests/kvm/lib/riscv/processor.c b/tools/testing/selftests/kvm/lib/riscv/processor.c
index f1b0be58a5dc..b8ad3e69a697 100644
--- a/tools/testing/selftests/kvm/lib/riscv/processor.c
+++ b/tools/testing/selftests/kvm/lib/riscv/processor.c
@@ -316,6 +316,9 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id,
 	vcpu_set_reg(vcpu, RISCV_CORE_REG(regs.sp), stack_vaddr + stack_size);
 	vcpu_set_reg(vcpu, RISCV_CORE_REG(regs.pc), (unsigned long)guest_code);
 
+	/* Setup scratch register of guest */
+	vcpu_set_reg(vcpu, RISCV_CSR_REG(sscratch), vcpu_id);
+
 	/* Setup default exception vector of guest */
 	vcpu_set_reg(vcpu, RISCV_CSR_REG(stvec), (unsigned long)guest_unexp_trap);
 
@@ -424,3 +427,8 @@ void vm_install_interrupt_handler(struct kvm_vm *vm, void (*handler)(struct ex_r
 
 	handlers->exception_handlers[1][0] = handler;
 }
+
+uint32_t guest_get_vcpuid(void)
+{
+	return csr_read(CSR_SSCRATCH);
+}

From patchwork Thu Jul 27 07:20:08 2023
X-Patchwork-Submitter: Haibo Xu
X-Patchwork-Id: 13329063
X-IronPort-AV: E=McAfee;i="6600,9927,10783"; a="720785852" X-IronPort-AV: E=Sophos;i="6.01,234,1684825200"; d="scan'208";a="720785852" Received: from haibo-optiplex-7090.sh.intel.com ([10.239.159.132]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Jul 2023 00:14:54 -0700 From: Haibo Xu Cc: xiaobo55x@gmail.com, haibo1.xu@intel.com, ajones@ventanamicro.com, Paul Walmsley , Palmer Dabbelt , Albert Ou , Paolo Bonzini , Shuah Khan , Anup Patel , Atish Patra , Sean Christopherson , Marc Zyngier , Colton Lewis , Vipin Sharma , Andrew Jones , Vishal Annapurve , linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-kselftest@vger.kernel.org, kvm-riscv@lists.infradead.org Subject: [PATCH 4/4] KVM: riscv: selftests: Add sstc_timer test Date: Thu, 27 Jul 2023 15:20:08 +0800 Message-Id: X-Mailer: git-send-email 2.34.1 In-Reply-To: References: MIME-Version: 1.0 To: unlisted-recipients:; (no To-header on input) Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Add a KVM selftest to validate the Sstc timer functionality. The test was ported from arm64 arch_timer test. Signed-off-by: Haibo Xu --- tools/testing/selftests/kvm/Makefile | 1 + .../selftests/kvm/include/riscv/processor.h | 25 ++ .../selftests/kvm/include/riscv/sstc_timer.h | 70 ++++ .../selftests/kvm/lib/riscv/processor.c | 9 + .../testing/selftests/kvm/riscv/sstc_timer.c | 382 ++++++++++++++++++ 5 files changed, 487 insertions(+) create mode 100644 tools/testing/selftests/kvm/include/riscv/sstc_timer.h create mode 100644 tools/testing/selftests/kvm/riscv/sstc_timer.c diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index 70f3a5ba991e..92b9a26b515d 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -175,6 +175,7 @@ TEST_GEN_PROGS_s390x += rseq_test TEST_GEN_PROGS_s390x += set_memory_region_test TEST_GEN_PROGS_s390x += kvm_binary_stats_test +TEST_GEN_PROGS_riscv += riscv/sstc_timer TEST_GEN_PROGS_riscv += demand_paging_test TEST_GEN_PROGS_riscv += dirty_log_test TEST_GEN_PROGS_riscv += kvm_create_max_vcpus diff --git a/tools/testing/selftests/kvm/include/riscv/processor.h b/tools/testing/selftests/kvm/include/riscv/processor.h index ca53570ce6de..4846b4598bd9 100644 --- a/tools/testing/selftests/kvm/include/riscv/processor.h +++ b/tools/testing/selftests/kvm/include/riscv/processor.h @@ -39,6 +39,11 @@ static inline uint64_t __kvm_reg_id(uint64_t type, uint64_t idx, KVM_REG_RISCV_TIMER_REG(name), \ KVM_REG_SIZE_U64) +#define RISCV_ISA_EXT_REG(idx) __kvm_reg_id(KVM_REG_RISCV_ISA_EXT, \ + idx, KVM_REG_SIZE_ULONG) + +bool vcpu_has_ext(struct kvm_vcpu *vcpu, int ext); + struct ex_regs { unsigned long ra; unsigned long sp; @@ -87,6 +92,16 @@ typedef void(*handler_fn)(struct ex_regs *); void vm_install_exception_handler(struct kvm_vm *vm, int ec, handler_fn handler); void vm_install_interrupt_handler(struct kvm_vm *vm, handler_fn handler); +static inline void cpu_relax(void) +{ +#ifdef __riscv_zihintpause + asm volatile("pause" ::: "memory"); +#else + /* Encoding of the pause instruction */ + asm volatile(".4byte 0x100000F" ::: "memory"); +#endif +} + /* L3 index Bit[47:39] */ #define PGTBL_L3_INDEX_MASK 0x0000FF8000000000ULL #define PGTBL_L3_INDEX_SHIFT 39 @@ -165,6 +180,16 @@ struct sbiret sbi_ecall(int ext, int fid, unsigned long arg0, unsigned long arg3, unsigned long arg4, unsigned long arg5); +static inline void local_irq_enable(void) +{ + csr_set(CSR_SSTATUS, SR_SIE); +} 
+ +static inline void local_irq_disable(void) +{ + csr_clear(CSR_SSTATUS, SR_SIE); +} + uint32_t guest_get_vcpuid(void); #endif /* SELFTEST_KVM_PROCESSOR_H */ diff --git a/tools/testing/selftests/kvm/include/riscv/sstc_timer.h b/tools/testing/selftests/kvm/include/riscv/sstc_timer.h new file mode 100644 index 000000000000..7c4a4b26faa0 --- /dev/null +++ b/tools/testing/selftests/kvm/include/riscv/sstc_timer.h @@ -0,0 +1,70 @@ +// SPDX-License-Identifier: GPL-2.0 */ +/* + * RISC-V SSTC Timer specific interface + * + * Copyright (c) 2023 Intel Corporation + */ + +#ifndef SELFTEST_KVM_SSTC_TIMER_H +#define SELFTEST_KVM_SSTC_TIMER_H + +#include "processor.h" + +static unsigned long timer_freq; + +#define msec_to_cycles(msec) \ + ((timer_freq) * (uint64_t)(msec) / 1000) + +#define usec_to_cycles(usec) \ + ((timer_freq) * (uint64_t)(usec) / 1000000) + +#define cycles_to_usec(cycles) \ + ((uint64_t)(cycles) * 1000000 / (timer_freq)) + +static inline uint64_t timer_get_cntct(void) +{ + return csr_read(CSR_TIME); +} + +static inline void timer_set_cval(uint64_t cval) +{ + csr_write(CSR_STIMECMP, cval); +} + +static inline uint64_t timer_get_cval(void) +{ + return csr_read(CSR_STIMECMP); +} + +static inline void timer_irq_enable(void) +{ + csr_set(CSR_SIE, IE_TIE); +} + +static inline void timer_irq_disable(void) +{ + csr_clear(CSR_SIE, IE_TIE); +} + +static inline void timer_set_next_cval_ms(uint32_t msec) +{ + uint64_t now_ct = timer_get_cntct(); + uint64_t next_ct = now_ct + msec_to_cycles(msec); + + timer_set_cval(next_ct); +} + +static inline void __delay(uint64_t cycles) +{ + uint64_t start = timer_get_cntct(); + + while ((timer_get_cntct() - start) < cycles) + cpu_relax(); +} + +static inline void udelay(unsigned long usec) +{ + __delay(usec_to_cycles(usec)); +} + +#endif /* SELFTEST_KVM_SSTC_TIMER_H */ diff --git a/tools/testing/selftests/kvm/lib/riscv/processor.c b/tools/testing/selftests/kvm/lib/riscv/processor.c index b8ad3e69a697..f6ce3d61738f 100644 --- a/tools/testing/selftests/kvm/lib/riscv/processor.c +++ b/tools/testing/selftests/kvm/lib/riscv/processor.c @@ -15,6 +15,15 @@ static vm_vaddr_t exception_handlers; +bool vcpu_has_ext(struct kvm_vcpu *vcpu, int ext) +{ + unsigned long value = 0; + + vcpu_get_reg(vcpu, RISCV_ISA_EXT_REG(ext), &value); + + return !!value; +} + static uint64_t page_align(struct kvm_vm *vm, uint64_t v) { return (v + vm->page_size) & ~(vm->page_size - 1); diff --git a/tools/testing/selftests/kvm/riscv/sstc_timer.c b/tools/testing/selftests/kvm/riscv/sstc_timer.c new file mode 100644 index 000000000000..867bd686a9c5 --- /dev/null +++ b/tools/testing/selftests/kvm/riscv/sstc_timer.c @@ -0,0 +1,382 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * sstc_timer.c - Tests the riscv64 sstc timer IRQ functionality + * + * The test validates the sstc timer IRQs using vstimecmp registers. + * This consitutes the four stages in the test. The guest's main thread + * configures the timer interrupt for a stage and waits for it to fire, + * with a timeout equal to the timer period. It asserts that the timeout + * doesn't exceed the timer period. + * + * On the other hand, upon receipt of an interrupt, the guest's interrupt + * handler validates the interrupt by checking if the architectural state + * is in compliance with the specifications. + * + * The test provides command-line options to configure the timer's + * period (-p), number of vCPUs (-n), and iterations per stage (-i). 
+ * To stress-test the timer stack even more, an option to migrate the + * vCPUs across pCPUs (-m), at a particular rate, is also provided. + * + * Copyright (c) 2021, Google LLC. + * Copyright (c) 2023, Intel Corporation. + */ + +#define _GNU_SOURCE + +#include +#include +#include +#include +#include +#include + +#include "kvm_util.h" +#include "processor.h" +#include "sstc_timer.h" + +#define NR_VCPUS_DEF 4 +#define NR_TEST_ITERS_DEF 5 +#define TIMER_TEST_PERIOD_MS_DEF 10 +#define TIMER_TEST_ERR_MARGIN_US 100 +#define TIMER_TEST_MIGRATION_FREQ_MS 2 + +struct test_args { + int nr_vcpus; + int nr_iter; + int timer_period_ms; + int migration_freq_ms; +}; + +static struct test_args test_args = { + .nr_vcpus = NR_VCPUS_DEF, + .nr_iter = NR_TEST_ITERS_DEF, + .timer_period_ms = TIMER_TEST_PERIOD_MS_DEF, + .migration_freq_ms = TIMER_TEST_MIGRATION_FREQ_MS, +}; + +#define msecs_to_usecs(msec) ((msec) * 1000LL) + +/* Shared variables between host and guest */ +struct test_vcpu_shared_data { + int nr_iter; + uint64_t xcnt; +}; + +static struct kvm_vcpu *vcpus[KVM_MAX_VCPUS]; +static pthread_t pt_vcpu_run[KVM_MAX_VCPUS]; +static struct test_vcpu_shared_data vcpu_shared_data[KVM_MAX_VCPUS]; + +static int timer_irq = IRQ_S_TIMER; + +static unsigned long *vcpu_done_map; +static pthread_mutex_t vcpu_done_map_lock; + +static void +guest_configure_timer_action(struct test_vcpu_shared_data *shared_data) +{ + timer_set_next_cval_ms(test_args.timer_period_ms); + shared_data->xcnt = timer_get_cntct(); + timer_irq_enable(); +} + +static void guest_validate_irq(unsigned int intid, + struct test_vcpu_shared_data *shared_data) +{ + uint64_t xcnt = 0, xcnt_diff_us, cval = 0; + + timer_irq_disable(); + xcnt = timer_get_cntct(); + cval = timer_get_cval(); + + xcnt_diff_us = cycles_to_usec(xcnt - shared_data->xcnt); + + /* Make sure we are dealing with the correct timer IRQ */ + GUEST_ASSERT_2(intid == timer_irq, intid, timer_irq); + + GUEST_ASSERT_3(xcnt >= cval, xcnt, cval, xcnt_diff_us); + + WRITE_ONCE(shared_data->nr_iter, shared_data->nr_iter + 1); +} + +static void guest_irq_handler(struct ex_regs *regs) +{ + unsigned int intid = regs->cause & ~CAUSE_IRQ_FLAG; + uint32_t cpu = guest_get_vcpuid(); + struct test_vcpu_shared_data *shared_data = &vcpu_shared_data[cpu]; + + guest_validate_irq(intid, shared_data); +} + +static void guest_run(struct test_vcpu_shared_data *shared_data) +{ + uint32_t irq_iter, config_iter; + + shared_data->nr_iter = 0; + + for (config_iter = 0; config_iter < test_args.nr_iter; config_iter++) { + /* Setup the next interrupt */ + guest_configure_timer_action(shared_data); + + /* Setup a timeout for the interrupt to arrive */ + udelay(msecs_to_usecs(test_args.timer_period_ms) + + TIMER_TEST_ERR_MARGIN_US); + + irq_iter = READ_ONCE(shared_data->nr_iter); + GUEST_ASSERT_2(config_iter + 1 == irq_iter, + config_iter + 1, irq_iter); + } +} + +static void guest_code(void) +{ + uint32_t cpu = guest_get_vcpuid(); + struct test_vcpu_shared_data *shared_data = &vcpu_shared_data[cpu]; + + local_irq_disable(); + timer_irq_disable(); + local_irq_enable(); + + guest_run(shared_data); + + GUEST_DONE(); +} + +static void *test_vcpu_run(void *arg) +{ + unsigned int vcpu_idx = (unsigned long)arg; + struct ucall uc; + struct kvm_vcpu *vcpu = vcpus[vcpu_idx]; + struct kvm_vm *vm = vcpu->vm; + struct test_vcpu_shared_data *shared_data = &vcpu_shared_data[vcpu_idx]; + + vcpu_run(vcpu); + + /* Currently, any exit from guest is an indication of completion */ + pthread_mutex_lock(&vcpu_done_map_lock); + 
__set_bit(vcpu_idx, vcpu_done_map); + pthread_mutex_unlock(&vcpu_done_map_lock); + + switch (get_ucall(vcpu, &uc)) { + case UCALL_SYNC: + case UCALL_DONE: + break; + case UCALL_ABORT: + sync_global_from_guest(vm, *shared_data); + REPORT_GUEST_ASSERT_N(uc, "values: %lu, %lu, %lu; vcpu: %u; iter: %u", + GUEST_ASSERT_ARG(uc, 0), + GUEST_ASSERT_ARG(uc, 1), + GUEST_ASSERT_ARG(uc, 2), + vcpu_idx, + shared_data->nr_iter); + break; + default: + TEST_FAIL("Unexpected guest exit\n"); + } + + pr_info("PASS(vCPU-%d).\n", vcpu_idx); + + return NULL; +} + +static uint32_t test_get_pcpu(void) +{ + uint32_t pcpu; + unsigned int nproc_conf; + cpu_set_t online_cpuset; + + nproc_conf = get_nprocs_conf(); + sched_getaffinity(0, sizeof(cpu_set_t), &online_cpuset); + + /* Randomly find an available pCPU to place a vCPU on */ + do { + pcpu = rand() % nproc_conf; + } while (!CPU_ISSET(pcpu, &online_cpuset)); + + return pcpu; +} + +static int test_migrate_vcpu(unsigned int vcpu_idx) +{ + int ret; + cpu_set_t cpuset; + uint32_t new_pcpu = test_get_pcpu(); + + CPU_ZERO(&cpuset); + CPU_SET(new_pcpu, &cpuset); + + pr_debug("Migrating vCPU: %u to pCPU: %u\n", vcpu_idx, new_pcpu); + + ret = pthread_setaffinity_np(pt_vcpu_run[vcpu_idx], + sizeof(cpuset), &cpuset); + + /* Allow the error where the vCPU thread is already finished */ + TEST_ASSERT(ret == 0 || ret == ESRCH, + "Failed to migrate the vCPU:%u to pCPU: %u; ret: %d\n", + vcpu_idx, new_pcpu, ret); + + return ret; +} + +static void *test_vcpu_migration(void *arg) +{ + unsigned int i, n_done; + bool vcpu_done; + + do { + usleep(msecs_to_usecs(test_args.migration_freq_ms)); + + for (n_done = 0, i = 0; i < test_args.nr_vcpus; i++) { + pthread_mutex_lock(&vcpu_done_map_lock); + vcpu_done = test_bit(i, vcpu_done_map); + pthread_mutex_unlock(&vcpu_done_map_lock); + + if (vcpu_done) { + n_done++; + continue; + } + + test_migrate_vcpu(i); + } + } while (test_args.nr_vcpus != n_done); + + return NULL; +} + +static void test_run(struct kvm_vm *vm) +{ + pthread_t pt_vcpu_migration; + unsigned int i; + int ret; + + pthread_mutex_init(&vcpu_done_map_lock, NULL); + vcpu_done_map = bitmap_zalloc(test_args.nr_vcpus); + TEST_ASSERT(vcpu_done_map, "Failed to allocate vcpu done bitmap\n"); + + for (i = 0; i < (unsigned long)test_args.nr_vcpus; i++) { + ret = pthread_create(&pt_vcpu_run[i], NULL, test_vcpu_run, + (void *)(unsigned long)i); + TEST_ASSERT(!ret, "Failed to create vCPU-%d pthread\n", i); + } + + /* Spawn a thread to control the vCPU migrations */ + if (test_args.migration_freq_ms) { + srand(time(NULL)); + + ret = pthread_create(&pt_vcpu_migration, NULL, + test_vcpu_migration, NULL); + TEST_ASSERT(!ret, "Failed to create the migration pthread\n"); + } + + for (i = 0; i < test_args.nr_vcpus; i++) + pthread_join(pt_vcpu_run[i], NULL); + + if (test_args.migration_freq_ms) + pthread_join(pt_vcpu_migration, NULL); + + bitmap_free(vcpu_done_map); +} + +static void test_init_timer_freq(struct kvm_vm *vm) +{ + /* Timer frequency should be same for all the vCPUs, so query only vCPU-0 */ + vcpu_get_reg(vcpus[0], RISCV_TIMER_REG(frequency), &timer_freq); + sync_global_to_guest(vm, timer_freq); + + pr_debug("timer_freq: %lu\n", timer_freq); +} + +static struct kvm_vm *test_vm_create(void) +{ + struct kvm_vm *vm; + int nr_vcpus = test_args.nr_vcpus; + + vm = vm_create_with_vcpus(nr_vcpus, guest_code, vcpus); + __TEST_REQUIRE(vcpu_has_ext(vcpus[0], KVM_RISCV_ISA_EXT_SSTC), + "SSTC not available, skipping test\n"); + + vm_init_trap_vector_tables(vm); + vm_install_interrupt_handler(vm, 
guest_irq_handler); + + for (int i = 0; i < nr_vcpus; i++) + vcpu_init_trap_vector_tables(vcpus[i]); + + test_init_timer_freq(vm); + + /* Make all the test's cmdline args visible to the guest */ + sync_global_to_guest(vm, test_args); + + return vm; +} + +static void test_vm_cleanup(struct kvm_vm *vm) +{ + kvm_vm_free(vm); +} + +static void test_print_help(char *name) +{ + pr_info("Usage: %s [-h] [-n nr_vcpus] [-i iterations] [-p timer_period_ms]\n", + name); + pr_info("\t-n: Number of vCPUs to configure (default: %u; max: %u)\n", + NR_VCPUS_DEF, KVM_MAX_VCPUS); + pr_info("\t-i: Number of iterations per stage (default: %u)\n", + NR_TEST_ITERS_DEF); + pr_info("\t-p: Periodicity (in ms) of the guest timer (default: %u)\n", + TIMER_TEST_PERIOD_MS_DEF); + pr_info("\t-m: Frequency (in ms) of vCPUs to migrate to different pCPU. 0 to turn off (default: %u)\n", + TIMER_TEST_MIGRATION_FREQ_MS); + pr_info("\t-h: print this help screen\n"); +} + +static bool parse_args(int argc, char *argv[]) +{ + int opt; + + while ((opt = getopt(argc, argv, "hn:i:p:m:o:")) != -1) { + switch (opt) { + case 'n': + test_args.nr_vcpus = atoi_positive("Number of vCPUs", optarg); + if (test_args.nr_vcpus > KVM_MAX_VCPUS) { + pr_info("Max allowed vCPUs: %u\n", + KVM_MAX_VCPUS); + goto err; + } + break; + case 'i': + test_args.nr_iter = atoi_positive("Number of iterations", optarg); + break; + case 'p': + test_args.timer_period_ms = atoi_positive("Periodicity", optarg); + break; + case 'm': + test_args.migration_freq_ms = atoi_non_negative("Frequency", optarg); + break; + case 'h': + default: + goto err; + } + } + + return true; + +err: + test_print_help(argv[0]); + return false; +} + +int main(int argc, char *argv[]) +{ + struct kvm_vm *vm; + + if (!parse_args(argc, argv)) + exit(KSFT_SKIP); + + __TEST_REQUIRE(!test_args.migration_freq_ms || get_nprocs() >= 2, + "At least two physical CPUs needed for vCPU migration"); + + vm = test_vm_create(); + test_run(vm); + test_vm_cleanup(vm); + + return 0; +}
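For reference, once built via the kselftest Makefile, the test is expected to
be run along these lines (the binary path and option values are illustrative;
the defaults come from the constants at the top of the file):

  ./riscv/sstc_timer -n 2 -i 10 -p 20 -m 5

i.e. two vCPUs, ten iterations per vCPU, a 20 ms timer period, and a vCPU
migration every 5 ms; passing -m 0 disables the migration thread.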