From patchwork Thu Jan 12 14:02:58 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 13098038
From: Anup Patel
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Andrew Jones, Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/7] RISC-V: Add AIA related CSR defines
Date: Thu, 12 Jan 2023 19:32:58 +0530
Message-Id: <20230112140304.1830648-2-apatel@ventanamicro.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To:
<20230112140304.1830648-1-apatel@ventanamicro.com> References: <20230112140304.1830648-1-apatel@ventanamicro.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org The RISC-V AIA specification improves handling per-HART local interrupts in a backward compatible manner. This patch adds defines for new RISC-V AIA CSRs. Signed-off-by: Anup Patel Reviewed-by: Andrew Jones --- arch/riscv/include/asm/csr.h | 93 ++++++++++++++++++++++++++++++++++++ 1 file changed, 93 insertions(+) diff --git a/arch/riscv/include/asm/csr.h b/arch/riscv/include/asm/csr.h index 0e571f6483d9..d608dac4b19f 100644 --- a/arch/riscv/include/asm/csr.h +++ b/arch/riscv/include/asm/csr.h @@ -73,7 +73,10 @@ #define IRQ_S_EXT 9 #define IRQ_VS_EXT 10 #define IRQ_M_EXT 11 +#define IRQ_S_GEXT 12 #define IRQ_PMU_OVF 13 +#define IRQ_LOCAL_MAX (IRQ_PMU_OVF + 1) +#define IRQ_LOCAL_MASK ((_AC(1, UL) << IRQ_LOCAL_MAX) - 1) /* Exception causes */ #define EXC_INST_MISALIGNED 0 @@ -156,6 +159,27 @@ (_AC(1, UL) << IRQ_S_TIMER) | \ (_AC(1, UL) << IRQ_S_EXT)) +/* AIA CSR bits */ +#define TOPI_IID_SHIFT 16 +#define TOPI_IID_MASK _AC(0xfff, UL) +#define TOPI_IPRIO_MASK _AC(0xff, UL) +#define TOPI_IPRIO_BITS 8 + +#define TOPEI_ID_SHIFT 16 +#define TOPEI_ID_MASK _AC(0x7ff, UL) +#define TOPEI_PRIO_MASK _AC(0x7ff, UL) + +#define ISELECT_IPRIO0 0x30 +#define ISELECT_IPRIO15 0x3f +#define ISELECT_MASK _AC(0x1ff, UL) + +#define HVICTL_VTI _AC(0x40000000, UL) +#define HVICTL_IID _AC(0x0fff0000, UL) +#define HVICTL_IID_SHIFT 16 +#define HVICTL_DPR _AC(0x00000200, UL) +#define HVICTL_IPRIOM _AC(0x00000100, UL) +#define HVICTL_IPRIO _AC(0x000000ff, UL) + /* xENVCFG flags */ #define ENVCFG_STCE (_AC(1, ULL) << 63) #define ENVCFG_PBMTE (_AC(1, ULL) << 62) @@ -250,6 +274,18 @@ #define CSR_STIMECMP 0x14D #define CSR_STIMECMPH 0x15D +/* Supervisor-Level Window to Indirectly Accessed Registers (AIA) */ +#define CSR_SISELECT 0x150 +#define CSR_SIREG 0x151 + +/* Supervisor-Level Interrupts (AIA) */ +#define CSR_STOPEI 0x15c +#define CSR_STOPI 0xdb0 + +/* Supervisor-Level High-Half CSRs (AIA) */ +#define CSR_SIEH 0x114 +#define CSR_SIPH 0x154 + #define CSR_VSSTATUS 0x200 #define CSR_VSIE 0x204 #define CSR_VSTVEC 0x205 @@ -279,8 +315,32 @@ #define CSR_HGATP 0x680 #define CSR_HGEIP 0xe12 +/* Virtual Interrupts and Interrupt Priorities (H-extension with AIA) */ +#define CSR_HVIEN 0x608 +#define CSR_HVICTL 0x609 +#define CSR_HVIPRIO1 0x646 +#define CSR_HVIPRIO2 0x647 + +/* VS-Level Window to Indirectly Accessed Registers (H-extension with AIA) */ +#define CSR_VSISELECT 0x250 +#define CSR_VSIREG 0x251 + +/* VS-Level Interrupts (H-extension with AIA) */ +#define CSR_VSTOPEI 0x25c +#define CSR_VSTOPI 0xeb0 + +/* Hypervisor and VS-Level High-Half CSRs (H-extension with AIA) */ +#define CSR_HIDELEGH 0x613 +#define CSR_HVIENH 0x618 +#define CSR_HVIPH 0x655 +#define CSR_HVIPRIO1H 0x656 +#define CSR_HVIPRIO2H 0x657 +#define CSR_VSIEH 0x214 +#define CSR_VSIPH 0x254 + #define CSR_MSTATUS 0x300 #define CSR_MISA 0x301 +#define CSR_MIDELEG 0x303 #define CSR_MIE 0x304 #define CSR_MTVEC 0x305 #define CSR_MENVCFG 0x30a @@ -297,6 +357,25 @@ #define CSR_MIMPID 0xf13 #define CSR_MHARTID 0xf14 +/* Machine-Level Window to Indirectly Accessed Registers (AIA) */ +#define CSR_MISELECT 0x350 +#define CSR_MIREG 0x351 + +/* Machine-Level Interrupts (AIA) */ +#define CSR_MTOPEI 0x35c +#define CSR_MTOPI 0xfb0 + +/* Virtual Interrupts for Supervisor Level (AIA) */ +#define CSR_MVIEN 0x308 +#define CSR_MVIP 0x309 + +/* Machine-Level High-Half CSRs (AIA) */ +#define 
CSR_MIDELEGH 0x313 +#define CSR_MIEH 0x314 +#define CSR_MVIENH 0x318 +#define CSR_MVIPH 0x319 +#define CSR_MIPH 0x354 + #ifdef CONFIG_RISCV_M_MODE # define CSR_STATUS CSR_MSTATUS # define CSR_IE CSR_MIE @@ -307,6 +386,13 @@ # define CSR_TVAL CSR_MTVAL # define CSR_IP CSR_MIP +# define CSR_IEH CSR_MIEH +# define CSR_ISELECT CSR_MISELECT +# define CSR_IREG CSR_MIREG +# define CSR_IPH CSR_MIPH +# define CSR_TOPEI CSR_MTOPEI +# define CSR_TOPI CSR_MTOPI + # define SR_IE SR_MIE # define SR_PIE SR_MPIE # define SR_PP SR_MPP @@ -324,6 +410,13 @@ # define CSR_TVAL CSR_STVAL # define CSR_IP CSR_SIP +# define CSR_IEH CSR_SIEH +# define CSR_ISELECT CSR_SISELECT +# define CSR_IREG CSR_SIREG +# define CSR_IPH CSR_SIPH +# define CSR_TOPEI CSR_STOPEI +# define CSR_TOPI CSR_STOPI + # define SR_IE SR_SIE # define SR_PIE SR_SPIE # define SR_PP SR_SPP From patchwork Thu Jan 12 14:02:59 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anup Patel X-Patchwork-Id: 13098040 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3EDE6C54EBC for ; Thu, 12 Jan 2023 14:04:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234270AbjALOED (ORCPT ); Thu, 12 Jan 2023 09:04:03 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57880 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234056AbjALODk (ORCPT ); Thu, 12 Jan 2023 09:03:40 -0500 Received: from mail-pj1-x1029.google.com (mail-pj1-x1029.google.com [IPv6:2607:f8b0:4864:20::1029]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A71D152741 for ; Thu, 12 Jan 2023 06:03:33 -0800 (PST) Received: by mail-pj1-x1029.google.com with SMTP id o7-20020a17090a0a0700b00226c9b82c3aso20961392pjo.3 for ; Thu, 12 Jan 2023 06:03:33 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ventanamicro.com; s=google; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=wjWQ9F0LaanvIi9SVaFlkrqw0WGzFdxUF931tj4YTWo=; b=T+9X/H4J/G3JlwF9psZ/cWzPV07niIG+bOZMhVU214/rAQWZSLhMuBw5PuaNrOyaiV oqNriOfX2cRc1DkYlgZq4HYRQbTaH5XagEPPtJ3krBvxRJEhA3JCjlUbuB7DUnDXQUuI 7MyPPjJ/vUwaVOW+hGCKlfcUqEFjFT5g2Dly6RCyg51Sgpc+d2G/Xa4t6nP/Aqtd7J98 Xt5v20IOdlplskt/PUCFRCi4wP2INn335nSmKdFV8gLZ5tJMMFu4xdEUnaVpHAzhcSo4 ZThl76vGl/Um6/ilm2ElYvHQOegLNry4OHmaaLOO5h6UcJgxFxVS7+wyL8gGR+kuUSCl Y7Zg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=wjWQ9F0LaanvIi9SVaFlkrqw0WGzFdxUF931tj4YTWo=; b=5H0wgwLkfqhUgACWM1i8fn023MCbadaa3Q+1+xclK0bN1V0ryMrUK3Z8ZVTe12aCgh ByRUrQcpeg5wtvoqLq0A+5BXDLJ9MTy7k7Isgjy/DVQWXTHE6TnARU5uYR6RtFE/YCIJ H2VoXoEuShLI57rXLCcn2tvmRax+am0adxNvdm5JRju4LYTVnXpxuxYy8vpVz+2NnpKQ gIzhfmA32ZyU19gye/yiywW+u1KTzgZ/+AsBqqFZdWxkajttQXewuYR0N0jRglqInxab w5qtoBLw9jGq161B4JmHiuzafj3XGjhVkiDr3Xg4XdwGepK/3XGrhL7Nn/UIY3gQDkn6 xnLA== X-Gm-Message-State: AFqh2krhal7uuM1xjo9Tu1CgW/RMTX88996FWcnRCHpbZAg9vaWhKCNe P32qctk37bgQlFQ2vA6pjE+9NA== X-Google-Smtp-Source: AMrXdXscgqqsMnhBi0KMkEXRBk0fS7yh3irzNgU/w6MnzE3yQhSAfPlAdtp/WbKEGvyg5x1cv+sNYg== X-Received: by 
2002:a17:902:ba12:b0:194:4016:11ac with SMTP id j18-20020a170902ba1200b00194401611acmr10522106pls.30.1673532213046; Thu, 12 Jan 2023 06:03:33 -0800 (PST) Received: from anup-ubuntu-vm.localdomain ([103.97.165.210]) by smtp.gmail.com with ESMTPSA id d11-20020a170902cecb00b001925016e34bsm12351455plg.79.2023.01.12.06.03.26 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 12 Jan 2023 06:03:32 -0800 (PST) From: Anup Patel To: Paolo Bonzini , Atish Patra Cc: Palmer Dabbelt , Paul Walmsley , Andrew Jones , Anup Patel , kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel Subject: [PATCH 2/7] RISC-V: Detect AIA CSRs from ISA string Date: Thu, 12 Jan 2023 19:32:59 +0530 Message-Id: <20230112140304.1830648-3-apatel@ventanamicro.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230112140304.1830648-1-apatel@ventanamicro.com> References: <20230112140304.1830648-1-apatel@ventanamicro.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org We have two extension names for AIA ISA support: Smaia (M-mode AIA CSRs) and Ssaia (S-mode AIA CSRs). We extend the ISA string parsing to detect Smaia and Ssaia extensions. Signed-off-by: Anup Patel Reviewed-by: Andrew Jones --- arch/riscv/include/asm/hwcap.h | 8 ++++++++ arch/riscv/kernel/cpu.c | 2 ++ arch/riscv/kernel/cpufeature.c | 2 ++ 3 files changed, 12 insertions(+) diff --git a/arch/riscv/include/asm/hwcap.h b/arch/riscv/include/asm/hwcap.h index 86328e3acb02..c649e85ed7bb 100644 --- a/arch/riscv/include/asm/hwcap.h +++ b/arch/riscv/include/asm/hwcap.h @@ -59,10 +59,18 @@ enum riscv_isa_ext_id { RISCV_ISA_EXT_ZIHINTPAUSE, RISCV_ISA_EXT_SSTC, RISCV_ISA_EXT_SVINVAL, + RISCV_ISA_EXT_SSAIA, + RISCV_ISA_EXT_SMAIA, RISCV_ISA_EXT_ID_MAX }; static_assert(RISCV_ISA_EXT_ID_MAX <= RISCV_ISA_EXT_MAX); +#ifdef CONFIG_RISCV_M_MODE +#define RISCV_ISA_EXT_SxAIA RISCV_ISA_EXT_SMAIA +#else +#define RISCV_ISA_EXT_SxAIA RISCV_ISA_EXT_SSAIA +#endif + /* * This enum represents the logical ID for each RISC-V ISA extension static * keys. We can use static key to optimize code path if some ISA extensions diff --git a/arch/riscv/kernel/cpu.c b/arch/riscv/kernel/cpu.c index 1b9a5a66e55a..a215ec929160 100644 --- a/arch/riscv/kernel/cpu.c +++ b/arch/riscv/kernel/cpu.c @@ -162,6 +162,8 @@ arch_initcall(riscv_cpuinfo_init); * extensions by an underscore. 
*/ static struct riscv_isa_ext_data isa_ext_arr[] = { + __RISCV_ISA_EXT_DATA(smaia, RISCV_ISA_EXT_SMAIA), + __RISCV_ISA_EXT_DATA(ssaia, RISCV_ISA_EXT_SSAIA), __RISCV_ISA_EXT_DATA(sscofpmf, RISCV_ISA_EXT_SSCOFPMF), __RISCV_ISA_EXT_DATA(sstc, RISCV_ISA_EXT_SSTC), __RISCV_ISA_EXT_DATA(svinval, RISCV_ISA_EXT_SVINVAL), diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c index 93e45560af30..3c5b51f519d5 100644 --- a/arch/riscv/kernel/cpufeature.c +++ b/arch/riscv/kernel/cpufeature.c @@ -228,6 +228,8 @@ void __init riscv_fill_hwcap(void) SET_ISA_EXT_MAP("zihintpause", RISCV_ISA_EXT_ZIHINTPAUSE); SET_ISA_EXT_MAP("sstc", RISCV_ISA_EXT_SSTC); SET_ISA_EXT_MAP("svinval", RISCV_ISA_EXT_SVINVAL); + SET_ISA_EXT_MAP("smaia", RISCV_ISA_EXT_SMAIA); + SET_ISA_EXT_MAP("ssaia", RISCV_ISA_EXT_SSAIA); } #undef SET_ISA_EXT_MAP } From patchwork Thu Jan 12 14:03:00 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anup Patel X-Patchwork-Id: 13098039 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 58E08C54EBD for ; Thu, 12 Jan 2023 14:04:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234001AbjALOEA (ORCPT ); Thu, 12 Jan 2023 09:04:00 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57926 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234751AbjALODn (ORCPT ); Thu, 12 Jan 2023 09:03:43 -0500 Received: from mail-pj1-x1034.google.com (mail-pj1-x1034.google.com [IPv6:2607:f8b0:4864:20::1034]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8A6D6532AD for ; Thu, 12 Jan 2023 06:03:40 -0800 (PST) Received: by mail-pj1-x1034.google.com with SMTP id z1-20020a17090a66c100b00226f05b9595so5866295pjl.0 for ; Thu, 12 Jan 2023 06:03:40 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ventanamicro.com; s=google; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=41mwjRtAV9Ps8+RICLlcEu1Jp8E7xzwVnsOTMb697U8=; b=Cx9wgXiG3yGJIP13AxtRLpEyPm0oHQsEnGUEH13SkW0zvi6xNemz4LlDlYS9AOoc6E 3CHh1eoF1q5fDipXLRzUd5Ecg06hisNoF/bGEbx5ujjl5KvD1iOskHMzb/dkc63SsYQy H5mzf4RXtvetFCoBGo/S/MSYHSTLohsQ7hvwjGUGq37fBrZZlw+bEyB9p5P6s4s1lxa4 uyXKOhwk5Y5EVjbGWtYFn5aNcpmNCQG1BOL2WjHYGCW43O32Pv4blj6GahAXGHgQwKGR 6w0BzQUpAycWFRh1dU5nk3k6mkW0qLJkvte40xai5RWIQi1R81H+MT6k9LUmDVe3gvdF 8k9g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=41mwjRtAV9Ps8+RICLlcEu1Jp8E7xzwVnsOTMb697U8=; b=J5dPV8sFM5IapkpRO5AVvq/LqUIg0goMd/5L/ruggT9RJf/ip77P6SW9iBor1fc6Af gktPWs4vpsIDUx/S6aE9Axly3ZAw6Tt86CwniKoJ3UgSCYKaHC/buqqvhjMoi6m5lnb8 U4wNhbLrgJdq1I9jSyV8r8+xq3haRtLxHTVpBgh9j2xjhetDvEjHWQ621MTVs6Z6xKOh wYpyFYdO4x8Al2zViR1N7D7jHYejABoWVF4WiIEh/UBA4QVRNr9kmI87/pM7FVwfrpMh ZNsIYk3ynXR6jbNUw2kYbzv1do11dSIZVoauFARdRIwv1Q0XWA/k93Ry+shVKX4lceOL Lr3w== X-Gm-Message-State: AFqh2kp+6/7Ryz0tBPueXjgLSSiZpEy8pkZnhsGG+IOr3bFQF6PAG/Mh D4FjhodWrb7iCeD6F1JHRpjdcQ== X-Google-Smtp-Source: AMrXdXuoqF9O/2Eth/euiEsYL9iZeQ5f5ihqzpJRnRU6lFfvKPaGgzYrAJ6y+uWOnFZ67on7TXRK/g== X-Received: by 
2002:a17:902:8543:b0:192:9454:7b32 with SMTP id d3-20020a170902854300b0019294547b32mr48252741plo.40.1673532219874; Thu, 12 Jan 2023 06:03:39 -0800 (PST) Received: from anup-ubuntu-vm.localdomain ([103.97.165.210]) by smtp.gmail.com with ESMTPSA id d11-20020a170902cecb00b001925016e34bsm12351455plg.79.2023.01.12.06.03.33 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 12 Jan 2023 06:03:39 -0800 (PST) From: Anup Patel To: Paolo Bonzini , Atish Patra Cc: Palmer Dabbelt , Paul Walmsley , Andrew Jones , Anup Patel , kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel Subject: [PATCH 3/7] RISC-V: KVM: Drop the _MASK suffix from hgatp.VMID mask defines Date: Thu, 12 Jan 2023 19:33:00 +0530 Message-Id: <20230112140304.1830648-4-apatel@ventanamicro.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230112140304.1830648-1-apatel@ventanamicro.com> References: <20230112140304.1830648-1-apatel@ventanamicro.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org The hgatp.VMID mask defines are used before shifting when extracting VMID value from hgatp CSR value so based on the convention followed in the other parts of asm/csr.h, the hgatp.VMID mask defines should not have a _MASK suffix. Signed-off-by: Anup Patel Reviewed-by: Andrew Jones --- arch/riscv/include/asm/csr.h | 8 ++++---- arch/riscv/kvm/mmu.c | 3 +-- arch/riscv/kvm/vmid.c | 4 ++-- 3 files changed, 7 insertions(+), 8 deletions(-) diff --git a/arch/riscv/include/asm/csr.h b/arch/riscv/include/asm/csr.h index d608dac4b19f..36d580528f90 100644 --- a/arch/riscv/include/asm/csr.h +++ b/arch/riscv/include/asm/csr.h @@ -131,12 +131,12 @@ #define HGATP32_MODE_SHIFT 31 #define HGATP32_VMID_SHIFT 22 -#define HGATP32_VMID_MASK _AC(0x1FC00000, UL) +#define HGATP32_VMID _AC(0x1FC00000, UL) #define HGATP32_PPN _AC(0x003FFFFF, UL) #define HGATP64_MODE_SHIFT 60 #define HGATP64_VMID_SHIFT 44 -#define HGATP64_VMID_MASK _AC(0x03FFF00000000000, UL) +#define HGATP64_VMID _AC(0x03FFF00000000000, UL) #define HGATP64_PPN _AC(0x00000FFFFFFFFFFF, UL) #define HGATP_PAGE_SHIFT 12 @@ -144,12 +144,12 @@ #ifdef CONFIG_64BIT #define HGATP_PPN HGATP64_PPN #define HGATP_VMID_SHIFT HGATP64_VMID_SHIFT -#define HGATP_VMID_MASK HGATP64_VMID_MASK +#define HGATP_VMID HGATP64_VMID #define HGATP_MODE_SHIFT HGATP64_MODE_SHIFT #else #define HGATP_PPN HGATP32_PPN #define HGATP_VMID_SHIFT HGATP32_VMID_SHIFT -#define HGATP_VMID_MASK HGATP32_VMID_MASK +#define HGATP_VMID HGATP32_VMID #define HGATP_MODE_SHIFT HGATP32_MODE_SHIFT #endif diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c index 34b57e0be2ef..034746638fa6 100644 --- a/arch/riscv/kvm/mmu.c +++ b/arch/riscv/kvm/mmu.c @@ -748,8 +748,7 @@ void kvm_riscv_gstage_update_hgatp(struct kvm_vcpu *vcpu) unsigned long hgatp = gstage_mode; struct kvm_arch *k = &vcpu->kvm->arch; - hgatp |= (READ_ONCE(k->vmid.vmid) << HGATP_VMID_SHIFT) & - HGATP_VMID_MASK; + hgatp |= (READ_ONCE(k->vmid.vmid) << HGATP_VMID_SHIFT) & HGATP_VMID; hgatp |= (k->pgd_phys >> PAGE_SHIFT) & HGATP_PPN; csr_write(CSR_HGATP, hgatp); diff --git a/arch/riscv/kvm/vmid.c b/arch/riscv/kvm/vmid.c index 6cd93995fb65..6f4d4979a759 100644 --- a/arch/riscv/kvm/vmid.c +++ b/arch/riscv/kvm/vmid.c @@ -26,9 +26,9 @@ void kvm_riscv_gstage_vmid_detect(void) /* Figure-out number of VMID bits in HW */ old = csr_read(CSR_HGATP); - csr_write(CSR_HGATP, old | HGATP_VMID_MASK); + csr_write(CSR_HGATP, old | HGATP_VMID); vmid_bits = csr_read(CSR_HGATP); - vmid_bits = 
(vmid_bits & HGATP_VMID_MASK) >> HGATP_VMID_SHIFT; + vmid_bits = (vmid_bits & HGATP_VMID) >> HGATP_VMID_SHIFT; vmid_bits = fls_long(vmid_bits); csr_write(CSR_HGATP, old);
From patchwork Thu Jan 12 14:03:01 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 13098041
From: Anup Patel
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Andrew Jones, Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH 4/7] RISC-V: KVM: Initial skeletal support for AIA
Date:
Thu, 12 Jan 2023 19:33:01 +0530 Message-Id: <20230112140304.1830648-5-apatel@ventanamicro.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230112140304.1830648-1-apatel@ventanamicro.com> References: <20230112140304.1830648-1-apatel@ventanamicro.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org To incrementally implement AIA support, we first add minimal skeletal support which only compiles and detects AIA hardware support at the boot-time but does not provide any functionality. Signed-off-by: Anup Patel --- arch/riscv/include/asm/kvm_aia.h | 109 ++++++++++++++++++++++++++++++ arch/riscv/include/asm/kvm_host.h | 7 ++ arch/riscv/kvm/Makefile | 1 + arch/riscv/kvm/aia.c | 66 ++++++++++++++++++ arch/riscv/kvm/main.c | 13 ++++ arch/riscv/kvm/vcpu.c | 40 ++++++++++- arch/riscv/kvm/vcpu_insn.c | 4 +- arch/riscv/kvm/vm.c | 4 ++ 8 files changed, 240 insertions(+), 4 deletions(-) create mode 100644 arch/riscv/include/asm/kvm_aia.h create mode 100644 arch/riscv/kvm/aia.c diff --git a/arch/riscv/include/asm/kvm_aia.h b/arch/riscv/include/asm/kvm_aia.h new file mode 100644 index 000000000000..52b5e8aefb30 --- /dev/null +++ b/arch/riscv/include/asm/kvm_aia.h @@ -0,0 +1,109 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2021 Western Digital Corporation or its affiliates. + * Copyright (C) 2022 Ventana Micro Systems Inc. + * + * Authors: + * Anup Patel + */ + +#ifndef __KVM_RISCV_AIA_H +#define __KVM_RISCV_AIA_H + +#include +#include + +struct kvm_aia { + /* In-kernel irqchip created */ + bool in_kernel; + + /* In-kernel irqchip initialized */ + bool initialized; +}; + +struct kvm_vcpu_aia { +}; + +#define kvm_riscv_aia_initialized(k) (!!((k)->arch.aia.initialized)) + +#define irqchip_in_kernel(k) (!!((k)->arch.aia.in_kernel)) + +DECLARE_STATIC_KEY_FALSE(kvm_riscv_aia_available); +#define kvm_riscv_aia_available() \ + static_branch_unlikely(&kvm_riscv_aia_available) + +static inline void kvm_riscv_vcpu_aia_flush_interrupts(struct kvm_vcpu *vcpu) +{ +} + +static inline void kvm_riscv_vcpu_aia_sync_interrupts(struct kvm_vcpu *vcpu) +{ +} + +static inline bool kvm_riscv_vcpu_aia_has_interrupts(struct kvm_vcpu *vcpu, + u64 mask) +{ + return false; +} + +static inline void kvm_riscv_vcpu_aia_update_hvip(struct kvm_vcpu *vcpu) +{ +} + +static inline void kvm_riscv_vcpu_aia_load(struct kvm_vcpu *vcpu, int cpu) +{ +} + +static inline void kvm_riscv_vcpu_aia_put(struct kvm_vcpu *vcpu) +{ +} + +static inline int kvm_riscv_vcpu_aia_get_csr(struct kvm_vcpu *vcpu, + unsigned long reg_num, + unsigned long *out_val) +{ + *out_val = 0; + return 0; +} + +static inline int kvm_riscv_vcpu_aia_set_csr(struct kvm_vcpu *vcpu, + unsigned long reg_num, + unsigned long val) +{ + return 0; +} + +#define KVM_RISCV_VCPU_AIA_CSR_FUNCS + +static inline int kvm_riscv_vcpu_aia_update(struct kvm_vcpu *vcpu) +{ + return 1; +} + +static inline void kvm_riscv_vcpu_aia_reset(struct kvm_vcpu *vcpu) +{ +} + +static inline int kvm_riscv_vcpu_aia_init(struct kvm_vcpu *vcpu) +{ + return 0; +} + +static inline void kvm_riscv_vcpu_aia_deinit(struct kvm_vcpu *vcpu) +{ +} + +static inline void kvm_riscv_aia_init_vm(struct kvm *kvm) +{ +} + +static inline void kvm_riscv_aia_destroy_vm(struct kvm *kvm) +{ +} + +void kvm_riscv_aia_enable(void); +void kvm_riscv_aia_disable(void); +int kvm_riscv_aia_init(void); +void kvm_riscv_aia_exit(void); + +#endif diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h index 93f43a3e7886..b71f0e639b39 100644 --- 
a/arch/riscv/include/asm/kvm_host.h +++ b/arch/riscv/include/asm/kvm_host.h @@ -14,6 +14,7 @@ #include #include #include +#include #include #include #include @@ -93,6 +94,9 @@ struct kvm_arch { /* Guest Timer */ struct kvm_guest_timer timer; + + /* AIA Guest/VM context */ + struct kvm_aia aia; }; struct kvm_cpu_trap { @@ -220,6 +224,9 @@ struct kvm_vcpu_arch { /* SBI context */ struct kvm_vcpu_sbi_context sbi_context; + /* AIA VCPU context */ + struct kvm_vcpu_aia aia; + /* Cache pages needed to program page tables with spinlock held */ struct kvm_mmu_memory_cache mmu_page_cache; diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile index 019df9208bdd..adbc85a94364 100644 --- a/arch/riscv/kvm/Makefile +++ b/arch/riscv/kvm/Makefile @@ -25,3 +25,4 @@ kvm-y += vcpu_sbi_base.o kvm-y += vcpu_sbi_replace.o kvm-y += vcpu_sbi_hsm.o kvm-y += vcpu_timer.o +kvm-y += aia.o diff --git a/arch/riscv/kvm/aia.c b/arch/riscv/kvm/aia.c new file mode 100644 index 000000000000..e7fa4e014d08 --- /dev/null +++ b/arch/riscv/kvm/aia.c @@ -0,0 +1,66 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2021 Western Digital Corporation or its affiliates. + * Copyright (C) 2022 Ventana Micro Systems Inc. + * + * Authors: + * Anup Patel + */ + +#include +#include + +DEFINE_STATIC_KEY_FALSE(kvm_riscv_aia_available); + +static inline void aia_set_hvictl(bool ext_irq_pending) +{ + unsigned long hvictl; + + /* + * HVICTL.IID == 9 and HVICTL.IPRIO == 0 represents + * no interrupt in HVICTL. + */ + + hvictl = (IRQ_S_EXT << HVICTL_IID_SHIFT) & HVICTL_IID; + hvictl |= (ext_irq_pending) ? 1 : 0; + csr_write(CSR_HVICTL, hvictl); +} + +void kvm_riscv_aia_enable(void) +{ + if (!kvm_riscv_aia_available()) + return; + + aia_set_hvictl(false); + csr_write(CSR_HVIPRIO1, 0x0); + csr_write(CSR_HVIPRIO2, 0x0); +#ifdef CONFIG_32BIT + csr_write(CSR_HVIPH, 0x0); + csr_write(CSR_HIDELEGH, 0x0); + csr_write(CSR_HVIPRIO1H, 0x0); + csr_write(CSR_HVIPRIO2H, 0x0); +#endif +} + +void kvm_riscv_aia_disable(void) +{ + if (!kvm_riscv_aia_available()) + return; + + aia_set_hvictl(false); +} + +int kvm_riscv_aia_init(void) +{ + if (!riscv_isa_extension_available(NULL, SxAIA)) + return -ENODEV; + + /* Enable KVM AIA support */ + static_branch_enable(&kvm_riscv_aia_available); + + return 0; +} + +void kvm_riscv_aia_exit(void) +{ +} diff --git a/arch/riscv/kvm/main.c b/arch/riscv/kvm/main.c index 58c5489d3031..d8ff44eb04ca 100644 --- a/arch/riscv/kvm/main.c +++ b/arch/riscv/kvm/main.c @@ -53,11 +53,15 @@ int kvm_arch_hardware_enable(void) csr_write(CSR_HVIP, 0); + kvm_riscv_aia_enable(); + return 0; } void kvm_arch_hardware_disable(void) { + kvm_riscv_aia_disable(); + /* * After clearing the hideleg CSR, the host kernel will receive * spurious interrupts if hvip CSR has pending interrupts and the @@ -72,6 +76,7 @@ void kvm_arch_hardware_disable(void) int kvm_arch_init(void *opaque) { + int rc; const char *str; if (!riscv_isa_extension_available(NULL, h)) { @@ -93,6 +98,10 @@ int kvm_arch_init(void *opaque) kvm_riscv_gstage_vmid_detect(); + rc = kvm_riscv_aia_init(); + if (rc && rc != -ENODEV) + return rc; + kvm_info("hypervisor extension available\n"); switch (kvm_riscv_gstage_mode()) { @@ -115,11 +124,15 @@ int kvm_arch_init(void *opaque) kvm_info("VMID %ld bits available\n", kvm_riscv_gstage_vmid_bits()); + if (kvm_riscv_aia_available()) + kvm_info("AIA available\n"); + return 0; } void kvm_arch_exit(void) { + kvm_riscv_aia_exit(); } static int __init riscv_kvm_init(void) diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c index 
2260adaf2de8..3cf50eadc8ce 100644 --- a/arch/riscv/kvm/vcpu.c +++ b/arch/riscv/kvm/vcpu.c @@ -135,6 +135,8 @@ static void kvm_riscv_reset_vcpu(struct kvm_vcpu *vcpu) kvm_riscv_vcpu_timer_reset(vcpu); + kvm_riscv_vcpu_aia_reset(vcpu); + WRITE_ONCE(vcpu->arch.irqs_pending, 0); WRITE_ONCE(vcpu->arch.irqs_pending_mask, 0); @@ -155,6 +157,7 @@ int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id) int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) { + int rc; struct kvm_cpu_context *cntx; struct kvm_vcpu_csr *reset_csr = &vcpu->arch.guest_reset_csr; unsigned long host_isa, i; @@ -194,6 +197,11 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) /* Setup VCPU timer */ kvm_riscv_vcpu_timer_init(vcpu); + /* Setup VCPU AIA */ + rc = kvm_riscv_vcpu_aia_init(vcpu); + if (rc) + return rc; + /* Reset VCPU */ kvm_riscv_reset_vcpu(vcpu); @@ -213,6 +221,9 @@ void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu) void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu) { + /* Cleanup VCPU AIA context */ + kvm_riscv_vcpu_aia_deinit(vcpu); + /* Cleanup VCPU timer */ kvm_riscv_vcpu_timer_deinit(vcpu); @@ -730,6 +741,9 @@ void kvm_riscv_vcpu_flush_interrupts(struct kvm_vcpu *vcpu) csr->hvip &= ~mask; csr->hvip |= val; } + + /* Flush AIA high interrupts */ + kvm_riscv_vcpu_aia_flush_interrupts(vcpu); } void kvm_riscv_vcpu_sync_interrupts(struct kvm_vcpu *vcpu) @@ -755,6 +769,9 @@ void kvm_riscv_vcpu_sync_interrupts(struct kvm_vcpu *vcpu) } } + /* Sync-up AIA high interrupts */ + kvm_riscv_vcpu_aia_sync_interrupts(vcpu); + /* Sync-up timer CSRs */ kvm_riscv_vcpu_timer_sync(vcpu); } @@ -791,10 +808,15 @@ int kvm_riscv_vcpu_unset_interrupt(struct kvm_vcpu *vcpu, unsigned int irq) bool kvm_riscv_vcpu_has_interrupts(struct kvm_vcpu *vcpu, unsigned long mask) { - unsigned long ie = ((vcpu->arch.guest_csr.vsie & VSIP_VALID_MASK) - << VSIP_TO_HVIP_SHIFT) & mask; + unsigned long ie; + + ie = ((vcpu->arch.guest_csr.vsie & VSIP_VALID_MASK) + << VSIP_TO_HVIP_SHIFT) & mask; + if (READ_ONCE(vcpu->arch.irqs_pending) & ie) + return true; - return (READ_ONCE(vcpu->arch.irqs_pending) & ie) ? true : false; + /* Check AIA high interrupts */ + return kvm_riscv_vcpu_aia_has_interrupts(vcpu, mask); } void kvm_riscv_vcpu_power_off(struct kvm_vcpu *vcpu) @@ -890,6 +912,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu) kvm_riscv_vcpu_guest_fp_restore(&vcpu->arch.guest_context, vcpu->arch.isa); + kvm_riscv_vcpu_aia_load(vcpu, cpu); + vcpu->cpu = cpu; } @@ -899,6 +923,8 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu) vcpu->cpu = -1; + kvm_riscv_vcpu_aia_put(vcpu); + kvm_riscv_vcpu_guest_fp_save(&vcpu->arch.guest_context, vcpu->arch.isa); kvm_riscv_vcpu_host_fp_restore(&vcpu->arch.host_context); @@ -966,6 +992,7 @@ static void kvm_riscv_update_hvip(struct kvm_vcpu *vcpu) struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr; csr_write(CSR_HVIP, csr->hvip); + kvm_riscv_vcpu_aia_update_hvip(vcpu); } /* @@ -1040,6 +1067,13 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) local_irq_disable(); + /* Update AIA HW state before entering guest */ + ret = kvm_riscv_vcpu_aia_update(vcpu); + if (ret <= 0) { + local_irq_enable(); + continue; + } + /* * Ensure we set mode to IN_GUEST_MODE after we disable * interrupts and before the final VCPU requests check. 
diff --git a/arch/riscv/kvm/vcpu_insn.c b/arch/riscv/kvm/vcpu_insn.c index 0bb52761a3f7..07e8c121922b 100644 --- a/arch/riscv/kvm/vcpu_insn.c +++ b/arch/riscv/kvm/vcpu_insn.c @@ -213,7 +213,9 @@ struct csr_func { unsigned long wr_mask); }; -static const struct csr_func csr_funcs[] = { }; +static const struct csr_func csr_funcs[] = { + KVM_RISCV_VCPU_AIA_CSR_FUNCS +}; /** * kvm_riscv_vcpu_csr_return -- Handle CSR read/write after user space diff --git a/arch/riscv/kvm/vm.c b/arch/riscv/kvm/vm.c index 65a964d7e70d..bc03d2ddcb51 100644 --- a/arch/riscv/kvm/vm.c +++ b/arch/riscv/kvm/vm.c @@ -41,6 +41,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) return r; } + kvm_riscv_aia_init_vm(kvm); + kvm_riscv_guest_timer_init(kvm); return 0; @@ -49,6 +51,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) void kvm_arch_destroy_vm(struct kvm *kvm) { kvm_destroy_vcpus(kvm); + + kvm_riscv_aia_destroy_vm(kvm); } int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext) From patchwork Thu Jan 12 14:03:02 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anup Patel X-Patchwork-Id: 13098042 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id BE665C54EBD for ; Thu, 12 Jan 2023 14:04:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234919AbjALOEv (ORCPT ); Thu, 12 Jan 2023 09:04:51 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58288 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229973AbjALODy (ORCPT ); Thu, 12 Jan 2023 09:03:54 -0500 Received: from mail-pl1-x62e.google.com (mail-pl1-x62e.google.com [IPv6:2607:f8b0:4864:20::62e]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0F1DB52C45 for ; Thu, 12 Jan 2023 06:03:54 -0800 (PST) Received: by mail-pl1-x62e.google.com with SMTP id d9so20298865pll.9 for ; Thu, 12 Jan 2023 06:03:54 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ventanamicro.com; s=google; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=93CfSKJPDU7PTN/vln3/W3P9lvmTXOG9HS5NigDMRQk=; b=TtFdbsFxKCkUNZLrxn6bfDnrsLVCSKpAjqU5S8A4Ig2O9cIPeHoCAQx1lpiYWrUv+Q EYd/TUc4jv41W3LN52mqARuglQVgS+QQOcgJdTHKZQ/IE+T2ZgUMAryWlZvQle+GvP23 t3j8lK3E8n1MYgilQfz+07GEVVt50i1CD17O0YhGvIVouXMRJlkAR5+K/IjNYXwPMaWh XdFtpPiVNWGE9cTvfXnH9pyQCQc10WpWLSswig9shidZxHm94jkxkGRGksr/gGePXAiV VcB0Bzqi+rlWRKXhjl7Ye83PEEPsuCf6U9Ak7wTOb336nMyD8tvFTfqOcQl6mZCYZRD9 u6aQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=93CfSKJPDU7PTN/vln3/W3P9lvmTXOG9HS5NigDMRQk=; b=w6aFlhFTFSy24UHojXS5u29TLhFJHEpma5sFzVrxdDGX2rtb/0YCkxuy4Mlj4wTNeY swSTKVPqdbpjcG20jRTLOeZQ3p7AaCbk6HJ5ZZ+RY8Yl9pm/o3yOEicW7bIMu5lP7rOn d/1JwHPEqbmo1Ncom7zbzrW5x+l5/PNm392yIRvMLauIec2N99D6P7b2VFaxsBLGXD6Q 8CpKqts24osrvPStDOzwrS4AbyIQsKkNj5EcV49Rr4MDPOB8+T7sdYqiIxvSDas+XggH WDF9OCsNgVhzHgz67BkcPlCoHz/IMbloLtxetQQ3vRySG+n/o0Isj/Zw5CSRWSex2WdL BlYg== X-Gm-Message-State: AFqh2krUgbFh1oOCK/NiCgpJEdkHXkxt70dJN+Xpf3Eza6V96q9v1VOV 99x+S1Rwgz0KmAfh1p1ncZMKXw== X-Google-Smtp-Source: 
AMrXdXuzoNid7t8Qdhpy1sjGSlSv6eUXO5rUq30goaZ2JzM2mvwknoxllEXsjE86vEendiYHojR7/g== X-Received: by 2002:a17:902:aa8e:b0:191:4575:48aa with SMTP id d14-20020a170902aa8e00b00191457548aamr71848736plr.11.1673532233320; Thu, 12 Jan 2023 06:03:53 -0800 (PST) Received: from anup-ubuntu-vm.localdomain ([103.97.165.210]) by smtp.gmail.com with ESMTPSA id d11-20020a170902cecb00b001925016e34bsm12351455plg.79.2023.01.12.06.03.47 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 12 Jan 2023 06:03:52 -0800 (PST) From: Anup Patel To: Paolo Bonzini , Atish Patra Cc: Palmer Dabbelt , Paul Walmsley , Andrew Jones , Anup Patel , kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel Subject: [PATCH 5/7] RISC-V: KVM: Add ONE_REG interface for AIA CSRs Date: Thu, 12 Jan 2023 19:33:02 +0530 Message-Id: <20230112140304.1830648-6-apatel@ventanamicro.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230112140304.1830648-1-apatel@ventanamicro.com> References: <20230112140304.1830648-1-apatel@ventanamicro.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org We extend the CSR ONE_REG interface to access both general CSRs and AIA CSRs. To achieve this, we introduce "subtype" field in the ONE_REG id which can be used for grouping registers within a particular "type" of ONE_REG registers. Signed-off-by: Anup Patel --- arch/riscv/include/uapi/asm/kvm.h | 15 ++++- arch/riscv/kvm/vcpu.c | 96 ++++++++++++++++++++++++------- 2 files changed, 89 insertions(+), 22 deletions(-) diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h index 71992ff1f9dd..d0704eff0121 100644 --- a/arch/riscv/include/uapi/asm/kvm.h +++ b/arch/riscv/include/uapi/asm/kvm.h @@ -64,7 +64,7 @@ struct kvm_riscv_core { #define KVM_RISCV_MODE_S 1 #define KVM_RISCV_MODE_U 0 -/* CSR registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */ +/* General CSR registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */ struct kvm_riscv_csr { unsigned long sstatus; unsigned long sie; @@ -78,6 +78,10 @@ struct kvm_riscv_csr { unsigned long scounteren; }; +/* AIA CSR registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */ +struct kvm_riscv_aia_csr { +}; + /* TIMER registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */ struct kvm_riscv_timer { __u64 frequency; @@ -105,6 +109,7 @@ enum KVM_RISCV_ISA_EXT_ID { KVM_RISCV_ISA_EXT_SVINVAL, KVM_RISCV_ISA_EXT_ZIHINTPAUSE, KVM_RISCV_ISA_EXT_ZICBOM, + KVM_RISCV_ISA_EXT_SSAIA, KVM_RISCV_ISA_EXT_MAX, }; @@ -134,6 +139,8 @@ enum KVM_RISCV_SBI_EXT_ID { /* If you need to interpret the index values, here is the key: */ #define KVM_REG_RISCV_TYPE_MASK 0x00000000FF000000 #define KVM_REG_RISCV_TYPE_SHIFT 24 +#define KVM_REG_RISCV_SUBTYPE_MASK 0x0000000000FF0000 +#define KVM_REG_RISCV_SUBTYPE_SHIFT 16 /* Config registers are mapped as type 1 */ #define KVM_REG_RISCV_CONFIG (0x01 << KVM_REG_RISCV_TYPE_SHIFT) @@ -147,8 +154,12 @@ enum KVM_RISCV_SBI_EXT_ID { /* Control and status registers are mapped as type 3 */ #define KVM_REG_RISCV_CSR (0x03 << KVM_REG_RISCV_TYPE_SHIFT) +#define KVM_REG_RISCV_CSR_GENERAL 0x0 +#define KVM_REG_RISCV_CSR_AIA 0x1 #define KVM_REG_RISCV_CSR_REG(name) \ - (offsetof(struct kvm_riscv_csr, name) / sizeof(unsigned long)) + (offsetof(struct kvm_riscv_csr, name) / sizeof(unsigned long)) +#define KVM_REG_RISCV_CSR_AIA_REG(name) \ + (offsetof(struct kvm_riscv_aia_csr, name) / sizeof(unsigned long)) /* Timer registers are mapped as type 4 */ #define KVM_REG_RISCV_TIMER (0x04 << 
KVM_REG_RISCV_TYPE_SHIFT) diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c index 3cf50eadc8ce..37933ea20274 100644 --- a/arch/riscv/kvm/vcpu.c +++ b/arch/riscv/kvm/vcpu.c @@ -58,6 +58,7 @@ static const unsigned long kvm_isa_ext_arr[] = { [KVM_RISCV_ISA_EXT_I] = RISCV_ISA_EXT_i, [KVM_RISCV_ISA_EXT_M] = RISCV_ISA_EXT_m, + KVM_ISA_EXT_ARR(SSAIA), KVM_ISA_EXT_ARR(SSTC), KVM_ISA_EXT_ARR(SVINVAL), KVM_ISA_EXT_ARR(SVPBMT), @@ -96,6 +97,7 @@ static bool kvm_riscv_vcpu_isa_disable_allowed(unsigned long ext) case KVM_RISCV_ISA_EXT_C: case KVM_RISCV_ISA_EXT_I: case KVM_RISCV_ISA_EXT_M: + case KVM_RISCV_ISA_EXT_SSAIA: case KVM_RISCV_ISA_EXT_SSTC: case KVM_RISCV_ISA_EXT_SVINVAL: case KVM_RISCV_ISA_EXT_ZIHINTPAUSE: @@ -451,30 +453,79 @@ static int kvm_riscv_vcpu_set_reg_core(struct kvm_vcpu *vcpu, return 0; } +static int kvm_riscv_vcpu_general_get_csr(struct kvm_vcpu *vcpu, + unsigned long reg_num, + unsigned long *out_val) +{ + struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr; + + if (reg_num >= sizeof(struct kvm_riscv_csr) / sizeof(unsigned long)) + return -EINVAL; + + if (reg_num == KVM_REG_RISCV_CSR_REG(sip)) { + kvm_riscv_vcpu_flush_interrupts(vcpu); + *out_val = (csr->hvip >> VSIP_TO_HVIP_SHIFT) & VSIP_VALID_MASK; + } else + *out_val = ((unsigned long *)csr)[reg_num]; + + return 0; +} + static int kvm_riscv_vcpu_get_reg_csr(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) { - struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr; + int rc; unsigned long __user *uaddr = (unsigned long __user *)(unsigned long)reg->addr; unsigned long reg_num = reg->id & ~(KVM_REG_ARCH_MASK | KVM_REG_SIZE_MASK | KVM_REG_RISCV_CSR); - unsigned long reg_val; + unsigned long reg_val, reg_subtype; if (KVM_REG_SIZE(reg->id) != sizeof(unsigned long)) return -EINVAL; + + reg_subtype = (reg_num & KVM_REG_RISCV_SUBTYPE_MASK) + >> KVM_REG_RISCV_SUBTYPE_SHIFT; + reg_num &= ~KVM_REG_RISCV_SUBTYPE_MASK; + switch (reg_subtype) { + case KVM_REG_RISCV_CSR_GENERAL: + rc = kvm_riscv_vcpu_general_get_csr(vcpu, reg_num, ®_val); + break; + case KVM_REG_RISCV_CSR_AIA: + rc = kvm_riscv_vcpu_aia_get_csr(vcpu, reg_num, ®_val); + break; + default: + rc = -EINVAL; + break; + } + if (rc) + return rc; + + if (copy_to_user(uaddr, ®_val, KVM_REG_SIZE(reg->id))) + return -EFAULT; + + return 0; +} + +static inline int kvm_riscv_vcpu_general_set_csr(struct kvm_vcpu *vcpu, + unsigned long reg_num, + unsigned long reg_val) +{ + struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr; + if (reg_num >= sizeof(struct kvm_riscv_csr) / sizeof(unsigned long)) return -EINVAL; if (reg_num == KVM_REG_RISCV_CSR_REG(sip)) { - kvm_riscv_vcpu_flush_interrupts(vcpu); - reg_val = (csr->hvip >> VSIP_TO_HVIP_SHIFT) & VSIP_VALID_MASK; - } else - reg_val = ((unsigned long *)csr)[reg_num]; + reg_val &= VSIP_VALID_MASK; + reg_val <<= VSIP_TO_HVIP_SHIFT; + } - if (copy_to_user(uaddr, ®_val, KVM_REG_SIZE(reg->id))) - return -EFAULT; + ((unsigned long *)csr)[reg_num] = reg_val; + + if (reg_num == KVM_REG_RISCV_CSR_REG(sip)) + WRITE_ONCE(vcpu->arch.irqs_pending_mask, 0); return 0; } @@ -482,31 +533,36 @@ static int kvm_riscv_vcpu_get_reg_csr(struct kvm_vcpu *vcpu, static int kvm_riscv_vcpu_set_reg_csr(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) { - struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr; + int rc; unsigned long __user *uaddr = (unsigned long __user *)(unsigned long)reg->addr; unsigned long reg_num = reg->id & ~(KVM_REG_ARCH_MASK | KVM_REG_SIZE_MASK | KVM_REG_RISCV_CSR); - unsigned long reg_val; + unsigned long reg_val, reg_subtype; if 
(KVM_REG_SIZE(reg->id) != sizeof(unsigned long)) return -EINVAL; - if (reg_num >= sizeof(struct kvm_riscv_csr) / sizeof(unsigned long)) - return -EINVAL; if (copy_from_user(®_val, uaddr, KVM_REG_SIZE(reg->id))) return -EFAULT; - if (reg_num == KVM_REG_RISCV_CSR_REG(sip)) { - reg_val &= VSIP_VALID_MASK; - reg_val <<= VSIP_TO_HVIP_SHIFT; + reg_subtype = (reg_num & KVM_REG_RISCV_SUBTYPE_MASK) + >> KVM_REG_RISCV_SUBTYPE_SHIFT; + reg_num &= ~KVM_REG_RISCV_SUBTYPE_MASK; + switch (reg_subtype) { + case KVM_REG_RISCV_CSR_GENERAL: + rc = kvm_riscv_vcpu_general_set_csr(vcpu, reg_num, reg_val); + break; + case KVM_REG_RISCV_CSR_AIA: + rc = kvm_riscv_vcpu_aia_set_csr(vcpu, reg_num, reg_val); + break; + default: + rc = -EINVAL; + break; } - - ((unsigned long *)csr)[reg_num] = reg_val; - - if (reg_num == KVM_REG_RISCV_CSR_REG(sip)) - WRITE_ONCE(vcpu->arch.irqs_pending_mask, 0); + if (rc) + return rc; return 0; } From patchwork Thu Jan 12 14:03:03 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anup Patel X-Patchwork-Id: 13098043 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id EF493C54EBC for ; Thu, 12 Jan 2023 14:05:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234094AbjALOFP (ORCPT ); Thu, 12 Jan 2023 09:05:15 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58688 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234394AbjALOEE (ORCPT ); Thu, 12 Jan 2023 09:04:04 -0500 Received: from mail-pj1-x102d.google.com (mail-pj1-x102d.google.com [IPv6:2607:f8b0:4864:20::102d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 61C9D532AD for ; Thu, 12 Jan 2023 06:04:01 -0800 (PST) Received: by mail-pj1-x102d.google.com with SMTP id b9-20020a17090a7ac900b00226ef160dcaso19221929pjl.2 for ; Thu, 12 Jan 2023 06:04:01 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ventanamicro.com; s=google; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=ZSCwI6Hn7zf+zDHk8xETyaplhkoPhkmk0mIUeWBiGYU=; b=BPjrgLHcwamSELpFm8kNzRgaQPHyGPZ0KIHVrdCcSui67T9slpcn4Y1vIOKtp6baOl qXWMWRScaVWXUbpxNoTg7VY9+V8/aFZf1RZxI97FSbgDAHKK0pQ1nKdPkwCVYW9/j4wP yrHjBLPjcQ1XgXnjJRAYBeqn9VgZfLwwpcr+FYR5neFqoYJJ/KUEv4C6PaXhgiZ4UDoX GaWEjIGWxYdWsdh2V/l4g/7J5X1z5+Fo3JWsRqdvZI6FlrvVLe8Si9rePnIviLxEc29c xR4E33F4Ah1PoEs/QMI2xhjAr+l7POVjmDuisMZ2UoIws5Fu8UWrEspYw1Gyu3kanOzc 2mwg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=ZSCwI6Hn7zf+zDHk8xETyaplhkoPhkmk0mIUeWBiGYU=; b=k9qkG2Ba6oyLD6tbwRkuGHKaVPYd73qcGhKWMwBd+voa4FvFvIn7mmYmH6LugHmP3k USKXX5cbKJRPmBYTNQMhpbCVEuaUjxtOAnRcMURyoTHrn+l0eLHR+x3pTMGG57FxrZ0b UsyG19377POsyPEyOQCzpfhdwTWxBUyjDPBlOkE2xNs3SchuY7SU0CYF8MBF+wG6UbAD M04A7D9xVEljrIH8U3kVmlINLN+7+1udBYwWeqJ8tKfy4wzk6HoGaRkQOrxT2IQNFEqN 9VDCvZfJjbfty8txicWs057A2JKhQe/OV2YMkPQ5Vwd3GMPtXkb5pjMwqo8OjO14PppU ppPg== X-Gm-Message-State: AFqh2kr4CSMlWxwYdNRMNUL9ARoCanVGOePyC5/E2vJ369rhsyUJxwMj oji6UxchkPj8FCIV0S8ZadoOxA== X-Google-Smtp-Source: 
AMrXdXvxaBOHJmlcHxrzszsFQ779da1pF5S+vwcVUkJDKrL5FslhlWsGnesVlDOmLiDFuUmjtyacew== X-Received: by 2002:a17:902:e785:b0:193:3d20:a0fd with SMTP id cp5-20020a170902e78500b001933d20a0fdmr11427678plb.65.1673532240619; Thu, 12 Jan 2023 06:04:00 -0800 (PST) Received: from anup-ubuntu-vm.localdomain ([103.97.165.210]) by smtp.gmail.com with ESMTPSA id d11-20020a170902cecb00b001925016e34bsm12351455plg.79.2023.01.12.06.03.53 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 12 Jan 2023 06:04:00 -0800 (PST) From: Anup Patel To: Paolo Bonzini , Atish Patra Cc: Palmer Dabbelt , Paul Walmsley , Andrew Jones , Anup Patel , kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel Subject: [PATCH 6/7] RISC-V: KVM: Virtualize per-HART AIA CSRs Date: Thu, 12 Jan 2023 19:33:03 +0530 Message-Id: <20230112140304.1830648-7-apatel@ventanamicro.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230112140304.1830648-1-apatel@ventanamicro.com> References: <20230112140304.1830648-1-apatel@ventanamicro.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org The AIA specification introduce per-HART AIA CSRs which primarily support: * 64 local interrupts on both RV64 and RV32 * priority for each of the 64 local interrupts * interrupt filtering for local interrupts This patch virtualize above mentioned AIA CSRs and also extend ONE_REG interface to allow user-space save/restore Guest/VM view of these CSRs. Signed-off-by: Anup Patel --- arch/riscv/include/asm/kvm_aia.h | 88 +++++---- arch/riscv/include/asm/kvm_host.h | 7 +- arch/riscv/include/uapi/asm/kvm.h | 7 + arch/riscv/kvm/aia.c | 317 ++++++++++++++++++++++++++++++ arch/riscv/kvm/vcpu.c | 53 +++-- 5 files changed, 415 insertions(+), 57 deletions(-) diff --git a/arch/riscv/include/asm/kvm_aia.h b/arch/riscv/include/asm/kvm_aia.h index 52b5e8aefb30..1bbac1266f34 100644 --- a/arch/riscv/include/asm/kvm_aia.h +++ b/arch/riscv/include/asm/kvm_aia.h @@ -12,6 +12,7 @@ #include #include +#include struct kvm_aia { /* In-kernel irqchip created */ @@ -21,7 +22,22 @@ struct kvm_aia { bool initialized; }; +struct kvm_vcpu_aia_csr { + unsigned long vsiselect; + unsigned long hviprio1; + unsigned long hviprio2; + unsigned long vsieh; + unsigned long hviph; + unsigned long hviprio1h; + unsigned long hviprio2h; +}; + struct kvm_vcpu_aia { + /* CPU AIA CSR context of Guest VCPU */ + struct kvm_vcpu_aia_csr guest_csr; + + /* CPU AIA CSR context upon Guest VCPU reset */ + struct kvm_vcpu_aia_csr guest_reset_csr; }; #define kvm_riscv_aia_initialized(k) (!!((k)->arch.aia.initialized)) @@ -32,48 +48,50 @@ DECLARE_STATIC_KEY_FALSE(kvm_riscv_aia_available); #define kvm_riscv_aia_available() \ static_branch_unlikely(&kvm_riscv_aia_available) -static inline void kvm_riscv_vcpu_aia_flush_interrupts(struct kvm_vcpu *vcpu) -{ -} - -static inline void kvm_riscv_vcpu_aia_sync_interrupts(struct kvm_vcpu *vcpu) -{ -} - -static inline bool kvm_riscv_vcpu_aia_has_interrupts(struct kvm_vcpu *vcpu, - u64 mask) -{ - return false; -} - -static inline void kvm_riscv_vcpu_aia_update_hvip(struct kvm_vcpu *vcpu) -{ -} - -static inline void kvm_riscv_vcpu_aia_load(struct kvm_vcpu *vcpu, int cpu) -{ -} - -static inline void kvm_riscv_vcpu_aia_put(struct kvm_vcpu *vcpu) +#define KVM_RISCV_AIA_IMSIC_TOPEI (ISELECT_MASK + 1) +static inline int kvm_riscv_vcpu_aia_imsic_rmw(struct kvm_vcpu *vcpu, + unsigned long isel, + unsigned long *val, + unsigned long new_val, + unsigned long wr_mask) { + return 0; } 
-static inline int kvm_riscv_vcpu_aia_get_csr(struct kvm_vcpu *vcpu, - unsigned long reg_num, - unsigned long *out_val) +#ifdef CONFIG_32BIT +void kvm_riscv_vcpu_aia_flush_interrupts(struct kvm_vcpu *vcpu); +void kvm_riscv_vcpu_aia_sync_interrupts(struct kvm_vcpu *vcpu); +#else +static inline void kvm_riscv_vcpu_aia_flush_interrupts(struct kvm_vcpu *vcpu) { - *out_val = 0; - return 0; } - -static inline int kvm_riscv_vcpu_aia_set_csr(struct kvm_vcpu *vcpu, - unsigned long reg_num, - unsigned long val) +static inline void kvm_riscv_vcpu_aia_sync_interrupts(struct kvm_vcpu *vcpu) { - return 0; } - -#define KVM_RISCV_VCPU_AIA_CSR_FUNCS +#endif +bool kvm_riscv_vcpu_aia_has_interrupts(struct kvm_vcpu *vcpu, u64 mask); + +void kvm_riscv_vcpu_aia_update_hvip(struct kvm_vcpu *vcpu); +void kvm_riscv_vcpu_aia_load(struct kvm_vcpu *vcpu, int cpu); +void kvm_riscv_vcpu_aia_put(struct kvm_vcpu *vcpu); +int kvm_riscv_vcpu_aia_get_csr(struct kvm_vcpu *vcpu, + unsigned long reg_num, + unsigned long *out_val); +int kvm_riscv_vcpu_aia_set_csr(struct kvm_vcpu *vcpu, + unsigned long reg_num, + unsigned long val); + +int kvm_riscv_vcpu_aia_rmw_topei(struct kvm_vcpu *vcpu, + unsigned int csr_num, + unsigned long *val, + unsigned long new_val, + unsigned long wr_mask); +int kvm_riscv_vcpu_aia_rmw_ireg(struct kvm_vcpu *vcpu, unsigned int csr_num, + unsigned long *val, unsigned long new_val, + unsigned long wr_mask); +#define KVM_RISCV_VCPU_AIA_CSR_FUNCS \ +{ .base = CSR_SIREG, .count = 1, .func = kvm_riscv_vcpu_aia_rmw_ireg }, \ +{ .base = CSR_STOPEI, .count = 1, .func = kvm_riscv_vcpu_aia_rmw_topei }, static inline int kvm_riscv_vcpu_aia_update(struct kvm_vcpu *vcpu) { diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h index b71f0e639b39..7bae7506b7d4 100644 --- a/arch/riscv/include/asm/kvm_host.h +++ b/arch/riscv/include/asm/kvm_host.h @@ -203,8 +203,9 @@ struct kvm_vcpu_arch { * in irqs_pending. Our approach is modeled around multiple producer * and single consumer problem where the consumer is the VCPU itself. 
*/ - unsigned long irqs_pending; - unsigned long irqs_pending_mask; +#define KVM_RISCV_VCPU_NR_IRQS 64 + DECLARE_BITMAP(irqs_pending, KVM_RISCV_VCPU_NR_IRQS); + DECLARE_BITMAP(irqs_pending_mask, KVM_RISCV_VCPU_NR_IRQS); /* VCPU Timer */ struct kvm_vcpu_timer timer; @@ -331,7 +332,7 @@ int kvm_riscv_vcpu_set_interrupt(struct kvm_vcpu *vcpu, unsigned int irq); int kvm_riscv_vcpu_unset_interrupt(struct kvm_vcpu *vcpu, unsigned int irq); void kvm_riscv_vcpu_flush_interrupts(struct kvm_vcpu *vcpu); void kvm_riscv_vcpu_sync_interrupts(struct kvm_vcpu *vcpu); -bool kvm_riscv_vcpu_has_interrupts(struct kvm_vcpu *vcpu, unsigned long mask); +bool kvm_riscv_vcpu_has_interrupts(struct kvm_vcpu *vcpu, u64 mask); void kvm_riscv_vcpu_power_off(struct kvm_vcpu *vcpu); void kvm_riscv_vcpu_power_on(struct kvm_vcpu *vcpu); diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h index d0704eff0121..8aae65424a43 100644 --- a/arch/riscv/include/uapi/asm/kvm.h +++ b/arch/riscv/include/uapi/asm/kvm.h @@ -80,6 +80,13 @@ struct kvm_riscv_csr { /* AIA CSR registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */ struct kvm_riscv_aia_csr { + unsigned long siselect; + unsigned long siprio1; + unsigned long siprio2; + unsigned long sieh; + unsigned long siph; + unsigned long siprio1h; + unsigned long siprio2h; }; /* TIMER registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */ diff --git a/arch/riscv/kvm/aia.c b/arch/riscv/kvm/aia.c index e7fa4e014d08..28610f427cb7 100644 --- a/arch/riscv/kvm/aia.c +++ b/arch/riscv/kvm/aia.c @@ -26,6 +26,323 @@ static inline void aia_set_hvictl(bool ext_irq_pending) csr_write(CSR_HVICTL, hvictl); } +#ifdef CONFIG_32BIT +void kvm_riscv_vcpu_aia_flush_interrupts(struct kvm_vcpu *vcpu) +{ + struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia.guest_csr; + unsigned long mask, val; + + if (!kvm_riscv_aia_available()) + return; + + if (READ_ONCE(vcpu->arch.irqs_pending_mask[1])) { + mask = xchg_acquire(&vcpu->arch.irqs_pending_mask[1], 0); + val = READ_ONCE(vcpu->arch.irqs_pending[1]) & mask; + + csr->hviph &= ~mask; + csr->hviph |= val; + } +} + +void kvm_riscv_vcpu_aia_sync_interrupts(struct kvm_vcpu *vcpu) +{ + struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia.guest_csr; + + if (kvm_riscv_aia_available()) + csr->vsieh = csr_read(CSR_VSIEH); +} +#endif + +bool kvm_riscv_vcpu_aia_has_interrupts(struct kvm_vcpu *vcpu, u64 mask) +{ + unsigned long seip; + + if (!kvm_riscv_aia_available()) + return false; + +#ifdef CONFIG_32BIT + if (READ_ONCE(vcpu->arch.irqs_pending[1]) & + (vcpu->arch.aia.guest_csr.vsieh & (unsigned long)(mask >> 32))) + return true; +#endif + + seip = vcpu->arch.guest_csr.vsie; + seip &= (unsigned long)mask; + seip &= BIT(IRQ_S_EXT); + if (!kvm_riscv_aia_initialized(vcpu->kvm) || !seip) + return false; + + return false; +} + +void kvm_riscv_vcpu_aia_update_hvip(struct kvm_vcpu *vcpu) +{ + struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr; + + if (!kvm_riscv_aia_available()) + return; + +#ifdef CONFIG_32BIT + csr_write(CSR_HVIPH, vcpu->arch.aia.guest_csr.hviph); +#endif + aia_set_hvictl((csr->hvip & BIT(IRQ_VS_EXT)) ? 
true : false); +} + +void kvm_riscv_vcpu_aia_load(struct kvm_vcpu *vcpu, int cpu) +{ + struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia.guest_csr; + + if (!kvm_riscv_aia_available()) + return; + + csr_write(CSR_VSISELECT, csr->vsiselect); + csr_write(CSR_HVIPRIO1, csr->hviprio1); + csr_write(CSR_HVIPRIO2, csr->hviprio2); +#ifdef CONFIG_32BIT + csr_write(CSR_VSIEH, csr->vsieh); + csr_write(CSR_HVIPH, csr->hviph); + csr_write(CSR_HVIPRIO1H, csr->hviprio1h); + csr_write(CSR_HVIPRIO2H, csr->hviprio2h); +#endif +} + +void kvm_riscv_vcpu_aia_put(struct kvm_vcpu *vcpu) +{ + struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia.guest_csr; + + if (!kvm_riscv_aia_available()) + return; + + csr->vsiselect = csr_read(CSR_VSISELECT); + csr->hviprio1 = csr_read(CSR_HVIPRIO1); + csr->hviprio2 = csr_read(CSR_HVIPRIO2); +#ifdef CONFIG_32BIT + csr->vsieh = csr_read(CSR_VSIEH); + csr->hviph = csr_read(CSR_HVIPH); + csr->hviprio1h = csr_read(CSR_HVIPRIO1H); + csr->hviprio2h = csr_read(CSR_HVIPRIO2H); +#endif +} + +int kvm_riscv_vcpu_aia_get_csr(struct kvm_vcpu *vcpu, + unsigned long reg_num, + unsigned long *out_val) +{ + struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia.guest_csr; + + if (reg_num >= sizeof(struct kvm_riscv_aia_csr) / sizeof(unsigned long)) + return -EINVAL; + + *out_val = 0; + if (kvm_riscv_aia_available()) + *out_val = ((unsigned long *)csr)[reg_num]; + + return 0; +} + +int kvm_riscv_vcpu_aia_set_csr(struct kvm_vcpu *vcpu, + unsigned long reg_num, + unsigned long val) +{ + struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia.guest_csr; + + if (reg_num >= sizeof(struct kvm_riscv_aia_csr) / sizeof(unsigned long)) + return -EINVAL; + + if (kvm_riscv_aia_available()) { + ((unsigned long *)csr)[reg_num] = val; + +#ifdef CONFIG_32BIT + if (reg_num == KVM_REG_RISCV_CSR_AIA_REG(siph)) + WRITE_ONCE(vcpu->arch.irqs_pending_mask[1], 0); +#endif + } + + return 0; +} + +int kvm_riscv_vcpu_aia_rmw_topei(struct kvm_vcpu *vcpu, + unsigned int csr_num, + unsigned long *val, + unsigned long new_val, + unsigned long wr_mask) +{ + /* If AIA not available then redirect trap */ + if (!kvm_riscv_aia_available()) + return KVM_INSN_ILLEGAL_TRAP; + + /* If AIA not initialized then forward to user space */ + if (!kvm_riscv_aia_initialized(vcpu->kvm)) + return KVM_INSN_EXIT_TO_USER_SPACE; + + return kvm_riscv_vcpu_aia_imsic_rmw(vcpu, KVM_RISCV_AIA_IMSIC_TOPEI, + val, new_val, wr_mask); +} + +/* + * External IRQ priority always read-only zero. 
This means default + * priority order is always preferred for external IRQs unless + * HVICTL.IID == 9 and HVICTL.IPRIO != 0 + */ +static int aia_irq2bitpos[] = { +0, 8, -1, -1, 16, 24, -1, -1, /* 0 - 7 */ +32, -1, -1, -1, -1, 40, 48, 56, /* 8 - 15 */ +64, 72, 80, 88, 96, 104, 112, 120, /* 16 - 23 */ +-1, -1, -1, -1, -1, -1, -1, -1, /* 24 - 31 */ +-1, -1, -1, -1, -1, -1, -1, -1, /* 32 - 39 */ +-1, -1, -1, -1, -1, -1, -1, -1, /* 40 - 47 */ +-1, -1, -1, -1, -1, -1, -1, -1, /* 48 - 55 */ +-1, -1, -1, -1, -1, -1, -1, -1, /* 56 - 63 */ +}; + +static u8 aia_get_iprio8(struct kvm_vcpu *vcpu, unsigned int irq) +{ + unsigned long hviprio; + int bitpos = aia_irq2bitpos[irq]; + + if (bitpos < 0) + return 0; + + switch (bitpos / BITS_PER_LONG) { + case 0: + hviprio = csr_read(CSR_HVIPRIO1); + break; + case 1: +#ifndef CONFIG_32BIT + hviprio = csr_read(CSR_HVIPRIO2); + break; +#else + hviprio = csr_read(CSR_HVIPRIO1H); + break; + case 2: + hviprio = csr_read(CSR_HVIPRIO2); + break; + case 3: + hviprio = csr_read(CSR_HVIPRIO2H); + break; +#endif + default: + return 0; + }; + + return (hviprio >> (bitpos % BITS_PER_LONG)) & TOPI_IPRIO_MASK; +} + +static void aia_set_iprio8(struct kvm_vcpu *vcpu, unsigned int irq, u8 prio) +{ + unsigned long hviprio; + int bitpos = aia_irq2bitpos[irq]; + + if (bitpos < 0) + return; + + switch (bitpos / BITS_PER_LONG) { + case 0: + hviprio = csr_read(CSR_HVIPRIO1); + break; + case 1: +#ifndef CONFIG_32BIT + hviprio = csr_read(CSR_HVIPRIO2); + break; +#else + hviprio = csr_read(CSR_HVIPRIO1H); + break; + case 2: + hviprio = csr_read(CSR_HVIPRIO2); + break; + case 3: + hviprio = csr_read(CSR_HVIPRIO2H); + break; +#endif + default: + return; + }; + + hviprio &= ~((unsigned long)TOPI_IPRIO_MASK << + (bitpos % BITS_PER_LONG)); + hviprio |= (unsigned long)prio << (bitpos % BITS_PER_LONG); + + switch (bitpos / BITS_PER_LONG) { + case 0: + csr_write(CSR_HVIPRIO1, hviprio); + break; + case 1: +#ifndef CONFIG_32BIT + csr_write(CSR_HVIPRIO2, hviprio); + break; +#else + csr_write(CSR_HVIPRIO1H, hviprio); + break; + case 2: + csr_write(CSR_HVIPRIO2, hviprio); + break; + case 3: + csr_write(CSR_HVIPRIO2H, hviprio); + break; +#endif + default: + return; + }; +} + +static int aia_rmw_iprio(struct kvm_vcpu *vcpu, unsigned int isel, + unsigned long *val, unsigned long new_val, + unsigned long wr_mask) +{ + int i, firq, nirqs; + unsigned long old_val; + +#ifndef CONFIG_32BIT + if (isel & 0x1) + return KVM_INSN_ILLEGAL_TRAP; +#endif + + nirqs = 4 * (BITS_PER_LONG / 32); + firq = ((isel - ISELECT_IPRIO0) / (BITS_PER_LONG / 32)) * (nirqs); + + old_val = 0; + for (i = 0; i < nirqs; i++) + old_val |= (unsigned long)aia_get_iprio8(vcpu, firq + i) << + (TOPI_IPRIO_BITS * i); + + if (val) + *val = old_val; + + if (wr_mask) { + new_val = (old_val & ~wr_mask) | (new_val & wr_mask); + for (i = 0; i < nirqs; i++) + aia_set_iprio8(vcpu, firq + i, + (new_val >> (TOPI_IPRIO_BITS * i)) & TOPI_IPRIO_MASK); + } + + return KVM_INSN_CONTINUE_NEXT_SEPC; +} + +#define IMSIC_FIRST 0x70 +#define IMSIC_LAST 0xff +int kvm_riscv_vcpu_aia_rmw_ireg(struct kvm_vcpu *vcpu, unsigned int csr_num, + unsigned long *val, unsigned long new_val, + unsigned long wr_mask) +{ + unsigned int isel; + + /* If AIA not available then redirect trap */ + if (!kvm_riscv_aia_available()) + return KVM_INSN_ILLEGAL_TRAP; + + /* First try to emulate in kernel space */ + isel = csr_read(CSR_VSISELECT) & ISELECT_MASK; + if (isel >= ISELECT_IPRIO0 && isel <= ISELECT_IPRIO15) + return aia_rmw_iprio(vcpu, isel, val, new_val, wr_mask); + else if 
(isel >= IMSIC_FIRST && isel <= IMSIC_LAST && + kvm_riscv_aia_initialized(vcpu->kvm)) + return kvm_riscv_vcpu_aia_imsic_rmw(vcpu, isel, val, new_val, + wr_mask); + + /* We can't handle it here so redirect to user space */ + return KVM_INSN_EXIT_TO_USER_SPACE; +} + void kvm_riscv_aia_enable(void) { if (!kvm_riscv_aia_available()) diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c index 37933ea20274..151b35b3b05f 100644 --- a/arch/riscv/kvm/vcpu.c +++ b/arch/riscv/kvm/vcpu.c @@ -139,8 +139,8 @@ static void kvm_riscv_reset_vcpu(struct kvm_vcpu *vcpu) kvm_riscv_vcpu_aia_reset(vcpu); - WRITE_ONCE(vcpu->arch.irqs_pending, 0); - WRITE_ONCE(vcpu->arch.irqs_pending_mask, 0); + bitmap_zero(vcpu->arch.irqs_pending, KVM_RISCV_VCPU_NR_IRQS); + bitmap_zero(vcpu->arch.irqs_pending_mask, KVM_RISCV_VCPU_NR_IRQS); vcpu->arch.hfence_head = 0; vcpu->arch.hfence_tail = 0; @@ -465,6 +465,7 @@ static int kvm_riscv_vcpu_general_get_csr(struct kvm_vcpu *vcpu, if (reg_num == KVM_REG_RISCV_CSR_REG(sip)) { kvm_riscv_vcpu_flush_interrupts(vcpu); *out_val = (csr->hvip >> VSIP_TO_HVIP_SHIFT) & VSIP_VALID_MASK; + *out_val |= csr->hvip & ~IRQ_LOCAL_MASK; } else *out_val = ((unsigned long *)csr)[reg_num]; @@ -525,7 +526,7 @@ static inline int kvm_riscv_vcpu_general_set_csr(struct kvm_vcpu *vcpu, ((unsigned long *)csr)[reg_num] = reg_val; if (reg_num == KVM_REG_RISCV_CSR_REG(sip)) - WRITE_ONCE(vcpu->arch.irqs_pending_mask, 0); + WRITE_ONCE(vcpu->arch.irqs_pending_mask[0], 0); return 0; } @@ -790,9 +791,9 @@ void kvm_riscv_vcpu_flush_interrupts(struct kvm_vcpu *vcpu) struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr; unsigned long mask, val; - if (READ_ONCE(vcpu->arch.irqs_pending_mask)) { - mask = xchg_acquire(&vcpu->arch.irqs_pending_mask, 0); - val = READ_ONCE(vcpu->arch.irqs_pending) & mask; + if (READ_ONCE(vcpu->arch.irqs_pending_mask[0])) { + mask = xchg_acquire(&vcpu->arch.irqs_pending_mask[0], 0); + val = READ_ONCE(vcpu->arch.irqs_pending[0]) & mask; csr->hvip &= ~mask; csr->hvip |= val; @@ -816,12 +817,12 @@ void kvm_riscv_vcpu_sync_interrupts(struct kvm_vcpu *vcpu) if ((csr->hvip ^ hvip) & (1UL << IRQ_VS_SOFT)) { if (hvip & (1UL << IRQ_VS_SOFT)) { if (!test_and_set_bit(IRQ_VS_SOFT, - &v->irqs_pending_mask)) - set_bit(IRQ_VS_SOFT, &v->irqs_pending); + v->irqs_pending_mask)) + set_bit(IRQ_VS_SOFT, v->irqs_pending); } else { if (!test_and_set_bit(IRQ_VS_SOFT, - &v->irqs_pending_mask)) - clear_bit(IRQ_VS_SOFT, &v->irqs_pending); + v->irqs_pending_mask)) + clear_bit(IRQ_VS_SOFT, v->irqs_pending); } } @@ -834,14 +835,20 @@ void kvm_riscv_vcpu_sync_interrupts(struct kvm_vcpu *vcpu) int kvm_riscv_vcpu_set_interrupt(struct kvm_vcpu *vcpu, unsigned int irq) { - if (irq != IRQ_VS_SOFT && + /* + * We only allow VS-mode software, timer, and external + * interrupts when irq is one of the local interrupts + * defined by RISC-V privilege specification. 
+ */ + if (irq < IRQ_LOCAL_MAX && + irq != IRQ_VS_SOFT && irq != IRQ_VS_TIMER && irq != IRQ_VS_EXT) return -EINVAL; - set_bit(irq, &vcpu->arch.irqs_pending); + set_bit(irq, vcpu->arch.irqs_pending); smp_mb__before_atomic(); - set_bit(irq, &vcpu->arch.irqs_pending_mask); + set_bit(irq, vcpu->arch.irqs_pending_mask); kvm_vcpu_kick(vcpu); @@ -850,25 +857,33 @@ int kvm_riscv_vcpu_set_interrupt(struct kvm_vcpu *vcpu, unsigned int irq) int kvm_riscv_vcpu_unset_interrupt(struct kvm_vcpu *vcpu, unsigned int irq) { - if (irq != IRQ_VS_SOFT && + /* + * We only allow VS-mode software, timer, and external + * interrupts when irq is one of the local interrupts + * defined by RISC-V privilege specification. + */ + if (irq < IRQ_LOCAL_MAX && + irq != IRQ_VS_SOFT && irq != IRQ_VS_TIMER && irq != IRQ_VS_EXT) return -EINVAL; - clear_bit(irq, &vcpu->arch.irqs_pending); + clear_bit(irq, vcpu->arch.irqs_pending); smp_mb__before_atomic(); - set_bit(irq, &vcpu->arch.irqs_pending_mask); + set_bit(irq, vcpu->arch.irqs_pending_mask); return 0; } -bool kvm_riscv_vcpu_has_interrupts(struct kvm_vcpu *vcpu, unsigned long mask) +bool kvm_riscv_vcpu_has_interrupts(struct kvm_vcpu *vcpu, u64 mask) { unsigned long ie; ie = ((vcpu->arch.guest_csr.vsie & VSIP_VALID_MASK) - << VSIP_TO_HVIP_SHIFT) & mask; - if (READ_ONCE(vcpu->arch.irqs_pending) & ie) + << VSIP_TO_HVIP_SHIFT) & (unsigned long)mask; + ie |= vcpu->arch.guest_csr.vsie & ~IRQ_LOCAL_MASK & + (unsigned long)mask; + if (READ_ONCE(vcpu->arch.irqs_pending[0]) & ie) return true; /* Check AIA high interrupts */
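
To make the hviprio layout used by aia_get_iprio8()/aia_set_iprio8() above easier to follow, here is a small standalone C sketch (not part of the patch; the register pair is modelled as two plain 64-bit words and all names are illustrative) showing how each local interrupt with a configurable priority owns one 8-bit slot at the bit position given by aia_irq2bitpos[]:

/*
 * Standalone illustration of the byte packing performed by
 * aia_get_iprio8()/aia_set_iprio8(): each prioritizable local interrupt
 * owns one 8-bit slot at a fixed bit position inside the hviprio words.
 */
#include <stdint.h>
#include <stdio.h>

#define IPRIO_BITS	8
#define IPRIO_MASK	0xffUL

/* Same IRQ -> bit position mapping as aia_irq2bitpos[] in the patch */
static const int irq2bitpos[] = {
	 0,  8, -1, -1, 16, 24, -1, -1,		/* IRQ  0 -  7 */
	32, -1, -1, -1, -1, 40, 48, 56,		/* IRQ  8 - 15 */
	64, 72, 80, 88, 96, 104, 112, 120,	/* IRQ 16 - 23 */
};

/* Two 64-bit words standing in for hviprio1/hviprio2 on RV64 */
static uint64_t hviprio[2];

static uint8_t get_iprio8(unsigned int irq)
{
	int bitpos;

	if (irq >= sizeof(irq2bitpos) / sizeof(irq2bitpos[0]))
		return 0;
	bitpos = irq2bitpos[irq];
	if (bitpos < 0)
		return 0;
	return (hviprio[bitpos / 64] >> (bitpos % 64)) & IPRIO_MASK;
}

static void set_iprio8(unsigned int irq, uint8_t prio)
{
	int bitpos;

	if (irq >= sizeof(irq2bitpos) / sizeof(irq2bitpos[0]))
		return;
	bitpos = irq2bitpos[irq];
	if (bitpos < 0)
		return;
	hviprio[bitpos / 64] &= ~(IPRIO_MASK << (bitpos % 64));
	hviprio[bitpos / 64] |= (uint64_t)prio << (bitpos % 64);
}

int main(void)
{
	set_iprio8(5, 0x20);	/* supervisor timer interrupt slot */
	set_iprio8(13, 0x10);	/* PMU overflow interrupt slot */
	printf("hviprio1=%#llx prio(IRQ5)=%#x\n",
	       (unsigned long long)hviprio[0], get_iprio8(5));
	return 0;
}

On RV32 the same 128 bits are simply spread across HVIPRIO1, HVIPRIO1H, HVIPRIO2, and HVIPRIO2H, which is why the accessors in the patch switch on bitpos / BITS_PER_LONG.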
From patchwork Thu Jan 12 14:03:04 2023
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 13098044
From: Anup Patel
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Andrew Jones, Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel
Subject: [PATCH 7/7] RISC-V: KVM: Implement guest external interrupt line management
Date: Thu, 12 Jan 2023 19:33:04 +0530
Message-Id: <20230112140304.1830648-8-apatel@ventanamicro.com>
In-Reply-To: <20230112140304.1830648-1-apatel@ventanamicro.com>
References: <20230112140304.1830648-1-apatel@ventanamicro.com>
X-Mailing-List: kvm@vger.kernel.org

The RISC-V host will have one guest external interrupt line for each VS-level IMSIC associated with a HART. The guest external interrupt lines are per-HART resources, and the hypervisor can use the HGEIE, HGEIP, and HIE CSRs to manage them.
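
To make the line-management model concrete, below is a simplified, userspace-only sketch (an illustration, not part of this patch; the GEILEN value and names are assumptions, and the per-CPU handling and raw spinlock of the real code are omitted) of the allocation scheme the patch implements with aia_hgei_control: each HART keeps a free bitmap plus an owner table for its guest external interrupt lines, and line 0 is never handed out because bit 0 of HGEIE/HGEIP is always zero.

/*
 * Simplified model of per-HART guest external interrupt line allocation:
 * a free bitmap plus an owner table, with line 0 reserved.
 */
#include <stdio.h>

#define NR_HGEI 7	/* pretend GEILEN == 7 on this hart */

static unsigned long free_bitmap = ((1UL << (NR_HGEI + 1)) - 1) & ~1UL;
static const void *owners[NR_HGEI + 1];

static int alloc_hgei(const void *owner)
{
	int line;

	if (!free_bitmap)
		return -1;			/* all lines in use */
	line = __builtin_ctzl(free_bitmap);	/* lowest free line */
	free_bitmap &= ~(1UL << line);
	owners[line] = owner;
	return line;
}

static void free_hgei(int line)
{
	if (line <= 0 || line > NR_HGEI)
		return;
	free_bitmap |= 1UL << line;
	owners[line] = NULL;
}

int main(void)
{
	int a = alloc_hgei("vcpu0"), b = alloc_hgei("vcpu1");

	printf("vcpu0 -> line %d, vcpu1 -> line %d\n", a, b);
	free_hgei(a);
	printf("next alloc -> line %d\n", alloc_hgei("vcpu2"));
	return 0;
}

The kvm_riscv_aia_alloc_hgei()/kvm_riscv_aia_free_hgei() helpers added below follow the same pattern, but per CPU and under a raw spinlock.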
Signed-off-by: Anup Patel --- arch/riscv/include/asm/kvm_aia.h | 10 ++ arch/riscv/kvm/aia.c | 241 +++++++++++++++++++++++++++++++ arch/riscv/kvm/main.c | 3 +- arch/riscv/kvm/vcpu.c | 2 + 4 files changed, 255 insertions(+), 1 deletion(-) diff --git a/arch/riscv/include/asm/kvm_aia.h b/arch/riscv/include/asm/kvm_aia.h index 1bbac1266f34..2ed2a0c895b1 100644 --- a/arch/riscv/include/asm/kvm_aia.h +++ b/arch/riscv/include/asm/kvm_aia.h @@ -44,10 +44,15 @@ struct kvm_vcpu_aia { #define irqchip_in_kernel(k) (!!((k)->arch.aia.in_kernel)) +extern unsigned int kvm_riscv_aia_nr_hgei; DECLARE_STATIC_KEY_FALSE(kvm_riscv_aia_available); #define kvm_riscv_aia_available() \ static_branch_unlikely(&kvm_riscv_aia_available) +static inline void kvm_riscv_vcpu_aia_imsic_release(struct kvm_vcpu *vcpu) +{ +} + #define KVM_RISCV_AIA_IMSIC_TOPEI (ISELECT_MASK + 1) static inline int kvm_riscv_vcpu_aia_imsic_rmw(struct kvm_vcpu *vcpu, unsigned long isel, @@ -119,6 +124,11 @@ static inline void kvm_riscv_aia_destroy_vm(struct kvm *kvm) { } +int kvm_riscv_aia_alloc_hgei(int cpu, struct kvm_vcpu *owner, + void __iomem **hgei_va, phys_addr_t *hgei_pa); +void kvm_riscv_aia_free_hgei(int cpu, int hgei); +void kvm_riscv_aia_wakeon_hgei(struct kvm_vcpu *owner, bool enable); + void kvm_riscv_aia_enable(void); void kvm_riscv_aia_disable(void); int kvm_riscv_aia_init(void); diff --git a/arch/riscv/kvm/aia.c b/arch/riscv/kvm/aia.c index 28610f427cb7..97e23df31ee7 100644 --- a/arch/riscv/kvm/aia.c +++ b/arch/riscv/kvm/aia.c @@ -7,11 +7,46 @@ * Anup Patel */ +#include +#include +#include #include +#include +#include #include +struct aia_hgei_control { + raw_spinlock_t lock; + unsigned long free_bitmap; + struct kvm_vcpu *owners[BITS_PER_LONG]; +}; +static DEFINE_PER_CPU(struct aia_hgei_control, aia_hgei); +static int hgei_parent_irq; + +unsigned int kvm_riscv_aia_nr_hgei; DEFINE_STATIC_KEY_FALSE(kvm_riscv_aia_available); +static int aia_find_hgei(struct kvm_vcpu *owner) +{ + int i, hgei; + unsigned long flags; + struct aia_hgei_control *hgctrl = this_cpu_ptr(&aia_hgei); + + raw_spin_lock_irqsave(&hgctrl->lock, flags); + + hgei = -1; + for (i = 1; i <= kvm_riscv_aia_nr_hgei; i++) { + if (hgctrl->owners[i] == owner) { + hgei = i; + break; + } + } + + raw_spin_unlock_irqrestore(&hgctrl->lock, flags); + + return hgei; +} + static inline void aia_set_hvictl(bool ext_irq_pending) { unsigned long hvictl; @@ -55,6 +90,7 @@ void kvm_riscv_vcpu_aia_sync_interrupts(struct kvm_vcpu *vcpu) bool kvm_riscv_vcpu_aia_has_interrupts(struct kvm_vcpu *vcpu, u64 mask) { + int hgei; unsigned long seip; if (!kvm_riscv_aia_available()) @@ -72,6 +108,10 @@ bool kvm_riscv_vcpu_aia_has_interrupts(struct kvm_vcpu *vcpu, u64 mask) if (!kvm_riscv_aia_initialized(vcpu->kvm) || !seip) return false; + hgei = aia_find_hgei(vcpu); + if (hgei > 0) + return (csr_read(CSR_HGEIP) & BIT(hgei)) ? 
true : false; + return false; } @@ -343,6 +383,144 @@ int kvm_riscv_vcpu_aia_rmw_ireg(struct kvm_vcpu *vcpu, unsigned int csr_num, return KVM_INSN_EXIT_TO_USER_SPACE; } +int kvm_riscv_aia_alloc_hgei(int cpu, struct kvm_vcpu *owner, + void __iomem **hgei_va, phys_addr_t *hgei_pa) +{ + int ret = -ENOENT; + unsigned long flags; + struct aia_hgei_control *hgctrl = per_cpu_ptr(&aia_hgei, cpu); + + if (!kvm_riscv_aia_available()) + return -ENODEV; + if (!hgctrl) + return -ENODEV; + + raw_spin_lock_irqsave(&hgctrl->lock, flags); + + if (hgctrl->free_bitmap) { + ret = __ffs(hgctrl->free_bitmap); + hgctrl->free_bitmap &= ~BIT(ret); + hgctrl->owners[ret] = owner; + } + + raw_spin_unlock_irqrestore(&hgctrl->lock, flags); + + /* TODO: To be updated later by AIA in-kernel irqchip support */ + if (hgei_va) + *hgei_va = NULL; + if (hgei_pa) + *hgei_pa = 0; + + return ret; +} + +void kvm_riscv_aia_free_hgei(int cpu, int hgei) +{ + unsigned long flags; + struct aia_hgei_control *hgctrl = per_cpu_ptr(&aia_hgei, cpu); + + if (!kvm_riscv_aia_available() || !hgctrl) + return; + + raw_spin_lock_irqsave(&hgctrl->lock, flags); + + if (hgei > 0 && hgei <= kvm_riscv_aia_nr_hgei) { + if (!(hgctrl->free_bitmap & BIT(hgei))) { + hgctrl->free_bitmap |= BIT(hgei); + hgctrl->owners[hgei] = NULL; + } + } + + raw_spin_unlock_irqrestore(&hgctrl->lock, flags); +} + +void kvm_riscv_aia_wakeon_hgei(struct kvm_vcpu *owner, bool enable) +{ + int hgei; + + if (!kvm_riscv_aia_available()) + return; + + hgei = aia_find_hgei(owner); + if (hgei > 0) { + if (enable) + csr_set(CSR_HGEIE, BIT(hgei)); + else + csr_clear(CSR_HGEIE, BIT(hgei)); + } +} + +static irqreturn_t hgei_interrupt(int irq, void *dev_id) +{ + int i; + unsigned long hgei_mask, flags; + struct aia_hgei_control *hgctrl = this_cpu_ptr(&aia_hgei); + + hgei_mask = csr_read(CSR_HGEIP) & csr_read(CSR_HGEIE); + csr_clear(CSR_HGEIE, hgei_mask); + + raw_spin_lock_irqsave(&hgctrl->lock, flags); + + for_each_set_bit(i, &hgei_mask, BITS_PER_LONG) { + if (hgctrl->owners[i]) + kvm_vcpu_kick(hgctrl->owners[i]); + } + + raw_spin_unlock_irqrestore(&hgctrl->lock, flags); + + return IRQ_HANDLED; +} + +static int aia_hgei_init(void) +{ + int cpu, rc; + struct irq_domain *domain; + struct aia_hgei_control *hgctrl; + + /* Initialize per-CPU guest external interrupt line management */ + for_each_possible_cpu(cpu) { + hgctrl = per_cpu_ptr(&aia_hgei, cpu); + raw_spin_lock_init(&hgctrl->lock); + if (kvm_riscv_aia_nr_hgei) { + hgctrl->free_bitmap = + BIT(kvm_riscv_aia_nr_hgei + 1) - 1; + hgctrl->free_bitmap &= ~BIT(0); + } else + hgctrl->free_bitmap = 0; + } + + /* Find INTC irq domain */ + domain = irq_find_matching_fwnode(riscv_get_intc_hwnode(), + DOMAIN_BUS_ANY); + if (!domain) { + kvm_err("unable to find INTC domain\n"); + return -ENOENT; + } + + /* Map per-CPU SGEI interrupt from INTC domain */ + hgei_parent_irq = irq_create_mapping(domain, IRQ_S_GEXT); + if (!hgei_parent_irq) { + kvm_err("unable to map SGEI IRQ\n"); + return -ENOMEM; + } + + /* Request per-CPU SGEI interrupt */ + rc = request_percpu_irq(hgei_parent_irq, hgei_interrupt, + "riscv-kvm", &aia_hgei); + if (rc) { + kvm_err("failed to request SGEI IRQ\n"); + return rc; + } + + return 0; +} + +static void aia_hgei_exit(void) +{ + /* Free per-CPU SGEI interrupt */ + free_percpu_irq(hgei_parent_irq, &aia_hgei); +} + void kvm_riscv_aia_enable(void) { if (!kvm_riscv_aia_available()) @@ -357,21 +535,79 @@ void kvm_riscv_aia_enable(void) csr_write(CSR_HVIPRIO1H, 0x0); csr_write(CSR_HVIPRIO2H, 0x0); #endif + + /* Enable per-CPU SGEI 
interrupt */ + enable_percpu_irq(hgei_parent_irq, + irq_get_trigger_type(hgei_parent_irq)); + csr_set(CSR_HIE, BIT(IRQ_S_GEXT)); } void kvm_riscv_aia_disable(void) { + int i; + unsigned long flags; + struct kvm_vcpu *vcpu; + struct aia_hgei_control *hgctrl = this_cpu_ptr(&aia_hgei); + if (!kvm_riscv_aia_available()) return; + /* Disable per-CPU SGEI interrupt */ + csr_clear(CSR_HIE, BIT(IRQ_S_GEXT)); + disable_percpu_irq(hgei_parent_irq); + aia_set_hvictl(false); + + raw_spin_lock_irqsave(&hgctrl->lock, flags); + + for (i = 0; i <= kvm_riscv_aia_nr_hgei; i++) { + vcpu = hgctrl->owners[i]; + if (!vcpu) + continue; + + /* + * We release hgctrl->lock before notifying IMSIC + * so that we don't have lock ordering issues. + */ + raw_spin_unlock_irqrestore(&hgctrl->lock, flags); + + /* Notify IMSIC */ + kvm_riscv_vcpu_aia_imsic_release(vcpu); + + /* + * Wakeup VCPU if it was blocked so that it can + * run on other HARTs + */ + if (csr_read(CSR_HGEIE) & BIT(i)) { + csr_clear(CSR_HGEIE, BIT(i)); + kvm_vcpu_kick(vcpu); + } + + raw_spin_lock_irqsave(&hgctrl->lock, flags); + } + + raw_spin_unlock_irqrestore(&hgctrl->lock, flags); } int kvm_riscv_aia_init(void) { + int rc; + if (!riscv_isa_extension_available(NULL, SxAIA)) return -ENODEV; + /* Figure-out number of bits in HGEIE */ + csr_write(CSR_HGEIE, -1UL); + kvm_riscv_aia_nr_hgei = fls_long(csr_read(CSR_HGEIE)); + csr_write(CSR_HGEIE, 0); + if (kvm_riscv_aia_nr_hgei) + kvm_riscv_aia_nr_hgei--; + + /* Initialize guest external interrupt line management */ + rc = aia_hgei_init(); + if (rc) + return rc; + /* Enable KVM AIA support */ static_branch_enable(&kvm_riscv_aia_available); @@ -380,4 +616,9 @@ int kvm_riscv_aia_init(void) void kvm_riscv_aia_exit(void) { + if (!kvm_riscv_aia_available()) + return; + + /* Cleanup the HGEI state */ + aia_hgei_exit(); } diff --git a/arch/riscv/kvm/main.c b/arch/riscv/kvm/main.c index d8ff44eb04ca..5cf37dbe8a38 100644 --- a/arch/riscv/kvm/main.c +++ b/arch/riscv/kvm/main.c @@ -125,7 +125,8 @@ int kvm_arch_init(void *opaque) kvm_info("VMID %ld bits available\n", kvm_riscv_gstage_vmid_bits()); if (kvm_riscv_aia_available()) - kvm_info("AIA available\n"); + kvm_info("AIA available with %d guest external interrupts\n", + kvm_riscv_aia_nr_hgei); return 0; } diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c index 151b35b3b05f..1daa1936b642 100644 --- a/arch/riscv/kvm/vcpu.c +++ b/arch/riscv/kvm/vcpu.c @@ -240,10 +240,12 @@ int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu) void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) { + kvm_riscv_aia_wakeon_hgei(vcpu, true); } void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) { + kvm_riscv_aia_wakeon_hgei(vcpu, false); } int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
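
As a closing illustration, here is a hedged userspace model (not kernel code; the WARL write behaviour is simulated and the GEILEN value is an assumption) of how kvm_riscv_aia_init() above discovers the number of usable guest external interrupt lines: write all-ones to HGEIE, read back which bits were accepted, and drop the hardwired bit 0.

/*
 * Model of the GEILEN probe in kvm_riscv_aia_init(): HGEIE is a WARL
 * register, so writing all-ones and reading back reveals how many guest
 * external interrupt lines are implemented.
 */
#include <stdio.h>

#define GEILEN	7	/* pretend this hart implements 7 lines */

static unsigned long hgeie;

static void hgeie_write(unsigned long val)
{
	/* WARL behaviour: only bits 1..GEILEN are writable */
	hgeie = val & (((1UL << GEILEN) - 1) << 1);
}

static unsigned int fls_long(unsigned long x)
{
	/* position of the most significant set bit, 1-indexed; 0 if none */
	return x ? (unsigned int)(8 * sizeof(long) - __builtin_clzl(x)) : 0;
}

int main(void)
{
	unsigned int nr_hgei;

	hgeie_write(-1UL);		/* try to set every bit */
	nr_hgei = fls_long(hgeie);	/* highest implemented line + 1 */
	hgeie_write(0);
	if (nr_hgei)
		nr_hgei--;		/* discount the hardwired bit 0 */

	printf("guest external interrupt lines: %u\n", nr_hgei);
	return 0;
}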