From patchwork Tue Jan 3 14:14:01 2023
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 13087750
X-Patchwork-Delegate: palmer@dabbelt.com
From: Anup Patel
To: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Marc Zyngier,
    Rob Herring, Krzysztof Kozlowski
Cc: Atish Patra, Alistair Francis, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org, devicetree@vger.kernel.org
Subject: [PATCH v2 1/9] RISC-V: Add AIA related CSR defines
Date: Tue, 3 Jan 2023 19:44:01 +0530
Message-Id: <20230103141409.772298-2-apatel@ventanamicro.com>
In-Reply-To: <20230103141409.772298-1-apatel@ventanamicro.com>

The RISC-V AIA specification improves the handling of per-HART local
interrupts in a backward compatible manner. This patch adds defines for
the new RISC-V AIA CSRs.

Signed-off-by: Anup Patel
Reviewed-by: Conor Dooley
---
 arch/riscv/include/asm/csr.h | 92 ++++++++++++++++++++++++++++++++++++
 1 file changed, 92 insertions(+)

diff --git a/arch/riscv/include/asm/csr.h b/arch/riscv/include/asm/csr.h
index 0e571f6483d9..4e1356bad7b2 100644
--- a/arch/riscv/include/asm/csr.h
+++ b/arch/riscv/include/asm/csr.h
@@ -73,7 +73,10 @@
 #define IRQ_S_EXT 9
 #define IRQ_VS_EXT 10
 #define IRQ_M_EXT 11
+#define IRQ_S_GEXT 12
 #define IRQ_PMU_OVF 13
+#define IRQ_LOCAL_MAX (IRQ_PMU_OVF + 1)
+#define IRQ_LOCAL_MASK ((_AC(1, UL) << IRQ_LOCAL_MAX) - 1)
 
 /* Exception causes */
 #define EXC_INST_MISALIGNED 0
@@ -156,6 +159,26 @@
 			 (_AC(1, UL) << IRQ_S_TIMER) | \
 			 (_AC(1, UL) << IRQ_S_EXT))
 
+/* AIA CSR bits */
+#define TOPI_IID_SHIFT 16
+#define TOPI_IID_MASK 0xfff
+#define TOPI_IPRIO_MASK 0xff
+#define TOPI_IPRIO_BITS 8
+
+#define TOPEI_ID_SHIFT 16
+#define TOPEI_ID_MASK 0x7ff
+#define TOPEI_PRIO_MASK 0x7ff
+
+#define ISELECT_IPRIO0 0x30
+#define ISELECT_IPRIO15 0x3f
+#define ISELECT_MASK 0x1ff
+
+#define HVICTL_VTI 0x40000000
+#define HVICTL_IID 0x0fff0000
+#define HVICTL_IID_SHIFT 16
+#define HVICTL_IPRIOM 0x00000100
+#define HVICTL_IPRIO 0x000000ff
+
 /* xENVCFG flags */
 #define ENVCFG_STCE (_AC(1, ULL) << 63)
 #define ENVCFG_PBMTE (_AC(1, ULL) << 62)
@@ -250,6 +273,18 @@
 #define CSR_STIMECMP 0x14D
 #define CSR_STIMECMPH 0x15D
 
+/* Supervisor-Level Window to Indirectly Accessed Registers (AIA) */
+#define CSR_SISELECT 0x150
+#define CSR_SIREG 0x151
+
+/* Supervisor-Level Interrupts (AIA) */
+#define CSR_STOPEI 0x15c
+#define CSR_STOPI 0xdb0
+
+/* Supervisor-Level High-Half CSRs (AIA) */
+#define CSR_SIEH 0x114
+#define CSR_SIPH 0x154
+
 #define CSR_VSSTATUS 0x200
 #define CSR_VSIE 0x204
 #define CSR_VSTVEC 0x205
@@ -279,8 +314,32 @@
 #define CSR_HGATP 0x680
 #define CSR_HGEIP 0xe12
 
+/* Virtual Interrupts and Interrupt Priorities (H-extension with AIA) */
+#define CSR_HVIEN 0x608
+#define CSR_HVICTL 0x609
+#define CSR_HVIPRIO1 0x646
+#define CSR_HVIPRIO2 0x647
+
+/* VS-Level Window to Indirectly Accessed Registers (H-extension with AIA) */
+#define CSR_VSISELECT 0x250
+#define CSR_VSIREG 0x251
+
+/* VS-Level Interrupts (H-extension with AIA) */
+#define CSR_VSTOPEI 0x25c
+#define CSR_VSTOPI 0xeb0
+
+/* Hypervisor and VS-Level High-Half CSRs (H-extension with AIA) */
+#define CSR_HIDELEGH 0x613
+#define CSR_HVIENH 0x618
+#define CSR_HVIPH 0x655
+#define CSR_HVIPRIO1H 0x656
+#define CSR_HVIPRIO2H 0x657
+#define CSR_VSIEH 0x214
+#define CSR_VSIPH 0x254
+
 #define CSR_MSTATUS 0x300
 #define CSR_MISA 0x301
+#define CSR_MIDELEG 0x303
 #define CSR_MIE 0x304
 #define CSR_MTVEC 0x305
 #define CSR_MENVCFG 0x30a
@@ -297,6 +356,25 @@
 #define CSR_MIMPID 0xf13
 #define CSR_MHARTID 0xf14
 
+/* Machine-Level Window to Indirectly Accessed Registers (AIA) */
+#define CSR_MISELECT 0x350
+#define CSR_MIREG 0x351
+
+/* Machine-Level Interrupts (AIA) */
+#define CSR_MTOPEI 0x35c
+#define CSR_MTOPI 0xfb0
+
+/* Virtual Interrupts for Supervisor Level (AIA) */
+#define CSR_MVIEN 0x308
+#define CSR_MVIP 0x309
+
+/* Machine-Level High-Half CSRs (AIA) */
+#define CSR_MIDELEGH 0x313
+#define CSR_MIEH 0x314
+#define CSR_MVIENH 0x318
+#define CSR_MVIPH 0x319
+#define CSR_MIPH 0x354
+
 #ifdef CONFIG_RISCV_M_MODE
 # define CSR_STATUS CSR_MSTATUS
 # define CSR_IE CSR_MIE
@@ -307,6 +385,13 @@
 # define CSR_TVAL CSR_MTVAL
 # define CSR_IP CSR_MIP
 
+# define CSR_IEH CSR_MIEH
+# define CSR_ISELECT CSR_MISELECT
+# define CSR_IREG CSR_MIREG
+# define CSR_IPH CSR_MIPH
+# define CSR_TOPEI CSR_MTOPEI
+# define CSR_TOPI CSR_MTOPI
+
 # define SR_IE SR_MIE
 # define SR_PIE SR_MPIE
 # define SR_PP SR_MPP
@@ -324,6 +409,13 @@
 # define CSR_TVAL CSR_STVAL
 # define CSR_IP CSR_SIP
 
+# define CSR_IEH CSR_SIEH
+# define CSR_ISELECT CSR_SISELECT
+# define CSR_IREG CSR_SIREG
+# define CSR_IPH CSR_SIPH
+# define CSR_TOPEI CSR_STOPEI
+# define CSR_TOPI CSR_STOPI
+
 # define SR_IE SR_SIE
 # define SR_PIE SR_SPIE
 # define SR_PP SR_SPP
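As an aside (not part of the patch), the new *ISELECT/*IREG pairs above form a
window to the indirectly accessed AIA registers: software selects a register
number in siselect and then reads or writes it through sireg, which is how the
IMSIC driver later in this series uses them. A minimal illustrative sketch:

	/*
	 * Illustrative only: indirect AIA register read through the
	 * ISELECT/IREG window defined above (CSR_ISELECT/CSR_IREG map to the
	 * S-mode or M-mode CSRs depending on the kernel configuration).
	 */
	static inline unsigned long aia_indirect_read(unsigned long isel)
	{
		csr_write(CSR_ISELECT, isel);
		return csr_read(CSR_IREG);
	}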
From patchwork Tue Jan 3 14:14:02 2023
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 13087751
X-Patchwork-Delegate: palmer@dabbelt.com
From: Anup Patel
To: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Marc Zyngier,
    Rob Herring, Krzysztof Kozlowski
Cc: Atish Patra, Alistair Francis, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org, devicetree@vger.kernel.org
Subject: [PATCH v2 2/9] RISC-V: Detect AIA CSRs from ISA string
Date: Tue, 3 Jan 2023 19:44:02 +0530
Message-Id: <20230103141409.772298-3-apatel@ventanamicro.com>
In-Reply-To: <20230103141409.772298-1-apatel@ventanamicro.com>
We have two extension names for AIA ISA support: Smaia (M-mode AIA CSRs)
and Ssaia (S-mode AIA CSRs). We extend the ISA string parsing to detect
the Smaia and Ssaia extensions.

Signed-off-by: Anup Patel
---
 arch/riscv/include/asm/hwcap.h | 8 ++++++++
 arch/riscv/kernel/cpu.c        | 2 ++
 arch/riscv/kernel/cpufeature.c | 2 ++
 3 files changed, 12 insertions(+)

diff --git a/arch/riscv/include/asm/hwcap.h b/arch/riscv/include/asm/hwcap.h
index 86328e3acb02..c649e85ed7bb 100644
--- a/arch/riscv/include/asm/hwcap.h
+++ b/arch/riscv/include/asm/hwcap.h
@@ -59,10 +59,18 @@ enum riscv_isa_ext_id {
 	RISCV_ISA_EXT_ZIHINTPAUSE,
 	RISCV_ISA_EXT_SSTC,
 	RISCV_ISA_EXT_SVINVAL,
+	RISCV_ISA_EXT_SSAIA,
+	RISCV_ISA_EXT_SMAIA,
 	RISCV_ISA_EXT_ID_MAX
 };
 static_assert(RISCV_ISA_EXT_ID_MAX <= RISCV_ISA_EXT_MAX);
 
+#ifdef CONFIG_RISCV_M_MODE
+#define RISCV_ISA_EXT_SxAIA RISCV_ISA_EXT_SMAIA
+#else
+#define RISCV_ISA_EXT_SxAIA RISCV_ISA_EXT_SSAIA
+#endif
+
 /*
  * This enum represents the logical ID for each RISC-V ISA extension static
  * keys. We can use static key to optimize code path if some ISA extensions
diff --git a/arch/riscv/kernel/cpu.c b/arch/riscv/kernel/cpu.c
index 1b9a5a66e55a..a215ec929160 100644
--- a/arch/riscv/kernel/cpu.c
+++ b/arch/riscv/kernel/cpu.c
@@ -162,6 +162,8 @@ arch_initcall(riscv_cpuinfo_init);
  * extensions by an underscore.
  */
 static struct riscv_isa_ext_data isa_ext_arr[] = {
+	__RISCV_ISA_EXT_DATA(smaia, RISCV_ISA_EXT_SMAIA),
+	__RISCV_ISA_EXT_DATA(ssaia, RISCV_ISA_EXT_SSAIA),
 	__RISCV_ISA_EXT_DATA(sscofpmf, RISCV_ISA_EXT_SSCOFPMF),
 	__RISCV_ISA_EXT_DATA(sstc, RISCV_ISA_EXT_SSTC),
 	__RISCV_ISA_EXT_DATA(svinval, RISCV_ISA_EXT_SVINVAL),
diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
index 93e45560af30..3c5b51f519d5 100644
--- a/arch/riscv/kernel/cpufeature.c
+++ b/arch/riscv/kernel/cpufeature.c
@@ -228,6 +228,8 @@ void __init riscv_fill_hwcap(void)
 		SET_ISA_EXT_MAP("zihintpause", RISCV_ISA_EXT_ZIHINTPAUSE);
 		SET_ISA_EXT_MAP("sstc", RISCV_ISA_EXT_SSTC);
 		SET_ISA_EXT_MAP("svinval", RISCV_ISA_EXT_SVINVAL);
+		SET_ISA_EXT_MAP("smaia", RISCV_ISA_EXT_SMAIA);
+		SET_ISA_EXT_MAP("ssaia", RISCV_ISA_EXT_SSAIA);
 	}
 #undef SET_ISA_EXT_MAP
 }
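As an aside (not part of the patch), once riscv_fill_hwcap() has parsed the
ISA string, kernel code can key off AIA support through the SxAIA alias, which
resolves to Smaia for M-mode kernels and to Ssaia otherwise; this is exactly
how the later patches in this series consume the detection. A minimal sketch:

	/*
	 * Illustrative only: query the extension bitmap populated by the
	 * ISA string parser above.
	 */
	static bool __init have_aia(void)
	{
		return riscv_isa_extension_available(NULL, SxAIA);
	}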
From patchwork Tue Jan 3 14:14:03 2023
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 13087752
X-Patchwork-Delegate: palmer@dabbelt.com
From: Anup Patel
To: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Marc Zyngier,
    Rob Herring, Krzysztof Kozlowski
Cc: Atish Patra, Alistair Francis, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org, devicetree@vger.kernel.org
Subject: [PATCH v2 3/9] irqchip/riscv-intc: Add support for RISC-V AIA
Date: Tue, 3 Jan 2023 19:44:03 +0530
Message-Id: <20230103141409.772298-4-apatel@ventanamicro.com>
In-Reply-To: <20230103141409.772298-1-apatel@ventanamicro.com>
The RISC-V advanced interrupt architecture (AIA) extends the per-HART
local interrupts in the following ways:
1. Minimum 64 local interrupts for both RV32 and RV64
2. Ability to process multiple pending local interrupts in the same
   interrupt handler
3. Priority configuration for each local interrupt
4. Special CSRs to configure/access the per-HART MSI controller

This patch adds support for RISC-V AIA in the RISC-V intc driver.

Signed-off-by: Anup Patel
---
 drivers/irqchip/irq-riscv-intc.c | 37 ++++++++++++++++++++++++++------
 1 file changed, 31 insertions(+), 6 deletions(-)

diff --git a/drivers/irqchip/irq-riscv-intc.c b/drivers/irqchip/irq-riscv-intc.c
index f229e3e66387..880d1639aadc 100644
--- a/drivers/irqchip/irq-riscv-intc.c
+++ b/drivers/irqchip/irq-riscv-intc.c
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include
 
 static struct irq_domain *intc_domain;
 
@@ -29,6 +30,15 @@ static asmlinkage void riscv_intc_irq(struct pt_regs *regs)
 	generic_handle_domain_irq(intc_domain, cause);
 }
 
+static asmlinkage void riscv_intc_aia_irq(struct pt_regs *regs)
+{
+	unsigned long topi;
+
+	while ((topi = csr_read(CSR_TOPI)))
+		generic_handle_domain_irq(intc_domain,
+					  topi >> TOPI_IID_SHIFT);
+}
+
 /*
  * On RISC-V systems local interrupts are masked or unmasked by writing
  * the SIE (Supervisor Interrupt Enable) CSR. As CSRs can only be written
@@ -38,12 +48,18 @@ static asmlinkage void riscv_intc_irq(struct pt_regs *regs)
 
 static void riscv_intc_irq_mask(struct irq_data *d)
 {
-	csr_clear(CSR_IE, BIT(d->hwirq));
+	if (d->hwirq < BITS_PER_LONG)
+		csr_clear(CSR_IE, BIT(d->hwirq));
+	else
+		csr_clear(CSR_IEH, BIT(d->hwirq - BITS_PER_LONG));
 }
 
 static void riscv_intc_irq_unmask(struct irq_data *d)
 {
-	csr_set(CSR_IE, BIT(d->hwirq));
+	if (d->hwirq < BITS_PER_LONG)
+		csr_set(CSR_IE, BIT(d->hwirq));
+	else
+		csr_set(CSR_IEH, BIT(d->hwirq - BITS_PER_LONG));
 }
 
 static void riscv_intc_irq_eoi(struct irq_data *d)
@@ -115,7 +131,7 @@ static struct fwnode_handle *riscv_intc_hwnode(void)
 static int __init riscv_intc_init(struct device_node *node,
 				  struct device_node *parent)
 {
-	int rc;
+	int rc, nr_irqs;
 	unsigned long hartid;
 
 	rc = riscv_of_parent_hartid(node, &hartid);
@@ -133,14 +149,21 @@ static int __init riscv_intc_init(struct device_node *node,
 	if (riscv_hartid_to_cpuid(hartid) != smp_processor_id())
 		return 0;
 
-	intc_domain = irq_domain_add_linear(node, BITS_PER_LONG,
+	nr_irqs = BITS_PER_LONG;
+	if (riscv_isa_extension_available(NULL, SxAIA) && BITS_PER_LONG == 32)
+		nr_irqs = nr_irqs * 2;
+
+	intc_domain = irq_domain_add_linear(node, nr_irqs,
 					    &riscv_intc_domain_ops, NULL);
 	if (!intc_domain) {
 		pr_err("unable to add IRQ domain\n");
 		return -ENXIO;
 	}
 
-	rc = set_handle_irq(&riscv_intc_irq);
+	if (riscv_isa_extension_available(NULL, SxAIA))
+		rc = set_handle_irq(&riscv_intc_aia_irq);
+	else
+		rc = set_handle_irq(&riscv_intc_irq);
 	if (rc) {
 		pr_err("failed to set irq handler\n");
 		return rc;
@@ -148,7 +171,9 @@ static int __init riscv_intc_init(struct device_node *node,
 
 	riscv_set_intc_hwnode_fn(riscv_intc_hwnode);
 
-	pr_info("%d local interrupts mapped\n", BITS_PER_LONG);
+	pr_info("%d local interrupts mapped%s\n",
+		nr_irqs, (riscv_isa_extension_available(NULL, SxAIA)) ?
+ " using AIA" : ""); return 0; } From patchwork Tue Jan 3 14:14:04 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anup Patel X-Patchwork-Id: 13087753 X-Patchwork-Delegate: palmer@dabbelt.com Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id AB04BC46467 for ; Tue, 3 Jan 2023 17:10:50 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=Pk5rLLpIYqoXCqFgazgAUm4W0WTq/oiNrwUrPVxWhnQ=; b=43HOf5F6QAB1Qb LOiBrWvTD3ad9mGsXniH5tZqZjEX/qKAkTgK7cVhbghsfijdMluVqhftPa6X7uYkE0wRVcZOkRmch IM45TVaqvx6zuSKbU5k03TFvE7r6MBc8VT2NO1B0N3ivpuvuJwl+jji4Nv5aexiNK1nkjtYlKCQkQ CxQJCi62IsGvDScOB/YOB5tadwJWhInx9E2O1cKcus0Wk/inHobqESVFlxtycZgcLt9ZlOg+4uXoW 8HUcaIgTkpxN4CMJaFq5swCIB3BeBuHG+73riDdb2DqUJUw6afmVu02bSCuU5XANmrbjl2HTdR/cB PFep2MBnH9wtTCyelCng==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pCkoA-003Gen-IE; Tue, 03 Jan 2023 17:10:38 +0000 Received: from mail-pl1-x62b.google.com ([2607:f8b0:4864:20::62b]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pCi3s-001rfe-Q6 for linux-riscv@lists.infradead.org; Tue, 03 Jan 2023 14:14:44 +0000 Received: by mail-pl1-x62b.google.com with SMTP id jl4so26430375plb.8 for ; Tue, 03 Jan 2023 06:14:37 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ventanamicro.com; s=google; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=PybAMKTSkhhmmB7NHRr0O8pZxotYwnZMFF8ljzRqNkQ=; b=brOs4dc0lOFXuYZR26ZakZlhErUcq+oNhKNQFlP/g3FgABYBFtqhqpLCDq9CgSCVuA d0kAXC7ZNDNIS3UhEgt9KMKbcyIlVll38BsYd9stpedATzo9KJvXwxD7Ghl5VRoTcDSx G4Tak/o7Kmw0aFvHIN4fig0IvDF+z6glLwvvJnJCj0+n2Crq4pPoYKuDyG54qaBEXRnD ZEjAPb9VMzel6rSrSzYbjIwtq5N2zgJtDovPS5Bh+AX2HWKxazk5G5/eS700JJ7ztLtH tuSHxUesX4GI5KYLXtQ2babfIB2A47GyKD8mS2U3LXL1qNR01oqQzdFdrsSMArooCbVP eAdw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=PybAMKTSkhhmmB7NHRr0O8pZxotYwnZMFF8ljzRqNkQ=; b=xfm5hu6Gceu1SO3gZUd/ThJsIgTcviIKLKMixpo5r4qmnV490M7zZD8nbj2ZYEAAbx PLiezCjDFt+meThTGg+LmVdRxRPhMF3NPHK7KKtc/0/4SAvgRgtnNA9wmYiJeKtDsP2B U4MuRWq6CiVHsmEo2Y1NLgN2ycsZJLhHvdUR5RKmiaDTKyPpfM15Z6JhUitNA85ztmwJ vrToNehBDk4D+ouaB1zwtaqjdMBji50J6qOwnxcwKiFxdS6caqmGG/W1B0bJ4nBHslL+ W1u2Al+vKxG3gS/aZo9o3MF9smZjhLKk90eeHtyWqgLYnPtrGGdfFBRxME8A+tAsUfSs FSyA== X-Gm-Message-State: AFqh2kqd+qKeMxHcacOImPcFx4DA45JkJJIYsyityXJ3C3GcaBS2sEqp u2FuqYH5eC6B53+0cSFyLgKRxA== X-Google-Smtp-Source: AMrXdXvqioKRDO01/F4B0F5J2nxAB+b+EhjP6kHywpfZzVFDzAyXmLfie5fvI13TqmREa4KI0JVutw== X-Received: by 
From patchwork Tue Jan 3 14:14:04 2023
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 13087753
X-Patchwork-Delegate: palmer@dabbelt.com
From: Anup Patel
To: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Marc Zyngier,
    Rob Herring, Krzysztof Kozlowski
Cc: Atish Patra, Alistair Francis, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org, devicetree@vger.kernel.org
Subject: [PATCH v2 4/9] dt-bindings: interrupt-controller: Add RISC-V incoming MSI controller
Date: Tue, 3 Jan 2023 19:44:04 +0530
Message-Id: <20230103141409.772298-5-apatel@ventanamicro.com>
In-Reply-To: <20230103141409.772298-1-apatel@ventanamicro.com>

We add a DT bindings document for the RISC-V incoming MSI controller
(IMSIC) defined by the RISC-V advanced interrupt architecture (AIA)
specification.

Signed-off-by: Anup Patel
---
 .../interrupt-controller/riscv,imsics.yaml | 168 ++++++++++++++++++
 1 file changed, 168 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/interrupt-controller/riscv,imsics.yaml

diff --git a/Documentation/devicetree/bindings/interrupt-controller/riscv,imsics.yaml b/Documentation/devicetree/bindings/interrupt-controller/riscv,imsics.yaml
new file mode 100644
index 000000000000..b9db03b6e95f
--- /dev/null
+++ b/Documentation/devicetree/bindings/interrupt-controller/riscv,imsics.yaml
@@ -0,0 +1,168 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/interrupt-controller/riscv,imsics.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: RISC-V Incoming MSI Controller (IMSIC)
+
+maintainers:
+  - Anup Patel
+
+description: |
+  The RISC-V advanced interrupt architecture (AIA) defines a per-CPU incoming
+  MSI controller (IMSIC) for handling MSIs in a RISC-V platform. The RISC-V
+  AIA specification can be found at https://github.com/riscv/riscv-aia.
+
+  The IMSIC is a per-CPU (or per-HART) device with a separate interrupt file
+  for each privilege level (machine or supervisor). The configuration of
+  an IMSIC interrupt file is done using AIA CSRs and it also has a 4KB MMIO
+  space to receive MSIs from devices. Each IMSIC interrupt file supports a
+  fixed number of interrupt identities (to distinguish MSIs from devices)
+  which is the same for a given privilege level across CPUs (or HARTs).
+
+  The device tree of a RISC-V platform will have one IMSIC device tree node
+  for each privilege level (machine or supervisor) which collectively describe
+  the IMSIC interrupt files at that privilege level across CPUs (or HARTs).
+
+  The arrangement of IMSIC interrupt files in the MMIO space of a RISC-V
+  platform follows a particular scheme defined by the RISC-V AIA specification.
+  An IMSIC group is a set of IMSIC interrupt files co-located in MMIO space
+  and we can have multiple IMSIC groups (i.e. clusters, sockets, chiplets,
+  etc) in a RISC-V platform. The MSI target address of an IMSIC interrupt
+  file at a given privilege level (machine or supervisor) encodes group
+  index, HART index, and guest index (shown below).
+
+  XLEN-1               >=24                              12    0
+  |                     |                                |     |
+  -------------------------------------------------------------
+  |xxxxxx|Group Index|xxxxxxxxxxx|HART Index|Guest Index|  0  |
+  -------------------------------------------------------------
+
+allOf:
+  - $ref: /schemas/interrupt-controller.yaml#
+  - $ref: /schemas/interrupt-controller/msi-controller.yaml#
+
+properties:
+  compatible:
+    items:
+      - enum:
+          - riscv,qemu-imsics
+      - const: riscv,imsics
+
+  reg:
+    minItems: 1
+    maxItems: 16384
+    description:
+      Base address of each IMSIC group.
+
+  interrupt-controller: true
+
+  "#interrupt-cells":
+    const: 0
+
+  msi-controller: true
+
+  interrupts-extended:
+    minItems: 1
+    maxItems: 16384
+    description:
+      This property represents the set of CPUs (or HARTs) for which the given
+      device tree node describes the IMSIC interrupt files. Each node pointed
+      to should be a riscv,cpu-intc node, which has a riscv node (i.e. RISC-V
+      HART) as parent.
+
+  riscv,num-ids:
+    $ref: /schemas/types.yaml#/definitions/uint32
+    minimum: 63
+    maximum: 2047
+    description:
+      Number of interrupt identities supported by an IMSIC interrupt file.
+
+  riscv,num-guest-ids:
+    $ref: /schemas/types.yaml#/definitions/uint32
+    minimum: 63
+    maximum: 2047
+    description:
+      Number of interrupt identities supported by an IMSIC guest interrupt
+      file. When not specified it is assumed to be the same as specified by
+      the riscv,num-ids property.
+
+  riscv,guest-index-bits:
+    minimum: 0
+    maximum: 7
+    default: 0
+    description:
+      Number of guest index bits in the MSI target address. When not
+      specified it is assumed to be 0.
+
+  riscv,hart-index-bits:
+    minimum: 0
+    maximum: 15
+    description:
+      Number of HART index bits in the MSI target address. When not
+      specified it is estimated based on the interrupts-extended property.
+
+  riscv,group-index-bits:
+    minimum: 0
+    maximum: 7
+    default: 0
+    description:
+      Number of group index bits in the MSI target address. When not
+      specified it is assumed to be 0.
+
+  riscv,group-index-shift:
+    $ref: /schemas/types.yaml#/definitions/uint32
+    minimum: 0
+    maximum: 55
+    default: 24
+    description:
+      The least significant bit position of the group index bits in the
+      MSI target address. When not specified it is assumed to be 24.
+
+required:
+  - compatible
+  - reg
+  - interrupt-controller
+  - msi-controller
+  - interrupts-extended
+  - riscv,num-ids
+
+unevaluatedProperties: false
+
+examples:
+  - |
+    // Example 1 (Machine-level IMSIC files with just one group):
+
+    imsic_mlevel: interrupt-controller@24000000 {
+      compatible = "riscv,qemu-imsics", "riscv,imsics";
+      interrupts-extended = <&cpu1_intc 11>,
+                            <&cpu2_intc 11>,
+                            <&cpu3_intc 11>,
+                            <&cpu4_intc 11>;
+      reg = <0x28000000 0x4000>;
+      interrupt-controller;
+      #interrupt-cells = <0>;
+      msi-controller;
+      riscv,num-ids = <127>;
+    };
+
+  - |
+    // Example 2 (Supervisor-level IMSIC files with two groups):
+
+    imsic_slevel: interrupt-controller@28000000 {
+      compatible = "riscv,qemu-imsics", "riscv,imsics";
+      interrupts-extended = <&cpu1_intc 9>,
+                            <&cpu2_intc 9>,
+                            <&cpu3_intc 9>,
+                            <&cpu4_intc 9>;
+      reg = <0x28000000 0x2000>, /* Group0 IMSICs */
+            <0x29000000 0x2000>; /* Group1 IMSICs */
+      interrupt-controller;
+      #interrupt-cells = <0>;
+      msi-controller;
+      riscv,num-ids = <127>;
+      riscv,group-index-bits = <1>;
+      riscv,group-index-shift = <24>;
+    };
+...
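As an aside (not part of the patch), the group/HART/guest layout described by
this binding composes an interrupt file's MSI target address roughly as shown
in the sketch below, assuming 4KB (1 << 12) interrupt-file pages; the function
and parameter names are made up for the illustration:

	/* Illustrative only: compose one interrupt file's MSI address. */
	#include <stdint.h>

	static uint64_t imsic_file_addr(uint64_t base, uint32_t group,
					uint32_t hart, uint32_t guest,
					uint32_t group_shift,
					uint32_t guest_bits)
	{
		return base |
		       ((uint64_t)group << group_shift) |
		       ((uint64_t)hart << (guest_bits + 12)) |
		       ((uint64_t)guest << 12);
	}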
From patchwork Tue Jan 3 14:14:05 2023
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 13087757
X-Patchwork-Delegate: palmer@dabbelt.com
From: Anup Patel
To: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Marc Zyngier,
    Rob Herring, Krzysztof Kozlowski
Cc: Atish Patra, Alistair Francis, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org, devicetree@vger.kernel.org
Subject: [PATCH v2 5/9] irqchip: Add RISC-V incoming MSI controller driver
Date: Tue, 3 Jan 2023 19:44:05 +0530
Message-Id: <20230103141409.772298-6-apatel@ventanamicro.com>
In-Reply-To: <20230103141409.772298-1-apatel@ventanamicro.com>

The RISC-V advanced interrupt architecture (AIA) specification defines a
new MSI controller for managing MSIs on a RISC-V platform. This new MSI
controller is referred to as the incoming message signaled interrupt
controller (IMSIC) and manages MSIs on a per-HART (or per-CPU) basis.
(For more details refer to https://github.com/riscv/riscv-aia)

This patch adds an irqchip driver for the RISC-V IMSIC found on RISC-V
platforms.
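As an aside (not part of the patch), raising a given interrupt identity at a
HART's IMSIC interrupt file is just a 32-bit MMIO write of that identity to
the file's 4KB page, which is what a device MSI amounts to and what the driver
below reuses for IPIs. A minimal sketch:

	/* Illustrative only: trigger identity "id" at one interrupt file. */
	#include <linux/io.h>
	#include <linux/types.h>

	static void imsic_trigger_id(void __iomem *interrupt_file, u32 id)
	{
		writel(id, interrupt_file);
	}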
Signed-off-by: Anup Patel --- drivers/irqchip/Kconfig | 14 +- drivers/irqchip/Makefile | 1 + drivers/irqchip/irq-riscv-imsic.c | 1174 +++++++++++++++++++++++++++ include/linux/irqchip/riscv-imsic.h | 92 +++ 4 files changed, 1280 insertions(+), 1 deletion(-) create mode 100644 drivers/irqchip/irq-riscv-imsic.c create mode 100644 include/linux/irqchip/riscv-imsic.h diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig index 9e65345ca3f6..a1315189a595 100644 --- a/drivers/irqchip/Kconfig +++ b/drivers/irqchip/Kconfig @@ -29,7 +29,6 @@ config ARM_GIC_V2M config GIC_NON_BANKED bool - config ARM_GIC_V3 bool select IRQ_DOMAIN_HIERARCHY @@ -548,6 +547,19 @@ config SIFIVE_PLIC select IRQ_DOMAIN_HIERARCHY select GENERIC_IRQ_EFFECTIVE_AFF_MASK if SMP +config RISCV_IMSIC + bool + depends on RISCV + select IRQ_DOMAIN_HIERARCHY + select GENERIC_MSI_IRQ_DOMAIN + +config RISCV_IMSIC_PCI + bool + depends on RISCV_IMSIC + depends on PCI + depends on PCI_MSI + default RISCV_IMSIC + config EXYNOS_IRQ_COMBINER bool "Samsung Exynos IRQ combiner support" if COMPILE_TEST depends on (ARCH_EXYNOS && ARM) || COMPILE_TEST diff --git a/drivers/irqchip/Makefile b/drivers/irqchip/Makefile index 87b49a10962c..22c723cc6ec8 100644 --- a/drivers/irqchip/Makefile +++ b/drivers/irqchip/Makefile @@ -96,6 +96,7 @@ obj-$(CONFIG_QCOM_MPM) += irq-qcom-mpm.o obj-$(CONFIG_CSKY_MPINTC) += irq-csky-mpintc.o obj-$(CONFIG_CSKY_APB_INTC) += irq-csky-apb-intc.o obj-$(CONFIG_RISCV_INTC) += irq-riscv-intc.o +obj-$(CONFIG_RISCV_IMSIC) += irq-riscv-imsic.o obj-$(CONFIG_SIFIVE_PLIC) += irq-sifive-plic.o obj-$(CONFIG_IMX_IRQSTEER) += irq-imx-irqsteer.o obj-$(CONFIG_IMX_INTMUX) += irq-imx-intmux.o diff --git a/drivers/irqchip/irq-riscv-imsic.c b/drivers/irqchip/irq-riscv-imsic.c new file mode 100644 index 000000000000..4c16b66738d6 --- /dev/null +++ b/drivers/irqchip/irq-riscv-imsic.c @@ -0,0 +1,1174 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2021 Western Digital Corporation or its affiliates. + * Copyright (C) 2022 Ventana Micro Systems Inc. 
+ */ + +#define pr_fmt(fmt) "riscv-imsic: " fmt +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#define IMSIC_DISABLE_EIDELIVERY 0 +#define IMSIC_ENABLE_EIDELIVERY 1 +#define IMSIC_DISABLE_EITHRESHOLD 1 +#define IMSIC_ENABLE_EITHRESHOLD 0 + +#define imsic_csr_write(__c, __v) \ +do { \ + csr_write(CSR_ISELECT, __c); \ + csr_write(CSR_IREG, __v); \ +} while (0) + +#define imsic_csr_read(__c) \ +({ \ + unsigned long __v; \ + csr_write(CSR_ISELECT, __c); \ + __v = csr_read(CSR_IREG); \ + __v; \ +}) + +#define imsic_csr_set(__c, __v) \ +do { \ + csr_write(CSR_ISELECT, __c); \ + csr_set(CSR_IREG, __v); \ +} while (0) + +#define imsic_csr_clear(__c, __v) \ +do { \ + csr_write(CSR_ISELECT, __c); \ + csr_clear(CSR_IREG, __v); \ +} while (0) + +struct imsic_mmio { + phys_addr_t pa; + void __iomem *va; + unsigned long size; +}; + +struct imsic_priv { + /* Global configuration common for all HARTs */ + struct imsic_global_config global; + + /* MMIO regions */ + u32 num_mmios; + struct imsic_mmio *mmios; + + /* Global state of interrupt identities */ + raw_spinlock_t ids_lock; + unsigned long *ids_used_bimap; + unsigned long *ids_enabled_bimap; + unsigned int *ids_target_cpu; + + /* Mask for connected CPUs */ + struct cpumask lmask; + + /* IPI interrupt identity */ + u32 ipi_id; + u32 ipi_lsync_id; + + /* IRQ domains */ + struct irq_domain *base_domain; + struct irq_domain *pci_domain; + struct irq_domain *plat_domain; +}; + +struct imsic_handler { + /* Local configuration for given HART */ + struct imsic_local_config local; + + /* Pointer to private context */ + struct imsic_priv *priv; +}; + +static bool imsic_init_done; + +static int imsic_parent_irq; +static DEFINE_PER_CPU(struct imsic_handler, imsic_handlers); + +const struct imsic_global_config *imsic_get_global_config(void) +{ + struct imsic_handler *handler = this_cpu_ptr(&imsic_handlers); + + if (!handler || !handler->priv) + return NULL; + + return &handler->priv->global; +} +EXPORT_SYMBOL_GPL(imsic_get_global_config); + +const struct imsic_local_config *imsic_get_local_config(unsigned int cpu) +{ + struct imsic_handler *handler = per_cpu_ptr(&imsic_handlers, cpu); + + if (!handler || !handler->priv) + return NULL; + + return &handler->local; +} +EXPORT_SYMBOL_GPL(imsic_get_local_config); + +static int imsic_cpu_page_phys(unsigned int cpu, + unsigned int guest_index, + phys_addr_t *out_msi_pa) +{ + struct imsic_handler *handler = per_cpu_ptr(&imsic_handlers, cpu); + struct imsic_global_config *global; + struct imsic_local_config *local; + + if (!handler || !handler->priv) + return -ENODEV; + local = &handler->local; + global = &handler->priv->global; + + if (BIT(global->guest_index_bits) <= guest_index) + return -EINVAL; + + if (out_msi_pa) + *out_msi_pa = local->msi_pa + + (guest_index * IMSIC_MMIO_PAGE_SZ); + + return 0; +} + +static int imsic_get_cpu(struct imsic_priv *priv, + const struct cpumask *mask_val, bool force, + unsigned int *out_target_cpu) +{ + struct cpumask amask; + unsigned int cpu; + + cpumask_and(&amask, &priv->lmask, mask_val); + + if (force) + cpu = cpumask_first(&amask); + else + cpu = cpumask_any_and(&amask, cpu_online_mask); + + if (cpu >= nr_cpu_ids) + return -EINVAL; + + if (out_target_cpu) + *out_target_cpu = cpu; + + return 0; +} + +static int imsic_get_cpu_msi_msg(unsigned int cpu, unsigned int id, + struct msi_msg *msg) +{ + phys_addr_t msi_addr; + int err; + 
+ err = imsic_cpu_page_phys(cpu, 0, &msi_addr); + if (err) + return err; + + msg->address_hi = upper_32_bits(msi_addr); + msg->address_lo = lower_32_bits(msi_addr); + msg->data = id; + + return err; +} + +static void imsic_id_set_target(struct imsic_priv *priv, + unsigned int id, unsigned int target_cpu) +{ + unsigned long flags; + + raw_spin_lock_irqsave(&priv->ids_lock, flags); + priv->ids_target_cpu[id] = target_cpu; + raw_spin_unlock_irqrestore(&priv->ids_lock, flags); +} + +static unsigned int imsic_id_get_target(struct imsic_priv *priv, + unsigned int id) +{ + unsigned int ret; + unsigned long flags; + + raw_spin_lock_irqsave(&priv->ids_lock, flags); + ret = priv->ids_target_cpu[id]; + raw_spin_unlock_irqrestore(&priv->ids_lock, flags); + + return ret; +} + +static void __imsic_eix_update(unsigned long base_id, + unsigned long num_id, bool pend, bool val) +{ + unsigned long i, isel, ireg, flags; + unsigned long id = base_id, last_id = base_id + num_id; + + while (id < last_id) { + isel = id / BITS_PER_LONG; + isel *= BITS_PER_LONG / IMSIC_EIPx_BITS; + isel += (pend) ? IMSIC_EIP0 : IMSIC_EIE0; + + ireg = 0; + for (i = id & (__riscv_xlen - 1); + (id < last_id) && (i < __riscv_xlen); i++) { + ireg |= BIT(i); + id++; + } + + /* + * The IMSIC EIEx and EIPx registers are indirectly + * accessed via using ISELECT and IREG CSRs so we + * save/restore local IRQ to ensure that we don't + * get preempted while accessing IMSIC registers. + */ + local_irq_save(flags); + if (val) + imsic_csr_set(isel, ireg); + else + imsic_csr_clear(isel, ireg); + local_irq_restore(flags); + } +} + +#define __imsic_id_enable(__id) \ + __imsic_eix_update((__id), 1, false, true) +#define __imsic_id_disable(__id) \ + __imsic_eix_update((__id), 1, false, false) + +#ifdef CONFIG_SMP +static void __imsic_id_smp_sync(struct imsic_priv *priv) +{ + struct imsic_handler *handler; + struct cpumask amask; + int cpu; + + cpumask_and(&amask, &priv->lmask, cpu_online_mask); + for_each_cpu(cpu, &amask) { + if (cpu == smp_processor_id()) + continue; + + handler = per_cpu_ptr(&imsic_handlers, cpu); + if (!handler || !handler->priv || !handler->local.msi_va) { + pr_warn("CPU%d: handler not initialized\n", cpu); + continue; + } + + writel(handler->priv->ipi_lsync_id, handler->local.msi_va); + } +} +#else +#define __imsic_id_smp_sync(__priv) +#endif + +static void imsic_id_enable(struct imsic_priv *priv, unsigned int id) +{ + unsigned long flags; + + raw_spin_lock_irqsave(&priv->ids_lock, flags); + bitmap_set(priv->ids_enabled_bimap, id, 1); + __imsic_id_enable(id); + raw_spin_unlock_irqrestore(&priv->ids_lock, flags); + + __imsic_id_smp_sync(priv); +} + +static void imsic_id_disable(struct imsic_priv *priv, unsigned int id) +{ + unsigned long flags; + + raw_spin_lock_irqsave(&priv->ids_lock, flags); + bitmap_clear(priv->ids_enabled_bimap, id, 1); + __imsic_id_disable(id); + raw_spin_unlock_irqrestore(&priv->ids_lock, flags); + + __imsic_id_smp_sync(priv); +} + +static void imsic_ids_local_sync(struct imsic_priv *priv) +{ + int i; + unsigned long flags; + + raw_spin_lock_irqsave(&priv->ids_lock, flags); + for (i = 1; i <= priv->global.nr_ids; i++) { + if (priv->ipi_id == i || priv->ipi_lsync_id == i) + continue; + + if (test_bit(i, priv->ids_enabled_bimap)) + __imsic_id_enable(i); + else + __imsic_id_disable(i); + } + raw_spin_unlock_irqrestore(&priv->ids_lock, flags); +} + +static void imsic_ids_local_delivery(struct imsic_priv *priv, bool enable) +{ + if (enable) { + imsic_csr_write(IMSIC_EITHRESHOLD, IMSIC_ENABLE_EITHRESHOLD); + 
imsic_csr_write(IMSIC_EIDELIVERY, IMSIC_ENABLE_EIDELIVERY); + } else { + imsic_csr_write(IMSIC_EIDELIVERY, IMSIC_DISABLE_EIDELIVERY); + imsic_csr_write(IMSIC_EITHRESHOLD, IMSIC_DISABLE_EITHRESHOLD); + } +} + +static int imsic_ids_alloc(struct imsic_priv *priv, + unsigned int max_id, unsigned int order) +{ + int ret; + unsigned long flags; + + if ((priv->global.nr_ids < max_id) || + (max_id < BIT(order))) + return -EINVAL; + + raw_spin_lock_irqsave(&priv->ids_lock, flags); + ret = bitmap_find_free_region(priv->ids_used_bimap, + max_id + 1, order); + raw_spin_unlock_irqrestore(&priv->ids_lock, flags); + + return ret; +} + +static void imsic_ids_free(struct imsic_priv *priv, unsigned int base_id, + unsigned int order) +{ + unsigned long flags; + + raw_spin_lock_irqsave(&priv->ids_lock, flags); + bitmap_release_region(priv->ids_used_bimap, base_id, order); + raw_spin_unlock_irqrestore(&priv->ids_lock, flags); +} + +static int __init imsic_ids_init(struct imsic_priv *priv) +{ + int i; + struct imsic_global_config *global = &priv->global; + + raw_spin_lock_init(&priv->ids_lock); + + /* Allocate used bitmap */ + priv->ids_used_bimap = kcalloc(BITS_TO_LONGS(global->nr_ids + 1), + sizeof(unsigned long), GFP_KERNEL); + if (!priv->ids_used_bimap) + return -ENOMEM; + + /* Allocate enabled bitmap */ + priv->ids_enabled_bimap = kcalloc(BITS_TO_LONGS(global->nr_ids + 1), + sizeof(unsigned long), GFP_KERNEL); + if (!priv->ids_enabled_bimap) { + kfree(priv->ids_used_bimap); + return -ENOMEM; + } + + /* Allocate target CPU array */ + priv->ids_target_cpu = kcalloc(global->nr_ids + 1, + sizeof(unsigned int), GFP_KERNEL); + if (!priv->ids_target_cpu) { + kfree(priv->ids_enabled_bimap); + kfree(priv->ids_used_bimap); + return -ENOMEM; + } + for (i = 0; i <= global->nr_ids; i++) + priv->ids_target_cpu[i] = UINT_MAX; + + /* Reserve ID#0 because it is special and never implemented */ + bitmap_set(priv->ids_used_bimap, 0, 1); + + return 0; +} + +static void __init imsic_ids_cleanup(struct imsic_priv *priv) +{ + kfree(priv->ids_target_cpu); + kfree(priv->ids_enabled_bimap); + kfree(priv->ids_used_bimap); +} + +#ifdef CONFIG_SMP +static void imsic_ipi_send(unsigned int cpu) +{ + struct imsic_handler *handler = per_cpu_ptr(&imsic_handlers, cpu); + + if (!handler || !handler->priv || !handler->local.msi_va) { + pr_warn("CPU%d: handler not initialized\n", cpu); + return; + } + + writel(handler->priv->ipi_id, handler->local.msi_va); +} + +static void imsic_ipi_enable(struct imsic_priv *priv) +{ + __imsic_id_enable(priv->ipi_id); + __imsic_id_enable(priv->ipi_lsync_id); +} + +static int __init imsic_ipi_domain_init(struct imsic_priv *priv) +{ + int virq; + + /* Allocate interrupt identity for IPIs */ + virq = imsic_ids_alloc(priv, priv->global.nr_ids, get_count_order(1)); + if (virq < 0) + return virq; + priv->ipi_id = virq; + + /* Create IMSIC IPI multiplexing */ + virq = ipi_mux_create(BITS_PER_BYTE, imsic_ipi_send); + if (virq <= 0) { + imsic_ids_free(priv, priv->ipi_id, get_count_order(1)); + return (virq < 0) ? 
virq : -ENOMEM; + } + + /* Set vIRQ range */ + riscv_ipi_set_virq_range(virq, BITS_PER_BYTE, true); + + /* Allocate interrupt identity for local enable/disable sync */ + virq = imsic_ids_alloc(priv, priv->global.nr_ids, get_count_order(1)); + if (virq < 0) { + imsic_ids_free(priv, priv->ipi_id, get_count_order(1)); + return virq; + } + priv->ipi_lsync_id = virq; + + return 0; +} + +static void __init imsic_ipi_domain_cleanup(struct imsic_priv *priv) +{ + imsic_ids_free(priv, priv->ipi_lsync_id, get_count_order(1)); + if (priv->ipi_id) + imsic_ids_free(priv, priv->ipi_id, get_count_order(1)); +} +#else +static void imsic_ipi_enable(struct imsic_priv *priv) +{ +} + +static int __init imsic_ipi_domain_init(struct imsic_priv *priv) +{ + /* Clear the IPI ids because we are not using IPIs */ + priv->ipi_id = 0; + priv->ipi_lsync_id = 0; + return 0; +} + +static void __init imsic_ipi_domain_cleanup(struct imsic_priv *priv) +{ +} +#endif + +static void imsic_irq_mask(struct irq_data *d) +{ + imsic_id_disable(irq_data_get_irq_chip_data(d), d->hwirq); +} + +static void imsic_irq_unmask(struct irq_data *d) +{ + imsic_id_enable(irq_data_get_irq_chip_data(d), d->hwirq); +} + +static void imsic_irq_compose_msi_msg(struct irq_data *d, + struct msi_msg *msg) +{ + struct imsic_priv *priv = irq_data_get_irq_chip_data(d); + unsigned int cpu; + int err; + + cpu = imsic_id_get_target(priv, d->hwirq); + WARN_ON(cpu == UINT_MAX); + + err = imsic_get_cpu_msi_msg(cpu, d->hwirq, msg); + WARN_ON(err); + + iommu_dma_compose_msi_msg(irq_data_get_msi_desc(d), msg); +} + +#ifdef CONFIG_SMP +static int imsic_irq_set_affinity(struct irq_data *d, + const struct cpumask *mask_val, + bool force) +{ + struct imsic_priv *priv = irq_data_get_irq_chip_data(d); + unsigned int target_cpu; + int rc; + + rc = imsic_get_cpu(priv, mask_val, force, &target_cpu); + if (rc) + return rc; + + imsic_id_set_target(priv, d->hwirq, target_cpu); + irq_data_update_effective_affinity(d, cpumask_of(target_cpu)); + + return IRQ_SET_MASK_OK; +} +#endif + +static struct irq_chip imsic_irq_base_chip = { + .name = "RISC-V IMSIC-BASE", + .irq_mask = imsic_irq_mask, + .irq_unmask = imsic_irq_unmask, +#ifdef CONFIG_SMP + .irq_set_affinity = imsic_irq_set_affinity, +#endif + .irq_compose_msi_msg = imsic_irq_compose_msi_msg, + .flags = IRQCHIP_SKIP_SET_WAKE | + IRQCHIP_MASK_ON_SUSPEND, +}; + +static int imsic_irq_domain_alloc(struct irq_domain *domain, + unsigned int virq, + unsigned int nr_irqs, + void *args) +{ + struct imsic_priv *priv = domain->host_data; + msi_alloc_info_t *info = args; + phys_addr_t msi_addr; + int i, hwirq, err = 0; + unsigned int cpu; + + err = imsic_get_cpu(priv, &priv->lmask, false, &cpu); + if (err) + return err; + + err = imsic_cpu_page_phys(cpu, 0, &msi_addr); + if (err) + return err; + + hwirq = imsic_ids_alloc(priv, priv->global.nr_ids, + get_count_order(nr_irqs)); + if (hwirq < 0) + return hwirq; + + err = iommu_dma_prepare_msi(info->desc, msi_addr); + if (err) + goto fail; + + for (i = 0; i < nr_irqs; i++) { + imsic_id_set_target(priv, hwirq + i, cpu); + irq_domain_set_info(domain, virq + i, hwirq + i, + &imsic_irq_base_chip, priv, + handle_simple_irq, NULL, NULL); + irq_set_noprobe(virq + i); + irq_set_affinity(virq + i, &priv->lmask); + } + + return 0; + +fail: + imsic_ids_free(priv, hwirq, get_count_order(nr_irqs)); + return err; +} + +static void imsic_irq_domain_free(struct irq_domain *domain, + unsigned int virq, + unsigned int nr_irqs) +{ + struct irq_data *d = irq_domain_get_irq_data(domain, virq); + struct 
imsic_priv *priv = domain->host_data; + + imsic_ids_free(priv, d->hwirq, get_count_order(nr_irqs)); + irq_domain_free_irqs_parent(domain, virq, nr_irqs); +} + +static const struct irq_domain_ops imsic_base_domain_ops = { + .alloc = imsic_irq_domain_alloc, + .free = imsic_irq_domain_free, +}; + +#ifdef CONFIG_RISCV_IMSIC_PCI + +static void imsic_pci_mask_irq(struct irq_data *d) +{ + pci_msi_mask_irq(d); + irq_chip_mask_parent(d); +} + +static void imsic_pci_unmask_irq(struct irq_data *d) +{ + pci_msi_unmask_irq(d); + irq_chip_unmask_parent(d); +} + +static struct irq_chip imsic_pci_irq_chip = { + .name = "RISC-V IMSIC-PCI", + .irq_mask = imsic_pci_mask_irq, + .irq_unmask = imsic_pci_unmask_irq, + .irq_eoi = irq_chip_eoi_parent, +}; + +static struct msi_domain_ops imsic_pci_domain_ops = { +}; + +static struct msi_domain_info imsic_pci_domain_info = { + .flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS | + MSI_FLAG_PCI_MSIX | MSI_FLAG_MULTI_PCI_MSI), + .ops = &imsic_pci_domain_ops, + .chip = &imsic_pci_irq_chip, +}; + +#endif + +static struct irq_chip imsic_plat_irq_chip = { + .name = "RISC-V IMSIC-PLAT", +}; + +static struct msi_domain_ops imsic_plat_domain_ops = { +}; + +static struct msi_domain_info imsic_plat_domain_info = { + .flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS), + .ops = &imsic_plat_domain_ops, + .chip = &imsic_plat_irq_chip, +}; + +static int __init imsic_irq_domains_init(struct imsic_priv *priv, + struct fwnode_handle *fwnode) +{ + /* Create Base IRQ domain */ + priv->base_domain = irq_domain_create_tree(fwnode, + &imsic_base_domain_ops, priv); + if (!priv->base_domain) { + pr_err("Failed to create IMSIC base domain\n"); + return -ENOMEM; + } + irq_domain_update_bus_token(priv->base_domain, DOMAIN_BUS_NEXUS); + +#ifdef CONFIG_RISCV_IMSIC_PCI + /* Create PCI MSI domain */ + priv->pci_domain = pci_msi_create_irq_domain(fwnode, + &imsic_pci_domain_info, + priv->base_domain); + if (!priv->pci_domain) { + pr_err("Failed to create IMSIC PCI domain\n"); + irq_domain_remove(priv->base_domain); + return -ENOMEM; + } +#endif + + /* Create Platform MSI domain */ + priv->plat_domain = platform_msi_create_irq_domain(fwnode, + &imsic_plat_domain_info, + priv->base_domain); + if (!priv->plat_domain) { + pr_err("Failed to create IMSIC platform domain\n"); + if (priv->pci_domain) + irq_domain_remove(priv->pci_domain); + irq_domain_remove(priv->base_domain); + return -ENOMEM; + } + + return 0; +} + +/* + * To handle an interrupt, we read the TOPEI CSR and write zero in one + * instruction. If TOPEI CSR is non-zero then we translate TOPEI.ID to + * Linux interrupt number and let Linux IRQ subsystem handle it. 
+ */ +static void imsic_handle_irq(struct irq_desc *desc) +{ + struct imsic_handler *handler = this_cpu_ptr(&imsic_handlers); + struct irq_chip *chip = irq_desc_get_chip(desc); + struct imsic_priv *priv = handler->priv; + irq_hw_number_t hwirq; + int err; + + WARN_ON_ONCE(!handler->priv); + + chained_irq_enter(chip, desc); + + while ((hwirq = csr_swap(CSR_TOPEI, 0))) { + hwirq = hwirq >> TOPEI_ID_SHIFT; + + if (hwirq == priv->ipi_id) { +#ifdef CONFIG_SMP + ipi_mux_process(); +#endif + continue; + } else if (hwirq == priv->ipi_lsync_id) { + imsic_ids_local_sync(priv); + continue; + } + + err = generic_handle_domain_irq(priv->base_domain, hwirq); + if (unlikely(err)) + pr_warn_ratelimited( + "hwirq %lu mapping not found\n", hwirq); + } + + chained_irq_exit(chip, desc); +} + +static int imsic_starting_cpu(unsigned int cpu) +{ + struct imsic_handler *handler = this_cpu_ptr(&imsic_handlers); + struct imsic_priv *priv = handler->priv; + + /* Enable per-CPU parent interrupt */ + if (imsic_parent_irq) + enable_percpu_irq(imsic_parent_irq, + irq_get_trigger_type(imsic_parent_irq)); + else + pr_warn("cpu%d: parent irq not available\n", cpu); + + /* Enable IPIs */ + imsic_ipi_enable(priv); + + /* + * Interrupts identities might have been enabled/disabled while + * this CPU was not running so sync-up local enable/disable state. + */ + imsic_ids_local_sync(priv); + + /* Locally enable interrupt delivery */ + imsic_ids_local_delivery(priv, true); + + return 0; +} + +struct imsic_fwnode_ops { + u32 (*nr_parent_irq)(struct fwnode_handle *fwnode, + void *fwopaque); + int (*parent_hartid)(struct fwnode_handle *fwnode, + void *fwopaque, u32 index, + unsigned long *out_hartid); + u32 (*nr_mmio)(struct fwnode_handle *fwnode, void *fwopaque); + int (*mmio_to_resource)(struct fwnode_handle *fwnode, + void *fwopaque, u32 index, + struct resource *res); + void __iomem *(*mmio_map)(struct fwnode_handle *fwnode, + void *fwopaque, u32 index); + int (*read_u32)(struct fwnode_handle *fwnode, + void *fwopaque, const char *prop, u32 *out_val); + bool (*read_bool)(struct fwnode_handle *fwnode, + void *fwopaque, const char *prop); +}; + +static int __init imsic_init(struct imsic_fwnode_ops *fwops, + struct fwnode_handle *fwnode, + void *fwopaque) +{ + struct resource res; + phys_addr_t base_addr; + int rc, nr_parent_irqs; + struct imsic_mmio *mmio; + struct imsic_priv *priv; + struct irq_domain *domain; + struct imsic_handler *handler; + struct imsic_global_config *global; + u32 i, tmp, nr_handlers = 0; + + if (imsic_init_done) { + pr_err("%pfwP: already initialized hence ignoring\n", + fwnode); + return -ENODEV; + } + + if (!riscv_isa_extension_available(NULL, SxAIA)) { + pr_err("%pfwP: AIA support not available\n", fwnode); + return -ENODEV; + } + + priv = kzalloc(sizeof(*priv), GFP_KERNEL); + if (!priv) + return -ENOMEM; + global = &priv->global; + + /* Find number of parent interrupts */ + nr_parent_irqs = fwops->nr_parent_irq(fwnode, fwopaque); + if (!nr_parent_irqs) { + pr_err("%pfwP: no parent irqs available\n", fwnode); + return -EINVAL; + } + + /* Find number of guest index bits in MSI address */ + rc = fwops->read_u32(fwnode, fwopaque, "riscv,guest-index-bits", + &global->guest_index_bits); + if (rc) + global->guest_index_bits = 0; + tmp = BITS_PER_LONG - IMSIC_MMIO_PAGE_SHIFT; + if (tmp < global->guest_index_bits) { + pr_err("%pfwP: guest index bits too big\n", fwnode); + return -EINVAL; + } + + /* Find number of HART index bits */ + rc = fwops->read_u32(fwnode, fwopaque, "riscv,hart-index-bits", + 
&global->hart_index_bits); + if (rc) { + /* Assume default value */ + global->hart_index_bits = __fls(nr_parent_irqs); + if (BIT(global->hart_index_bits) < nr_parent_irqs) + global->hart_index_bits++; + } + tmp = BITS_PER_LONG - IMSIC_MMIO_PAGE_SHIFT - + global->guest_index_bits; + if (tmp < global->hart_index_bits) { + pr_err("%pfwP: HART index bits too big\n", fwnode); + return -EINVAL; + } + + /* Find number of group index bits */ + rc = fwops->read_u32(fwnode, fwopaque, "riscv,group-index-bits", + &global->group_index_bits); + if (rc) + global->group_index_bits = 0; + tmp = BITS_PER_LONG - IMSIC_MMIO_PAGE_SHIFT - + global->guest_index_bits - global->hart_index_bits; + if (tmp < global->group_index_bits) { + pr_err("%pfwP: group index bits too big\n", fwnode); + return -EINVAL; + } + + /* + * Find first bit position of group index. + * If not specified assumed the default APLIC-IMSIC configuration. + */ + rc = fwops->read_u32(fwnode, fwopaque, "riscv,group-index-shift", + &global->group_index_shift); + if (rc) + global->group_index_shift = IMSIC_MMIO_PAGE_SHIFT * 2; + tmp = global->group_index_bits + global->group_index_shift - 1; + if (tmp >= BITS_PER_LONG) { + pr_err("%pfwP: group index shift too big\n", fwnode); + return -EINVAL; + } + + /* Find number of interrupt identities */ + rc = fwops->read_u32(fwnode, fwopaque, "riscv,num-ids", + &global->nr_ids); + if (rc) { + pr_err("%pfwP: number of interrupt identities not found\n", + fwnode); + return rc; + } + if ((global->nr_ids < IMSIC_MIN_ID) || + (global->nr_ids >= IMSIC_MAX_ID) || + ((global->nr_ids & IMSIC_MIN_ID) != IMSIC_MIN_ID)) { + pr_err("%pfwP: invalid number of interrupt identities\n", + fwnode); + return -EINVAL; + } + + /* Find number of guest interrupt identities */ + if (fwops->read_u32(fwnode, fwopaque, "riscv,num-guest-ids", + &global->nr_guest_ids)) + global->nr_guest_ids = global->nr_ids; + if ((global->nr_guest_ids < IMSIC_MIN_ID) || + (global->nr_guest_ids >= IMSIC_MAX_ID) || + ((global->nr_guest_ids & IMSIC_MIN_ID) != IMSIC_MIN_ID)) { + pr_err("%pfwP: invalid number of guest interrupt identities\n", + fwnode); + return -EINVAL; + } + + /* Compute base address */ + rc = fwops->mmio_to_resource(fwnode, fwopaque, 0, &res); + if (rc) { + pr_err("%pfwP: first MMIO resource not found\n", fwnode); + return -EINVAL; + } + global->base_addr = res.start; + global->base_addr &= ~(BIT(global->guest_index_bits + + global->hart_index_bits + + IMSIC_MMIO_PAGE_SHIFT) - 1); + global->base_addr &= ~((BIT(global->group_index_bits) - 1) << + global->group_index_shift); + + /* Find number of MMIO register sets */ + priv->num_mmios = fwops->nr_mmio(fwnode, fwopaque); + + /* Allocate MMIO register sets */ + priv->mmios = kcalloc(priv->num_mmios, sizeof(*mmio), GFP_KERNEL); + if (!priv->mmios) { + rc = -ENOMEM; + goto out_free_priv; + } + + /* Parse and map MMIO register sets */ + for (i = 0; i < priv->num_mmios; i++) { + mmio = &priv->mmios[i]; + rc = fwops->mmio_to_resource(fwnode, fwopaque, i, &res); + if (rc) { + pr_err("%pfwP: unable to parse MMIO regset %d\n", + fwnode, i); + goto out_iounmap; + } + mmio->pa = res.start; + mmio->size = res.end - res.start + 1; + + base_addr = mmio->pa; + base_addr &= ~(BIT(global->guest_index_bits + + global->hart_index_bits + + IMSIC_MMIO_PAGE_SHIFT) - 1); + base_addr &= ~((BIT(global->group_index_bits) - 1) << + global->group_index_shift); + if (base_addr != global->base_addr) { + rc = -EINVAL; + pr_err("%pfwP: address mismatch for regset %d\n", + fwnode, i); + goto out_iounmap; + } + + mmio->va 
= fwops->mmio_map(fwnode, fwopaque, i); + if (!mmio->va) { + rc = -EIO; + pr_err("%pfwP: unable to map MMIO regset %d\n", + fwnode, i); + goto out_iounmap; + } + } + + /* Initialize interrupt identity management */ + rc = imsic_ids_init(priv); + if (rc) { + pr_err("%pfwP: failed to initialize interrupt management\n", + fwnode); + goto out_iounmap; + } + + /* Configure handlers for target CPUs */ + for (i = 0; i < nr_parent_irqs; i++) { + unsigned long reloff, hartid; + int j, cpu; + + rc = fwops->parent_hartid(fwnode, fwopaque, i, &hartid); + if (rc) { + pr_warn("%pfwP: hart ID for parent irq%d not found\n", + fwnode, i); + continue; + } + + cpu = riscv_hartid_to_cpuid(hartid); + if (cpu < 0) { + pr_warn("%pfwP: invalid cpuid for parent irq%d\n", + fwnode, i); + continue; + } + + /* Find MMIO location of MSI page */ + mmio = NULL; + reloff = i * BIT(global->guest_index_bits) * + IMSIC_MMIO_PAGE_SZ; + for (j = 0; priv->num_mmios; j++) { + if (reloff < priv->mmios[j].size) { + mmio = &priv->mmios[j]; + break; + } + + /* + * MMIO region size may not be aligned to + * BIT(global->guest_index_bits) * IMSIC_MMIO_PAGE_SZ + * if holes are present. + */ + reloff -= ALIGN(priv->mmios[j].size, + BIT(global->guest_index_bits) * IMSIC_MMIO_PAGE_SZ); + } + if (!mmio) { + pr_warn("%pfwP: MMIO not found for parent irq%d\n", + fwnode, i); + continue; + } + + handler = per_cpu_ptr(&imsic_handlers, cpu); + if (handler->priv) { + pr_warn("%pfwP: CPU%d handler already configured.\n", + fwnode, cpu); + goto done; + } + + cpumask_set_cpu(cpu, &priv->lmask); + handler->local.msi_pa = mmio->pa + reloff; + handler->local.msi_va = mmio->va + reloff; + handler->priv = priv; + +done: + nr_handlers++; + } + + /* If no CPU handlers found then can't take interrupts */ + if (!nr_handlers) { + pr_err("%pfwP: No CPU handlers found\n", fwnode); + rc = -ENODEV; + goto out_ids_cleanup; + } + + /* Find parent domain and register chained handler */ + domain = irq_find_matching_fwnode(riscv_get_intc_hwnode(), + DOMAIN_BUS_ANY); + if (!domain) { + pr_err("%pfwP: Failed to find INTC domain\n", fwnode); + rc = -ENOENT; + goto out_ids_cleanup; + } + imsic_parent_irq = irq_create_mapping(domain, RV_IRQ_EXT); + if (!imsic_parent_irq) { + pr_err("%pfwP: Failed to create INTC mapping\n", fwnode); + rc = -ENOENT; + goto out_ids_cleanup; + } + irq_set_chained_handler(imsic_parent_irq, imsic_handle_irq); + + /* Initialize IPI domain */ + rc = imsic_ipi_domain_init(priv); + if (rc) { + pr_err("%pfwP: Failed to initialize IPI domain\n", fwnode); + goto out_ids_cleanup; + } + + /* Initialize IRQ and MSI domains */ + rc = imsic_irq_domains_init(priv, fwnode); + if (rc) { + pr_err("%pfwP: Failed to initialize IRQ and MSI domains\n", + fwnode); + goto out_ipi_domain_cleanup; + } + + /* + * Setup cpuhp state + * + * Don't disable per-CPU IMSIC file when CPU goes offline + * because this affects IPI and the masking/unmasking of + * virtual IPIs is done via generic IPI-Mux + */ + cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, + "irqchip/riscv/imsic:starting", + imsic_starting_cpu, NULL); + + /* + * Only one IMSIC instance allowed in a platform for clean + * implementation of SMP IRQ affinity and per-CPU IPIs. + * + * This means on a multi-socket (or multi-die) platform we + * will have multiple MMIO regions for one IMSIC instance. 
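+ * + * Each MMIO region simply provides the per-CPU IMSIC interrupt files of a + * subset of harts; the per-regset address checks above ensure that every + * region decodes to the same global base address.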
+ */ + imsic_init_done = true; + + pr_info("%pfwP: hart-index-bits: %d, guest-index-bits: %d\n", + fwnode, global->hart_index_bits, global->guest_index_bits); + pr_info("%pfwP: group-index-bits: %d, group-index-shift: %d\n", + fwnode, global->group_index_bits, global->group_index_shift); + pr_info("%pfwP: mapped %d interrupts for %d CPUs at %pa\n", + fwnode, global->nr_ids, nr_handlers, &global->base_addr); + if (priv->ipi_lsync_id) + pr_info("%pfwP: enable/disable sync using interrupt %d\n", + fwnode, priv->ipi_lsync_id); + if (priv->ipi_id) + pr_info("%pfwP: providing IPIs using interrupt %d\n", + fwnode, priv->ipi_id); + + return 0; + +out_ipi_domain_cleanup: + imsic_ipi_domain_cleanup(priv); +out_ids_cleanup: + imsic_ids_cleanup(priv); +out_iounmap: + for (i = 0; i < priv->num_mmios; i++) { + if (priv->mmios[i].va) + iounmap(priv->mmios[i].va); + } + kfree(priv->mmios); +out_free_priv: + kfree(priv); + return rc; +} + +static u32 __init imsic_dt_nr_parent_irq(struct fwnode_handle *fwnode, + void *fwopaque) +{ + return of_irq_count(to_of_node(fwnode)); +} + +static int __init imsic_dt_parent_hartid(struct fwnode_handle *fwnode, + void *fwopaque, u32 index, + unsigned long *out_hartid) +{ + struct of_phandle_args parent; + int rc; + + rc = of_irq_parse_one(to_of_node(fwnode), index, &parent); + if (rc) + return rc; + + /* + * Skip interrupts other than external interrupts for + * current privilege level. + */ + if (parent.args[0] != RV_IRQ_EXT) + return -EINVAL; + + return riscv_of_parent_hartid(parent.np, out_hartid); +} + +static u32 __init imsic_dt_nr_mmio(struct fwnode_handle *fwnode, + void *fwopaque) +{ + u32 ret = 0; + struct resource res; + + while (!of_address_to_resource(to_of_node(fwnode), ret, &res)) + ret++; + + return ret; +} + +static int __init imsic_mmio_to_resource(struct fwnode_handle *fwnode, + void *fwopaque, u32 index, + struct resource *res) +{ + return of_address_to_resource(to_of_node(fwnode), index, res); +} + +static void __iomem __init *imsic_dt_mmio_map(struct fwnode_handle *fwnode, + void *fwopaque, u32 index) +{ + return of_iomap(to_of_node(fwnode), index); +} + +static int __init imsic_dt_read_u32(struct fwnode_handle *fwnode, + void *fwopaque, const char *prop, + u32 *out_val) +{ + return of_property_read_u32(to_of_node(fwnode), prop, out_val); +} + +static bool __init imsic_dt_read_bool(struct fwnode_handle *fwnode, + void *fwopaque, const char *prop) +{ + return of_property_read_bool(to_of_node(fwnode), prop); +} + +static int __init imsic_dt_init(struct device_node *node, + struct device_node *parent) +{ + struct imsic_fwnode_ops ops = { + .nr_parent_irq = imsic_dt_nr_parent_irq, + .parent_hartid = imsic_dt_parent_hartid, + .nr_mmio = imsic_dt_nr_mmio, + .mmio_to_resource = imsic_mmio_to_resource, + .mmio_map = imsic_dt_mmio_map, + .read_u32 = imsic_dt_read_u32, + .read_bool = imsic_dt_read_bool, + }; + + return imsic_init(&ops, &node->fwnode, NULL); +} +IRQCHIP_DECLARE(riscv_imsic, "riscv,imsics", imsic_dt_init); diff --git a/include/linux/irqchip/riscv-imsic.h b/include/linux/irqchip/riscv-imsic.h new file mode 100644 index 000000000000..5d1387adc0ba --- /dev/null +++ b/include/linux/irqchip/riscv-imsic.h @@ -0,0 +1,92 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2021 Western Digital Corporation or its affiliates. + * Copyright (C) 2022 Ventana Micro Systems Inc. 
+ */ +#ifndef __LINUX_IRQCHIP_RISCV_IMSIC_H +#define __LINUX_IRQCHIP_RISCV_IMSIC_H + +#include +#include + +#define IMSIC_MMIO_PAGE_SHIFT 12 +#define IMSIC_MMIO_PAGE_SZ (1UL << IMSIC_MMIO_PAGE_SHIFT) +#define IMSIC_MMIO_PAGE_LE 0x00 +#define IMSIC_MMIO_PAGE_BE 0x04 + +#define IMSIC_MIN_ID 63 +#define IMSIC_MAX_ID 2048 + +#define IMSIC_EIDELIVERY 0x70 + +#define IMSIC_EITHRESHOLD 0x72 + +#define IMSIC_EIP0 0x80 +#define IMSIC_EIP63 0xbf +#define IMSIC_EIPx_BITS 32 + +#define IMSIC_EIE0 0xc0 +#define IMSIC_EIE63 0xff +#define IMSIC_EIEx_BITS 32 + +#define IMSIC_FIRST IMSIC_EIDELIVERY +#define IMSIC_LAST IMSIC_EIE63 + +#define IMSIC_MMIO_SETIPNUM_LE 0x00 +#define IMSIC_MMIO_SETIPNUM_BE 0x04 + +struct imsic_global_config { + /* + * MSI Target Address Scheme + * + * XLEN-1 12 0 + * | | | + * ------------------------------------------------------------- + * |xxxxxx|Group Index|xxxxxxxxxxx|HART Index|Guest Index| 0 | + * ------------------------------------------------------------- + */ + + /* Bits representing Guest index, HART index, and Group index */ + u32 guest_index_bits; + u32 hart_index_bits; + u32 group_index_bits; + u32 group_index_shift; + + /* Global base address matching all target MSI addresses */ + phys_addr_t base_addr; + + /* Number of interrupt identities */ + u32 nr_ids; + + /* Number of guest interrupt identities */ + u32 nr_guest_ids; +}; + +struct imsic_local_config { + phys_addr_t msi_pa; + void __iomem *msi_va; +}; + +#ifdef CONFIG_RISCV_IMSIC + +extern const struct imsic_global_config *imsic_get_global_config(void); + +extern const struct imsic_local_config *imsic_get_local_config( + unsigned int cpu); + +#else + +static inline const struct imsic_global_config *imsic_get_global_config(void) +{ + return NULL; +} + +static inline const struct imsic_local_config *imsic_get_local_config( + unsigned int cpu) +{ + return NULL; +} + +#endif + +#endif From patchwork Tue Jan 3 14:14:06 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anup Patel X-Patchwork-Id: 13087756 X-Patchwork-Delegate: palmer@dabbelt.com Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 78F81C53210 for ; Tue, 3 Jan 2023 17:11:27 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=d013rXze14rwLDo8jkkcOncqSGBJPKLL9w0cEizkGW0=; b=cT3NV3qzgfgQxi iqn0rc42IMwy8A4cJ7rGyZIGKTYwASVs8p11uhWM2A3OYmh9qaBjaFQCSQYIZ5oeF/DPCAEr1DLU2 F38UVZri2nPfKmJT1Eh/nt6Wwl9b++HFlLbaRdBfQxgqcvJdwF2Eg7ZsSgD/jm8Q6as65JpVIISgh 8Eutkv0qYkCGpYJ12nnVprW5AIeqfv/4QqULLDBQMZGmt2ad+4pHa1AmtQxGjOeG0My5dqRB1Bm1h iXGao5mX8wciGZkK346E7YgqRBmRU5C4Yqa3lBrP5qh1wvNo64S4SRo3gzs8PF6kUB34waCStIpYm xGBrooYONAAdsBeWQjhg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pCkok-003Gwd-4Y; Tue, 03 Jan 2023 17:11:14 +0000 
From: Anup Patel
To: Palmer Dabbelt , Paul Walmsley , Thomas Gleixner , Marc Zyngier , Rob Herring , Krzysztof Kozlowski
Cc: Atish Patra , Alistair Francis , Anup Patel , linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, devicetree@vger.kernel.org, Anup Patel
Subject: [PATCH v2 6/9] dt-bindings: interrupt-controller: Add RISC-V advanced PLIC
Date: Tue, 3 Jan 2023 19:44:06 +0530
Message-Id: <20230103141409.772298-7-apatel@ventanamicro.com>
In-Reply-To: <20230103141409.772298-1-apatel@ventanamicro.com>
References: <20230103141409.772298-1-apatel@ventanamicro.com>

We add DT bindings document for RISC-V advanced platform level interrupt controller (APLIC) defined by the RISC-V advanced interrupt architecture (AIA) specification.
Signed-off-by: Anup Patel --- .../interrupt-controller/riscv,aplic.yaml | 159 ++++++++++++++++++ 1 file changed, 159 insertions(+) create mode 100644 Documentation/devicetree/bindings/interrupt-controller/riscv,aplic.yaml diff --git a/Documentation/devicetree/bindings/interrupt-controller/riscv,aplic.yaml b/Documentation/devicetree/bindings/interrupt-controller/riscv,aplic.yaml new file mode 100644 index 000000000000..b7f20aad72c2 --- /dev/null +++ b/Documentation/devicetree/bindings/interrupt-controller/riscv,aplic.yaml @@ -0,0 +1,159 @@ +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/interrupt-controller/riscv,aplic.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: RISC-V Advanced Platform Level Interrupt Controller (APLIC) + +maintainers: + - Anup Patel + +description: + The RISC-V advanced interrupt architecture (AIA) defines an advanced + platform level interrupt controller (APLIC) for handling wired interrupts + in a RISC-V platform. The RISC-V AIA specification can be found at + https://github.com/riscv/riscv-aia. + + The RISC-V APLIC is implemented as hierarchical APLIC domains where all + interrupt sources connect to the root domain which can further delegate + interrupts to child domains. There is one device tree node for each APLIC + domain. + +allOf: + - $ref: /schemas/interrupt-controller.yaml# + +properties: + compatible: + items: + - enum: + - riscv,qemu-aplic + - const: riscv,aplic + + reg: + maxItems: 1 + + interrupt-controller: true + + "#interrupt-cells": + const: 2 + + interrupts-extended: + minItems: 1 + maxItems: 16384 + description: + Given APLIC domain directly injects external interrupts to a set of + RISC-V HARTS (or CPUs). Each node pointed to should be a riscv,cpu-intc + node, which has a riscv node (i.e. RISC-V HART) as parent. + + msi-parent: + description: + Given APLIC domain forwards wired interrupts as MSIs to a AIA incoming + message signaled interrupt controller (IMSIC). This property should be + considered only when the interrupts-extended property is absent. + + riscv,num-sources: + $ref: /schemas/types.yaml#/definitions/uint32 + minimum: 1 + maximum: 1023 + description: + Specifies how many wired interrupts are supported by this APLIC domain. + + riscv,children: + $ref: /schemas/types.yaml#/definitions/phandle-array + minItems: 1 + maxItems: 1024 + items: + maxItems: 1 + description: + A list of child APLIC domains for the given APLIC domain. Each child + APLIC domain is assigned child index in increasing order with the + first child APLIC domain assigned child index 0. The APLIC domain + child index is used by firmware to delegate interrupts from the + given APLIC domain to a particular child APLIC domain. + + riscv,delegate: + $ref: /schemas/types.yaml#/definitions/phandle-array + minItems: 1 + maxItems: 1024 + items: + items: + - description: child APLIC domain phandle + - description: first interrupt number (inclusive) + - description: last interrupt number (inclusive) + description: + A interrupt delegation list where each entry is a triple consisting + of child APLIC domain phandle, first interrupt number, and last + interrupt number. The firmware will configure interrupt delegation + registers based on interrupt delegation list. 
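+ For example, riscv,delegate = <&aplic1 1 63> delegates wired interrupt + numbers 1 to 63 to the child domain aplic1 (see Example 1 below).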
+ +required: + - compatible + - reg + - interrupt-controller + - "#interrupt-cells" + - riscv,num-sources + +unevaluatedProperties: false + +examples: + - | + // Example 1 (APLIC domains directly injecting interrupt to HARTs): + + aplic0: interrupt-controller@c000000 { + compatible = "riscv,qemu-aplic", "riscv,aplic"; + interrupts-extended = <&cpu1_intc 11>, + <&cpu2_intc 11>, + <&cpu3_intc 11>, + <&cpu4_intc 11>; + reg = <0xc000000 0x4080>; + interrupt-controller; + #interrupt-cells = <2>; + riscv,num-sources = <63>; + riscv,children = <&aplic1>, <&aplic2>; + riscv,delegate = <&aplic1 1 63>; + }; + + aplic1: interrupt-controller@d000000 { + compatible = "riscv,qemu-aplic", "riscv,aplic"; + interrupts-extended = <&cpu1_intc 9>, + <&cpu2_intc 9>; + reg = <0xd000000 0x4080>; + interrupt-controller; + #interrupt-cells = <2>; + riscv,num-sources = <63>; + }; + + aplic2: interrupt-controller@e000000 { + compatible = "riscv,qemu-aplic", "riscv,aplic"; + interrupts-extended = <&cpu3_intc 9>, + <&cpu4_intc 9>; + reg = <0xe000000 0x4080>; + interrupt-controller; + #interrupt-cells = <2>; + riscv,num-sources = <63>; + }; + + - | + // Example 2 (APLIC domains forwarding interrupts as MSIs): + + aplic3: interrupt-controller@c000000 { + compatible = "riscv,qemu-aplic", "riscv,aplic"; + msi-parent = <&imsic_mlevel>; + reg = <0xc000000 0x4000>; + interrupt-controller; + #interrupt-cells = <2>; + riscv,num-sources = <63>; + riscv,children = <&aplic4>; + riscv,delegate = <&aplic4 1 63>; + }; + + aplic4: interrupt-controller@d000000 { + compatible = "riscv,qemu-aplic", "riscv,aplic"; + msi-parent = <&imsic_slevel>; + reg = <0xd000000 0x4000>; + interrupt-controller; + #interrupt-cells = <2>; + riscv,num-sources = <63>; + }; +... From patchwork Tue Jan 3 14:14:07 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anup Patel X-Patchwork-Id: 13087758 X-Patchwork-Delegate: palmer@dabbelt.com Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 78829C3DA7D for ; Tue, 3 Jan 2023 17:12:46 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=mj9N+dfQMiP1ZsUILpPN8dJ7M1M1siH8AOBVo3RjNiA=; b=ix46XyzqwOMHxy 4qVKlgrJ72SDZlJ4MK9AkCqwVI1k2JqsLOV9axjid8G++AOSSSgiO0EgeLlQjRATBYZbvwooXS4Fk 6554AU7acO/8HHvDNVVGr3IXXc2bhsmISFSF7T5sirhNRQ5rSir8xphh01CULe0zxPupVz5NWm/MP cvD3BiFCP8QHl5AJwaWZTXaP2/GHYRvPvs4+kCrGf5idiuv1TJwIzUf8y6vWy9UnHbgNHuSjw1hEU 1sn0M8Iv4fpUBxujJiKD1jP44p2PKqRP4KLZJF1Cmp26DQzVD3m3APN624R43GxO8jJ2olhP8Jd6Q tFOfmfj1FFt0rQ/5SA1g==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pCkq2-003Hbr-9H; Tue, 03 Jan 2023 17:12:34 +0000 Received: from mail-pg1-x529.google.com ([2607:f8b0:4864:20::529]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 
1pCi42-001rmw-MO for linux-riscv@lists.infradead.org; Tue, 03 Jan 2023 14:14:54 +0000
From: Anup Patel
To: Palmer Dabbelt , Paul Walmsley , Thomas Gleixner , Marc Zyngier , Rob Herring , Krzysztof Kozlowski
Cc: Atish Patra , Alistair Francis , Anup Patel , linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, devicetree@vger.kernel.org, Anup Patel
Subject: [PATCH v2 7/9] irqchip: Add RISC-V advanced PLIC driver
Date: Tue, 3 Jan 2023 19:44:07 +0530
Message-Id: <20230103141409.772298-8-apatel@ventanamicro.com>
In-Reply-To: <20230103141409.772298-1-apatel@ventanamicro.com>
References: <20230103141409.772298-1-apatel@ventanamicro.com>

The RISC-V advanced interrupt architecture (AIA) specification defines a new interrupt controller for managing wired interrupts on a RISC-V platform. This new interrupt controller is referred to as advanced platform-level interrupt controller (APLIC) which can forward wired interrupts to CPUs (or HARTs) as local interrupts OR as message signaled interrupts.
(For more details refer https://github.com/riscv/riscv-aia) This patch adds an irqchip driver for RISC-V APLIC found on RISC-V platforms. Signed-off-by: Anup Patel --- drivers/irqchip/Kconfig | 6 + drivers/irqchip/Makefile | 1 + drivers/irqchip/irq-riscv-aplic.c | 670 ++++++++++++++++++++++++++++ include/linux/irqchip/riscv-aplic.h | 117 +++++ 4 files changed, 794 insertions(+) create mode 100644 drivers/irqchip/irq-riscv-aplic.c create mode 100644 include/linux/irqchip/riscv-aplic.h diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig index a1315189a595..936e59fe1f99 100644 --- a/drivers/irqchip/Kconfig +++ b/drivers/irqchip/Kconfig @@ -547,6 +547,12 @@ config SIFIVE_PLIC select IRQ_DOMAIN_HIERARCHY select GENERIC_IRQ_EFFECTIVE_AFF_MASK if SMP +config RISCV_APLIC + bool + depends on RISCV + select IRQ_DOMAIN_HIERARCHY + select GENERIC_MSI_IRQ_DOMAIN + config RISCV_IMSIC bool depends on RISCV diff --git a/drivers/irqchip/Makefile b/drivers/irqchip/Makefile index 22c723cc6ec8..6154e5bc4228 100644 --- a/drivers/irqchip/Makefile +++ b/drivers/irqchip/Makefile @@ -96,6 +96,7 @@ obj-$(CONFIG_QCOM_MPM) += irq-qcom-mpm.o obj-$(CONFIG_CSKY_MPINTC) += irq-csky-mpintc.o obj-$(CONFIG_CSKY_APB_INTC) += irq-csky-apb-intc.o obj-$(CONFIG_RISCV_INTC) += irq-riscv-intc.o +obj-$(CONFIG_RISCV_APLIC) += irq-riscv-aplic.o obj-$(CONFIG_RISCV_IMSIC) += irq-riscv-imsic.o obj-$(CONFIG_SIFIVE_PLIC) += irq-sifive-plic.o obj-$(CONFIG_IMX_IRQSTEER) += irq-imx-irqsteer.o diff --git a/drivers/irqchip/irq-riscv-aplic.c b/drivers/irqchip/irq-riscv-aplic.c new file mode 100644 index 000000000000..63f20892d7d3 --- /dev/null +++ b/drivers/irqchip/irq-riscv-aplic.c @@ -0,0 +1,670 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2021 Western Digital Corporation or its affiliates. + * Copyright (C) 2022 Ventana Micro Systems Inc. 
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#define APLIC_DEFAULT_PRIORITY 1 +#define APLIC_DISABLE_IDELIVERY 0 +#define APLIC_ENABLE_IDELIVERY 1 +#define APLIC_DISABLE_ITHRESHOLD 1 +#define APLIC_ENABLE_ITHRESHOLD 0 + +struct aplic_msicfg { + phys_addr_t base_ppn; + u32 hhxs; + u32 hhxw; + u32 lhxs; + u32 lhxw; +}; + +struct aplic_idc { + unsigned int hart_index; + void __iomem *regs; + struct aplic_priv *priv; +}; + +struct aplic_priv { + struct device *dev; + u32 nr_irqs; + u32 nr_idcs; + void __iomem *regs; + struct irq_domain *irqdomain; + struct aplic_msicfg msicfg; + struct cpumask lmask; +}; + +static unsigned int aplic_idc_parent_irq; +static DEFINE_PER_CPU(struct aplic_idc, aplic_idcs); + +static void aplic_irq_unmask(struct irq_data *d) +{ + struct aplic_priv *priv = irq_data_get_irq_chip_data(d); + + writel(d->hwirq, priv->regs + APLIC_SETIENUM); + + if (!priv->nr_idcs) + irq_chip_unmask_parent(d); +} + +static void aplic_irq_mask(struct irq_data *d) +{ + struct aplic_priv *priv = irq_data_get_irq_chip_data(d); + + writel(d->hwirq, priv->regs + APLIC_CLRIENUM); + + if (!priv->nr_idcs) + irq_chip_mask_parent(d); +} + +static int aplic_set_type(struct irq_data *d, unsigned int type) +{ + u32 val = 0; + void __iomem *sourcecfg; + struct aplic_priv *priv = irq_data_get_irq_chip_data(d); + + switch (type) { + case IRQ_TYPE_NONE: + val = APLIC_SOURCECFG_SM_INACTIVE; + break; + case IRQ_TYPE_LEVEL_LOW: + val = APLIC_SOURCECFG_SM_LEVEL_LOW; + break; + case IRQ_TYPE_LEVEL_HIGH: + val = APLIC_SOURCECFG_SM_LEVEL_HIGH; + break; + case IRQ_TYPE_EDGE_FALLING: + val = APLIC_SOURCECFG_SM_EDGE_FALL; + break; + case IRQ_TYPE_EDGE_RISING: + val = APLIC_SOURCECFG_SM_EDGE_RISE; + break; + default: + return -EINVAL; + } + + sourcecfg = priv->regs + APLIC_SOURCECFG_BASE; + sourcecfg += (d->hwirq - 1) * sizeof(u32); + writel(val, sourcecfg); + + return 0; +} + +#ifdef CONFIG_SMP +static int aplic_set_affinity(struct irq_data *d, + const struct cpumask *mask_val, bool force) +{ + struct aplic_priv *priv = irq_data_get_irq_chip_data(d); + struct aplic_idc *idc; + unsigned int cpu, val; + struct cpumask amask; + void __iomem *target; + + if (!priv->nr_idcs) + return irq_chip_set_affinity_parent(d, mask_val, force); + + cpumask_and(&amask, &priv->lmask, mask_val); + + if (force) + cpu = cpumask_first(&amask); + else + cpu = cpumask_any_and(&amask, cpu_online_mask); + + if (cpu >= nr_cpu_ids) + return -EINVAL; + + idc = per_cpu_ptr(&aplic_idcs, cpu); + target = priv->regs + APLIC_TARGET_BASE; + target += (d->hwirq - 1) * sizeof(u32); + val = idc->hart_index & APLIC_TARGET_HART_IDX_MASK; + val <<= APLIC_TARGET_HART_IDX_SHIFT; + val |= APLIC_DEFAULT_PRIORITY; + writel(val, target); + + irq_data_update_effective_affinity(d, cpumask_of(cpu)); + + return IRQ_SET_MASK_OK_DONE; +} +#endif + +static struct irq_chip aplic_chip = { + .name = "RISC-V APLIC", + .irq_mask = aplic_irq_mask, + .irq_unmask = aplic_irq_unmask, + .irq_set_type = aplic_set_type, +#ifdef CONFIG_SMP + .irq_set_affinity = aplic_set_affinity, +#endif + .flags = IRQCHIP_SET_TYPE_MASKED | + IRQCHIP_SKIP_SET_WAKE | + IRQCHIP_MASK_ON_SUSPEND, +}; + +static int aplic_irqdomain_translate(struct irq_domain *d, + struct irq_fwspec *fwspec, + unsigned long *hwirq, + unsigned int *type) +{ + if (WARN_ON(fwspec->param_count < 2)) + return -EINVAL; + if (WARN_ON(!fwspec->param[0])) + return -EINVAL; + + *hwirq = 
fwspec->param[0]; + *type = fwspec->param[1] & IRQ_TYPE_SENSE_MASK; + + WARN_ON(*type == IRQ_TYPE_NONE); + + return 0; +} + +static int aplic_irqdomain_msi_alloc(struct irq_domain *domain, + unsigned int virq, unsigned int nr_irqs, + void *arg) +{ + int i, ret; + unsigned int type; + irq_hw_number_t hwirq; + struct irq_fwspec *fwspec = arg; + struct aplic_priv *priv = platform_msi_get_host_data(domain); + + ret = aplic_irqdomain_translate(domain, fwspec, &hwirq, &type); + if (ret) + return ret; + + ret = platform_msi_device_domain_alloc(domain, virq, nr_irqs); + if (ret) + return ret; + + for (i = 0; i < nr_irqs; i++) + irq_domain_set_hwirq_and_chip(domain, virq + i, hwirq + i, + &aplic_chip, priv); + + return 0; +} + +static const struct irq_domain_ops aplic_irqdomain_msi_ops = { + .translate = aplic_irqdomain_translate, + .alloc = aplic_irqdomain_msi_alloc, + .free = platform_msi_device_domain_free, +}; + +static int aplic_irqdomain_idc_alloc(struct irq_domain *domain, + unsigned int virq, unsigned int nr_irqs, + void *arg) +{ + int i, ret; + unsigned int type; + irq_hw_number_t hwirq; + struct irq_fwspec *fwspec = arg; + struct aplic_priv *priv = domain->host_data; + + ret = aplic_irqdomain_translate(domain, fwspec, &hwirq, &type); + if (ret) + return ret; + + for (i = 0; i < nr_irqs; i++) { + irq_domain_set_info(domain, virq + i, hwirq + i, + &aplic_chip, priv, handle_simple_irq, + NULL, NULL); + irq_set_affinity(virq + i, &priv->lmask); + } + + return 0; +} + +static const struct irq_domain_ops aplic_irqdomain_idc_ops = { + .translate = aplic_irqdomain_translate, + .alloc = aplic_irqdomain_idc_alloc, + .free = irq_domain_free_irqs_top, +}; + +static void aplic_init_hw_irqs(struct aplic_priv *priv) +{ + int i; + + /* Disable all interrupts */ + for (i = 0; i <= priv->nr_irqs; i += 32) + writel(-1U, priv->regs + APLIC_CLRIE_BASE + + (i / 32) * sizeof(u32)); + + /* Set interrupt type and default priority for all interrupts */ + for (i = 1; i <= priv->nr_irqs; i++) { + writel(0, priv->regs + APLIC_SOURCECFG_BASE + + (i - 1) * sizeof(u32)); + writel(APLIC_DEFAULT_PRIORITY, + priv->regs + APLIC_TARGET_BASE + + (i - 1) * sizeof(u32)); + } + + /* Clear APLIC domaincfg */ + writel(0, priv->regs + APLIC_DOMAINCFG); +} + +static void aplic_init_hw_global(struct aplic_priv *priv) +{ + u32 val; +#ifdef CONFIG_RISCV_M_MODE + u32 valH; + + if (!priv->nr_idcs) { + val = priv->msicfg.base_ppn; + valH = (priv->msicfg.base_ppn >> 32) & + APLIC_xMSICFGADDRH_BAPPN_MASK; + valH |= (priv->msicfg.lhxw & APLIC_xMSICFGADDRH_LHXW_MASK) + << APLIC_xMSICFGADDRH_LHXW_SHIFT; + valH |= (priv->msicfg.hhxw & APLIC_xMSICFGADDRH_HHXW_MASK) + << APLIC_xMSICFGADDRH_HHXW_SHIFT; + valH |= (priv->msicfg.lhxs & APLIC_xMSICFGADDRH_LHXS_MASK) + << APLIC_xMSICFGADDRH_LHXS_SHIFT; + valH |= (priv->msicfg.hhxs & APLIC_xMSICFGADDRH_HHXS_MASK) + << APLIC_xMSICFGADDRH_HHXS_SHIFT; + writel(val, priv->regs + APLIC_xMSICFGADDR); + writel(valH, priv->regs + APLIC_xMSICFGADDRH); + } +#endif + + /* Setup APLIC domaincfg register */ + val = readl(priv->regs + APLIC_DOMAINCFG); + val |= APLIC_DOMAINCFG_IE; + if (!priv->nr_idcs) + val |= APLIC_DOMAINCFG_DM; + writel(val, priv->regs + APLIC_DOMAINCFG); + if (readl(priv->regs + APLIC_DOMAINCFG) != val) + dev_warn(priv->dev, + "unable to write 0x%x in domaincfg\n", val); +} + +static void aplic_msi_write_msg(struct msi_desc *desc, struct msi_msg *msg) +{ + unsigned int group_index, hart_index, guest_index, val; + struct device *dev = msi_desc_to_dev(desc); + struct aplic_priv *priv = 
dev_get_drvdata(dev); + struct irq_data *d = irq_get_irq_data(desc->irq); + struct aplic_msicfg *mc = &priv->msicfg; + phys_addr_t tppn, tbppn, msg_addr; + void __iomem *target; + + /* For zeroed MSI, simply write zero into the target register */ + if (!msg->address_hi && !msg->address_lo && !msg->data) { + target = priv->regs + APLIC_TARGET_BASE; + target += (d->hwirq - 1) * sizeof(u32); + writel(0, target); + return; + } + + /* Sanity check on message data */ + WARN_ON(msg->data > APLIC_TARGET_EIID_MASK); + + /* Compute target MSI address */ + msg_addr = (((u64)msg->address_hi) << 32) | msg->address_lo; + tppn = msg_addr >> APLIC_xMSICFGADDR_PPN_SHIFT; + + /* Compute target HART Base PPN */ + tbppn = tppn; + tbppn &= ~APLIC_xMSICFGADDR_PPN_HART(mc->lhxs); + tbppn &= ~APLIC_xMSICFGADDR_PPN_LHX(mc->lhxw, mc->lhxs); + tbppn &= ~APLIC_xMSICFGADDR_PPN_HHX(mc->hhxw, mc->hhxs); + WARN_ON(tbppn != mc->base_ppn); + + /* Compute target group and hart indexes */ + group_index = (tppn >> APLIC_xMSICFGADDR_PPN_HHX_SHIFT(mc->hhxs)) & + APLIC_xMSICFGADDR_PPN_HHX_MASK(mc->hhxw); + hart_index = (tppn >> APLIC_xMSICFGADDR_PPN_LHX_SHIFT(mc->lhxs)) & + APLIC_xMSICFGADDR_PPN_LHX_MASK(mc->lhxw); + hart_index |= (group_index << mc->lhxw); + WARN_ON(hart_index > APLIC_TARGET_HART_IDX_MASK); + + /* Compute target guest index */ + guest_index = tppn & APLIC_xMSICFGADDR_PPN_HART(mc->lhxs); + WARN_ON(guest_index > APLIC_TARGET_GUEST_IDX_MASK); + + /* Update IRQ TARGET register */ + target = priv->regs + APLIC_TARGET_BASE; + target += (d->hwirq - 1) * sizeof(u32); + val = (hart_index & APLIC_TARGET_HART_IDX_MASK) + << APLIC_TARGET_HART_IDX_SHIFT; + val |= (guest_index & APLIC_TARGET_GUEST_IDX_MASK) + << APLIC_TARGET_GUEST_IDX_SHIFT; + val |= (msg->data & APLIC_TARGET_EIID_MASK); + writel(val, target); +} + +static int aplic_setup_msi(struct aplic_priv *priv) +{ + struct device *dev = priv->dev; + struct aplic_msicfg *mc = &priv->msicfg; + const struct imsic_global_config *imsic_global; + + /* + * The APLIC outgoing MSI config registers assume target MSI + * controller to be RISC-V AIA IMSIC controller. 
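+ * + * The LHXS, LHXW, HHXW and HHXS fields computed below are therefore taken + * directly from the IMSIC global configuration (guest, hart and group index + * bits) so that outgoing MSI writes land in the intended per-hart IMSIC file.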
+ */ + imsic_global = imsic_get_global_config(); + if (!imsic_global) { + dev_err(dev, "IMSIC global config not found\n"); + return -ENODEV; + } + + /* Find number of guest index bits (LHXS) */ + mc->lhxs = imsic_global->guest_index_bits; + if (APLIC_xMSICFGADDRH_LHXS_MASK < mc->lhxs) { + dev_err(dev, "IMSIC guest index bits big for APLIC LHXS\n"); + return -EINVAL; + } + + /* Find number of HART index bits (LHXW) */ + mc->lhxw = imsic_global->hart_index_bits; + if (APLIC_xMSICFGADDRH_LHXW_MASK < mc->lhxw) { + dev_err(dev, "IMSIC hart index bits big for APLIC LHXW\n"); + return -EINVAL; + } + + /* Find number of group index bits (HHXW) */ + mc->hhxw = imsic_global->group_index_bits; + if (APLIC_xMSICFGADDRH_HHXW_MASK < mc->hhxw) { + dev_err(dev, "IMSIC group index bits big for APLIC HHXW\n"); + return -EINVAL; + } + + /* Find first bit position of group index (HHXS) */ + mc->hhxs = imsic_global->group_index_shift; + if (mc->hhxs < (2 * APLIC_xMSICFGADDR_PPN_SHIFT)) { + dev_err(dev, "IMSIC group index shift should be >= %d\n", + (2 * APLIC_xMSICFGADDR_PPN_SHIFT)); + return -EINVAL; + } + mc->hhxs -= (2 * APLIC_xMSICFGADDR_PPN_SHIFT); + if (APLIC_xMSICFGADDRH_HHXS_MASK < mc->hhxs) { + dev_err(dev, "IMSIC group index shift big for APLIC HHXS\n"); + return -EINVAL; + } + + /* Compute PPN base */ + mc->base_ppn = imsic_global->base_addr >> APLIC_xMSICFGADDR_PPN_SHIFT; + mc->base_ppn &= ~APLIC_xMSICFGADDR_PPN_HART(mc->lhxs); + mc->base_ppn &= ~APLIC_xMSICFGADDR_PPN_LHX(mc->lhxw, mc->lhxs); + mc->base_ppn &= ~APLIC_xMSICFGADDR_PPN_HHX(mc->hhxw, mc->hhxs); + + /* Use all possible CPUs as lmask */ + cpumask_copy(&priv->lmask, cpu_possible_mask); + + return 0; +} + +/* + * To handle an APLIC IDC interrupts, we just read the CLAIMI register + * which will return highest priority pending interrupt and clear the + * pending bit of the interrupt. This process is repeated until CLAIMI + * register return zero value. + */ +static void aplic_idc_handle_irq(struct irq_desc *desc) +{ + struct aplic_idc *idc = this_cpu_ptr(&aplic_idcs); + struct irq_chip *chip = irq_desc_get_chip(desc); + irq_hw_number_t hw_irq; + int irq; + + chained_irq_enter(chip, desc); + + while ((hw_irq = readl(idc->regs + APLIC_IDC_CLAIMI))) { + hw_irq = hw_irq >> APLIC_IDC_TOPI_ID_SHIFT; + irq = irq_find_mapping(idc->priv->irqdomain, hw_irq); + + if (unlikely(irq <= 0)) + pr_warn_ratelimited("hw_irq %lu mapping not found\n", + hw_irq); + else + generic_handle_irq(irq); + } + + chained_irq_exit(chip, desc); +} + +static void aplic_idc_set_delivery(struct aplic_idc *idc, bool en) +{ + u32 de = (en) ? APLIC_ENABLE_IDELIVERY : APLIC_DISABLE_IDELIVERY; + u32 th = (en) ? 
APLIC_ENABLE_ITHRESHOLD : APLIC_DISABLE_ITHRESHOLD; + + /* Priority must be less than threshold for interrupt triggering */ + writel(th, idc->regs + APLIC_IDC_ITHRESHOLD); + + /* Delivery must be set to 1 for interrupt triggering */ + writel(de, idc->regs + APLIC_IDC_IDELIVERY); +} + +static int aplic_idc_dying_cpu(unsigned int cpu) +{ + if (aplic_idc_parent_irq) + disable_percpu_irq(aplic_idc_parent_irq); + + return 0; +} + +static int aplic_idc_starting_cpu(unsigned int cpu) +{ + if (aplic_idc_parent_irq) + enable_percpu_irq(aplic_idc_parent_irq, + irq_get_trigger_type(aplic_idc_parent_irq)); + + return 0; +} + +static int aplic_setup_idc(struct aplic_priv *priv) +{ + int i, j, rc, cpu, setup_count = 0; + struct device_node *node = priv->dev->of_node; + struct device *dev = priv->dev; + struct of_phandle_args parent; + struct irq_domain *domain; + unsigned long hartid; + struct aplic_idc *idc; + u32 val; + + /* Setup per-CPU IDC and target CPU mask */ + for (i = 0; i < priv->nr_idcs; i++) { + if (of_irq_parse_one(node, i, &parent)) { + dev_err(dev, "failed to parse parent for IDC%d.\n", + i); + return -EIO; + } + + /* Skip IDCs which do not connect to external interrupts */ + if (parent.args[0] != RV_IRQ_EXT) + continue; + + rc = riscv_of_parent_hartid(parent.np, &hartid); + if (rc) { + dev_err(dev, "failed to parse hart ID for IDC%d.\n", + i); + return rc; + } + + cpu = riscv_hartid_to_cpuid(hartid); + if (cpu < 0) { + dev_warn(dev, "invalid cpuid for IDC%d\n", i); + continue; + } + + cpumask_set_cpu(cpu, &priv->lmask); + + idc = per_cpu_ptr(&aplic_idcs, cpu); + WARN_ON(idc->priv); + + idc->hart_index = i; + idc->regs = priv->regs + APLIC_IDC_BASE + i * APLIC_IDC_SIZE; + idc->priv = priv; + + aplic_idc_set_delivery(idc, true); + + /* + * Boot cpu might not have APLIC hart_index = 0 so check + * and update target registers of all interrupts. + */ + if (cpu == smp_processor_id() && idc->hart_index) { + val = idc->hart_index & APLIC_TARGET_HART_IDX_MASK; + val <<= APLIC_TARGET_HART_IDX_SHIFT; + val |= APLIC_DEFAULT_PRIORITY; + for (j = 1; j <= priv->nr_irqs; j++) + writel(val, priv->regs + APLIC_TARGET_BASE + + (j - 1) * sizeof(u32)); + } + + setup_count++; + } + + /* Find parent domain and register chained handler */ + domain = irq_find_matching_fwnode(riscv_get_intc_hwnode(), + DOMAIN_BUS_ANY); + if (!aplic_idc_parent_irq && domain) { + aplic_idc_parent_irq = irq_create_mapping(domain, RV_IRQ_EXT); + if (aplic_idc_parent_irq) { + irq_set_chained_handler(aplic_idc_parent_irq, + aplic_idc_handle_irq); + + /* + * Setup CPUHP notifier to enable IDC parent + * interrupt on all CPUs + */ + cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, + "irqchip/riscv/aplic:starting", + aplic_idc_starting_cpu, + aplic_idc_dying_cpu); + } + } + + /* Fail if we were not able to setup IDC for any CPU */ + return (setup_count) ? 
0 : -ENODEV; +} + +static int aplic_probe(struct platform_device *pdev) +{ + struct device_node *node = pdev->dev.of_node; + struct device *dev = &pdev->dev; + struct aplic_priv *priv; + struct resource *regs; + phys_addr_t pa; + int rc; + + regs = platform_get_resource(pdev, IORESOURCE_MEM, 0); + if (!regs) { + dev_err(dev, "cannot find registers resource\n"); + return -ENOENT; + } + + priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); + if (!priv) + return -ENOMEM; + platform_set_drvdata(pdev, priv); + priv->dev = dev; + + priv->regs = devm_ioremap(dev, regs->start, resource_size(regs)); + if (WARN_ON(!priv->regs)) { + dev_err(dev, "failed ioremap registers\n"); + return -EIO; + } + + of_property_read_u32(node, "riscv,num-sources", &priv->nr_irqs); + if (!priv->nr_irqs) { + dev_err(dev, "failed to get number of interrupt sources\n"); + return -EINVAL; + } + + /* Setup initial state APLIC interrupts */ + aplic_init_hw_irqs(priv); + + /* + * Setup IDCs or MSIs based on parent interrupts in DT node + * + * If "msi-parent" DT property is present then we ignore the + * APLIC IDCs which forces the APLIC driver to use MSI mode. + */ + priv->nr_idcs = of_property_read_bool(node, "msi-parent") ? + 0 : of_irq_count(node); + if (priv->nr_idcs) + rc = aplic_setup_idc(priv); + else + rc = aplic_setup_msi(priv); + if (rc) + return rc; + + /* Setup global config and interrupt delivery */ + aplic_init_hw_global(priv); + + /* Create irq domain instance for the APLIC */ + if (priv->nr_idcs) + priv->irqdomain = irq_domain_create_linear( + of_node_to_fwnode(node), + priv->nr_irqs + 1, + &aplic_irqdomain_idc_ops, + priv); + else + priv->irqdomain = platform_msi_create_device_domain(dev, + priv->nr_irqs + 1, + aplic_msi_write_msg, + &aplic_irqdomain_msi_ops, + priv); + if (!priv->irqdomain) { + dev_err(dev, "failed to add irq domain\n"); + return -ENOMEM; + } + + /* Advertise the interrupt controller */ + if (priv->nr_idcs) { + dev_info(dev, "%d interrupts directly connected to %d CPUs\n", + priv->nr_irqs, priv->nr_idcs); + } else { + pa = priv->msicfg.base_ppn << APLIC_xMSICFGADDR_PPN_SHIFT; + dev_info(dev, "%d interrupts forwared to MSI base %pa\n", + priv->nr_irqs, &pa); + } + + return 0; +} + +static int aplic_remove(struct platform_device *pdev) +{ + struct aplic_priv *priv = platform_get_drvdata(pdev); + + irq_domain_remove(priv->irqdomain); + + return 0; +} + +static const struct of_device_id aplic_match[] = { + { .compatible = "riscv,aplic" }, + {} +}; + +static struct platform_driver aplic_driver = { + .driver = { + .name = "riscv-aplic", + .of_match_table = aplic_match, + }, + .probe = aplic_probe, + .remove = aplic_remove, +}; + +static int __init aplic_init(void) +{ + return platform_driver_register(&aplic_driver); +} +core_initcall(aplic_init); diff --git a/include/linux/irqchip/riscv-aplic.h b/include/linux/irqchip/riscv-aplic.h new file mode 100644 index 000000000000..88177eefd411 --- /dev/null +++ b/include/linux/irqchip/riscv-aplic.h @@ -0,0 +1,117 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2021 Western Digital Corporation or its affiliates. + * Copyright (C) 2022 Ventana Micro Systems Inc. 
+ */ +#ifndef __LINUX_IRQCHIP_RISCV_APLIC_H +#define __LINUX_IRQCHIP_RISCV_APLIC_H + +#include + +#define APLIC_MAX_IDC BIT(14) +#define APLIC_MAX_SOURCE 1024 + +#define APLIC_DOMAINCFG 0x0000 +#define APLIC_DOMAINCFG_RDONLY 0x80000000 +#define APLIC_DOMAINCFG_IE BIT(8) +#define APLIC_DOMAINCFG_DM BIT(2) +#define APLIC_DOMAINCFG_BE BIT(0) + +#define APLIC_SOURCECFG_BASE 0x0004 +#define APLIC_SOURCECFG_D BIT(10) +#define APLIC_SOURCECFG_CHILDIDX_MASK 0x000003ff +#define APLIC_SOURCECFG_SM_MASK 0x00000007 +#define APLIC_SOURCECFG_SM_INACTIVE 0x0 +#define APLIC_SOURCECFG_SM_DETACH 0x1 +#define APLIC_SOURCECFG_SM_EDGE_RISE 0x4 +#define APLIC_SOURCECFG_SM_EDGE_FALL 0x5 +#define APLIC_SOURCECFG_SM_LEVEL_HIGH 0x6 +#define APLIC_SOURCECFG_SM_LEVEL_LOW 0x7 + +#define APLIC_MMSICFGADDR 0x1bc0 +#define APLIC_MMSICFGADDRH 0x1bc4 +#define APLIC_SMSICFGADDR 0x1bc8 +#define APLIC_SMSICFGADDRH 0x1bcc + +#ifdef CONFIG_RISCV_M_MODE +#define APLIC_xMSICFGADDR APLIC_MMSICFGADDR +#define APLIC_xMSICFGADDRH APLIC_MMSICFGADDRH +#else +#define APLIC_xMSICFGADDR APLIC_SMSICFGADDR +#define APLIC_xMSICFGADDRH APLIC_SMSICFGADDRH +#endif + +#define APLIC_xMSICFGADDRH_L BIT(31) +#define APLIC_xMSICFGADDRH_HHXS_MASK 0x1f +#define APLIC_xMSICFGADDRH_HHXS_SHIFT 24 +#define APLIC_xMSICFGADDRH_LHXS_MASK 0x7 +#define APLIC_xMSICFGADDRH_LHXS_SHIFT 20 +#define APLIC_xMSICFGADDRH_HHXW_MASK 0x7 +#define APLIC_xMSICFGADDRH_HHXW_SHIFT 16 +#define APLIC_xMSICFGADDRH_LHXW_MASK 0xf +#define APLIC_xMSICFGADDRH_LHXW_SHIFT 12 +#define APLIC_xMSICFGADDRH_BAPPN_MASK 0xfff + +#define APLIC_xMSICFGADDR_PPN_SHIFT 12 + +#define APLIC_xMSICFGADDR_PPN_HART(__lhxs) \ + (BIT(__lhxs) - 1) + +#define APLIC_xMSICFGADDR_PPN_LHX_MASK(__lhxw) \ + (BIT(__lhxw) - 1) +#define APLIC_xMSICFGADDR_PPN_LHX_SHIFT(__lhxs) \ + ((__lhxs)) +#define APLIC_xMSICFGADDR_PPN_LHX(__lhxw, __lhxs) \ + (APLIC_xMSICFGADDR_PPN_LHX_MASK(__lhxw) << \ + APLIC_xMSICFGADDR_PPN_LHX_SHIFT(__lhxs)) + +#define APLIC_xMSICFGADDR_PPN_HHX_MASK(__hhxw) \ + (BIT(__hhxw) - 1) +#define APLIC_xMSICFGADDR_PPN_HHX_SHIFT(__hhxs) \ + ((__hhxs) + APLIC_xMSICFGADDR_PPN_SHIFT) +#define APLIC_xMSICFGADDR_PPN_HHX(__hhxw, __hhxs) \ + (APLIC_xMSICFGADDR_PPN_HHX_MASK(__hhxw) << \ + APLIC_xMSICFGADDR_PPN_HHX_SHIFT(__hhxs)) + +#define APLIC_SETIP_BASE 0x1c00 +#define APLIC_SETIPNUM 0x1cdc + +#define APLIC_CLRIP_BASE 0x1d00 +#define APLIC_CLRIPNUM 0x1ddc + +#define APLIC_SETIE_BASE 0x1e00 +#define APLIC_SETIENUM 0x1edc + +#define APLIC_CLRIE_BASE 0x1f00 +#define APLIC_CLRIENUM 0x1fdc + +#define APLIC_SETIPNUM_LE 0x2000 +#define APLIC_SETIPNUM_BE 0x2004 + +#define APLIC_GENMSI 0x3000 + +#define APLIC_TARGET_BASE 0x3004 +#define APLIC_TARGET_HART_IDX_SHIFT 18 +#define APLIC_TARGET_HART_IDX_MASK 0x3fff +#define APLIC_TARGET_GUEST_IDX_SHIFT 12 +#define APLIC_TARGET_GUEST_IDX_MASK 0x3f +#define APLIC_TARGET_IPRIO_MASK 0xff +#define APLIC_TARGET_EIID_MASK 0x7ff + +#define APLIC_IDC_BASE 0x4000 +#define APLIC_IDC_SIZE 32 + +#define APLIC_IDC_IDELIVERY 0x00 + +#define APLIC_IDC_IFORCE 0x04 + +#define APLIC_IDC_ITHRESHOLD 0x08 + +#define APLIC_IDC_TOPI 0x18 +#define APLIC_IDC_TOPI_ID_SHIFT 16 +#define APLIC_IDC_TOPI_ID_MASK 0x3ff +#define APLIC_IDC_TOPI_PRIO_MASK 0xff + +#define APLIC_IDC_CLAIMI 0x1c + +#endif From patchwork Tue Jan 3 14:14:08 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anup Patel X-Patchwork-Id: 13087759 X-Patchwork-Delegate: palmer@dabbelt.com Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on 
aws-us-west-2-korg-lkml-1.web.codeaurora.org
Received: from anup-ubuntu-vm.localdomain ([171.76.85.241]) by smtp.gmail.com with ESMTPSA (version=TLS1_3
From: Anup Patel
To: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Marc Zyngier, Rob Herring, Krzysztof Kozlowski
Cc: Atish Patra, Alistair Francis, Anup Patel, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, devicetree@vger.kernel.org, Anup Patel
Subject: [PATCH v2 8/9] RISC-V: Select APLIC and IMSIC drivers
Date: Tue, 3 Jan 2023 19:44:08 +0530
Message-Id: <20230103141409.772298-9-apatel@ventanamicro.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230103141409.772298-1-apatel@ventanamicro.com>
References: <20230103141409.772298-1-apatel@ventanamicro.com>
MIME-Version: 1.0

The QEMU virt machine supports AIA emulation, and quite a few RISC-V
platforms with AIA support are under development, so let us select the
APLIC and IMSIC drivers for all RISC-V platforms.

Signed-off-by: Anup Patel
---
 arch/riscv/Kconfig | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index d153e1cd890b..616a27e43827 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -127,6 +127,8 @@ config RISCV
 select OF_IRQ
 select PCI_DOMAINS_GENERIC if PCI
 select PCI_MSI if PCI
+ select RISCV_APLIC
+ select RISCV_IMSIC
 select RISCV_INTC
 select RISCV_TIMER if RISCV_SBI
 select SIFIVE_PLIC

From patchwork Tue Jan 3 14:14:09 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 13087760
X-Patchwork-Delegate: palmer@dabbelt.com
From: Anup Patel
To: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Marc Zyngier, Rob Herring, Krzysztof Kozlowski
Cc: Atish Patra, Alistair Francis, Anup Patel, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, devicetree@vger.kernel.org, Anup Patel
Subject: [PATCH v2 9/9] MAINTAINERS: Add entry for RISC-V AIA drivers
Date: Tue, 3 Jan 2023 19:44:09 +0530
Message-Id: <20230103141409.772298-10-apatel@ventanamicro.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230103141409.772298-1-apatel@ventanamicro.com>
References: <20230103141409.772298-1-apatel@ventanamicro.com>
MIME-Version: 1.0

Add myself as maintainer for the RISC-V AIA drivers, including the
RISC-V INTC driver, which supports both AIA and non-AIA platforms.
Signed-off-by: Anup Patel
---
 MAINTAINERS | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 7f86d02cb427..c5b8eda0780e 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -17942,6 +17942,18 @@ F: drivers/perf/riscv_pmu.c
 F: drivers/perf/riscv_pmu_legacy.c
 F: drivers/perf/riscv_pmu_sbi.c

+RISC-V AIA DRIVERS
+M: Anup Patel
+L: linux-riscv@lists.infradead.org
+S: Maintained
+F: Documentation/devicetree/bindings/interrupt-controller/riscv,aplic.yaml
+F: Documentation/devicetree/bindings/interrupt-controller/riscv,imsic.yaml
+F: drivers/irqchip/irq-riscv-aplic.c
+F: drivers/irqchip/irq-riscv-imsic.c
+F: drivers/irqchip/irq-riscv-intc.c
+F: include/linux/irqchip/riscv-aplic.h
+F: include/linux/irqchip/riscv-imsic.h
+
 RISC-V ARCHITECTURE
 M: Paul Walmsley
 M: Palmer Dabbelt