From patchwork Thu Jun 15 07:33:44 2023
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 13280822
From: Anup Patel
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Andrew Jones, kvm@vger.kernel.org,
 kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
 linux-kernel@vger.kernel.org, Anup Patel, Atish Patra
Subject: [PATCH v3 01/10] RISC-V: KVM: Implement guest external interrupt
 line management
Date: Thu, 15 Jun 2023 13:03:44 +0530
Message-Id:
 <20230615073353.85435-2-apatel@ventanamicro.com>
In-Reply-To: <20230615073353.85435-1-apatel@ventanamicro.com>
References: <20230615073353.85435-1-apatel@ventanamicro.com>
X-Mailing-List: kvm@vger.kernel.org

The RISC-V host will have one guest external interrupt line for each
VS-level IMSIC associated with a HART. The guest external interrupt
lines are per-HART resources, and the hypervisor can use the HGEIE,
HGEIP, and HIE CSRs to manage these guest external interrupt lines.

Signed-off-by: Anup Patel
Reviewed-by: Andrew Jones
Reviewed-by: Atish Patra
---
 arch/riscv/include/asm/kvm_aia.h |  10 ++
 arch/riscv/kvm/aia.c             | 244 +++++++++++++++++++++++++++++++
 arch/riscv/kvm/main.c            |   3 +-
 arch/riscv/kvm/vcpu.c            |   2 +
 4 files changed, 258 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/include/asm/kvm_aia.h b/arch/riscv/include/asm/kvm_aia.h
index 1de0717112e5..0938e0cadf80 100644
--- a/arch/riscv/include/asm/kvm_aia.h
+++ b/arch/riscv/include/asm/kvm_aia.h
@@ -44,10 +44,15 @@ struct kvm_vcpu_aia {
 
 #define irqchip_in_kernel(k)		((k)->arch.aia.in_kernel)
 
+extern unsigned int kvm_riscv_aia_nr_hgei;
 DECLARE_STATIC_KEY_FALSE(kvm_riscv_aia_available);
 #define kvm_riscv_aia_available() \
 	static_branch_unlikely(&kvm_riscv_aia_available)
 
+static inline void kvm_riscv_vcpu_aia_imsic_release(struct kvm_vcpu *vcpu)
+{
+}
+
 #define KVM_RISCV_AIA_IMSIC_TOPEI	(ISELECT_MASK + 1)
 static inline int kvm_riscv_vcpu_aia_imsic_rmw(struct kvm_vcpu *vcpu,
 					       unsigned long isel,
@@ -119,6 +124,11 @@ static inline void kvm_riscv_aia_destroy_vm(struct kvm *kvm)
 {
 }
 
+int kvm_riscv_aia_alloc_hgei(int cpu, struct kvm_vcpu *owner,
+			     void __iomem **hgei_va, phys_addr_t *hgei_pa);
+void kvm_riscv_aia_free_hgei(int cpu, int hgei);
+void kvm_riscv_aia_wakeon_hgei(struct kvm_vcpu *owner, bool enable);
+
 void kvm_riscv_aia_enable(void);
 void kvm_riscv_aia_disable(void);
 int kvm_riscv_aia_init(void);
diff --git a/arch/riscv/kvm/aia.c b/arch/riscv/kvm/aia.c
index 4f1286fc7f17..1cee75a8c883 100644
--- a/arch/riscv/kvm/aia.c
+++ b/arch/riscv/kvm/aia.c
@@ -8,11 +8,47 @@
  */
 
 #include <linux/kernel.h>
+#include <linux/bitops.h>
+#include <linux/irq.h>
+#include <linux/irqdomain.h>
 #include <linux/kvm_host.h>
+#include <linux/percpu.h>
+#include <linux/spinlock.h>
 #include <asm/hwcap.h>
 
+struct aia_hgei_control {
+	raw_spinlock_t lock;
+	unsigned long free_bitmap;
+	struct kvm_vcpu *owners[BITS_PER_LONG];
+};
+static DEFINE_PER_CPU(struct aia_hgei_control, aia_hgei);
+static int hgei_parent_irq;
+
+unsigned int kvm_riscv_aia_nr_hgei;
 DEFINE_STATIC_KEY_FALSE(kvm_riscv_aia_available);
 
+static int aia_find_hgei(struct kvm_vcpu *owner)
+{
+	int i, hgei;
+	unsigned long flags;
+	struct aia_hgei_control *hgctrl = get_cpu_ptr(&aia_hgei);
+
+	raw_spin_lock_irqsave(&hgctrl->lock, flags);
+
+	hgei = -1;
+	for (i = 1; i <= kvm_riscv_aia_nr_hgei; i++) {
+		if (hgctrl->owners[i] == owner) {
+			hgei = i;
+			break;
+		}
+	}
+
+	raw_spin_unlock_irqrestore(&hgctrl->lock, flags);
+
+	put_cpu_ptr(&aia_hgei);
+	return hgei;
+}
+
 static void aia_set_hvictl(bool ext_irq_pending)
 {
 	unsigned long hvictl;
@@ -56,6 +92,7 @@ void kvm_riscv_vcpu_aia_sync_interrupts(struct kvm_vcpu *vcpu)
 
 bool kvm_riscv_vcpu_aia_has_interrupts(struct kvm_vcpu *vcpu, u64 mask)
 {
+	int hgei;
 	unsigned long seip;
 
 	if (!kvm_riscv_aia_available())
@@ -74,6 +111,10 @@ bool kvm_riscv_vcpu_aia_has_interrupts(struct kvm_vcpu *vcpu, u64 mask)
 	if (!kvm_riscv_aia_initialized(vcpu->kvm) || !seip)
 		return false;
 
+	hgei = aia_find_hgei(vcpu);
+	if (hgei > 0)
+		return !!(csr_read(CSR_HGEIP) & BIT(hgei));
+
 	return false;
 }
 
@@ -348,6 +389,143 @@ int kvm_riscv_vcpu_aia_rmw_ireg(struct kvm_vcpu *vcpu, unsigned int csr_num,
 	return KVM_INSN_EXIT_TO_USER_SPACE;
 }
 
+int kvm_riscv_aia_alloc_hgei(int cpu, struct kvm_vcpu *owner,
+			     void __iomem **hgei_va, phys_addr_t *hgei_pa)
+{
+	int ret = -ENOENT;
+	unsigned long flags;
+	struct aia_hgei_control *hgctrl = per_cpu_ptr(&aia_hgei, cpu);
+
+	if (!kvm_riscv_aia_available() || !hgctrl)
+		return -ENODEV;
+
+	raw_spin_lock_irqsave(&hgctrl->lock, flags);
+
+	if (hgctrl->free_bitmap) {
+		ret = __ffs(hgctrl->free_bitmap);
+		hgctrl->free_bitmap &= ~BIT(ret);
+		hgctrl->owners[ret] = owner;
+	}
+
+	raw_spin_unlock_irqrestore(&hgctrl->lock, flags);
+
+	/* TODO: To be updated later by AIA in-kernel irqchip support */
+	if (hgei_va)
+		*hgei_va = NULL;
+	if (hgei_pa)
+		*hgei_pa = 0;
+
+	return ret;
+}
+
+void kvm_riscv_aia_free_hgei(int cpu, int hgei)
+{
+	unsigned long flags;
+	struct aia_hgei_control *hgctrl = per_cpu_ptr(&aia_hgei, cpu);
+
+	if (!kvm_riscv_aia_available() || !hgctrl)
+		return;
+
+	raw_spin_lock_irqsave(&hgctrl->lock, flags);
+
+	if (hgei > 0 && hgei <= kvm_riscv_aia_nr_hgei) {
+		if (!(hgctrl->free_bitmap & BIT(hgei))) {
+			hgctrl->free_bitmap |= BIT(hgei);
+			hgctrl->owners[hgei] = NULL;
+		}
+	}
+
+	raw_spin_unlock_irqrestore(&hgctrl->lock, flags);
+}
+
+void kvm_riscv_aia_wakeon_hgei(struct kvm_vcpu *owner, bool enable)
+{
+	int hgei;
+
+	if (!kvm_riscv_aia_available())
+		return;
+
+	hgei = aia_find_hgei(owner);
+	if (hgei > 0) {
+		if (enable)
+			csr_set(CSR_HGEIE, BIT(hgei));
+		else
+			csr_clear(CSR_HGEIE, BIT(hgei));
+	}
+}
+
+static irqreturn_t hgei_interrupt(int irq, void *dev_id)
+{
+	int i;
+	unsigned long hgei_mask, flags;
+	struct aia_hgei_control *hgctrl = get_cpu_ptr(&aia_hgei);
+
+	hgei_mask = csr_read(CSR_HGEIP) & csr_read(CSR_HGEIE);
+	csr_clear(CSR_HGEIE, hgei_mask);
+
+	raw_spin_lock_irqsave(&hgctrl->lock, flags);
+
+	for_each_set_bit(i, &hgei_mask, BITS_PER_LONG) {
+		if (hgctrl->owners[i])
+			kvm_vcpu_kick(hgctrl->owners[i]);
+	}
+
+	raw_spin_unlock_irqrestore(&hgctrl->lock, flags);
+
+	put_cpu_ptr(&aia_hgei);
+	return IRQ_HANDLED;
+}
+
+static int aia_hgei_init(void)
+{
+	int cpu, rc;
+	struct irq_domain *domain;
+	struct aia_hgei_control *hgctrl;
+
+	/* Initialize per-CPU guest external interrupt line management */
+	for_each_possible_cpu(cpu) {
+		hgctrl = per_cpu_ptr(&aia_hgei, cpu);
+		raw_spin_lock_init(&hgctrl->lock);
+		if (kvm_riscv_aia_nr_hgei) {
+			hgctrl->free_bitmap =
+				BIT(kvm_riscv_aia_nr_hgei + 1) - 1;
+			hgctrl->free_bitmap &= ~BIT(0);
+		} else
+			hgctrl->free_bitmap = 0;
+	}
+
+	/* Find INTC irq domain */
+	domain = irq_find_matching_fwnode(riscv_get_intc_hwnode(),
+					  DOMAIN_BUS_ANY);
+	if (!domain) {
+		kvm_err("unable to find INTC domain\n");
+		return -ENOENT;
+	}
+
+	/* Map per-CPU SGEI interrupt from INTC domain */
+	hgei_parent_irq = irq_create_mapping(domain, IRQ_S_GEXT);
+	if (!hgei_parent_irq) {
+		kvm_err("unable to map SGEI IRQ\n");
+		return -ENOMEM;
+	}
+
+	/* Request per-CPU SGEI interrupt */
+	rc = request_percpu_irq(hgei_parent_irq, hgei_interrupt,
+				"riscv-kvm", &aia_hgei);
+	if (rc) {
+		kvm_err("failed to request SGEI IRQ\n");
+		return rc;
+	}
+
+	return 0;
+}
+
+static void aia_hgei_exit(void)
+{
+	/* Free per-CPU SGEI interrupt */
+	free_percpu_irq(hgei_parent_irq, &aia_hgei);
+}
+
 void kvm_riscv_aia_enable(void)
 {
 	if (!kvm_riscv_aia_available())
@@ -362,21 +540,82 @@ void kvm_riscv_aia_enable(void)
 	csr_write(CSR_HVIPRIO1H, 0x0);
 	csr_write(CSR_HVIPRIO2H, 0x0);
 #endif
+
+	/* Enable per-CPU SGEI interrupt */
+	enable_percpu_irq(hgei_parent_irq,
+			  irq_get_trigger_type(hgei_parent_irq));
+	csr_set(CSR_HIE, BIT(IRQ_S_GEXT));
 }
 
 void kvm_riscv_aia_disable(void)
 {
+	int i;
+	unsigned long flags;
+	struct kvm_vcpu *vcpu;
+	struct aia_hgei_control *hgctrl;
+
 	if (!kvm_riscv_aia_available())
 		return;
+	hgctrl = get_cpu_ptr(&aia_hgei);
+
+	/* Disable per-CPU SGEI interrupt */
+	csr_clear(CSR_HIE, BIT(IRQ_S_GEXT));
+	disable_percpu_irq(hgei_parent_irq);
 
 	aia_set_hvictl(false);
+
+	raw_spin_lock_irqsave(&hgctrl->lock, flags);
+
+	for (i = 0; i <= kvm_riscv_aia_nr_hgei; i++) {
+		vcpu = hgctrl->owners[i];
+		if (!vcpu)
+			continue;
+
+		/*
+		 * We release hgctrl->lock before notifying IMSIC
+		 * so that we don't have lock ordering issues.
+		 */
+		raw_spin_unlock_irqrestore(&hgctrl->lock, flags);
+
+		/* Notify IMSIC */
+		kvm_riscv_vcpu_aia_imsic_release(vcpu);
+
+		/*
+		 * Wakeup VCPU if it was blocked so that it can
+		 * run on other HARTs
+		 */
+		if (csr_read(CSR_HGEIE) & BIT(i)) {
+			csr_clear(CSR_HGEIE, BIT(i));
+			kvm_vcpu_kick(vcpu);
+		}
+
+		raw_spin_lock_irqsave(&hgctrl->lock, flags);
+	}
+
+	raw_spin_unlock_irqrestore(&hgctrl->lock, flags);
+
+	put_cpu_ptr(&aia_hgei);
 }
 
 int kvm_riscv_aia_init(void)
 {
+	int rc;
+
 	if (!riscv_isa_extension_available(NULL, SxAIA))
 		return -ENODEV;
 
+	/* Figure-out number of bits in HGEIE */
+	csr_write(CSR_HGEIE, -1UL);
+	kvm_riscv_aia_nr_hgei = fls_long(csr_read(CSR_HGEIE));
+	csr_write(CSR_HGEIE, 0);
+	if (kvm_riscv_aia_nr_hgei)
+		kvm_riscv_aia_nr_hgei--;
+
+	/* Initialize guest external interrupt line management */
+	rc = aia_hgei_init();
+	if (rc)
+		return rc;
+
 	/* Enable KVM AIA support */
 	static_branch_enable(&kvm_riscv_aia_available);
 
@@ -385,4 +624,9 @@ int kvm_riscv_aia_init(void)
 
 void kvm_riscv_aia_exit(void)
 {
+	if (!kvm_riscv_aia_available())
+		return;
+
+	/* Cleanup the HGEI state */
+	aia_hgei_exit();
 }
diff --git a/arch/riscv/kvm/main.c b/arch/riscv/kvm/main.c
index a7112d583637..48ae0d4b3932 100644
--- a/arch/riscv/kvm/main.c
+++ b/arch/riscv/kvm/main.c
@@ -116,7 +116,8 @@ static int __init riscv_kvm_init(void)
 	kvm_info("VMID %ld bits available\n", kvm_riscv_gstage_vmid_bits());
 
 	if (kvm_riscv_aia_available())
-		kvm_info("AIA available\n");
+		kvm_info("AIA available with %d guest external interrupts\n",
+			 kvm_riscv_aia_nr_hgei);
 
 	rc = kvm_init(sizeof(struct kvm_vcpu), 0, THIS_MODULE);
 	if (rc) {
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 8bd9f2a8a0b9..2db62c6c0d3e 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -250,10 +250,12 @@ int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
 
 void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
 {
+	kvm_riscv_aia_wakeon_hgei(vcpu, true);
 }
 
 void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu)
 {
+	kvm_riscv_aia_wakeon_hgei(vcpu, false);
 }
 
 int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
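
As a standalone illustration (not part of the patch), the free_bitmap
arithmetic in aia_hgei_init() marks lines 1..kvm_riscv_aia_nr_hgei as
usable while keeping line 0 reserved. For example, with a hypothetical
count of three guest external interrupt lines:

#include <stdio.h>

int main(void)
{
	unsigned int nr_hgei = 3;	/* example value; real count comes from probing HGEIE */
	unsigned long free_bitmap;

	/* Same arithmetic as aia_hgei_init(): set bits 0..nr_hgei, then clear bit 0 */
	free_bitmap = (1UL << (nr_hgei + 1)) - 1;
	free_bitmap &= ~1UL;

	printf("free_bitmap = 0x%lx\n", free_bitmap);	/* prints 0xe: lines 1, 2, 3 usable */
	return 0;
}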
From patchwork Thu Jun 15 07:33:45 2023
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 13280824
From: Anup Patel
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Andrew Jones, kvm@vger.kernel.org,
 kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
 linux-kernel@vger.kernel.org, Anup Patel, Atish Patra
Subject: [PATCH v3 02/10] RISC-V: KVM: Add IMSIC related defines
Date: Thu, 15 Jun 2023 13:03:45 +0530
Message-Id: <20230615073353.85435-3-apatel@ventanamicro.com>
In-Reply-To: <20230615073353.85435-1-apatel@ventanamicro.com>
References: <20230615073353.85435-1-apatel@ventanamicro.com>
X-Mailing-List: kvm@vger.kernel.org

We add IMSIC related defines in a separate header so that different
parts of the KVM code can share them. Once the AIA drivers are merged,
we will have a common IMSIC header shared by both the KVM and IRQCHIP
drivers.

Signed-off-by: Anup Patel
Reviewed-by: Atish Patra
---
 arch/riscv/include/asm/kvm_aia_imsic.h | 38 ++++++++++++++++++++++++++
 arch/riscv/kvm/aia.c                   |  3 +-
 2 files changed, 39 insertions(+), 2 deletions(-)
 create mode 100644 arch/riscv/include/asm/kvm_aia_imsic.h

diff --git a/arch/riscv/include/asm/kvm_aia_imsic.h b/arch/riscv/include/asm/kvm_aia_imsic.h
new file mode 100644
index 000000000000..da5881d2bde0
--- /dev/null
+++ b/arch/riscv/include/asm/kvm_aia_imsic.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2021 Western Digital Corporation or its affiliates.
+ * Copyright (C) 2022 Ventana Micro Systems Inc.
+ */
+#ifndef __KVM_RISCV_AIA_IMSIC_H
+#define __KVM_RISCV_AIA_IMSIC_H
+
+#include <linux/types.h>
+#include <asm/csr.h>
+
+#define IMSIC_MMIO_PAGE_SHIFT		12
+#define IMSIC_MMIO_PAGE_SZ		(1UL << IMSIC_MMIO_PAGE_SHIFT)
+#define IMSIC_MMIO_PAGE_LE		0x00
+#define IMSIC_MMIO_PAGE_BE		0x04
+
+#define IMSIC_MIN_ID			63
+#define IMSIC_MAX_ID			2048
+
+#define IMSIC_EIDELIVERY		0x70
+
+#define IMSIC_EITHRESHOLD		0x72
+
+#define IMSIC_EIP0			0x80
+#define IMSIC_EIP63			0xbf
+#define IMSIC_EIPx_BITS			32
+
+#define IMSIC_EIE0			0xc0
+#define IMSIC_EIE63			0xff
+#define IMSIC_EIEx_BITS			32
+
+#define IMSIC_FIRST			IMSIC_EIDELIVERY
+#define IMSIC_LAST			IMSIC_EIE63
+
+#define IMSIC_MMIO_SETIPNUM_LE		0x00
+#define IMSIC_MMIO_SETIPNUM_BE		0x04
+
+#endif
diff --git a/arch/riscv/kvm/aia.c b/arch/riscv/kvm/aia.c
index 1cee75a8c883..c78c06d99e39 100644
--- a/arch/riscv/kvm/aia.c
+++ b/arch/riscv/kvm/aia.c
@@ -15,6 +15,7 @@
 #include <linux/percpu.h>
 #include <linux/spinlock.h>
 #include <asm/hwcap.h>
+#include <asm/kvm_aia_imsic.h>
 
 struct aia_hgei_control {
 	raw_spinlock_t lock;
@@ -364,8 +365,6 @@ static int aia_rmw_iprio(struct kvm_vcpu *vcpu, unsigned int isel,
 	return KVM_INSN_CONTINUE_NEXT_SEPC;
 }
 
-#define IMSIC_FIRST	0x70
-#define IMSIC_LAST	0xff
 int kvm_riscv_vcpu_aia_rmw_ireg(struct kvm_vcpu *vcpu, unsigned int csr_num,
 				unsigned long *val, unsigned long new_val,
 				unsigned long wr_mask)
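
Background for the MMIO defines above (illustrative, not from the
patch): an IMSIC receives an MSI as a plain 32-bit write of the
interrupt identity to its setipnum register. A hedged sketch, assuming
imsic_base is a hypothetical ioremap()'d pointer to an IMSIC interrupt
file page:

#include <linux/io.h>
#include <asm/kvm_aia_imsic.h>

/* Raise interrupt 'id' at the IMSIC by writing its identity, little-endian */
static void example_imsic_send_msi(void __iomem *imsic_base, u32 id)
{
	writel(id, imsic_base + IMSIC_MMIO_SETIPNUM_LE);
}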
From patchwork Thu Jun 15 07:33:46 2023
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 13280820
From: Anup Patel
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Andrew Jones, kvm@vger.kernel.org,
 kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
 linux-kernel@vger.kernel.org, Anup Patel, Atish Patra
Subject: [PATCH v3 03/10] RISC-V: KVM: Add APLIC related defines
Date: Thu, 15 Jun 2023 13:03:46 +0530
Message-Id: <20230615073353.85435-4-apatel@ventanamicro.com>
In-Reply-To: <20230615073353.85435-1-apatel@ventanamicro.com>
References: <20230615073353.85435-1-apatel@ventanamicro.com>
X-Mailing-List: kvm@vger.kernel.org

We add APLIC related defines in a separate header so that different
parts of the KVM code can share them. Once the AIA drivers are merged,
we will have a common APLIC header shared by both the KVM and IRQCHIP
drivers.

Signed-off-by: Anup Patel
Reviewed-by: Atish Patra
---
 arch/riscv/include/asm/kvm_aia_aplic.h | 58 ++++++++++++++++++++++++++
 1 file changed, 58 insertions(+)
 create mode 100644 arch/riscv/include/asm/kvm_aia_aplic.h

diff --git a/arch/riscv/include/asm/kvm_aia_aplic.h b/arch/riscv/include/asm/kvm_aia_aplic.h
new file mode 100644
index 000000000000..6dd1a4809ec1
--- /dev/null
+++ b/arch/riscv/include/asm/kvm_aia_aplic.h
@@ -0,0 +1,58 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2021 Western Digital Corporation or its affiliates.
+ * Copyright (C) 2022 Ventana Micro Systems Inc.
+ */
+#ifndef __KVM_RISCV_AIA_IMSIC_H
+#define __KVM_RISCV_AIA_IMSIC_H
+
+#include <linux/bitops.h>
+
+#define APLIC_MAX_IDC			BIT(14)
+#define APLIC_MAX_SOURCE		1024
+
+#define APLIC_DOMAINCFG			0x0000
+#define APLIC_DOMAINCFG_RDONLY		0x80000000
+#define APLIC_DOMAINCFG_IE		BIT(8)
+#define APLIC_DOMAINCFG_DM		BIT(2)
+#define APLIC_DOMAINCFG_BE		BIT(0)
+
+#define APLIC_SOURCECFG_BASE		0x0004
+#define APLIC_SOURCECFG_D		BIT(10)
+#define APLIC_SOURCECFG_CHILDIDX_MASK	0x000003ff
+#define APLIC_SOURCECFG_SM_MASK		0x00000007
+#define APLIC_SOURCECFG_SM_INACTIVE	0x0
+#define APLIC_SOURCECFG_SM_DETACH	0x1
+#define APLIC_SOURCECFG_SM_EDGE_RISE	0x4
+#define APLIC_SOURCECFG_SM_EDGE_FALL	0x5
+#define APLIC_SOURCECFG_SM_LEVEL_HIGH	0x6
+#define APLIC_SOURCECFG_SM_LEVEL_LOW	0x7
+
+#define APLIC_IRQBITS_PER_REG		32
+
+#define APLIC_SETIP_BASE		0x1c00
+#define APLIC_SETIPNUM			0x1cdc
+
+#define APLIC_CLRIP_BASE		0x1d00
+#define APLIC_CLRIPNUM			0x1ddc
+
+#define APLIC_SETIE_BASE		0x1e00
+#define APLIC_SETIENUM			0x1edc
+
+#define APLIC_CLRIE_BASE		0x1f00
+#define APLIC_CLRIENUM			0x1fdc
+
+#define APLIC_SETIPNUM_LE		0x2000
+#define APLIC_SETIPNUM_BE		0x2004
+
+#define APLIC_GENMSI			0x3000
+
+#define APLIC_TARGET_BASE		0x3004
+#define APLIC_TARGET_HART_IDX_SHIFT	18
+#define APLIC_TARGET_HART_IDX_MASK	0x3fff
+#define APLIC_TARGET_GUEST_IDX_SHIFT	12
+#define APLIC_TARGET_GUEST_IDX_MASK	0x3f
+#define APLIC_TARGET_IPRIO_MASK		0xff
+#define APLIC_TARGET_EIID_MASK		0x7ff
+
+#endif
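
As background (illustrative, not part of the patch), in MSI delivery
mode an APLIC target register packs the destination HART index, guest
index, and external interrupt ID. A hedged sketch of the encoding,
using the defines above and a hypothetical helper name:

#include <linux/types.h>
#include <asm/kvm_aia_aplic.h>

/* Compose an MSI-mode target value for a wired source (illustrative only) */
static inline u32 example_aplic_msi_target(u32 hart_idx, u32 guest_idx, u32 eiid)
{
	return ((hart_idx & APLIC_TARGET_HART_IDX_MASK) <<
					APLIC_TARGET_HART_IDX_SHIFT) |
	       ((guest_idx & APLIC_TARGET_GUEST_IDX_MASK) <<
					APLIC_TARGET_GUEST_IDX_SHIFT) |
	       (eiid & APLIC_TARGET_EIID_MASK);
}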
From patchwork Thu Jun 15 07:33:47 2023
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 13280818
From: Anup Patel
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Andrew Jones, kvm@vger.kernel.org,
 kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
 linux-kernel@vger.kernel.org, Anup Patel, Atish Patra
Subject: [PATCH v3 04/10] RISC-V: KVM: Set kvm_riscv_aia_nr_hgei to zero
Date: Thu, 15 Jun 2023 13:03:47 +0530
Message-Id: <20230615073353.85435-5-apatel@ventanamicro.com>
In-Reply-To: <20230615073353.85435-1-apatel@ventanamicro.com>
References: <20230615073353.85435-1-apatel@ventanamicro.com>
X-Mailing-List: kvm@vger.kernel.org

We hard-code kvm_riscv_aia_nr_hgei to zero until IMSIC HW guest file
support is added in KVM RISC-V.

Signed-off-by: Anup Patel
Reviewed-by: Atish Patra
---
 arch/riscv/kvm/aia.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/kvm/aia.c b/arch/riscv/kvm/aia.c
index c78c06d99e39..3f97575707eb 100644
--- a/arch/riscv/kvm/aia.c
+++ b/arch/riscv/kvm/aia.c
@@ -408,7 +408,7 @@ int kvm_riscv_aia_alloc_hgei(int cpu, struct kvm_vcpu *owner,
 
 	raw_spin_unlock_irqrestore(&hgctrl->lock, flags);
 
-	/* TODO: To be updated later by AIA in-kernel irqchip support */
+	/* TODO: To be updated later by AIA IMSIC HW guest file support */
 	if (hgei_va)
 		*hgei_va = NULL;
 	if (hgei_pa)
@@ -610,6 +610,14 @@ int kvm_riscv_aia_init(void)
 	if (kvm_riscv_aia_nr_hgei)
 		kvm_riscv_aia_nr_hgei--;
 
+	/*
+	 * Number of usable HGEI lines should be minimum of per-HART
+	 * IMSIC guest files and number of bits in HGEIE
+	 *
+	 * TODO: To be updated later by AIA IMSIC HW guest file support
+	 */
+	kvm_riscv_aia_nr_hgei = 0;
+
 	/* Initialize guest external interrupt line management */
 	rc = aia_hgei_init();
 	if (rc)
From patchwork Thu Jun 15 07:33:48 2023
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 13280821
From: Anup Patel
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Andrew Jones, kvm@vger.kernel.org,
 kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
 linux-kernel@vger.kernel.org, Anup Patel, Atish Patra
Subject: [PATCH v3 05/10] RISC-V: KVM: Skeletal in-kernel AIA irqchip support
Date: Thu, 15 Jun 2023 13:03:48 +0530
Message-Id: <20230615073353.85435-6-apatel@ventanamicro.com>
In-Reply-To: <20230615073353.85435-1-apatel@ventanamicro.com>
References: <20230615073353.85435-1-apatel@ventanamicro.com>
X-Mailing-List: kvm@vger.kernel.org

To incrementally implement in-kernel AIA irqchip support, we first add
minimal skeletal support which only compiles but does not provide any
functionality.

Signed-off-by: Anup Patel
Reviewed-by: Atish Patra
---
 arch/riscv/include/asm/kvm_aia.h  |  20 +++++
 arch/riscv/include/asm/kvm_host.h |   4 +
 arch/riscv/include/uapi/asm/kvm.h |   4 +
 arch/riscv/kvm/Kconfig            |   4 +
 arch/riscv/kvm/aia.c              |   8 ++
 arch/riscv/kvm/vm.c               | 118 ++++++++++++++++++++++++++++++
 6 files changed, 158 insertions(+)

diff --git a/arch/riscv/include/asm/kvm_aia.h b/arch/riscv/include/asm/kvm_aia.h
index 0938e0cadf80..3bc0a0e47a15 100644
--- a/arch/riscv/include/asm/kvm_aia.h
+++ b/arch/riscv/include/asm/kvm_aia.h
@@ -45,6 +45,7 @@ struct kvm_vcpu_aia {
 #define irqchip_in_kernel(k)		((k)->arch.aia.in_kernel)
 
 extern unsigned int kvm_riscv_aia_nr_hgei;
+extern unsigned int kvm_riscv_aia_max_ids;
 DECLARE_STATIC_KEY_FALSE(kvm_riscv_aia_available);
 #define kvm_riscv_aia_available() \
 	static_branch_unlikely(&kvm_riscv_aia_available)
@@ -116,6 +117,25 @@ static inline void kvm_riscv_vcpu_aia_deinit(struct kvm_vcpu *vcpu)
 {
 }
 
+static inline int kvm_riscv_aia_inject_msi_by_id(struct kvm *kvm,
+						 u32 hart_index,
+						 u32 guest_index, u32 iid)
+{
+	return 0;
+}
+
+static inline int kvm_riscv_aia_inject_msi(struct kvm *kvm,
+					   struct kvm_msi *msi)
+{
+	return 0;
+}
+
+static inline int kvm_riscv_aia_inject_irq(struct kvm *kvm,
+					   unsigned int irq, bool level)
+{
+	return 0;
+}
+
 static inline void kvm_riscv_aia_init_vm(struct kvm *kvm)
 {
 }
diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index ee0acccb1d3b..871432586a63 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -27,6 +27,8 @@
 
 #define KVM_VCPU_MAX_FEATURES		0
 
+#define KVM_IRQCHIP_NUM_PINS		1024
+
 #define KVM_REQ_SLEEP \
 	KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
 #define KVM_REQ_VCPU_RESET		KVM_ARCH_REQ(1)
@@ -318,6 +320,8 @@ int kvm_riscv_gstage_vmid_init(struct kvm *kvm);
 bool kvm_riscv_gstage_vmid_ver_changed(struct kvm_vmid *vmid);
 void kvm_riscv_gstage_vmid_update(struct kvm_vcpu *vcpu);
 
+int kvm_riscv_setup_default_irq_routing(struct kvm *kvm, u32 lines);
+
 void __kvm_riscv_unpriv_trap(void);
 
 unsigned long kvm_riscv_vcpu_unpriv_read(struct kvm_vcpu *vcpu,
diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h
index f92790c9481a..332d4a274891 100644
--- a/arch/riscv/include/uapi/asm/kvm.h
+++ b/arch/riscv/include/uapi/asm/kvm.h
@@ -15,6 +15,7 @@
 #include <asm/bitsperlong.h>
 #include <asm/ptrace.h>
 
+#define __KVM_HAVE_IRQ_LINE
 #define __KVM_HAVE_READONLY_MEM
 
 #define KVM_COALESCED_MMIO_PAGE_OFFSET 1
@@ -203,6 +204,9 @@ enum KVM_RISCV_SBI_EXT_ID {
 #define KVM_REG_RISCV_SBI_MULTI_REG_LAST	\
 		KVM_REG_RISCV_SBI_MULTI_REG(KVM_RISCV_SBI_EXT_MAX - 1)
 
+/* One single KVM irqchip, ie. the AIA */
+#define KVM_NR_IRQCHIPS		1
+
 #endif
 
 #endif /* __LINUX_KVM_RISCV_H */
diff --git a/arch/riscv/kvm/Kconfig b/arch/riscv/kvm/Kconfig
index 28891e583259..dfc237d7875b 100644
--- a/arch/riscv/kvm/Kconfig
+++ b/arch/riscv/kvm/Kconfig
@@ -21,6 +21,10 @@ config KVM
 	tristate "Kernel-based Virtual Machine (KVM) support (EXPERIMENTAL)"
 	depends on RISCV_SBI && MMU
 	select HAVE_KVM_EVENTFD
+	select HAVE_KVM_IRQCHIP
+	select HAVE_KVM_IRQFD
+	select HAVE_KVM_IRQ_ROUTING
+	select HAVE_KVM_MSI
 	select HAVE_KVM_VCPU_ASYNC_IOCTL
 	select KVM_GENERIC_DIRTYLOG_READ_PROTECT
 	select KVM_GENERIC_HARDWARE_ENABLING
diff --git a/arch/riscv/kvm/aia.c b/arch/riscv/kvm/aia.c
index 3f97575707eb..18c442c15ff2 100644
--- a/arch/riscv/kvm/aia.c
+++ b/arch/riscv/kvm/aia.c
@@ -26,6 +26,7 @@ static DEFINE_PER_CPU(struct aia_hgei_control, aia_hgei);
 static int hgei_parent_irq;
 
 unsigned int kvm_riscv_aia_nr_hgei;
+unsigned int kvm_riscv_aia_max_ids;
 DEFINE_STATIC_KEY_FALSE(kvm_riscv_aia_available);
 
 static int aia_find_hgei(struct kvm_vcpu *owner)
@@ -618,6 +619,13 @@ int kvm_riscv_aia_init(void)
 	 */
 	kvm_riscv_aia_nr_hgei = 0;
 
+	/*
+	 * Find number of guest MSI IDs
+	 *
+	 * TODO: To be updated later by AIA IMSIC HW guest file support
+	 */
+	kvm_riscv_aia_max_ids = IMSIC_MAX_ID;
+
 	/* Initialize guest external interrupt line management */
 	rc = aia_hgei_init();
 	if (rc)
diff --git a/arch/riscv/kvm/vm.c b/arch/riscv/kvm/vm.c
index 6ef15f78e80f..7e2b50c692c1 100644
--- a/arch/riscv/kvm/vm.c
+++ b/arch/riscv/kvm/vm.c
@@ -55,11 +55,129 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
 	kvm_riscv_aia_destroy_vm(kvm);
 }
 
+int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irql,
+			  bool line_status)
+{
+	if (!irqchip_in_kernel(kvm))
+		return -ENXIO;
+
+	return kvm_riscv_aia_inject_irq(kvm, irql->irq, irql->level);
+}
+
+int kvm_set_msi(struct kvm_kernel_irq_routing_entry *e,
+		struct kvm *kvm, int irq_source_id,
+		int level, bool line_status)
+{
+	struct kvm_msi msi;
+
+	if (!level)
+		return -1;
+
+	msi.address_lo = e->msi.address_lo;
+	msi.address_hi = e->msi.address_hi;
+	msi.data = e->msi.data;
+	msi.flags = e->msi.flags;
+	msi.devid = e->msi.devid;
+
+	return kvm_riscv_aia_inject_msi(kvm, &msi);
+}
+
+static int kvm_riscv_set_irq(struct kvm_kernel_irq_routing_entry *e,
+			     struct kvm *kvm, int irq_source_id,
+			     int level, bool line_status)
+{
+	return kvm_riscv_aia_inject_irq(kvm, e->irqchip.pin, level);
+}
+
+int kvm_riscv_setup_default_irq_routing(struct kvm *kvm, u32 lines)
+{
+	struct kvm_irq_routing_entry *ents;
+	int i, rc;
+
+	ents = kcalloc(lines, sizeof(*ents), GFP_KERNEL);
+	if (!ents)
+		return -ENOMEM;
+
+	for (i = 0; i < lines; i++) {
+		ents[i].gsi = i;
+		ents[i].type = KVM_IRQ_ROUTING_IRQCHIP;
+		ents[i].u.irqchip.irqchip = 0;
+		ents[i].u.irqchip.pin = i;
+	}
+	rc = kvm_set_irq_routing(kvm, ents, lines, 0);
+	kfree(ents);
+
+	return rc;
+}
+
+bool kvm_arch_can_set_irq_routing(struct kvm *kvm)
+{
+	return irqchip_in_kernel(kvm);
+}
+
+int kvm_set_routing_entry(struct kvm *kvm,
+			  struct kvm_kernel_irq_routing_entry *e,
+			  const struct kvm_irq_routing_entry *ue)
+{
+	int r = -EINVAL;
+
+	switch (ue->type) {
+	case KVM_IRQ_ROUTING_IRQCHIP:
+		e->set = kvm_riscv_set_irq;
+		e->irqchip.irqchip = ue->u.irqchip.irqchip;
+		e->irqchip.pin = ue->u.irqchip.pin;
+		if ((e->irqchip.pin >= KVM_IRQCHIP_NUM_PINS) ||
+		    (e->irqchip.irqchip >= KVM_NR_IRQCHIPS))
+			goto out;
+		break;
+	case KVM_IRQ_ROUTING_MSI:
+		e->set = kvm_set_msi;
+		e->msi.address_lo = ue->u.msi.address_lo;
+		e->msi.address_hi = ue->u.msi.address_hi;
+		e->msi.data = ue->u.msi.data;
+		e->msi.flags = ue->flags;
+		e->msi.devid = ue->u.msi.devid;
+		break;
+	default:
+		goto out;
+	}
+	r = 0;
+out:
+	return r;
+}
+
+int kvm_arch_set_irq_inatomic(struct kvm_kernel_irq_routing_entry *e,
+			      struct kvm *kvm, int irq_source_id, int level,
+			      bool line_status)
+{
+	if (!level)
+		return -EWOULDBLOCK;
+
+	switch (e->type) {
+	case KVM_IRQ_ROUTING_MSI:
+		return kvm_set_msi(e, kvm, irq_source_id, level, line_status);
+
+	case KVM_IRQ_ROUTING_IRQCHIP:
+		return kvm_riscv_set_irq(e, kvm, irq_source_id,
+					 level, line_status);
+	}
+
+	return -EWOULDBLOCK;
+}
+
+bool kvm_arch_irqchip_in_kernel(struct kvm *kvm)
+{
+	return irqchip_in_kernel(kvm);
+}
+
 int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 {
 	int r;
 
 	switch (ext) {
+	case KVM_CAP_IRQCHIP:
+		r = kvm_riscv_aia_available();
+		break;
 	case KVM_CAP_IOEVENTFD:
 	case KVM_CAP_DEVICE_CTRL:
 	case KVM_CAP_USER_MEMORY:
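
For illustration (not part of the series): with __KVM_HAVE_IRQ_LINE
wired up as above, user space can assert and deassert a wired interrupt
line through the standard KVM_IRQ_LINE ioctl. A hedged sketch, where
vm_fd is a hypothetical KVM VM file descriptor:

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Sketch: assert then deassert wired input 7 of the in-kernel AIA irqchip */
static int example_toggle_irq(int vm_fd)
{
	struct kvm_irq_level irql = { .irq = 7, .level = 1 };

	if (ioctl(vm_fd, KVM_IRQ_LINE, &irql) < 0)
		return -1;

	irql.level = 0;
	return ioctl(vm_fd, KVM_IRQ_LINE, &irql);
}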
From patchwork Thu Jun 15 07:33:49 2023
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 13280823
From: Anup Patel
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Andrew Jones, kvm@vger.kernel.org,
 kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
 linux-kernel@vger.kernel.org, Anup Patel, Atish Patra
Subject: [PATCH v3 06/10] RISC-V: KVM: Implement device interface for AIA
 irqchip
Date: Thu, 15 Jun 2023 13:03:49 +0530
Message-Id: <20230615073353.85435-7-apatel@ventanamicro.com>
In-Reply-To: <20230615073353.85435-1-apatel@ventanamicro.com>
References: <20230615073353.85435-1-apatel@ventanamicro.com>
X-Mailing-List: kvm@vger.kernel.org

We implement the KVM device interface for the in-kernel AIA irqchip so
that user space can use the KVM device ioctls to create, configure, and
destroy the in-kernel AIA irqchip.
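
For example (an illustrative sketch only, error handling elided), user
space would create the device, pick a virtualization mode, and finally
initialize the irqchip roughly as follows:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Sketch: create the AIA device and pick trap-n-emulate mode */
static int example_create_aia(int vm_fd)
{
	struct kvm_create_device cd = { .type = KVM_DEV_TYPE_RISCV_AIA };
	uint32_t mode = KVM_DEV_RISCV_AIA_MODE_EMUL;
	struct kvm_device_attr attr = {
		.group = KVM_DEV_RISCV_AIA_GRP_CONFIG,
		.attr = KVM_DEV_RISCV_AIA_CONFIG_MODE,
		.addr = (uint64_t)(unsigned long)&mode,
	};

	if (ioctl(vm_fd, KVM_CREATE_DEVICE, &cd) < 0)
		return -1;
	if (ioctl(cd.fd, KVM_SET_DEVICE_ATTR, &attr) < 0)
		return -1;

	/* ...set APLIC/IMSIC addresses via KVM_DEV_RISCV_AIA_GRP_ADDR, then: */
	attr.group = KVM_DEV_RISCV_AIA_GRP_CTRL;
	attr.attr = KVM_DEV_RISCV_AIA_CTRL_INIT;
	attr.addr = 0;
	return ioctl(cd.fd, KVM_SET_DEVICE_ATTR, &attr);
}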
Signed-off-by: Anup Patel Reviewed-by: Atish Patra --- arch/riscv/include/asm/kvm_aia.h | 132 +++++-- arch/riscv/include/uapi/asm/kvm.h | 45 +++ arch/riscv/kvm/Makefile | 1 + arch/riscv/kvm/aia.c | 11 + arch/riscv/kvm/aia_device.c | 623 ++++++++++++++++++++++++++++++ include/uapi/linux/kvm.h | 2 + 6 files changed, 772 insertions(+), 42 deletions(-) create mode 100644 arch/riscv/kvm/aia_device.c diff --git a/arch/riscv/include/asm/kvm_aia.h b/arch/riscv/include/asm/kvm_aia.h index 3bc0a0e47a15..a1281ebc9b92 100644 --- a/arch/riscv/include/asm/kvm_aia.h +++ b/arch/riscv/include/asm/kvm_aia.h @@ -20,6 +20,33 @@ struct kvm_aia { /* In-kernel irqchip initialized */ bool initialized; + + /* Virtualization mode (Emulation, HW Accelerated, or Auto) */ + u32 mode; + + /* Number of MSIs */ + u32 nr_ids; + + /* Number of wired IRQs */ + u32 nr_sources; + + /* Number of group bits in IMSIC address */ + u32 nr_group_bits; + + /* Position of group bits in IMSIC address */ + u32 nr_group_shift; + + /* Number of hart bits in IMSIC address */ + u32 nr_hart_bits; + + /* Number of guest bits in IMSIC address */ + u32 nr_guest_bits; + + /* Guest physical address of APLIC */ + gpa_t aplic_addr; + + /* Internal state of APLIC */ + void *aplic_state; }; struct kvm_vcpu_aia_csr { @@ -38,8 +65,19 @@ struct kvm_vcpu_aia { /* CPU AIA CSR context upon Guest VCPU reset */ struct kvm_vcpu_aia_csr guest_reset_csr; + + /* Guest physical address of IMSIC for this VCPU */ + gpa_t imsic_addr; + + /* HART index of IMSIC extacted from guest physical address */ + u32 hart_index; + + /* Internal state of IMSIC for this VCPU */ + void *imsic_state; }; +#define KVM_RISCV_AIA_UNDEF_ADDR (-1) + #define kvm_riscv_aia_initialized(k) ((k)->arch.aia.initialized) #define irqchip_in_kernel(k) ((k)->arch.aia.in_kernel) @@ -50,10 +88,17 @@ DECLARE_STATIC_KEY_FALSE(kvm_riscv_aia_available); #define kvm_riscv_aia_available() \ static_branch_unlikely(&kvm_riscv_aia_available) +extern struct kvm_device_ops kvm_riscv_aia_device_ops; + static inline void kvm_riscv_vcpu_aia_imsic_release(struct kvm_vcpu *vcpu) { } +static inline int kvm_riscv_vcpu_aia_imsic_update(struct kvm_vcpu *vcpu) +{ + return 1; +} + #define KVM_RISCV_AIA_IMSIC_TOPEI (ISELECT_MASK + 1) static inline int kvm_riscv_vcpu_aia_imsic_rmw(struct kvm_vcpu *vcpu, unsigned long isel, @@ -64,6 +109,41 @@ static inline int kvm_riscv_vcpu_aia_imsic_rmw(struct kvm_vcpu *vcpu, return 0; } +static inline void kvm_riscv_vcpu_aia_imsic_reset(struct kvm_vcpu *vcpu) +{ +} + +static inline int kvm_riscv_vcpu_aia_imsic_inject(struct kvm_vcpu *vcpu, + u32 guest_index, u32 offset, + u32 iid) +{ + return 0; +} + +static inline int kvm_riscv_vcpu_aia_imsic_init(struct kvm_vcpu *vcpu) +{ + return 0; +} + +static inline void kvm_riscv_vcpu_aia_imsic_cleanup(struct kvm_vcpu *vcpu) +{ +} + +static inline int kvm_riscv_aia_aplic_inject(struct kvm *kvm, + u32 source, bool level) +{ + return 0; +} + +static inline int kvm_riscv_aia_aplic_init(struct kvm *kvm) +{ + return 0; +} + +static inline void kvm_riscv_aia_aplic_cleanup(struct kvm *kvm) +{ +} + #ifdef CONFIG_32BIT void kvm_riscv_vcpu_aia_flush_interrupts(struct kvm_vcpu *vcpu); void kvm_riscv_vcpu_aia_sync_interrupts(struct kvm_vcpu *vcpu); @@ -99,50 +179,18 @@ int kvm_riscv_vcpu_aia_rmw_ireg(struct kvm_vcpu *vcpu, unsigned int csr_num, { .base = CSR_SIREG, .count = 1, .func = kvm_riscv_vcpu_aia_rmw_ireg }, \ { .base = CSR_STOPEI, .count = 1, .func = kvm_riscv_vcpu_aia_rmw_topei }, -static inline int kvm_riscv_vcpu_aia_update(struct kvm_vcpu *vcpu) 
-{ - return 1; -} - -static inline void kvm_riscv_vcpu_aia_reset(struct kvm_vcpu *vcpu) -{ -} - -static inline int kvm_riscv_vcpu_aia_init(struct kvm_vcpu *vcpu) -{ - return 0; -} - -static inline void kvm_riscv_vcpu_aia_deinit(struct kvm_vcpu *vcpu) -{ -} - -static inline int kvm_riscv_aia_inject_msi_by_id(struct kvm *kvm, - u32 hart_index, - u32 guest_index, u32 iid) -{ - return 0; -} - -static inline int kvm_riscv_aia_inject_msi(struct kvm *kvm, - struct kvm_msi *msi) -{ - return 0; -} +int kvm_riscv_vcpu_aia_update(struct kvm_vcpu *vcpu); +void kvm_riscv_vcpu_aia_reset(struct kvm_vcpu *vcpu); +int kvm_riscv_vcpu_aia_init(struct kvm_vcpu *vcpu); +void kvm_riscv_vcpu_aia_deinit(struct kvm_vcpu *vcpu); -static inline int kvm_riscv_aia_inject_irq(struct kvm *kvm, - unsigned int irq, bool level) -{ - return 0; -} +int kvm_riscv_aia_inject_msi_by_id(struct kvm *kvm, u32 hart_index, + u32 guest_index, u32 iid); +int kvm_riscv_aia_inject_msi(struct kvm *kvm, struct kvm_msi *msi); +int kvm_riscv_aia_inject_irq(struct kvm *kvm, unsigned int irq, bool level); -static inline void kvm_riscv_aia_init_vm(struct kvm *kvm) -{ -} - -static inline void kvm_riscv_aia_destroy_vm(struct kvm *kvm) -{ -} +void kvm_riscv_aia_init_vm(struct kvm *kvm); +void kvm_riscv_aia_destroy_vm(struct kvm *kvm); int kvm_riscv_aia_alloc_hgei(int cpu, struct kvm_vcpu *owner, void __iomem **hgei_va, phys_addr_t *hgei_pa); diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h index 332d4a274891..047c8fc5bd71 100644 --- a/arch/riscv/include/uapi/asm/kvm.h +++ b/arch/riscv/include/uapi/asm/kvm.h @@ -204,6 +204,51 @@ enum KVM_RISCV_SBI_EXT_ID { #define KVM_REG_RISCV_SBI_MULTI_REG_LAST \ KVM_REG_RISCV_SBI_MULTI_REG(KVM_RISCV_SBI_EXT_MAX - 1) +/* Device Control API: RISC-V AIA */ +#define KVM_DEV_RISCV_APLIC_ALIGN 0x1000 +#define KVM_DEV_RISCV_APLIC_SIZE 0x4000 +#define KVM_DEV_RISCV_APLIC_MAX_HARTS 0x4000 +#define KVM_DEV_RISCV_IMSIC_ALIGN 0x1000 +#define KVM_DEV_RISCV_IMSIC_SIZE 0x1000 + +#define KVM_DEV_RISCV_AIA_GRP_CONFIG 0 +#define KVM_DEV_RISCV_AIA_CONFIG_MODE 0 +#define KVM_DEV_RISCV_AIA_CONFIG_IDS 1 +#define KVM_DEV_RISCV_AIA_CONFIG_SRCS 2 +#define KVM_DEV_RISCV_AIA_CONFIG_GROUP_BITS 3 +#define KVM_DEV_RISCV_AIA_CONFIG_GROUP_SHIFT 4 +#define KVM_DEV_RISCV_AIA_CONFIG_HART_BITS 5 +#define KVM_DEV_RISCV_AIA_CONFIG_GUEST_BITS 6 + +/* + * Modes of RISC-V AIA device: + * 1) EMUL (aka Emulation): Trap-n-emulate IMSIC + * 2) HWACCEL (aka HW Acceleration): Virtualize IMSIC using IMSIC guest files + * 3) AUTO (aka Automatic): Virtualize IMSIC using IMSIC guest files whenever + * available otherwise fallback to trap-n-emulation + */ +#define KVM_DEV_RISCV_AIA_MODE_EMUL 0 +#define KVM_DEV_RISCV_AIA_MODE_HWACCEL 1 +#define KVM_DEV_RISCV_AIA_MODE_AUTO 2 + +#define KVM_DEV_RISCV_AIA_IDS_MIN 63 +#define KVM_DEV_RISCV_AIA_IDS_MAX 2048 +#define KVM_DEV_RISCV_AIA_SRCS_MAX 1024 +#define KVM_DEV_RISCV_AIA_GROUP_BITS_MAX 8 +#define KVM_DEV_RISCV_AIA_GROUP_SHIFT_MIN 24 +#define KVM_DEV_RISCV_AIA_GROUP_SHIFT_MAX 56 +#define KVM_DEV_RISCV_AIA_HART_BITS_MAX 16 +#define KVM_DEV_RISCV_AIA_GUEST_BITS_MAX 8 + +#define KVM_DEV_RISCV_AIA_GRP_ADDR 1 +#define KVM_DEV_RISCV_AIA_ADDR_APLIC 0 +#define KVM_DEV_RISCV_AIA_ADDR_IMSIC(__vcpu) (1 + (__vcpu)) +#define KVM_DEV_RISCV_AIA_ADDR_MAX \ + (1 + KVM_DEV_RISCV_APLIC_MAX_HARTS) + +#define KVM_DEV_RISCV_AIA_GRP_CTRL 2 +#define KVM_DEV_RISCV_AIA_CTRL_INIT 0 + /* One single KVM irqchip, ie. 
the AIA */ #define KVM_NR_IRQCHIPS 1 diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile index 8031b8912a0d..dd69ebe098bd 100644 --- a/arch/riscv/kvm/Makefile +++ b/arch/riscv/kvm/Makefile @@ -27,3 +27,4 @@ kvm-y += vcpu_sbi_hsm.o kvm-y += vcpu_timer.o kvm-$(CONFIG_RISCV_PMU_SBI) += vcpu_pmu.o vcpu_sbi_pmu.o kvm-y += aia.o +kvm-y += aia_device.o diff --git a/arch/riscv/kvm/aia.c b/arch/riscv/kvm/aia.c index 18c442c15ff2..585a3b42c52c 100644 --- a/arch/riscv/kvm/aia.c +++ b/arch/riscv/kvm/aia.c @@ -631,6 +631,14 @@ int kvm_riscv_aia_init(void) if (rc) return rc; + /* Register device operations */ + rc = kvm_register_device_ops(&kvm_riscv_aia_device_ops, + KVM_DEV_TYPE_RISCV_AIA); + if (rc) { + aia_hgei_exit(); + return rc; + } + /* Enable KVM AIA support */ static_branch_enable(&kvm_riscv_aia_available); @@ -642,6 +650,9 @@ void kvm_riscv_aia_exit(void) if (!kvm_riscv_aia_available()) return; + /* Unregister device operations */ + kvm_unregister_device_ops(KVM_DEV_TYPE_RISCV_AIA); + /* Cleanup the HGEI state */ aia_hgei_exit(); } diff --git a/arch/riscv/kvm/aia_device.c b/arch/riscv/kvm/aia_device.c new file mode 100644 index 000000000000..7ab555121872 --- /dev/null +++ b/arch/riscv/kvm/aia_device.c @@ -0,0 +1,623 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2021 Western Digital Corporation or its affiliates. + * Copyright (C) 2022 Ventana Micro Systems Inc. + * + * Authors: + * Anup Patel + */ + +#include +#include +#include +#include + +static void unlock_vcpus(struct kvm *kvm, int vcpu_lock_idx) +{ + struct kvm_vcpu *tmp_vcpu; + + for (; vcpu_lock_idx >= 0; vcpu_lock_idx--) { + tmp_vcpu = kvm_get_vcpu(kvm, vcpu_lock_idx); + mutex_unlock(&tmp_vcpu->mutex); + } +} + +static void unlock_all_vcpus(struct kvm *kvm) +{ + unlock_vcpus(kvm, atomic_read(&kvm->online_vcpus) - 1); +} + +static bool lock_all_vcpus(struct kvm *kvm) +{ + struct kvm_vcpu *tmp_vcpu; + unsigned long c; + + kvm_for_each_vcpu(c, tmp_vcpu, kvm) { + if (!mutex_trylock(&tmp_vcpu->mutex)) { + unlock_vcpus(kvm, c - 1); + return false; + } + } + + return true; +} + +static int aia_create(struct kvm_device *dev, u32 type) +{ + int ret; + unsigned long i; + struct kvm *kvm = dev->kvm; + struct kvm_vcpu *vcpu; + + if (irqchip_in_kernel(kvm)) + return -EEXIST; + + ret = -EBUSY; + if (!lock_all_vcpus(kvm)) + return ret; + + kvm_for_each_vcpu(i, vcpu, kvm) { + if (vcpu->arch.ran_atleast_once) + goto out_unlock; + } + ret = 0; + + kvm->arch.aia.in_kernel = true; + +out_unlock: + unlock_all_vcpus(kvm); + return ret; +} + +static void aia_destroy(struct kvm_device *dev) +{ + kfree(dev); +} + +static int aia_config(struct kvm *kvm, unsigned long type, + u32 *nr, bool write) +{ + struct kvm_aia *aia = &kvm->arch.aia; + + /* Writes can only be done before irqchip is initialized */ + if (write && kvm_riscv_aia_initialized(kvm)) + return -EBUSY; + + switch (type) { + case KVM_DEV_RISCV_AIA_CONFIG_MODE: + if (write) { + switch (*nr) { + case KVM_DEV_RISCV_AIA_MODE_EMUL: + break; + case KVM_DEV_RISCV_AIA_MODE_HWACCEL: + case KVM_DEV_RISCV_AIA_MODE_AUTO: + /* + * HW Acceleration and Auto modes only + * supported on host with non-zero guest + * external interrupts (i.e. non-zero + * VS-level IMSIC pages). 
+ */ + if (!kvm_riscv_aia_nr_hgei) + return -EINVAL; + break; + default: + return -EINVAL; + }; + aia->mode = *nr; + } else + *nr = aia->mode; + break; + case KVM_DEV_RISCV_AIA_CONFIG_IDS: + if (write) { + if ((*nr < KVM_DEV_RISCV_AIA_IDS_MIN) || + (*nr >= KVM_DEV_RISCV_AIA_IDS_MAX) || + ((*nr & KVM_DEV_RISCV_AIA_IDS_MIN) != + KVM_DEV_RISCV_AIA_IDS_MIN) || + (kvm_riscv_aia_max_ids <= *nr)) + return -EINVAL; + aia->nr_ids = *nr; + } else + *nr = aia->nr_ids; + break; + case KVM_DEV_RISCV_AIA_CONFIG_SRCS: + if (write) { + if ((*nr >= KVM_DEV_RISCV_AIA_SRCS_MAX) || + (*nr >= kvm_riscv_aia_max_ids)) + return -EINVAL; + aia->nr_sources = *nr; + } else + *nr = aia->nr_sources; + break; + case KVM_DEV_RISCV_AIA_CONFIG_GROUP_BITS: + if (write) { + if (*nr >= KVM_DEV_RISCV_AIA_GROUP_BITS_MAX) + return -EINVAL; + aia->nr_group_bits = *nr; + } else + *nr = aia->nr_group_bits; + break; + case KVM_DEV_RISCV_AIA_CONFIG_GROUP_SHIFT: + if (write) { + if ((*nr < KVM_DEV_RISCV_AIA_GROUP_SHIFT_MIN) || + (*nr >= KVM_DEV_RISCV_AIA_GROUP_SHIFT_MAX)) + return -EINVAL; + aia->nr_group_shift = *nr; + } else + *nr = aia->nr_group_shift; + break; + case KVM_DEV_RISCV_AIA_CONFIG_HART_BITS: + if (write) { + if (*nr >= KVM_DEV_RISCV_AIA_HART_BITS_MAX) + return -EINVAL; + aia->nr_hart_bits = *nr; + } else + *nr = aia->nr_hart_bits; + break; + case KVM_DEV_RISCV_AIA_CONFIG_GUEST_BITS: + if (write) { + if (*nr >= KVM_DEV_RISCV_AIA_GUEST_BITS_MAX) + return -EINVAL; + aia->nr_guest_bits = *nr; + } else + *nr = aia->nr_guest_bits; + break; + default: + return -ENXIO; + }; + + return 0; +} + +static int aia_aplic_addr(struct kvm *kvm, u64 *addr, bool write) +{ + struct kvm_aia *aia = &kvm->arch.aia; + + if (write) { + /* Writes can only be done before irqchip is initialized */ + if (kvm_riscv_aia_initialized(kvm)) + return -EBUSY; + + if (*addr & (KVM_DEV_RISCV_APLIC_ALIGN - 1)) + return -EINVAL; + + aia->aplic_addr = *addr; + } else + *addr = aia->aplic_addr; + + return 0; +} + +static int aia_imsic_addr(struct kvm *kvm, u64 *addr, + unsigned long vcpu_idx, bool write) +{ + struct kvm_vcpu *vcpu; + struct kvm_vcpu_aia *vcpu_aia; + + vcpu = kvm_get_vcpu(kvm, vcpu_idx); + if (!vcpu) + return -EINVAL; + vcpu_aia = &vcpu->arch.aia_context; + + if (write) { + /* Writes can only be done before irqchip is initialized */ + if (kvm_riscv_aia_initialized(kvm)) + return -EBUSY; + + if (*addr & (KVM_DEV_RISCV_IMSIC_ALIGN - 1)) + return -EINVAL; + } + + mutex_lock(&vcpu->mutex); + if (write) + vcpu_aia->imsic_addr = *addr; + else + *addr = vcpu_aia->imsic_addr; + mutex_unlock(&vcpu->mutex); + + return 0; +} + +static gpa_t aia_imsic_ppn(struct kvm_aia *aia, gpa_t addr) +{ + u32 h, l; + gpa_t mask = 0; + + h = aia->nr_hart_bits + aia->nr_guest_bits + + IMSIC_MMIO_PAGE_SHIFT - 1; + mask = GENMASK_ULL(h, 0); + + if (aia->nr_group_bits) { + h = aia->nr_group_bits + aia->nr_group_shift - 1; + l = aia->nr_group_shift; + mask |= GENMASK_ULL(h, l); + } + + return (addr & ~mask) >> IMSIC_MMIO_PAGE_SHIFT; +} + +static u32 aia_imsic_hart_index(struct kvm_aia *aia, gpa_t addr) +{ + u32 hart, group = 0; + + hart = (addr >> (aia->nr_guest_bits + IMSIC_MMIO_PAGE_SHIFT)) & + GENMASK_ULL(aia->nr_hart_bits - 1, 0); + if (aia->nr_group_bits) + group = (addr >> aia->nr_group_shift) & + GENMASK_ULL(aia->nr_group_bits - 1, 0); + + return (group << aia->nr_hart_bits) | hart; +} + +static int aia_init(struct kvm *kvm) +{ + int ret, i; + unsigned long idx; + struct kvm_vcpu *vcpu; + struct kvm_vcpu_aia *vaia; + struct kvm_aia *aia = &kvm->arch.aia; + gpa_t 
base_ppn = KVM_RISCV_AIA_UNDEF_ADDR; + + /* Irqchip can be initialized only once */ + if (kvm_riscv_aia_initialized(kvm)) + return -EBUSY; + + /* Bail out if we are in the middle of creating a VCPU */ + if (kvm->created_vcpus != atomic_read(&kvm->online_vcpus)) + return -EBUSY; + + /* Number of sources should be less than or equal to the number of IDs */ + if (aia->nr_ids < aia->nr_sources) + return -EINVAL; + + /* APLIC base is required for non-zero number of sources */ + if (aia->nr_sources && aia->aplic_addr == KVM_RISCV_AIA_UNDEF_ADDR) + return -EINVAL; + + /* Initialize APLIC */ + ret = kvm_riscv_aia_aplic_init(kvm); + if (ret) + return ret; + + /* Iterate over each VCPU */ + kvm_for_each_vcpu(idx, vcpu, kvm) { + vaia = &vcpu->arch.aia_context; + + /* IMSIC base is required */ + if (vaia->imsic_addr == KVM_RISCV_AIA_UNDEF_ADDR) { + ret = -EINVAL; + goto fail_cleanup_imsics; + } + + /* All IMSICs should have matching base PPN */ + if (base_ppn == KVM_RISCV_AIA_UNDEF_ADDR) + base_ppn = aia_imsic_ppn(aia, vaia->imsic_addr); + if (base_ppn != aia_imsic_ppn(aia, vaia->imsic_addr)) { + ret = -EINVAL; + goto fail_cleanup_imsics; + } + + /* Update HART index of the IMSIC based on IMSIC base */ + vaia->hart_index = aia_imsic_hart_index(aia, + vaia->imsic_addr); + + /* Initialize IMSIC for this VCPU */ + ret = kvm_riscv_vcpu_aia_imsic_init(vcpu); + if (ret) + goto fail_cleanup_imsics; + } + + /* Set the initialized flag */ + kvm->arch.aia.initialized = true; + + return 0; + +fail_cleanup_imsics: + for (i = idx - 1; i >= 0; i--) { + vcpu = kvm_get_vcpu(kvm, i); + if (!vcpu) + continue; + kvm_riscv_vcpu_aia_imsic_cleanup(vcpu); + } + kvm_riscv_aia_aplic_cleanup(kvm); + return ret; +}
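For reference, the layout that aia_init() enforces on every VCPU's IMSIC address can be written out directly. The sketch below is editorial (not part of the patch), assumes IMSIC_MMIO_PAGE_SHIFT is 12, and uses illustrative names; it is simply the inverse of aia_imsic_ppn() and aia_imsic_hart_index() shown earlier:

#include <stdint.h>

/*
 * Compose a guest physical IMSIC address from its fields; base_ppn must be
 * aligned so that the group/hart/guest fields land in zero bits.
 */
static uint64_t example_imsic_addr(uint64_t base_ppn, uint32_t group,
				   uint32_t hart, uint32_t guest,
				   uint32_t group_shift, uint32_t guest_bits)
{
	return (base_ppn << 12) |
	       ((uint64_t)group << group_shift) |
	       ((uint64_t)hart << (guest_bits + 12)) |
	       ((uint64_t)guest << 12);
}

/* The matching VCPU hart index is then (group << nr_hart_bits) | hart. */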
 +static int aia_set_attr(struct kvm_device *dev, struct kvm_device_attr *attr) +{ + u32 nr; + u64 addr; + int nr_vcpus, r = -ENXIO; + unsigned long type = (unsigned long)attr->attr; + void __user *uaddr = (void __user *)(long)attr->addr; + + switch (attr->group) { + case KVM_DEV_RISCV_AIA_GRP_CONFIG: + if (copy_from_user(&nr, uaddr, sizeof(nr))) + return -EFAULT; + + mutex_lock(&dev->kvm->lock); + r = aia_config(dev->kvm, type, &nr, true); + mutex_unlock(&dev->kvm->lock); + + break; + + case KVM_DEV_RISCV_AIA_GRP_ADDR: + if (copy_from_user(&addr, uaddr, sizeof(addr))) + return -EFAULT; + + nr_vcpus = atomic_read(&dev->kvm->online_vcpus); + mutex_lock(&dev->kvm->lock); + if (type == KVM_DEV_RISCV_AIA_ADDR_APLIC) + r = aia_aplic_addr(dev->kvm, &addr, true); + else if (type < KVM_DEV_RISCV_AIA_ADDR_IMSIC(nr_vcpus)) + r = aia_imsic_addr(dev->kvm, &addr, + type - KVM_DEV_RISCV_AIA_ADDR_IMSIC(0), true); + mutex_unlock(&dev->kvm->lock); + + break; + + case KVM_DEV_RISCV_AIA_GRP_CTRL: + switch (type) { + case KVM_DEV_RISCV_AIA_CTRL_INIT: + mutex_lock(&dev->kvm->lock); + r = aia_init(dev->kvm); + mutex_unlock(&dev->kvm->lock); + break; + } + + break; + } + + return r; +} + +static int aia_get_attr(struct kvm_device *dev, struct kvm_device_attr *attr) +{ + u32 nr; + u64 addr; + int nr_vcpus, r = -ENXIO; + void __user *uaddr = (void __user *)(long)attr->addr; + unsigned long type = (unsigned long)attr->attr; + + switch (attr->group) { + case KVM_DEV_RISCV_AIA_GRP_CONFIG: + if (copy_from_user(&nr, uaddr, sizeof(nr))) + return -EFAULT; + + mutex_lock(&dev->kvm->lock); + r = aia_config(dev->kvm, type, &nr, false); + mutex_unlock(&dev->kvm->lock); + if (r) + return r; + + if (copy_to_user(uaddr, &nr, sizeof(nr))) + return -EFAULT; + + break; + case KVM_DEV_RISCV_AIA_GRP_ADDR: + if (copy_from_user(&addr, uaddr, sizeof(addr))) + return -EFAULT; + + nr_vcpus = atomic_read(&dev->kvm->online_vcpus); + mutex_lock(&dev->kvm->lock); + if (type == KVM_DEV_RISCV_AIA_ADDR_APLIC) + r = aia_aplic_addr(dev->kvm, &addr, false); + else if (type < KVM_DEV_RISCV_AIA_ADDR_IMSIC(nr_vcpus)) + r = aia_imsic_addr(dev->kvm, &addr, + type - KVM_DEV_RISCV_AIA_ADDR_IMSIC(0), false); + mutex_unlock(&dev->kvm->lock); + if (r) + return r; + + if (copy_to_user(uaddr, &addr, sizeof(addr))) + return -EFAULT; + + break; + } + + return r; +} + +static int aia_has_attr(struct kvm_device *dev, struct kvm_device_attr *attr) +{ + int nr_vcpus; + + switch (attr->group) { + case KVM_DEV_RISCV_AIA_GRP_CONFIG: + switch (attr->attr) { + case KVM_DEV_RISCV_AIA_CONFIG_MODE: + case KVM_DEV_RISCV_AIA_CONFIG_IDS: + case KVM_DEV_RISCV_AIA_CONFIG_SRCS: + case KVM_DEV_RISCV_AIA_CONFIG_GROUP_BITS: + case KVM_DEV_RISCV_AIA_CONFIG_GROUP_SHIFT: + case KVM_DEV_RISCV_AIA_CONFIG_HART_BITS: + case KVM_DEV_RISCV_AIA_CONFIG_GUEST_BITS: + return 0; + } + break; + case KVM_DEV_RISCV_AIA_GRP_ADDR: + nr_vcpus = atomic_read(&dev->kvm->online_vcpus); + if (attr->attr == KVM_DEV_RISCV_AIA_ADDR_APLIC) + return 0; + else if (attr->attr < KVM_DEV_RISCV_AIA_ADDR_IMSIC(nr_vcpus)) + return 0; + break; + case KVM_DEV_RISCV_AIA_GRP_CTRL: + switch (attr->attr) { + case KVM_DEV_RISCV_AIA_CTRL_INIT: + return 0; + } + break; + } + + return -ENXIO; +} + +struct kvm_device_ops kvm_riscv_aia_device_ops = { + .name = "kvm-riscv-aia", + .create = aia_create, + .destroy = aia_destroy, + .set_attr = aia_set_attr, + .get_attr = aia_get_attr, + .has_attr = aia_has_attr, +}; + +int kvm_riscv_vcpu_aia_update(struct kvm_vcpu *vcpu) +{ + /* Proceed only if AIA was initialized successfully */ + if (!kvm_riscv_aia_initialized(vcpu->kvm)) + return 1; + + /* Update the IMSIC HW state before entering guest mode */ + return kvm_riscv_vcpu_aia_imsic_update(vcpu); +} + +void kvm_riscv_vcpu_aia_reset(struct kvm_vcpu *vcpu) +{ + struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia_context.guest_csr; + struct kvm_vcpu_aia_csr *reset_csr = + &vcpu->arch.aia_context.guest_reset_csr; + + if (!kvm_riscv_aia_available()) + return; + memcpy(csr, reset_csr, sizeof(*csr)); + + /* Proceed only if AIA was initialized successfully */ + if (!kvm_riscv_aia_initialized(vcpu->kvm)) + return; + + /* Reset the IMSIC context */ + kvm_riscv_vcpu_aia_imsic_reset(vcpu); +} + +int kvm_riscv_vcpu_aia_init(struct kvm_vcpu *vcpu) +{ + struct kvm_vcpu_aia *vaia = &vcpu->arch.aia_context; + + if (!kvm_riscv_aia_available()) + return 0; + + /* + * We don't do any memory allocations here because these + * will be done after the AIA device is initialized by user-space. + * + * Refer to the aia_init() implementation for more details.
+ */ + + /* Initialize default values in AIA VCPU context */ + vaia->imsic_addr = KVM_RISCV_AIA_UNDEF_ADDR; + vaia->hart_index = vcpu->vcpu_idx; + + return 0; +} + +void kvm_riscv_vcpu_aia_deinit(struct kvm_vcpu *vcpu) +{ + /* Proceed only if AIA was initialized successfully */ + if (!kvm_riscv_aia_initialized(vcpu->kvm)) + return; + + /* Cleanup IMSIC context */ + kvm_riscv_vcpu_aia_imsic_cleanup(vcpu); +} + +int kvm_riscv_aia_inject_msi_by_id(struct kvm *kvm, u32 hart_index, + u32 guest_index, u32 iid) +{ + unsigned long idx; + struct kvm_vcpu *vcpu; + + /* Proceed only if AIA was initialized successfully */ + if (!kvm_riscv_aia_initialized(kvm)) + return -EBUSY; + + /* Inject MSI to matching VCPU */ + kvm_for_each_vcpu(idx, vcpu, kvm) { + if (vcpu->arch.aia_context.hart_index == hart_index) + return kvm_riscv_vcpu_aia_imsic_inject(vcpu, + guest_index, + 0, iid); + } + + return 0; +} + +int kvm_riscv_aia_inject_msi(struct kvm *kvm, struct kvm_msi *msi) +{ + gpa_t tppn, ippn; + unsigned long idx; + struct kvm_vcpu *vcpu; + u32 g, toff, iid = msi->data; + struct kvm_aia *aia = &kvm->arch.aia; + gpa_t target = (((gpa_t)msi->address_hi) << 32) | msi->address_lo; + + /* Proceed only if AIA was initialized successfully */ + if (!kvm_riscv_aia_initialized(kvm)) + return -EBUSY; + + /* Convert target address to target PPN */ + tppn = target >> IMSIC_MMIO_PAGE_SHIFT; + + /* Extract and clear Guest ID from target PPN */ + g = tppn & (BIT(aia->nr_guest_bits) - 1); + tppn &= ~((gpa_t)(BIT(aia->nr_guest_bits) - 1)); + + /* Inject MSI to matching VCPU */ + kvm_for_each_vcpu(idx, vcpu, kvm) { + ippn = vcpu->arch.aia_context.imsic_addr >> + IMSIC_MMIO_PAGE_SHIFT; + if (ippn == tppn) { + toff = target & (IMSIC_MMIO_PAGE_SZ - 1); + return kvm_riscv_vcpu_aia_imsic_inject(vcpu, g, + toff, iid); + } + } + + return 0; +}
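The address decode done by kvm_riscv_aia_inject_msi() above can be summarised in a couple of lines. This is an editorial sketch with a hypothetical helper name, again assuming IMSIC_MMIO_PAGE_SHIFT is 12:

#include <stdint.h>

static void example_split_msi_target(uint64_t target, uint32_t nr_guest_bits,
				     uint64_t *vcpu_ppn, uint32_t *guest_file)
{
	uint64_t tppn = target >> 12;			/* target page number */
	uint64_t gmask = (1ULL << nr_guest_bits) - 1;

	*guest_file = tppn & gmask;	/* which VS-level interrupt file */
	*vcpu_ppn = tppn & ~gmask;	/* matched against each VCPU's IMSIC PPN */
}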
 +int kvm_riscv_aia_inject_irq(struct kvm *kvm, unsigned int irq, bool level) +{ + /* Proceed only if AIA was initialized successfully */ + if (!kvm_riscv_aia_initialized(kvm)) + return -EBUSY; + + /* Inject interrupt level change in APLIC */ + return kvm_riscv_aia_aplic_inject(kvm, irq, level); +} + +void kvm_riscv_aia_init_vm(struct kvm *kvm) +{ + struct kvm_aia *aia = &kvm->arch.aia; + + if (!kvm_riscv_aia_available()) + return; + + /* + * We don't do any memory allocations here because these + * will be done after the AIA device is initialized by user-space. + * + * Refer to the aia_init() implementation for more details. + */ + + /* Initialize default values in AIA global context */ + aia->mode = (kvm_riscv_aia_nr_hgei) ? + KVM_DEV_RISCV_AIA_MODE_AUTO : KVM_DEV_RISCV_AIA_MODE_EMUL; + aia->nr_ids = kvm_riscv_aia_max_ids - 1; + aia->nr_sources = 0; + aia->nr_group_bits = 0; + aia->nr_group_shift = KVM_DEV_RISCV_AIA_GROUP_SHIFT_MIN; + aia->nr_hart_bits = 0; + aia->nr_guest_bits = 0; + aia->aplic_addr = KVM_RISCV_AIA_UNDEF_ADDR; +} + +void kvm_riscv_aia_destroy_vm(struct kvm *kvm) +{ + /* Proceed only if AIA was initialized successfully */ + if (!kvm_riscv_aia_initialized(kvm)) + return; + + /* Cleanup APLIC context */ + kvm_riscv_aia_aplic_cleanup(kvm); +} diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index 737318b1c1d9..27ccd07898e1 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -1442,6 +1442,8 @@ enum kvm_device_type { #define KVM_DEV_TYPE_XIVE KVM_DEV_TYPE_XIVE KVM_DEV_TYPE_ARM_PV_TIME, #define KVM_DEV_TYPE_ARM_PV_TIME KVM_DEV_TYPE_ARM_PV_TIME + KVM_DEV_TYPE_RISCV_AIA, +#define KVM_DEV_TYPE_RISCV_AIA KVM_DEV_TYPE_RISCV_AIA KVM_DEV_TYPE_MAX, };
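Putting the new device type and attribute groups together, a VMM would drive this uAPI roughly as follows. This is a minimal editorial sketch: error handling is omitted, the guest physical addresses are illustrative placeholders, and KVM_DEV_RISCV_AIA_MODE_AUTO is only accepted on hosts with guest external interrupt lines:

#include <linux/kvm.h>
#include <sys/ioctl.h>

int example_setup_aia(int vm_fd, int nr_vcpus)
{
	struct kvm_create_device cd = { .type = KVM_DEV_TYPE_RISCV_AIA };
	struct kvm_device_attr attr = { 0 };
	__u32 mode = KVM_DEV_RISCV_AIA_MODE_AUTO;
	__u64 addr;
	int i;

	/* Create the in-kernel AIA irqchip before any VCPU has run */
	ioctl(vm_fd, KVM_CREATE_DEVICE, &cd);

	/* Choose the mode of operation */
	attr.group = KVM_DEV_RISCV_AIA_GRP_CONFIG;
	attr.attr = KVM_DEV_RISCV_AIA_CONFIG_MODE;
	attr.addr = (__u64)(unsigned long)&mode;
	ioctl(cd.fd, KVM_SET_DEVICE_ATTR, &attr);

	/* Place each VCPU's IMSIC in the guest physical address space */
	attr.group = KVM_DEV_RISCV_AIA_GRP_ADDR;
	for (i = 0; i < nr_vcpus; i++) {
		addr = 0x28000000ULL + (__u64)i * 0x1000;	/* placeholder */
		attr.attr = KVM_DEV_RISCV_AIA_ADDR_IMSIC(i);
		attr.addr = (__u64)(unsigned long)&addr;
		ioctl(cd.fd, KVM_SET_DEVICE_ATTR, &attr);
	}

	/* Finalize the irqchip; this ends up in aia_init() shown above */
	attr.group = KVM_DEV_RISCV_AIA_GRP_CTRL;
	attr.attr = KVM_DEV_RISCV_AIA_CTRL_INIT;
	attr.addr = 0;
	ioctl(cd.fd, KVM_SET_DEVICE_ATTR, &attr);

	return cd.fd;
}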
From patchwork Thu Jun 15 07:33:50 2023 X-Patchwork-Submitter: Anup Patel X-Patchwork-Id: 13280825 From: Anup Patel To: Paolo Bonzini , Atish Patra Cc: Palmer Dabbelt , Paul Walmsley , Andrew Jones , kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel Subject: [PATCH v3 07/10] RISC-V: KVM: Add in-kernel emulation of AIA APLIC Date: Thu, 15 Jun 2023 13:03:50 +0530 Message-Id: <20230615073353.85435-8-apatel@ventanamicro.com> In-Reply-To: <20230615073353.85435-1-apatel@ventanamicro.com> References: <20230615073353.85435-1-apatel@ventanamicro.com> The AIA APLIC has no hardware virtualization support, so we add in-kernel emulation of the AIA APLIC which only supports MSI-mode (i.e. wired interrupts are forwarded to the AIA IMSIC as MSIs). Signed-off-by: Anup Patel Reviewed-by: Atish Patra --- arch/riscv/include/asm/kvm_aia.h | 17 +- arch/riscv/kvm/Makefile | 1 + arch/riscv/kvm/aia_aplic.c | 576 +++++++++++++++++++++++++++++++ 3 files changed, 580 insertions(+), 14 deletions(-) create mode 100644 arch/riscv/kvm/aia_aplic.c diff --git a/arch/riscv/include/asm/kvm_aia.h b/arch/riscv/include/asm/kvm_aia.h index a1281ebc9b92..f6bd8523395f 100644 --- a/arch/riscv/include/asm/kvm_aia.h +++ b/arch/riscv/include/asm/kvm_aia.h @@ -129,20 +129,9 @@ static inline void kvm_riscv_vcpu_aia_imsic_cleanup(struct kvm_vcpu *vcpu) { } -static inline int kvm_riscv_aia_aplic_inject(struct kvm *kvm, - u32 source, bool level) -{ - return 0; -} - -static inline int kvm_riscv_aia_aplic_init(struct kvm *kvm) -{ - return 0; -} - -static inline void kvm_riscv_aia_aplic_cleanup(struct kvm *kvm) -{ -} +int kvm_riscv_aia_aplic_inject(struct kvm *kvm, u32 source, bool level); +int kvm_riscv_aia_aplic_init(struct kvm *kvm); +void kvm_riscv_aia_aplic_cleanup(struct kvm *kvm); #ifdef CONFIG_32BIT void kvm_riscv_vcpu_aia_flush_interrupts(struct kvm_vcpu *vcpu); diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile index dd69ebe098bd..94c43702c765 100644 --- a/arch/riscv/kvm/Makefile +++ b/arch/riscv/kvm/Makefile @@ -28,3 +28,4 @@ kvm-y += vcpu_timer.o kvm-$(CONFIG_RISCV_PMU_SBI) += vcpu_pmu.o vcpu_sbi_pmu.o kvm-y += aia.o kvm-y += aia_device.o +kvm-y += aia_aplic.o diff --git a/arch/riscv/kvm/aia_aplic.c b/arch/riscv/kvm/aia_aplic.c new file mode 100644 index 000000000000..eecd8f4abe21 --- /dev/null +++ b/arch/riscv/kvm/aia_aplic.c @@ -0,0 +1,576 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2021 Western Digital Corporation or its affiliates. + * Copyright (C) 2022 Ventana Micro Systems Inc.
+ * + * Authors: + * Anup Patel + */ + +#include +#include +#include +#include +#include +#include + +struct aplic_irq { + raw_spinlock_t lock; + u32 sourcecfg; + u32 state; +#define APLIC_IRQ_STATE_PENDING BIT(0) +#define APLIC_IRQ_STATE_ENABLED BIT(1) +#define APLIC_IRQ_STATE_ENPEND (APLIC_IRQ_STATE_PENDING | \ + APLIC_IRQ_STATE_ENABLED) +#define APLIC_IRQ_STATE_INPUT BIT(8) + u32 target; +}; + +struct aplic { + struct kvm_io_device iodev; + + u32 domaincfg; + u32 genmsi; + + u32 nr_irqs; + u32 nr_words; + struct aplic_irq *irqs; +}; + +static u32 aplic_read_sourcecfg(struct aplic *aplic, u32 irq) +{ + u32 ret; + unsigned long flags; + struct aplic_irq *irqd; + + if (!irq || aplic->nr_irqs <= irq) + return 0; + irqd = &aplic->irqs[irq]; + + raw_spin_lock_irqsave(&irqd->lock, flags); + ret = irqd->sourcecfg; + raw_spin_unlock_irqrestore(&irqd->lock, flags); + + return ret; +} + +static void aplic_write_sourcecfg(struct aplic *aplic, u32 irq, u32 val) +{ + unsigned long flags; + struct aplic_irq *irqd; + + if (!irq || aplic->nr_irqs <= irq) + return; + irqd = &aplic->irqs[irq]; + + if (val & APLIC_SOURCECFG_D) + val = 0; + else + val &= APLIC_SOURCECFG_SM_MASK; + + raw_spin_lock_irqsave(&irqd->lock, flags); + irqd->sourcecfg = val; + raw_spin_unlock_irqrestore(&irqd->lock, flags); +} + +static u32 aplic_read_target(struct aplic *aplic, u32 irq) +{ + u32 ret; + unsigned long flags; + struct aplic_irq *irqd; + + if (!irq || aplic->nr_irqs <= irq) + return 0; + irqd = &aplic->irqs[irq]; + + raw_spin_lock_irqsave(&irqd->lock, flags); + ret = irqd->target; + raw_spin_unlock_irqrestore(&irqd->lock, flags); + + return ret; +} + +static void aplic_write_target(struct aplic *aplic, u32 irq, u32 val) +{ + unsigned long flags; + struct aplic_irq *irqd; + + if (!irq || aplic->nr_irqs <= irq) + return; + irqd = &aplic->irqs[irq]; + + val &= APLIC_TARGET_EIID_MASK | + (APLIC_TARGET_HART_IDX_MASK << APLIC_TARGET_HART_IDX_SHIFT) | + (APLIC_TARGET_GUEST_IDX_MASK << APLIC_TARGET_GUEST_IDX_SHIFT); + + raw_spin_lock_irqsave(&irqd->lock, flags); + irqd->target = val; + raw_spin_unlock_irqrestore(&irqd->lock, flags); +} + +static bool aplic_read_pending(struct aplic *aplic, u32 irq) +{ + bool ret; + unsigned long flags; + struct aplic_irq *irqd; + + if (!irq || aplic->nr_irqs <= irq) + return false; + irqd = &aplic->irqs[irq]; + + raw_spin_lock_irqsave(&irqd->lock, flags); + ret = (irqd->state & APLIC_IRQ_STATE_PENDING) ? true : false; + raw_spin_unlock_irqrestore(&irqd->lock, flags); + + return ret; +} + +static void aplic_write_pending(struct aplic *aplic, u32 irq, bool pending) +{ + unsigned long flags, sm; + struct aplic_irq *irqd; + + if (!irq || aplic->nr_irqs <= irq) + return; + irqd = &aplic->irqs[irq]; + + raw_spin_lock_irqsave(&irqd->lock, flags); + + sm = irqd->sourcecfg & APLIC_SOURCECFG_SM_MASK; + if (!pending && + ((sm == APLIC_SOURCECFG_SM_LEVEL_HIGH) || + (sm == APLIC_SOURCECFG_SM_LEVEL_LOW))) + goto skip_write_pending; + + if (pending) + irqd->state |= APLIC_IRQ_STATE_PENDING; + else + irqd->state &= ~APLIC_IRQ_STATE_PENDING; + +skip_write_pending: + raw_spin_unlock_irqrestore(&irqd->lock, flags); +} + +static bool aplic_read_enabled(struct aplic *aplic, u32 irq) +{ + bool ret; + unsigned long flags; + struct aplic_irq *irqd; + + if (!irq || aplic->nr_irqs <= irq) + return false; + irqd = &aplic->irqs[irq]; + + raw_spin_lock_irqsave(&irqd->lock, flags); + ret = (irqd->state & APLIC_IRQ_STATE_ENABLED) ? 
true : false; + raw_spin_unlock_irqrestore(&irqd->lock, flags); + + return ret; +} + +static void aplic_write_enabled(struct aplic *aplic, u32 irq, bool enabled) +{ + unsigned long flags; + struct aplic_irq *irqd; + + if (!irq || aplic->nr_irqs <= irq) + return; + irqd = &aplic->irqs[irq]; + + raw_spin_lock_irqsave(&irqd->lock, flags); + if (enabled) + irqd->state |= APLIC_IRQ_STATE_ENABLED; + else + irqd->state &= ~APLIC_IRQ_STATE_ENABLED; + raw_spin_unlock_irqrestore(&irqd->lock, flags); +} + +static bool aplic_read_input(struct aplic *aplic, u32 irq) +{ + bool ret; + unsigned long flags; + struct aplic_irq *irqd; + + if (!irq || aplic->nr_irqs <= irq) + return false; + irqd = &aplic->irqs[irq]; + + raw_spin_lock_irqsave(&irqd->lock, flags); + ret = (irqd->state & APLIC_IRQ_STATE_INPUT) ? true : false; + raw_spin_unlock_irqrestore(&irqd->lock, flags); + + return ret; +} + +static void aplic_inject_msi(struct kvm *kvm, u32 irq, u32 target) +{ + u32 hart_idx, guest_idx, eiid; + + hart_idx = target >> APLIC_TARGET_HART_IDX_SHIFT; + hart_idx &= APLIC_TARGET_HART_IDX_MASK; + guest_idx = target >> APLIC_TARGET_GUEST_IDX_SHIFT; + guest_idx &= APLIC_TARGET_GUEST_IDX_MASK; + eiid = target & APLIC_TARGET_EIID_MASK; + kvm_riscv_aia_inject_msi_by_id(kvm, hart_idx, guest_idx, eiid); +} + +static void aplic_update_irq_range(struct kvm *kvm, u32 first, u32 last) +{ + bool inject; + u32 irq, target; + unsigned long flags; + struct aplic_irq *irqd; + struct aplic *aplic = kvm->arch.aia.aplic_state; + + if (!(aplic->domaincfg & APLIC_DOMAINCFG_IE)) + return; + + for (irq = first; irq <= last; irq++) { + if (!irq || aplic->nr_irqs <= irq) + continue; + irqd = &aplic->irqs[irq]; + + raw_spin_lock_irqsave(&irqd->lock, flags); + + inject = false; + target = irqd->target; + if ((irqd->state & APLIC_IRQ_STATE_ENPEND) == + APLIC_IRQ_STATE_ENPEND) { + irqd->state &= ~APLIC_IRQ_STATE_PENDING; + inject = true; + } + + raw_spin_unlock_irqrestore(&irqd->lock, flags); + + if (inject) + aplic_inject_msi(kvm, irq, target); + } +} + +int kvm_riscv_aia_aplic_inject(struct kvm *kvm, u32 source, bool level) +{ + u32 target; + bool inject = false, ie; + unsigned long flags; + struct aplic_irq *irqd; + struct aplic *aplic = kvm->arch.aia.aplic_state; + + if (!aplic || !source || (aplic->nr_irqs <= source)) + return -ENODEV; + irqd = &aplic->irqs[source]; + ie = (aplic->domaincfg & APLIC_DOMAINCFG_IE) ? 
true : false; + + raw_spin_lock_irqsave(&irqd->lock, flags); + + if (irqd->sourcecfg & APLIC_SOURCECFG_D) + goto skip_unlock; + + switch (irqd->sourcecfg & APLIC_SOURCECFG_SM_MASK) { + case APLIC_SOURCECFG_SM_EDGE_RISE: + if (level && !(irqd->state & APLIC_IRQ_STATE_INPUT) && + !(irqd->state & APLIC_IRQ_STATE_PENDING)) + irqd->state |= APLIC_IRQ_STATE_PENDING; + break; + case APLIC_SOURCECFG_SM_EDGE_FALL: + if (!level && (irqd->state & APLIC_IRQ_STATE_INPUT) && + !(irqd->state & APLIC_IRQ_STATE_PENDING)) + irqd->state |= APLIC_IRQ_STATE_PENDING; + break; + case APLIC_SOURCECFG_SM_LEVEL_HIGH: + if (level && !(irqd->state & APLIC_IRQ_STATE_PENDING)) + irqd->state |= APLIC_IRQ_STATE_PENDING; + break; + case APLIC_SOURCECFG_SM_LEVEL_LOW: + if (!level && !(irqd->state & APLIC_IRQ_STATE_PENDING)) + irqd->state |= APLIC_IRQ_STATE_PENDING; + break; + } + + if (level) + irqd->state |= APLIC_IRQ_STATE_INPUT; + else + irqd->state &= ~APLIC_IRQ_STATE_INPUT; + + target = irqd->target; + if (ie && ((irqd->state & APLIC_IRQ_STATE_ENPEND) == + APLIC_IRQ_STATE_ENPEND)) { + irqd->state &= ~APLIC_IRQ_STATE_PENDING; + inject = true; + } + +skip_unlock: + raw_spin_unlock_irqrestore(&irqd->lock, flags); + + if (inject) + aplic_inject_msi(kvm, source, target); + + return 0; +} + +static u32 aplic_read_input_word(struct aplic *aplic, u32 word) +{ + u32 i, ret = 0; + + for (i = 0; i < 32; i++) + ret |= aplic_read_input(aplic, word * 32 + i) ? BIT(i) : 0; + + return ret; +} + +static u32 aplic_read_pending_word(struct aplic *aplic, u32 word) +{ + u32 i, ret = 0; + + for (i = 0; i < 32; i++) + ret |= aplic_read_pending(aplic, word * 32 + i) ? BIT(i) : 0; + + return ret; +} + +static void aplic_write_pending_word(struct aplic *aplic, u32 word, + u32 val, bool pending) +{ + u32 i; + + for (i = 0; i < 32; i++) { + if (val & BIT(i)) + aplic_write_pending(aplic, word * 32 + i, pending); + } +} + +static u32 aplic_read_enabled_word(struct aplic *aplic, u32 word) +{ + u32 i, ret = 0; + + for (i = 0; i < 32; i++) + ret |= aplic_read_enabled(aplic, word * 32 + i) ? 
BIT(i) : 0; + + return ret; +} + +static void aplic_write_enabled_word(struct aplic *aplic, u32 word, + u32 val, bool enabled) +{ + u32 i; + + for (i = 0; i < 32; i++) { + if (val & BIT(i)) + aplic_write_enabled(aplic, word * 32 + i, enabled); + } +} + +static int aplic_mmio_read_offset(struct kvm *kvm, gpa_t off, u32 *val32) +{ + u32 i; + struct aplic *aplic = kvm->arch.aia.aplic_state; + + if ((off & 0x3) != 0) + return -EOPNOTSUPP; + + if (off == APLIC_DOMAINCFG) { + *val32 = APLIC_DOMAINCFG_RDONLY | + aplic->domaincfg | APLIC_DOMAINCFG_DM; + } else if ((off >= APLIC_SOURCECFG_BASE) && + (off < (APLIC_SOURCECFG_BASE + (aplic->nr_irqs - 1) * 4))) { + i = ((off - APLIC_SOURCECFG_BASE) >> 2) + 1; + *val32 = aplic_read_sourcecfg(aplic, i); + } else if ((off >= APLIC_SETIP_BASE) && + (off < (APLIC_SETIP_BASE + aplic->nr_words * 4))) { + i = (off - APLIC_SETIP_BASE) >> 2; + *val32 = aplic_read_pending_word(aplic, i); + } else if (off == APLIC_SETIPNUM) { + *val32 = 0; + } else if ((off >= APLIC_CLRIP_BASE) && + (off < (APLIC_CLRIP_BASE + aplic->nr_words * 4))) { + i = (off - APLIC_CLRIP_BASE) >> 2; + *val32 = aplic_read_input_word(aplic, i); + } else if (off == APLIC_CLRIPNUM) { + *val32 = 0; + } else if ((off >= APLIC_SETIE_BASE) && + (off < (APLIC_SETIE_BASE + aplic->nr_words * 4))) { + i = (off - APLIC_SETIE_BASE) >> 2; + *val32 = aplic_read_enabled_word(aplic, i); + } else if (off == APLIC_SETIENUM) { + *val32 = 0; + } else if ((off >= APLIC_CLRIE_BASE) && + (off < (APLIC_CLRIE_BASE + aplic->nr_words * 4))) { + *val32 = 0; + } else if (off == APLIC_CLRIENUM) { + *val32 = 0; + } else if (off == APLIC_SETIPNUM_LE) { + *val32 = 0; + } else if (off == APLIC_SETIPNUM_BE) { + *val32 = 0; + } else if (off == APLIC_GENMSI) { + *val32 = aplic->genmsi; + } else if ((off >= APLIC_TARGET_BASE) && + (off < (APLIC_TARGET_BASE + (aplic->nr_irqs - 1) * 4))) { + i = ((off - APLIC_TARGET_BASE) >> 2) + 1; + *val32 = aplic_read_target(aplic, i); + } else + return -ENODEV; + + return 0; +} + +static int aplic_mmio_read(struct kvm_vcpu *vcpu, struct kvm_io_device *dev, + gpa_t addr, int len, void *val) +{ + if (len != 4) + return -EOPNOTSUPP; + + return aplic_mmio_read_offset(vcpu->kvm, + addr - vcpu->kvm->arch.aia.aplic_addr, + val); +} + +static int aplic_mmio_write_offset(struct kvm *kvm, gpa_t off, u32 val32) +{ + u32 i; + struct aplic *aplic = kvm->arch.aia.aplic_state; + + if ((off & 0x3) != 0) + return -EOPNOTSUPP; + + if (off == APLIC_DOMAINCFG) { + /* Only IE bit writeable */ + aplic->domaincfg = val32 & APLIC_DOMAINCFG_IE; + } else if ((off >= APLIC_SOURCECFG_BASE) && + (off < (APLIC_SOURCECFG_BASE + (aplic->nr_irqs - 1) * 4))) { + i = ((off - APLIC_SOURCECFG_BASE) >> 2) + 1; + aplic_write_sourcecfg(aplic, i, val32); + } else if ((off >= APLIC_SETIP_BASE) && + (off < (APLIC_SETIP_BASE + aplic->nr_words * 4))) { + i = (off - APLIC_SETIP_BASE) >> 2; + aplic_write_pending_word(aplic, i, val32, true); + } else if (off == APLIC_SETIPNUM) { + aplic_write_pending(aplic, val32, true); + } else if ((off >= APLIC_CLRIP_BASE) && + (off < (APLIC_CLRIP_BASE + aplic->nr_words * 4))) { + i = (off - APLIC_CLRIP_BASE) >> 2; + aplic_write_pending_word(aplic, i, val32, false); + } else if (off == APLIC_CLRIPNUM) { + aplic_write_pending(aplic, val32, false); + } else if ((off >= APLIC_SETIE_BASE) && + (off < (APLIC_SETIE_BASE + aplic->nr_words * 4))) { + i = (off - APLIC_SETIE_BASE) >> 2; + aplic_write_enabled_word(aplic, i, val32, true); + } else if (off == APLIC_SETIENUM) { + aplic_write_enabled(aplic, val32, true); + 
} else if ((off >= APLIC_CLRIE_BASE) && + (off < (APLIC_CLRIE_BASE + aplic->nr_words * 4))) { + i = (off - APLIC_CLRIE_BASE) >> 2; + aplic_write_enabled_word(aplic, i, val32, false); + } else if (off == APLIC_CLRIENUM) { + aplic_write_enabled(aplic, val32, false); + } else if (off == APLIC_SETIPNUM_LE) { + aplic_write_pending(aplic, val32, true); + } else if (off == APLIC_SETIPNUM_BE) { + aplic_write_pending(aplic, __swab32(val32), true); + } else if (off == APLIC_GENMSI) { + aplic->genmsi = val32 & ~(APLIC_TARGET_GUEST_IDX_MASK << + APLIC_TARGET_GUEST_IDX_SHIFT); + kvm_riscv_aia_inject_msi_by_id(kvm, + val32 >> APLIC_TARGET_HART_IDX_SHIFT, 0, + val32 & APLIC_TARGET_EIID_MASK); + } else if ((off >= APLIC_TARGET_BASE) && + (off < (APLIC_TARGET_BASE + (aplic->nr_irqs - 1) * 4))) { + i = ((off - APLIC_TARGET_BASE) >> 2) + 1; + aplic_write_target(aplic, i, val32); + } else + return -ENODEV; + + aplic_update_irq_range(kvm, 1, aplic->nr_irqs - 1); + + return 0; +} + +static int aplic_mmio_write(struct kvm_vcpu *vcpu, struct kvm_io_device *dev, + gpa_t addr, int len, const void *val) +{ + if (len != 4) + return -EOPNOTSUPP; + + return aplic_mmio_write_offset(vcpu->kvm, + addr - vcpu->kvm->arch.aia.aplic_addr, + *((const u32 *)val)); +} + +static struct kvm_io_device_ops aplic_iodoev_ops = { + .read = aplic_mmio_read, + .write = aplic_mmio_write, +}; + +int kvm_riscv_aia_aplic_init(struct kvm *kvm) +{ + int i, ret = 0; + struct aplic *aplic; + + /* Do nothing if we have zero sources */ + if (!kvm->arch.aia.nr_sources) + return 0; + + /* Allocate APLIC global state */ + aplic = kzalloc(sizeof(*aplic), GFP_KERNEL); + if (!aplic) + return -ENOMEM; + kvm->arch.aia.aplic_state = aplic; + + /* Setup APLIC IRQs */ + aplic->nr_irqs = kvm->arch.aia.nr_sources + 1; + aplic->nr_words = DIV_ROUND_UP(aplic->nr_irqs, 32); + aplic->irqs = kcalloc(aplic->nr_irqs, + sizeof(*aplic->irqs), GFP_KERNEL); + if (!aplic->irqs) { + ret = -ENOMEM; + goto fail_free_aplic; + } + for (i = 0; i < aplic->nr_irqs; i++) + raw_spin_lock_init(&aplic->irqs[i].lock); + + /* Setup IO device */ + kvm_iodevice_init(&aplic->iodev, &aplic_iodoev_ops); + mutex_lock(&kvm->slots_lock); + ret = kvm_io_bus_register_dev(kvm, KVM_MMIO_BUS, + kvm->arch.aia.aplic_addr, + KVM_DEV_RISCV_APLIC_SIZE, + &aplic->iodev); + mutex_unlock(&kvm->slots_lock); + if (ret) + goto fail_free_aplic_irqs; + + /* Setup default IRQ routing */ + ret = kvm_riscv_setup_default_irq_routing(kvm, aplic->nr_irqs); + if (ret) + goto fail_unreg_iodev; + + return 0; + +fail_unreg_iodev: + mutex_lock(&kvm->slots_lock); + kvm_io_bus_unregister_dev(kvm, KVM_MMIO_BUS, &aplic->iodev); + mutex_unlock(&kvm->slots_lock); +fail_free_aplic_irqs: + kfree(aplic->irqs); +fail_free_aplic: + kvm->arch.aia.aplic_state = NULL; + kfree(aplic); + return ret; +} + +void kvm_riscv_aia_aplic_cleanup(struct kvm *kvm) +{ + struct aplic *aplic = kvm->arch.aia.aplic_state; + + if (!aplic) + return; + + mutex_lock(&kvm->slots_lock); + kvm_io_bus_unregister_dev(kvm, KVM_MMIO_BUS, &aplic->iodev); + mutex_unlock(&kvm->slots_lock); + + kfree(aplic->irqs); + + kvm->arch.aia.aplic_state = NULL; + kfree(aplic); +}
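Since the emulated APLIC above supports only MSI delivery, every pending-and-enabled source ultimately goes through aplic_inject_msi(), which unpacks the per-source target register. A small editorial sketch of that unpacking, following the MSI-mode target layout in the RISC-V AIA specification (hart index in bits 31:18, guest index in bits 17:12, EIID in bits 10:0):

#include <stdint.h>

static void example_unpack_aplic_target(uint32_t target, uint32_t *hart_idx,
					uint32_t *guest_idx, uint32_t *eiid)
{
	*hart_idx = (target >> 18) & 0x3fff;	/* destination hart */
	*guest_idx = (target >> 12) & 0x3f;	/* VS-level interrupt file */
	*eiid = target & 0x7ff;			/* external interrupt identity */
}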
From patchwork Thu Jun 15 07:33:51 2023 X-Patchwork-Submitter: Anup Patel X-Patchwork-Id: 13280826 From: Anup Patel To: Paolo Bonzini , Atish Patra Cc: Palmer Dabbelt , Paul Walmsley , Andrew Jones , kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel Subject: [PATCH v3 08/10] RISC-V: KVM: Expose APLIC registers as attributes of AIA irqchip Date: Thu, 15 Jun 2023 13:03:51 +0530 Message-Id: <20230615073353.85435-9-apatel@ventanamicro.com> In-Reply-To: <20230615073353.85435-1-apatel@ventanamicro.com> References: <20230615073353.85435-1-apatel@ventanamicro.com> We expose APLIC registers as KVM device attributes of the in-kernel AIA irqchip device.
This will allow KVM user-space to save/restore APLIC state using KVM device ioctls(). Signed-off-by: Anup Patel Reviewed-by: Atish Patra --- arch/riscv/include/asm/kvm_aia.h | 3 +++ arch/riscv/include/uapi/asm/kvm.h | 6 +++++ arch/riscv/kvm/aia_aplic.c | 43 +++++++++++++++++++++++++++++++ arch/riscv/kvm/aia_device.c | 25 ++++++++++++++++++ 4 files changed, 77 insertions(+) diff --git a/arch/riscv/include/asm/kvm_aia.h b/arch/riscv/include/asm/kvm_aia.h index f6bd8523395f..ba939c0054aa 100644 --- a/arch/riscv/include/asm/kvm_aia.h +++ b/arch/riscv/include/asm/kvm_aia.h @@ -129,6 +129,9 @@ static inline void kvm_riscv_vcpu_aia_imsic_cleanup(struct kvm_vcpu *vcpu) { } +int kvm_riscv_aia_aplic_set_attr(struct kvm *kvm, unsigned long type, u32 v); +int kvm_riscv_aia_aplic_get_attr(struct kvm *kvm, unsigned long type, u32 *v); +int kvm_riscv_aia_aplic_has_attr(struct kvm *kvm, unsigned long type); int kvm_riscv_aia_aplic_inject(struct kvm *kvm, u32 source, bool level); int kvm_riscv_aia_aplic_init(struct kvm *kvm); void kvm_riscv_aia_aplic_cleanup(struct kvm *kvm); diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h index 047c8fc5bd71..9ed822fc5589 100644 --- a/arch/riscv/include/uapi/asm/kvm.h +++ b/arch/riscv/include/uapi/asm/kvm.h @@ -249,6 +249,12 @@ enum KVM_RISCV_SBI_EXT_ID { #define KVM_DEV_RISCV_AIA_GRP_CTRL 2 #define KVM_DEV_RISCV_AIA_CTRL_INIT 0 +/* + * The device attribute type contains the memory mapped offset of the + * APLIC register (range 0x0000-0x3FFF) and it must be 4-byte aligned. + */ +#define KVM_DEV_RISCV_AIA_GRP_APLIC 3 + /* One single KVM irqchip, ie. the AIA */ #define KVM_NR_IRQCHIPS 1 diff --git a/arch/riscv/kvm/aia_aplic.c b/arch/riscv/kvm/aia_aplic.c index eecd8f4abe21..39e72aa016a4 100644 --- a/arch/riscv/kvm/aia_aplic.c +++ b/arch/riscv/kvm/aia_aplic.c @@ -501,6 +501,49 @@ static struct kvm_io_device_ops aplic_iodoev_ops = { .write = aplic_mmio_write, }; +int kvm_riscv_aia_aplic_set_attr(struct kvm *kvm, unsigned long type, u32 v) +{ + int rc; + + if (!kvm->arch.aia.aplic_state) + return -ENODEV; + + rc = aplic_mmio_write_offset(kvm, type, v); + if (rc) + return rc; + + return 0; +} + +int kvm_riscv_aia_aplic_get_attr(struct kvm *kvm, unsigned long type, u32 *v) +{ + int rc; + + if (!kvm->arch.aia.aplic_state) + return -ENODEV; + + rc = aplic_mmio_read_offset(kvm, type, v); + if (rc) + return rc; + + return 0; +} + +int kvm_riscv_aia_aplic_has_attr(struct kvm *kvm, unsigned long type) +{ + int rc; + u32 val; + + if (!kvm->arch.aia.aplic_state) + return -ENODEV; + + rc = aplic_mmio_read_offset(kvm, type, &val); + if (rc) + return rc; + + return 0; +} + int kvm_riscv_aia_aplic_init(struct kvm *kvm) { int i, ret = 0; diff --git a/arch/riscv/kvm/aia_device.c b/arch/riscv/kvm/aia_device.c index 7ab555121872..c649ad6e8e0a 100644 --- a/arch/riscv/kvm/aia_device.c +++ b/arch/riscv/kvm/aia_device.c @@ -365,6 +365,15 @@ static int aia_set_attr(struct kvm_device *dev, struct kvm_device_attr *attr) break; } + break; + case KVM_DEV_RISCV_AIA_GRP_APLIC: + if (copy_from_user(&nr, uaddr, sizeof(nr))) + return -EFAULT; + + mutex_lock(&dev->kvm->lock); + r = kvm_riscv_aia_aplic_set_attr(dev->kvm, type, nr); + mutex_unlock(&dev->kvm->lock); + break; } @@ -412,6 +421,20 @@ static int aia_get_attr(struct kvm_device *dev, struct kvm_device_attr *attr) if (copy_to_user(uaddr, &addr, sizeof(addr))) return -EFAULT; + break; + case KVM_DEV_RISCV_AIA_GRP_APLIC: + if (copy_from_user(&nr, uaddr, sizeof(nr))) + return -EFAULT; + + mutex_lock(&dev->kvm->lock); + r 
= kvm_riscv_aia_aplic_get_attr(dev->kvm, type, &nr); + mutex_unlock(&dev->kvm->lock); + if (r) + return r; + + if (copy_to_user(uaddr, &nr, sizeof(nr))) + return -EFAULT; + + break; } @@ -448,6 +471,8 @@ static int aia_has_attr(struct kvm_device *dev, struct kvm_device_attr *attr) return 0; } break; + case KVM_DEV_RISCV_AIA_GRP_APLIC: + return kvm_riscv_aia_aplic_has_attr(dev->kvm, attr->attr); } return -ENXIO;
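With the KVM_DEV_RISCV_AIA_GRP_APLIC group above, saving one APLIC register from user-space is a single KVM_GET_DEVICE_ATTR call, and a full save/restore just walks the 4-byte aligned offsets. A minimal editorial sketch, assuming aia_fd came from KVM_CREATE_DEVICE:

#include <linux/kvm.h>
#include <sys/ioctl.h>

static int example_read_aplic_reg(int aia_fd, __u32 off, __u32 *val)
{
	struct kvm_device_attr attr = {
		.group = KVM_DEV_RISCV_AIA_GRP_APLIC,
		.attr = off,		/* MMIO offset, range 0x0000-0x3FFF */
		.addr = (__u64)(unsigned long)val,
	};

	return ioctl(aia_fd, KVM_GET_DEVICE_ATTR, &attr);
}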
From patchwork Thu Jun 15 07:33:52 2023 X-Patchwork-Submitter: Anup Patel X-Patchwork-Id: 13280827 From: Anup Patel To: Paolo Bonzini , Atish Patra Cc: Palmer Dabbelt , Paul Walmsley , Andrew Jones , kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel , Atish Patra Subject: [PATCH v3 09/10] RISC-V: KVM: Add in-kernel virtualization of AIA IMSIC Date: Thu, 15 Jun 2023 13:03:52 +0530 Message-Id: <20230615073353.85435-10-apatel@ventanamicro.com> In-Reply-To: <20230615073353.85435-1-apatel@ventanamicro.com> References: <20230615073353.85435-1-apatel@ventanamicro.com> The AIA IMSIC can be supported at both HS-level and VS-level, but the VS-level IMSICs are optional. We use the VS-level IMSICs for a Guest/VM whenever they are available; otherwise, we fall back to software emulation of the AIA IMSIC. This patch adds in-kernel virtualization of the AIA IMSIC. Signed-off-by: Anup Patel Reviewed-by: Atish Patra --- arch/riscv/include/asm/kvm_aia.h | 46 +- arch/riscv/kvm/Makefile | 1 + arch/riscv/kvm/aia_imsic.c | 913 +++++++++++++++++++++++++++++++ 3 files changed, 924 insertions(+), 36 deletions(-) create mode 100644 arch/riscv/kvm/aia_imsic.c diff --git a/arch/riscv/include/asm/kvm_aia.h b/arch/riscv/include/asm/kvm_aia.h index ba939c0054aa..a4f6ebf90e31 100644 --- a/arch/riscv/include/asm/kvm_aia.h +++ b/arch/riscv/include/asm/kvm_aia.h @@ -90,44 +90,18 @@ DECLARE_STATIC_KEY_FALSE(kvm_riscv_aia_available); extern struct kvm_device_ops kvm_riscv_aia_device_ops; -static inline void kvm_riscv_vcpu_aia_imsic_release(struct kvm_vcpu *vcpu) -{ -} - -static inline int kvm_riscv_vcpu_aia_imsic_update(struct kvm_vcpu *vcpu) -{ - return 1; -} +void kvm_riscv_vcpu_aia_imsic_release(struct kvm_vcpu *vcpu); +int kvm_riscv_vcpu_aia_imsic_update(struct kvm_vcpu *vcpu); #define KVM_RISCV_AIA_IMSIC_TOPEI (ISELECT_MASK + 1) -static inline int kvm_riscv_vcpu_aia_imsic_rmw(struct kvm_vcpu *vcpu, - unsigned long isel, - unsigned long *val, - unsigned long new_val, - unsigned long wr_mask) -{ - return 0; -} - -static inline void kvm_riscv_vcpu_aia_imsic_reset(struct kvm_vcpu *vcpu) -{ -} - -static inline int kvm_riscv_vcpu_aia_imsic_inject(struct kvm_vcpu *vcpu, - u32 guest_index, u32 offset, - u32 iid) -{ - return 0; -} - -static inline int kvm_riscv_vcpu_aia_imsic_init(struct kvm_vcpu *vcpu) -{ - return 0; -} - -static inline void kvm_riscv_vcpu_aia_imsic_cleanup(struct kvm_vcpu *vcpu) -{ -} +int kvm_riscv_vcpu_aia_imsic_rmw(struct kvm_vcpu *vcpu, unsigned long isel, + unsigned long *val, unsigned long new_val, + unsigned long wr_mask); +void kvm_riscv_vcpu_aia_imsic_reset(struct kvm_vcpu *vcpu); +int kvm_riscv_vcpu_aia_imsic_inject(struct kvm_vcpu *vcpu, + u32 guest_index, u32 offset, u32 iid); +int kvm_riscv_vcpu_aia_imsic_init(struct kvm_vcpu *vcpu); +void kvm_riscv_vcpu_aia_imsic_cleanup(struct kvm_vcpu *vcpu); int kvm_riscv_aia_aplic_set_attr(struct kvm *kvm, unsigned long type, u32 v); int kvm_riscv_aia_aplic_get_attr(struct kvm *kvm, unsigned long type, u32 *v); diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile index 94c43702c765..c1d1356387ff 100644 --- a/arch/riscv/kvm/Makefile +++ b/arch/riscv/kvm/Makefile @@ -29,3 +29,4 @@ kvm-$(CONFIG_RISCV_PMU_SBI) += vcpu_pmu.o vcpu_sbi_pmu.o kvm-y += aia.o kvm-y += aia_device.o kvm-y += aia_aplic.o +kvm-y += aia_imsic.o diff --git a/arch/riscv/kvm/aia_imsic.c b/arch/riscv/kvm/aia_imsic.c new file mode 100644 index 000000000000..2dc09dcb8ab5 --- /dev/null +++ b/arch/riscv/kvm/aia_imsic.c
@@ -0,0 +1,913 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2021 Western Digital Corporation or its affiliates. + * Copyright (C) 2022 Ventana Micro Systems Inc. + * + * Authors: + * Anup Patel + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#define IMSIC_MAX_EIX (IMSIC_MAX_ID / BITS_PER_TYPE(u64)) + +struct imsic_mrif_eix { + unsigned long eip[BITS_PER_TYPE(u64) / BITS_PER_LONG]; + unsigned long eie[BITS_PER_TYPE(u64) / BITS_PER_LONG]; +}; + +struct imsic_mrif { + struct imsic_mrif_eix eix[IMSIC_MAX_EIX]; + unsigned long eithreshold; + unsigned long eidelivery; +}; + +struct imsic { + struct kvm_io_device iodev; + + u32 nr_msis; + u32 nr_eix; + u32 nr_hw_eix; + + /* + * At any point in time, the register state is in + * one of the following places: + * + * 1) Hardware: IMSIC VS-file (vsfile_cpu >= 0) + * 2) Software: IMSIC SW-file (vsfile_cpu < 0) + */ + + /* IMSIC VS-file */ + rwlock_t vsfile_lock; + int vsfile_cpu; + int vsfile_hgei; + void __iomem *vsfile_va; + phys_addr_t vsfile_pa; + + /* IMSIC SW-file */ + struct imsic_mrif *swfile; + phys_addr_t swfile_pa; +}; + +#define imsic_vs_csr_read(__c) \ +({ \ + unsigned long __r; \ + csr_write(CSR_VSISELECT, __c); \ + __r = csr_read(CSR_VSIREG); \ + __r; \ +}) + +#define imsic_read_switchcase(__ireg) \ + case __ireg: \ + return imsic_vs_csr_read(__ireg); +#define imsic_read_switchcase_2(__ireg) \ + imsic_read_switchcase(__ireg + 0) \ + imsic_read_switchcase(__ireg + 1) +#define imsic_read_switchcase_4(__ireg) \ + imsic_read_switchcase_2(__ireg + 0) \ + imsic_read_switchcase_2(__ireg + 2) +#define imsic_read_switchcase_8(__ireg) \ + imsic_read_switchcase_4(__ireg + 0) \ + imsic_read_switchcase_4(__ireg + 4) +#define imsic_read_switchcase_16(__ireg) \ + imsic_read_switchcase_8(__ireg + 0) \ + imsic_read_switchcase_8(__ireg + 8) +#define imsic_read_switchcase_32(__ireg) \ + imsic_read_switchcase_16(__ireg + 0) \ + imsic_read_switchcase_16(__ireg + 16) +#define imsic_read_switchcase_64(__ireg) \ + imsic_read_switchcase_32(__ireg + 0) \ + imsic_read_switchcase_32(__ireg + 32) + +static unsigned long imsic_eix_read(int ireg) +{ + switch (ireg) { + imsic_read_switchcase_64(IMSIC_EIP0) + imsic_read_switchcase_64(IMSIC_EIE0) + }; + + return 0; +} + +#define imsic_vs_csr_swap(__c, __v) \ +({ \ + unsigned long __r; \ + csr_write(CSR_VSISELECT, __c); \ + __r = csr_swap(CSR_VSIREG, __v); \ + __r; \ +}) + +#define imsic_swap_switchcase(__ireg, __v) \ + case __ireg: \ + return imsic_vs_csr_swap(__ireg, __v); +#define imsic_swap_switchcase_2(__ireg, __v) \ + imsic_swap_switchcase(__ireg + 0, __v) \ + imsic_swap_switchcase(__ireg + 1, __v) +#define imsic_swap_switchcase_4(__ireg, __v) \ + imsic_swap_switchcase_2(__ireg + 0, __v) \ + imsic_swap_switchcase_2(__ireg + 2, __v) +#define imsic_swap_switchcase_8(__ireg, __v) \ + imsic_swap_switchcase_4(__ireg + 0, __v) \ + imsic_swap_switchcase_4(__ireg + 4, __v) +#define imsic_swap_switchcase_16(__ireg, __v) \ + imsic_swap_switchcase_8(__ireg + 0, __v) \ + imsic_swap_switchcase_8(__ireg + 8, __v) +#define imsic_swap_switchcase_32(__ireg, __v) \ + imsic_swap_switchcase_16(__ireg + 0, __v) \ + imsic_swap_switchcase_16(__ireg + 16, __v) +#define imsic_swap_switchcase_64(__ireg, __v) \ + imsic_swap_switchcase_32(__ireg + 0, __v) \ + imsic_swap_switchcase_32(__ireg + 32, __v) + +static unsigned long imsic_eix_swap(int ireg, unsigned long val) +{ + switch (ireg) { + imsic_swap_switchcase_64(IMSIC_EIP0, val) + imsic_swap_switchcase_64(IMSIC_EIE0, val) + }; + + 
return 0; +} + +#define imsic_vs_csr_write(__c, __v) \ +do { \ + csr_write(CSR_VSISELECT, __c); \ + csr_write(CSR_VSIREG, __v); \ +} while (0) + +#define imsic_write_switchcase(__ireg, __v) \ + case __ireg: \ + imsic_vs_csr_write(__ireg, __v); \ + break; +#define imsic_write_switchcase_2(__ireg, __v) \ + imsic_write_switchcase(__ireg + 0, __v) \ + imsic_write_switchcase(__ireg + 1, __v) +#define imsic_write_switchcase_4(__ireg, __v) \ + imsic_write_switchcase_2(__ireg + 0, __v) \ + imsic_write_switchcase_2(__ireg + 2, __v) +#define imsic_write_switchcase_8(__ireg, __v) \ + imsic_write_switchcase_4(__ireg + 0, __v) \ + imsic_write_switchcase_4(__ireg + 4, __v) +#define imsic_write_switchcase_16(__ireg, __v) \ + imsic_write_switchcase_8(__ireg + 0, __v) \ + imsic_write_switchcase_8(__ireg + 8, __v) +#define imsic_write_switchcase_32(__ireg, __v) \ + imsic_write_switchcase_16(__ireg + 0, __v) \ + imsic_write_switchcase_16(__ireg + 16, __v) +#define imsic_write_switchcase_64(__ireg, __v) \ + imsic_write_switchcase_32(__ireg + 0, __v) \ + imsic_write_switchcase_32(__ireg + 32, __v) + +static void imsic_eix_write(int ireg, unsigned long val) +{ + switch (ireg) { + imsic_write_switchcase_64(IMSIC_EIP0, val) + imsic_write_switchcase_64(IMSIC_EIE0, val) + }; +} + +#define imsic_vs_csr_set(__c, __v) \ +do { \ + csr_write(CSR_VSISELECT, __c); \ + csr_set(CSR_VSIREG, __v); \ +} while (0) + +#define imsic_set_switchcase(__ireg, __v) \ + case __ireg: \ + imsic_vs_csr_set(__ireg, __v); \ + break; +#define imsic_set_switchcase_2(__ireg, __v) \ + imsic_set_switchcase(__ireg + 0, __v) \ + imsic_set_switchcase(__ireg + 1, __v) +#define imsic_set_switchcase_4(__ireg, __v) \ + imsic_set_switchcase_2(__ireg + 0, __v) \ + imsic_set_switchcase_2(__ireg + 2, __v) +#define imsic_set_switchcase_8(__ireg, __v) \ + imsic_set_switchcase_4(__ireg + 0, __v) \ + imsic_set_switchcase_4(__ireg + 4, __v) +#define imsic_set_switchcase_16(__ireg, __v) \ + imsic_set_switchcase_8(__ireg + 0, __v) \ + imsic_set_switchcase_8(__ireg + 8, __v) +#define imsic_set_switchcase_32(__ireg, __v) \ + imsic_set_switchcase_16(__ireg + 0, __v) \ + imsic_set_switchcase_16(__ireg + 16, __v) +#define imsic_set_switchcase_64(__ireg, __v) \ + imsic_set_switchcase_32(__ireg + 0, __v) \ + imsic_set_switchcase_32(__ireg + 32, __v) + +static void imsic_eix_set(int ireg, unsigned long val) +{ + switch (ireg) { + imsic_set_switchcase_64(IMSIC_EIP0, val) + imsic_set_switchcase_64(IMSIC_EIE0, val) + }; +} + +static unsigned long imsic_mrif_atomic_rmw(struct imsic_mrif *mrif, + unsigned long *ptr, + unsigned long new_val, + unsigned long wr_mask) +{ + unsigned long old_val = 0, tmp = 0; + + __asm__ __volatile__ ( + "0: lr.w.aq %1, %0\n" + " and %2, %1, %3\n" + " or %2, %2, %4\n" + " sc.w.rl %2, %2, %0\n" + " bnez %2, 0b" + : "+A" (*ptr), "+r" (old_val), "+r" (tmp) + : "r" (~wr_mask), "r" (new_val & wr_mask) + : "memory"); + + return old_val; +} + +static unsigned long imsic_mrif_atomic_or(struct imsic_mrif *mrif, + unsigned long *ptr, + unsigned long val) +{ + return arch_atomic_long_fetch_or(val, (atomic_long_t *)ptr); +} + +#define imsic_mrif_atomic_write(__mrif, __ptr, __new_val) \ + imsic_mrif_atomic_rmw(__mrif, __ptr, __new_val, -1UL) +#define imsic_mrif_atomic_read(__mrif, __ptr) \ + imsic_mrif_atomic_or(__mrif, __ptr, 0) + +static u32 imsic_mrif_topei(struct imsic_mrif *mrif, u32 nr_eix, u32 nr_msis) +{ + struct imsic_mrif_eix *eix; + u32 i, imin, imax, ei, max_msi; + unsigned long eipend[BITS_PER_TYPE(u64) / BITS_PER_LONG]; + unsigned long 
eithreshold = imsic_mrif_atomic_read(mrif, + &mrif->eithreshold); + + max_msi = (eithreshold && (eithreshold <= nr_msis)) ? + eithreshold : nr_msis; + for (ei = 0; ei < nr_eix; ei++) { + eix = &mrif->eix[ei]; + eipend[0] = imsic_mrif_atomic_read(mrif, &eix->eie[0]) & + imsic_mrif_atomic_read(mrif, &eix->eip[0]); +#ifdef CONFIG_32BIT + eipend[1] = imsic_mrif_atomic_read(mrif, &eix->eie[1]) & + imsic_mrif_atomic_read(mrif, &eix->eip[1]); + if (!eipend[0] && !eipend[1]) +#else + if (!eipend[0]) +#endif + continue; + + imin = ei * BITS_PER_TYPE(u64); + imax = ((imin + BITS_PER_TYPE(u64)) < max_msi) ? + imin + BITS_PER_TYPE(u64) : max_msi; + for (i = (!imin) ? 1 : imin; i < imax; i++) { + if (test_bit(i - imin, eipend)) + return (i << TOPEI_ID_SHIFT) | i; + } + } + + return 0; +} + +static int imsic_mrif_rmw(struct imsic_mrif *mrif, u32 nr_eix, + unsigned long isel, unsigned long *val, + unsigned long new_val, unsigned long wr_mask) +{ + bool pend; + struct imsic_mrif_eix *eix; + unsigned long *ei, num, old_val = 0; + + switch (isel) { + case IMSIC_EIDELIVERY: + old_val = imsic_mrif_atomic_rmw(mrif, &mrif->eidelivery, + new_val, wr_mask & 0x1); + break; + case IMSIC_EITHRESHOLD: + old_val = imsic_mrif_atomic_rmw(mrif, &mrif->eithreshold, + new_val, wr_mask & (IMSIC_MAX_ID - 1)); + break; + case IMSIC_EIP0 ... IMSIC_EIP63: + case IMSIC_EIE0 ... IMSIC_EIE63: + if (isel >= IMSIC_EIP0 && isel <= IMSIC_EIP63) { + pend = true; + num = isel - IMSIC_EIP0; + } else { + pend = false; + num = isel - IMSIC_EIE0; + } + + if ((num / 2) >= nr_eix) + return -EINVAL; + eix = &mrif->eix[num / 2]; + +#ifndef CONFIG_32BIT + if (num & 0x1) + return -EINVAL; + ei = (pend) ? &eix->eip[0] : &eix->eie[0]; +#else + ei = (pend) ? &eix->eip[num & 0x1] : &eix->eie[num & 0x1]; +#endif + + /* Bit0 of EIP0 or EIE0 is read-only */ + if (!num) + wr_mask &= ~BIT(0); + + old_val = imsic_mrif_atomic_rmw(mrif, ei, new_val, wr_mask); + break; + default: + return -ENOENT; + }; + + if (val) + *val = old_val; + + return 0; +} + +struct imsic_vsfile_read_data { + int hgei; + u32 nr_eix; + bool clear; + struct imsic_mrif *mrif; +}; + +static void imsic_vsfile_local_read(void *data) +{ + u32 i; + struct imsic_mrif_eix *eix; + struct imsic_vsfile_read_data *idata = data; + struct imsic_mrif *mrif = idata->mrif; + unsigned long new_hstatus, old_hstatus, old_vsiselect; + + old_vsiselect = csr_read(CSR_VSISELECT); + old_hstatus = csr_read(CSR_HSTATUS); + new_hstatus = old_hstatus & ~HSTATUS_VGEIN; + new_hstatus |= ((unsigned long)idata->hgei) << HSTATUS_VGEIN_SHIFT; + csr_write(CSR_HSTATUS, new_hstatus); + + /* + * We don't use imsic_mrif_atomic_xyz() functions to store + * values in MRIF because imsic_vsfile_read() is always called + * with pointer to temporary MRIF on stack. 
+ */ + + if (idata->clear) { + mrif->eidelivery = imsic_vs_csr_swap(IMSIC_EIDELIVERY, 0); + mrif->eithreshold = imsic_vs_csr_swap(IMSIC_EITHRESHOLD, 0); + for (i = 0; i < idata->nr_eix; i++) { + eix = &mrif->eix[i]; + eix->eip[0] = imsic_eix_swap(IMSIC_EIP0 + i * 2, 0); + eix->eie[0] = imsic_eix_swap(IMSIC_EIE0 + i * 2, 0); +#ifdef CONFIG_32BIT + eix->eip[1] = imsic_eix_swap(IMSIC_EIP0 + i * 2 + 1, 0); + eix->eie[1] = imsic_eix_swap(IMSIC_EIE0 + i * 2 + 1, 0); +#endif + } + } else { + mrif->eidelivery = imsic_vs_csr_read(IMSIC_EIDELIVERY); + mrif->eithreshold = imsic_vs_csr_read(IMSIC_EITHRESHOLD); + for (i = 0; i < idata->nr_eix; i++) { + eix = &mrif->eix[i]; + eix->eip[0] = imsic_eix_read(IMSIC_EIP0 + i * 2); + eix->eie[0] = imsic_eix_read(IMSIC_EIE0 + i * 2); +#ifdef CONFIG_32BIT + eix->eip[1] = imsic_eix_read(IMSIC_EIP0 + i * 2 + 1); + eix->eie[1] = imsic_eix_read(IMSIC_EIE0 + i * 2 + 1); +#endif + } + } + + csr_write(CSR_HSTATUS, old_hstatus); + csr_write(CSR_VSISELECT, old_vsiselect); +} + +static void imsic_vsfile_read(int vsfile_hgei, int vsfile_cpu, u32 nr_eix, + bool clear, struct imsic_mrif *mrif) +{ + struct imsic_vsfile_read_data idata; + + /* We can only read clear if we have a IMSIC VS-file */ + if (vsfile_cpu < 0 || vsfile_hgei <= 0) + return; + + /* We can only read clear on local CPU */ + idata.hgei = vsfile_hgei; + idata.nr_eix = nr_eix; + idata.clear = clear; + idata.mrif = mrif; + on_each_cpu_mask(cpumask_of(vsfile_cpu), + imsic_vsfile_local_read, &idata, 1); +} + +static void imsic_vsfile_local_clear(int vsfile_hgei, u32 nr_eix) +{ + u32 i; + unsigned long new_hstatus, old_hstatus, old_vsiselect; + + /* We can only zero-out if we have a IMSIC VS-file */ + if (vsfile_hgei <= 0) + return; + + old_vsiselect = csr_read(CSR_VSISELECT); + old_hstatus = csr_read(CSR_HSTATUS); + new_hstatus = old_hstatus & ~HSTATUS_VGEIN; + new_hstatus |= ((unsigned long)vsfile_hgei) << HSTATUS_VGEIN_SHIFT; + csr_write(CSR_HSTATUS, new_hstatus); + + imsic_vs_csr_write(IMSIC_EIDELIVERY, 0); + imsic_vs_csr_write(IMSIC_EITHRESHOLD, 0); + for (i = 0; i < nr_eix; i++) { + imsic_eix_write(IMSIC_EIP0 + i * 2, 0); + imsic_eix_write(IMSIC_EIE0 + i * 2, 0); +#ifdef CONFIG_32BIT + imsic_eix_write(IMSIC_EIP0 + i * 2 + 1, 0); + imsic_eix_write(IMSIC_EIE0 + i * 2 + 1, 0); +#endif + } + + csr_write(CSR_HSTATUS, old_hstatus); + csr_write(CSR_VSISELECT, old_vsiselect); +} + +static void imsic_vsfile_local_update(int vsfile_hgei, u32 nr_eix, + struct imsic_mrif *mrif) +{ + u32 i; + struct imsic_mrif_eix *eix; + unsigned long new_hstatus, old_hstatus, old_vsiselect; + + /* We can only update if we have a HW IMSIC context */ + if (vsfile_hgei <= 0) + return; + + /* + * We don't use imsic_mrif_atomic_xyz() functions to read values + * from MRIF in this function because it is always called with + * pointer to temporary MRIF on stack. 
+	 */
+
+	old_vsiselect = csr_read(CSR_VSISELECT);
+	old_hstatus = csr_read(CSR_HSTATUS);
+	new_hstatus = old_hstatus & ~HSTATUS_VGEIN;
+	new_hstatus |= ((unsigned long)vsfile_hgei) << HSTATUS_VGEIN_SHIFT;
+	csr_write(CSR_HSTATUS, new_hstatus);
+
+	for (i = 0; i < nr_eix; i++) {
+		eix = &mrif->eix[i];
+		imsic_eix_set(IMSIC_EIP0 + i * 2, eix->eip[0]);
+		imsic_eix_set(IMSIC_EIE0 + i * 2, eix->eie[0]);
+#ifdef CONFIG_32BIT
+		imsic_eix_set(IMSIC_EIP0 + i * 2 + 1, eix->eip[1]);
+		imsic_eix_set(IMSIC_EIE0 + i * 2 + 1, eix->eie[1]);
+#endif
+	}
+	imsic_vs_csr_write(IMSIC_EITHRESHOLD, mrif->eithreshold);
+	imsic_vs_csr_write(IMSIC_EIDELIVERY, mrif->eidelivery);
+
+	csr_write(CSR_HSTATUS, old_hstatus);
+	csr_write(CSR_VSISELECT, old_vsiselect);
+}
+
+static void imsic_vsfile_cleanup(struct imsic *imsic)
+{
+	int old_vsfile_hgei, old_vsfile_cpu;
+	unsigned long flags;
+
+	/*
+	 * We don't use imsic_mrif_atomic_xyz() functions to clear the
+	 * SW-file in this function because it is always called when the
+	 * VCPU is being destroyed.
+	 */
+
+	write_lock_irqsave(&imsic->vsfile_lock, flags);
+	old_vsfile_hgei = imsic->vsfile_hgei;
+	old_vsfile_cpu = imsic->vsfile_cpu;
+	imsic->vsfile_cpu = imsic->vsfile_hgei = -1;
+	imsic->vsfile_va = NULL;
+	imsic->vsfile_pa = 0;
+	write_unlock_irqrestore(&imsic->vsfile_lock, flags);
+
+	memset(imsic->swfile, 0, sizeof(*imsic->swfile));
+
+	if (old_vsfile_cpu >= 0)
+		kvm_riscv_aia_free_hgei(old_vsfile_cpu, old_vsfile_hgei);
+}
+
+static void imsic_swfile_extirq_update(struct kvm_vcpu *vcpu)
+{
+	struct imsic *imsic = vcpu->arch.aia_context.imsic_state;
+	struct imsic_mrif *mrif = imsic->swfile;
+
+	if (imsic_mrif_atomic_read(mrif, &mrif->eidelivery) &&
+	    imsic_mrif_topei(mrif, imsic->nr_eix, imsic->nr_msis))
+		kvm_riscv_vcpu_set_interrupt(vcpu, IRQ_VS_EXT);
+	else
+		kvm_riscv_vcpu_unset_interrupt(vcpu, IRQ_VS_EXT);
+}
+
+static void imsic_swfile_read(struct kvm_vcpu *vcpu, bool clear,
+			      struct imsic_mrif *mrif)
+{
+	struct imsic *imsic = vcpu->arch.aia_context.imsic_state;
+
+	/*
+	 * We don't use imsic_mrif_atomic_xyz() functions to read and
+	 * write the SW-file and the MRIF in this function because it is
+	 * always called when the VCPU is not using the SW-file and the
+	 * MRIF argument points to a temporary buffer on the stack.
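+	 * Hence there are no concurrent readers or writers of either
+	 * buffer and the plain memcpy()/memset() below are safe.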
+	 */
+
+	memcpy(mrif, imsic->swfile, sizeof(*mrif));
+	if (clear) {
+		memset(imsic->swfile, 0, sizeof(*imsic->swfile));
+		kvm_riscv_vcpu_unset_interrupt(vcpu, IRQ_VS_EXT);
+	}
+}
+
+static void imsic_swfile_update(struct kvm_vcpu *vcpu,
+				struct imsic_mrif *mrif)
+{
+	u32 i;
+	struct imsic_mrif_eix *seix, *eix;
+	struct imsic *imsic = vcpu->arch.aia_context.imsic_state;
+	struct imsic_mrif *smrif = imsic->swfile;
+
+	imsic_mrif_atomic_write(smrif, &smrif->eidelivery, mrif->eidelivery);
+	imsic_mrif_atomic_write(smrif, &smrif->eithreshold, mrif->eithreshold);
+	for (i = 0; i < imsic->nr_eix; i++) {
+		seix = &smrif->eix[i];
+		eix = &mrif->eix[i];
+		imsic_mrif_atomic_or(smrif, &seix->eip[0], eix->eip[0]);
+		imsic_mrif_atomic_or(smrif, &seix->eie[0], eix->eie[0]);
+#ifdef CONFIG_32BIT
+		imsic_mrif_atomic_or(smrif, &seix->eip[1], eix->eip[1]);
+		imsic_mrif_atomic_or(smrif, &seix->eie[1], eix->eie[1]);
+#endif
+	}
+
+	imsic_swfile_extirq_update(vcpu);
+}
+
+void kvm_riscv_vcpu_aia_imsic_release(struct kvm_vcpu *vcpu)
+{
+	unsigned long flags;
+	struct imsic_mrif tmrif;
+	int old_vsfile_hgei, old_vsfile_cpu;
+	struct imsic *imsic = vcpu->arch.aia_context.imsic_state;
+
+	/* Read and clear IMSIC VS-file details */
+	write_lock_irqsave(&imsic->vsfile_lock, flags);
+	old_vsfile_hgei = imsic->vsfile_hgei;
+	old_vsfile_cpu = imsic->vsfile_cpu;
+	imsic->vsfile_cpu = imsic->vsfile_hgei = -1;
+	imsic->vsfile_va = NULL;
+	imsic->vsfile_pa = 0;
+	write_unlock_irqrestore(&imsic->vsfile_lock, flags);
+
+	/* Do nothing if there is no IMSIC VS-file to release */
+	if (old_vsfile_cpu < 0)
+		return;
+
+	/*
+	 * At this point, all interrupt producers are still using
+	 * the old IMSIC VS-file so we first re-direct all interrupt
+	 * producers.
+	 */
+
+	/* Purge the G-stage mapping */
+	kvm_riscv_gstage_iounmap(vcpu->kvm,
+				 vcpu->arch.aia_context.imsic_addr,
+				 IMSIC_MMIO_PAGE_SZ);
+
+	/* TODO: Purge the IOMMU mapping ??? */
+
+	/*
+	 * At this point, all interrupt producers have been re-directed
+	 * to somewhere else so we move register state from the old IMSIC
+	 * VS-file to the IMSIC SW-file.
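+	 * Pending bits captured from the old VS-file are OR-ed into
+	 * the SW-file by imsic_swfile_update() so no interrupt is lost.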
+	 */
+
+	/* Read and clear register state from old IMSIC VS-file */
+	memset(&tmrif, 0, sizeof(tmrif));
+	imsic_vsfile_read(old_vsfile_hgei, old_vsfile_cpu, imsic->nr_hw_eix,
+			  true, &tmrif);
+
+	/* Update register state in IMSIC SW-file */
+	imsic_swfile_update(vcpu, &tmrif);
+
+	/* Free-up old IMSIC VS-file */
+	kvm_riscv_aia_free_hgei(old_vsfile_cpu, old_vsfile_hgei);
+}
+
+int kvm_riscv_vcpu_aia_imsic_update(struct kvm_vcpu *vcpu)
+{
+	unsigned long flags;
+	phys_addr_t new_vsfile_pa;
+	struct imsic_mrif tmrif;
+	void __iomem *new_vsfile_va;
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_run *run = vcpu->run;
+	struct kvm_vcpu_aia *vaia = &vcpu->arch.aia_context;
+	struct imsic *imsic = vaia->imsic_state;
+	int ret = 0, new_vsfile_hgei = -1, old_vsfile_hgei, old_vsfile_cpu;
+
+	/* Do nothing for emulation mode */
+	if (kvm->arch.aia.mode == KVM_DEV_RISCV_AIA_MODE_EMUL)
+		return 1;
+
+	/* Read old IMSIC VS-file details */
+	read_lock_irqsave(&imsic->vsfile_lock, flags);
+	old_vsfile_hgei = imsic->vsfile_hgei;
+	old_vsfile_cpu = imsic->vsfile_cpu;
+	read_unlock_irqrestore(&imsic->vsfile_lock, flags);
+
+	/* Do nothing if we are continuing on the same CPU */
+	if (old_vsfile_cpu == vcpu->cpu)
+		return 1;
+
+	/* Allocate new IMSIC VS-file */
+	ret = kvm_riscv_aia_alloc_hgei(vcpu->cpu, vcpu,
+				       &new_vsfile_va, &new_vsfile_pa);
+	if (ret <= 0) {
+		/* For HW acceleration mode, we can't continue */
+		if (kvm->arch.aia.mode == KVM_DEV_RISCV_AIA_MODE_HWACCEL) {
+			run->fail_entry.hardware_entry_failure_reason =
+								CSR_HSTATUS;
+			run->fail_entry.cpu = vcpu->cpu;
+			run->exit_reason = KVM_EXIT_FAIL_ENTRY;
+			return 0;
+		}
+
+		/* Release old IMSIC VS-file */
+		if (old_vsfile_cpu >= 0)
+			kvm_riscv_vcpu_aia_imsic_release(vcpu);
+
+		/* For automatic mode, we continue */
+		goto done;
+	}
+	new_vsfile_hgei = ret;
+
+	/*
+	 * At this point, all interrupt producers are still using
+	 * the old IMSIC VS-file so we first move all interrupt
+	 * producers to the new IMSIC VS-file.
+	 */
+
+	/* Zero-out new IMSIC VS-file */
+	imsic_vsfile_local_clear(new_vsfile_hgei, imsic->nr_hw_eix);
+
+	/* Update G-stage mapping for the new IMSIC VS-file */
+	ret = kvm_riscv_gstage_ioremap(kvm, vcpu->arch.aia_context.imsic_addr,
+				       new_vsfile_pa, IMSIC_MMIO_PAGE_SZ,
+				       true, true);
+	if (ret)
+		goto fail_free_vsfile_hgei;
+
+	/* TODO: Update the IOMMU mapping ??? */
+
+	/* Update new IMSIC VS-file details in IMSIC context */
+	write_lock_irqsave(&imsic->vsfile_lock, flags);
+	imsic->vsfile_hgei = new_vsfile_hgei;
+	imsic->vsfile_cpu = vcpu->cpu;
+	imsic->vsfile_va = new_vsfile_va;
+	imsic->vsfile_pa = new_vsfile_pa;
+	write_unlock_irqrestore(&imsic->vsfile_lock, flags);
+
+	/*
+	 * At this point, all interrupt producers have been moved
+	 * to the new IMSIC VS-file so we move register state from
+	 * the old IMSIC VS/SW-file to the new IMSIC VS-file.
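+	 * State is read-and-cleared from the old location and then
+	 * restored into the new VS-file below.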
+	 */
+
+	memset(&tmrif, 0, sizeof(tmrif));
+	if (old_vsfile_cpu >= 0) {
+		/* Read and clear register state from old IMSIC VS-file */
+		imsic_vsfile_read(old_vsfile_hgei, old_vsfile_cpu,
+				  imsic->nr_hw_eix, true, &tmrif);
+
+		/* Free-up old IMSIC VS-file */
+		kvm_riscv_aia_free_hgei(old_vsfile_cpu, old_vsfile_hgei);
+	} else {
+		/* Read and clear register state from IMSIC SW-file */
+		imsic_swfile_read(vcpu, true, &tmrif);
+	}
+
+	/* Restore register state in the new IMSIC VS-file */
+	imsic_vsfile_local_update(new_vsfile_hgei, imsic->nr_hw_eix, &tmrif);
+
+done:
+	/* Set VCPU HSTATUS.VGEIN to new IMSIC VS-file */
+	vcpu->arch.guest_context.hstatus &= ~HSTATUS_VGEIN;
+	if (new_vsfile_hgei > 0)
+		vcpu->arch.guest_context.hstatus |=
+			((unsigned long)new_vsfile_hgei) << HSTATUS_VGEIN_SHIFT;
+
+	/* Continue run-loop */
+	return 1;
+
+fail_free_vsfile_hgei:
+	kvm_riscv_aia_free_hgei(vcpu->cpu, new_vsfile_hgei);
+	return ret;
+}
+
+int kvm_riscv_vcpu_aia_imsic_rmw(struct kvm_vcpu *vcpu, unsigned long isel,
+				 unsigned long *val, unsigned long new_val,
+				 unsigned long wr_mask)
+{
+	u32 topei;
+	struct imsic_mrif_eix *eix;
+	int r, rc = KVM_INSN_CONTINUE_NEXT_SEPC;
+	struct imsic *imsic = vcpu->arch.aia_context.imsic_state;
+
+	if (isel == KVM_RISCV_AIA_IMSIC_TOPEI) {
+		/* Read pending and enabled interrupt with highest priority */
+		topei = imsic_mrif_topei(imsic->swfile, imsic->nr_eix,
+					 imsic->nr_msis);
+		if (val)
+			*val = topei;
+
+		/* Writes ignore value and clear top pending interrupt */
+		if (topei && wr_mask) {
+			topei >>= TOPEI_ID_SHIFT;
+			if (topei) {
+				eix = &imsic->swfile->eix[topei /
+							  BITS_PER_TYPE(u64)];
+				clear_bit(topei & (BITS_PER_TYPE(u64) - 1),
+					  eix->eip);
+			}
+		}
+	} else {
+		r = imsic_mrif_rmw(imsic->swfile, imsic->nr_eix, isel,
+				   val, new_val, wr_mask);
+		/* Forward unknown IMSIC register to user-space */
+		if (r)
+			rc = (r == -ENOENT) ? 0 : KVM_INSN_ILLEGAL_TRAP;
+	}
+
+	if (wr_mask)
+		imsic_swfile_extirq_update(vcpu);
+
+	return rc;
+}
+
+void kvm_riscv_vcpu_aia_imsic_reset(struct kvm_vcpu *vcpu)
+{
+	struct imsic *imsic = vcpu->arch.aia_context.imsic_state;
+
+	if (!imsic)
+		return;
+
+	kvm_riscv_vcpu_aia_imsic_release(vcpu);
+
+	memset(imsic->swfile, 0, sizeof(*imsic->swfile));
+}
+
+int kvm_riscv_vcpu_aia_imsic_inject(struct kvm_vcpu *vcpu,
+				    u32 guest_index, u32 offset, u32 iid)
+{
+	unsigned long flags;
+	struct imsic_mrif_eix *eix;
+	struct imsic *imsic = vcpu->arch.aia_context.imsic_state;
+
+	/* We only emulate one IMSIC MMIO page for each Guest VCPU */
+	if (!imsic || !iid || guest_index ||
+	    (offset != IMSIC_MMIO_SETIPNUM_LE &&
+	     offset != IMSIC_MMIO_SETIPNUM_BE))
+		return -ENODEV;
+
+	iid = (offset == IMSIC_MMIO_SETIPNUM_BE) ?
+		__swab32(iid) : iid;
+	if (imsic->nr_msis <= iid)
+		return -EINVAL;
+
+	read_lock_irqsave(&imsic->vsfile_lock, flags);
+
+	if (imsic->vsfile_cpu >= 0) {
+		writel(iid, imsic->vsfile_va + IMSIC_MMIO_SETIPNUM_LE);
+		kvm_vcpu_kick(vcpu);
+	} else {
+		eix = &imsic->swfile->eix[iid / BITS_PER_TYPE(u64)];
+		set_bit(iid & (BITS_PER_TYPE(u64) - 1), eix->eip);
+		imsic_swfile_extirq_update(vcpu);
+	}
+
+	read_unlock_irqrestore(&imsic->vsfile_lock, flags);
+
+	return 0;
+}
+
+static int imsic_mmio_read(struct kvm_vcpu *vcpu, struct kvm_io_device *dev,
+			   gpa_t addr, int len, void *val)
+{
+	if (len != 4 || (addr & 0x3) != 0)
+		return -EOPNOTSUPP;
+
+	*((u32 *)val) = 0;
+
+	return 0;
+}
+
+static int imsic_mmio_write(struct kvm_vcpu *vcpu, struct kvm_io_device *dev,
+			    gpa_t addr, int len, const void *val)
+{
+	struct kvm_msi msi = { 0 };
+
+	if (len != 4 || (addr & 0x3) != 0)
+		return -EOPNOTSUPP;
+
+	msi.address_hi = addr >> 32;
+	msi.address_lo = (u32)addr;
+	msi.data = *((const u32 *)val);
+	kvm_riscv_aia_inject_msi(vcpu->kvm, &msi);
+
+	return 0;
+}
+
+static struct kvm_io_device_ops imsic_iodoev_ops = {
+	.read = imsic_mmio_read,
+	.write = imsic_mmio_write,
+};
+
+int kvm_riscv_vcpu_aia_imsic_init(struct kvm_vcpu *vcpu)
+{
+	int ret = 0;
+	struct imsic *imsic;
+	struct page *swfile_page;
+	struct kvm *kvm = vcpu->kvm;
+
+	/* Fail if we have zero IDs */
+	if (!kvm->arch.aia.nr_ids)
+		return -EINVAL;
+
+	/* Allocate IMSIC context */
+	imsic = kzalloc(sizeof(*imsic), GFP_KERNEL);
+	if (!imsic)
+		return -ENOMEM;
+	vcpu->arch.aia_context.imsic_state = imsic;
+
+	/* Setup IMSIC context */
+	imsic->nr_msis = kvm->arch.aia.nr_ids + 1;
+	rwlock_init(&imsic->vsfile_lock);
+	imsic->nr_eix = BITS_TO_U64(imsic->nr_msis);
+	imsic->nr_hw_eix = BITS_TO_U64(kvm_riscv_aia_max_ids);
+	imsic->vsfile_hgei = imsic->vsfile_cpu = -1;
+
+	/* Setup IMSIC SW-file */
+	swfile_page = alloc_pages(GFP_KERNEL | __GFP_ZERO,
+				  get_order(sizeof(*imsic->swfile)));
+	if (!swfile_page) {
+		ret = -ENOMEM;
+		goto fail_free_imsic;
+	}
+	imsic->swfile = page_to_virt(swfile_page);
+	imsic->swfile_pa = page_to_phys(swfile_page);
+
+	/* Setup IO device */
+	kvm_iodevice_init(&imsic->iodev, &imsic_iodoev_ops);
+	mutex_lock(&kvm->slots_lock);
+	ret = kvm_io_bus_register_dev(kvm, KVM_MMIO_BUS,
+				      vcpu->arch.aia_context.imsic_addr,
+				      KVM_DEV_RISCV_IMSIC_SIZE,
+				      &imsic->iodev);
+	mutex_unlock(&kvm->slots_lock);
+	if (ret)
+		goto fail_free_swfile;
+
+	return 0;
+
+fail_free_swfile:
+	free_pages((unsigned long)imsic->swfile,
+		   get_order(sizeof(*imsic->swfile)));
+fail_free_imsic:
+	vcpu->arch.aia_context.imsic_state = NULL;
+	kfree(imsic);
+	return ret;
+}
+
+void kvm_riscv_vcpu_aia_imsic_cleanup(struct kvm_vcpu *vcpu)
+{
+	struct kvm *kvm = vcpu->kvm;
+	struct imsic *imsic = vcpu->arch.aia_context.imsic_state;
+
+	if (!imsic)
+		return;
+
+	imsic_vsfile_cleanup(imsic);
+
+	mutex_lock(&kvm->slots_lock);
+	kvm_io_bus_unregister_dev(kvm, KVM_MMIO_BUS, &imsic->iodev);
+	mutex_unlock(&kvm->slots_lock);
+
+	free_pages((unsigned long)imsic->swfile,
+		   get_order(sizeof(*imsic->swfile)));
+
+	vcpu->arch.aia_context.imsic_state = NULL;
+	kfree(imsic);
+}

From patchwork Thu Jun 15 07:33:53 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 13280845
From: Anup Patel
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Andrew Jones,
    kvm@vger.kernel.org, kvm-riscv@lists.infradead.org,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    Anup Patel, Atish Patra
Subject: [PATCH v3 10/10] RISC-V: KVM: Expose IMSIC registers as attributes of AIA irqchip
Date: Thu, 15 Jun 2023 13:03:53 +0530
Message-Id: <20230615073353.85435-11-apatel@ventanamicro.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230615073353.85435-1-apatel@ventanamicro.com>
References: <20230615073353.85435-1-apatel@ventanamicro.com>
MIME-Version: 1.0

We expose IMSIC registers as KVM device attributes of the in-kernel AIA irqchip device.
This will allow KVM user-space to save/restore IMSIC state of each VCPU using KVM device ioctls().

Signed-off-by: Anup Patel
Reviewed-by: Atish Patra
---
 arch/riscv/include/asm/kvm_aia.h  |   3 +
 arch/riscv/include/uapi/asm/kvm.h |  17 +++
 arch/riscv/kvm/aia_device.c       |  29 ++++-
 arch/riscv/kvm/aia_imsic.c        | 170 ++++++++++++++++++++++++++++++
 4 files changed, 217 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_aia.h b/arch/riscv/include/asm/kvm_aia.h
index a4f6ebf90e31..1f37b600ca47 100644
--- a/arch/riscv/include/asm/kvm_aia.h
+++ b/arch/riscv/include/asm/kvm_aia.h
@@ -97,6 +97,9 @@ int kvm_riscv_vcpu_aia_imsic_update(struct kvm_vcpu *vcpu);
 int kvm_riscv_vcpu_aia_imsic_rmw(struct kvm_vcpu *vcpu, unsigned long isel,
				  unsigned long *val, unsigned long new_val,
				  unsigned long wr_mask);
+int kvm_riscv_aia_imsic_rw_attr(struct kvm *kvm, unsigned long type,
+				bool write, unsigned long *val);
+int kvm_riscv_aia_imsic_has_attr(struct kvm *kvm, unsigned long type);
 void kvm_riscv_vcpu_aia_imsic_reset(struct kvm_vcpu *vcpu);
 int kvm_riscv_vcpu_aia_imsic_inject(struct kvm_vcpu *vcpu,
				     u32 guest_index, u32 offset, u32 iid);
diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h
index 9ed822fc5589..61d7fecc4899 100644
--- a/arch/riscv/include/uapi/asm/kvm.h
+++ b/arch/riscv/include/uapi/asm/kvm.h
@@ -255,6 +255,23 @@ enum KVM_RISCV_SBI_EXT_ID {
  */
 #define KVM_DEV_RISCV_AIA_GRP_APLIC		3

+/*
+ * The lower 12 bits of the device attribute type contain the iselect
+ * value of the IMSIC register (range 0x70-0xFF) whereas the higher-order
+ * bits contain the VCPU id.
+ */
+#define KVM_DEV_RISCV_AIA_GRP_IMSIC		4
+#define KVM_DEV_RISCV_AIA_IMSIC_ISEL_BITS	12
+#define KVM_DEV_RISCV_AIA_IMSIC_ISEL_MASK	\
+		((1U << KVM_DEV_RISCV_AIA_IMSIC_ISEL_BITS) - 1)
+#define KVM_DEV_RISCV_AIA_IMSIC_MKATTR(__vcpu, __isel)	\
+		(((__vcpu) << KVM_DEV_RISCV_AIA_IMSIC_ISEL_BITS) | \
+		 ((__isel) & KVM_DEV_RISCV_AIA_IMSIC_ISEL_MASK))
+#define KVM_DEV_RISCV_AIA_IMSIC_GET_ISEL(__attr)	\
+		((__attr) & KVM_DEV_RISCV_AIA_IMSIC_ISEL_MASK)
+#define KVM_DEV_RISCV_AIA_IMSIC_GET_VCPU(__attr)	\
+		((__attr) >> KVM_DEV_RISCV_AIA_IMSIC_ISEL_BITS)
+
 /* One single KVM irqchip, i.e. the AIA */
 #define KVM_NR_IRQCHIPS		1
diff --git a/arch/riscv/kvm/aia_device.c b/arch/riscv/kvm/aia_device.c
index c649ad6e8e0a..84dae351b6d7 100644
--- a/arch/riscv/kvm/aia_device.c
+++ b/arch/riscv/kvm/aia_device.c
@@ -327,7 +327,7 @@ static int aia_set_attr(struct kvm_device *dev, struct kvm_device_attr *attr)
 	u32 nr;
 	u64 addr;
 	int nr_vcpus, r = -ENXIO;
-	unsigned long type = (unsigned long)attr->attr;
+	unsigned long v, type = (unsigned long)attr->attr;
 	void __user *uaddr = (void __user *)(long)attr->addr;

 	switch (attr->group) {
@@ -374,6 +374,15 @@ static int aia_set_attr(struct kvm_device *dev, struct kvm_device_attr *attr)
 		r = kvm_riscv_aia_aplic_set_attr(dev->kvm, type, nr);
 		mutex_unlock(&dev->kvm->lock);

+		break;
+	case KVM_DEV_RISCV_AIA_GRP_IMSIC:
+		if (copy_from_user(&v, uaddr, sizeof(v)))
+			return -EFAULT;
+
+		mutex_lock(&dev->kvm->lock);
+		r = kvm_riscv_aia_imsic_rw_attr(dev->kvm, type, true, &v);
+		mutex_unlock(&dev->kvm->lock);
+
 		break;
 	}

@@ -386,7 +395,7 @@ static int aia_get_attr(struct kvm_device *dev, struct kvm_device_attr *attr)
 	u64 addr;
 	int nr_vcpus, r = -ENXIO;
 	void __user *uaddr = (void __user *)(long)attr->addr;
-	unsigned long type = (unsigned long)attr->attr;
+	unsigned long v, type = (unsigned long)attr->attr;

 	switch (attr->group) {
 	case KVM_DEV_RISCV_AIA_GRP_CONFIG:
@@ -435,6 +444,20 @@ static int aia_get_attr(struct kvm_device *dev, struct kvm_device_attr *attr)
 		if (copy_to_user(uaddr, &nr, sizeof(nr)))
 			return -EFAULT;

+		break;
+	case KVM_DEV_RISCV_AIA_GRP_IMSIC:
+		if (copy_from_user(&v, uaddr, sizeof(v)))
+			return -EFAULT;
+
+		mutex_lock(&dev->kvm->lock);
+		r = kvm_riscv_aia_imsic_rw_attr(dev->kvm, type, false, &v);
+		mutex_unlock(&dev->kvm->lock);
+		if (r)
+			return r;
+
+		if (copy_to_user(uaddr, &v, sizeof(v)))
+			return -EFAULT;
+
 		break;
 	}

@@ -473,6 +496,8 @@ static int aia_has_attr(struct kvm_device *dev, struct kvm_device_attr *attr)
 		break;
 	case KVM_DEV_RISCV_AIA_GRP_APLIC:
 		return kvm_riscv_aia_aplic_has_attr(dev->kvm, attr->attr);
+	case KVM_DEV_RISCV_AIA_GRP_IMSIC:
+		return kvm_riscv_aia_imsic_has_attr(dev->kvm, attr->attr);
 	}

 	return -ENXIO;
diff --git a/arch/riscv/kvm/aia_imsic.c b/arch/riscv/kvm/aia_imsic.c
index 2dc09dcb8ab5..8f108cfa80e5 100644
--- a/arch/riscv/kvm/aia_imsic.c
+++ b/arch/riscv/kvm/aia_imsic.c
@@ -277,6 +277,33 @@ static u32 imsic_mrif_topei(struct imsic_mrif *mrif, u32 nr_eix, u32 nr_msis)
 	return 0;
 }

+static int imsic_mrif_isel_check(u32 nr_eix, unsigned long isel)
+{
+	u32 num = 0;
+
+	switch (isel) {
+	case IMSIC_EIDELIVERY:
+	case IMSIC_EITHRESHOLD:
+		break;
+	case IMSIC_EIP0 ... IMSIC_EIP63:
+		num = isel - IMSIC_EIP0;
+		break;
+	case IMSIC_EIE0 ... IMSIC_EIE63:
+		num = isel - IMSIC_EIE0;
+		break;
+	default:
+		return -ENOENT;
+	}
+#ifndef CONFIG_32BIT
+	if (num & 0x1)
+		return -EINVAL;
+#endif
+	if ((num / 2) >= nr_eix)
+		return -EINVAL;
+
+	return 0;
+}
+
 static int imsic_mrif_rmw(struct imsic_mrif *mrif, u32 nr_eix,
			   unsigned long isel, unsigned long *val,
			   unsigned long new_val, unsigned long wr_mask)
@@ -407,6 +434,86 @@ static void imsic_vsfile_read(int vsfile_hgei, int vsfile_cpu, u32 nr_eix,
			  imsic_vsfile_local_read, &idata, 1);
 }

+struct imsic_vsfile_rw_data {
+	int hgei;
+	int isel;
+	bool write;
+	unsigned long val;
+};
+
+static void imsic_vsfile_local_rw(void *data)
+{
+	struct imsic_vsfile_rw_data *idata = data;
+	unsigned long new_hstatus, old_hstatus, old_vsiselect;
+
+	old_vsiselect = csr_read(CSR_VSISELECT);
+	old_hstatus = csr_read(CSR_HSTATUS);
+	new_hstatus = old_hstatus & ~HSTATUS_VGEIN;
+	new_hstatus |= ((unsigned long)idata->hgei) << HSTATUS_VGEIN_SHIFT;
+	csr_write(CSR_HSTATUS, new_hstatus);
+
+	switch (idata->isel) {
+	case IMSIC_EIDELIVERY:
+		if (idata->write)
+			imsic_vs_csr_write(IMSIC_EIDELIVERY, idata->val);
+		else
+			idata->val = imsic_vs_csr_read(IMSIC_EIDELIVERY);
+		break;
+	case IMSIC_EITHRESHOLD:
+		if (idata->write)
+			imsic_vs_csr_write(IMSIC_EITHRESHOLD, idata->val);
+		else
+			idata->val = imsic_vs_csr_read(IMSIC_EITHRESHOLD);
+		break;
+	case IMSIC_EIP0 ... IMSIC_EIP63:
+	case IMSIC_EIE0 ... IMSIC_EIE63:
+#ifndef CONFIG_32BIT
+		if (idata->isel & 0x1)
+			break;
+#endif
+		if (idata->write)
+			imsic_eix_write(idata->isel, idata->val);
+		else
+			idata->val = imsic_eix_read(idata->isel);
+		break;
+	default:
+		break;
+	}

+	csr_write(CSR_HSTATUS, old_hstatus);
+	csr_write(CSR_VSISELECT, old_vsiselect);
+}
+
+static int imsic_vsfile_rw(int vsfile_hgei, int vsfile_cpu, u32 nr_eix,
+			   unsigned long isel, bool write,
+			   unsigned long *val)
+{
+	int rc;
+	struct imsic_vsfile_rw_data rdata;
+
+	/* We can only access the register if we have an IMSIC VS-file */
+	if (vsfile_cpu < 0 || vsfile_hgei <= 0)
+		return -EINVAL;
+
+	/* Check IMSIC register iselect */
+	rc = imsic_mrif_isel_check(nr_eix, isel);
+	if (rc)
+		return rc;
+
+	/* We can only access the register on the local CPU */
+	rdata.hgei = vsfile_hgei;
+	rdata.isel = isel;
+	rdata.write = write;
+	rdata.val = (write) ? *val : 0;
+	on_each_cpu_mask(cpumask_of(vsfile_cpu),
+			 imsic_vsfile_local_rw, &rdata, 1);
+
+	if (!write)
+		*val = rdata.val;
+
+	return 0;
+}
+
 static void imsic_vsfile_local_clear(int vsfile_hgei, u32 nr_eix)
 {
 	u32 i;
@@ -758,6 +865,69 @@ int kvm_riscv_vcpu_aia_imsic_rmw(struct kvm_vcpu *vcpu, unsigned long isel,
 	return rc;
 }

+int kvm_riscv_aia_imsic_rw_attr(struct kvm *kvm, unsigned long type,
+				bool write, unsigned long *val)
+{
+	u32 isel, vcpu_id;
+	unsigned long flags;
+	struct imsic *imsic;
+	struct kvm_vcpu *vcpu;
+	int rc, vsfile_hgei, vsfile_cpu;
+
+	if (!kvm_riscv_aia_initialized(kvm))
+		return -ENODEV;
+
+	vcpu_id = KVM_DEV_RISCV_AIA_IMSIC_GET_VCPU(type);
+	vcpu = kvm_get_vcpu_by_id(kvm, vcpu_id);
+	if (!vcpu)
+		return -ENODEV;
+
+	isel = KVM_DEV_RISCV_AIA_IMSIC_GET_ISEL(type);
+	imsic = vcpu->arch.aia_context.imsic_state;
+
+	read_lock_irqsave(&imsic->vsfile_lock, flags);
+
+	rc = 0;
+	vsfile_hgei = imsic->vsfile_hgei;
+	vsfile_cpu = imsic->vsfile_cpu;
+	if (vsfile_cpu < 0) {
+		if (write) {
+			rc = imsic_mrif_rmw(imsic->swfile, imsic->nr_eix,
+					    isel, NULL, *val, -1UL);
+			imsic_swfile_extirq_update(vcpu);
+		} else {
+			rc = imsic_mrif_rmw(imsic->swfile, imsic->nr_eix,
+					    isel, val, 0, 0);
+		}
+	}
+
+	read_unlock_irqrestore(&imsic->vsfile_lock, flags);
+
+	if (!rc && vsfile_cpu >= 0)
+		rc = imsic_vsfile_rw(vsfile_hgei, vsfile_cpu, imsic->nr_eix,
+				     isel, write, val);
+
+	return rc;
+}
+
+int kvm_riscv_aia_imsic_has_attr(struct kvm *kvm, unsigned long type)
+{
+	u32 isel, vcpu_id;
+	struct imsic *imsic;
+	struct kvm_vcpu *vcpu;
+
+	if (!kvm_riscv_aia_initialized(kvm))
+		return -ENODEV;
+
+	vcpu_id = KVM_DEV_RISCV_AIA_IMSIC_GET_VCPU(type);
+	vcpu = kvm_get_vcpu_by_id(kvm, vcpu_id);
+	if (!vcpu)
+		return -ENODEV;
+
+	isel = KVM_DEV_RISCV_AIA_IMSIC_GET_ISEL(type);
+	imsic = vcpu->arch.aia_context.imsic_state;
+	return imsic_mrif_isel_check(imsic->nr_eix, isel);
+}
+
 void kvm_riscv_vcpu_aia_imsic_reset(struct kvm_vcpu *vcpu)
 {
 	struct imsic *imsic = vcpu->arch.aia_context.imsic_state;