From patchwork Mon Apr 3 09:33:10 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Anup Patel <apatel@ventanamicro.com>
X-Patchwork-Id: 13197931
From: Anup Patel <apatel@ventanamicro.com>
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Andrew Jones, Anup Patel,
    kvm@vger.kernel.org, kvm-riscv@lists.infradead.org,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    Anup Patel
Subject: [PATCH v3 8/8] RISC-V: KVM: Implement guest external interrupt line management
Date: Mon, 3 Apr 2023 15:03:10 +0530
Message-Id: <20230403093310.2271142-9-apatel@ventanamicro.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230403093310.2271142-1-apatel@ventanamicro.com>
References: <20230403093310.2271142-1-apatel@ventanamicro.com>

The RISC-V host will have one guest external interrupt line for each
VS-level IMSIC associated with a HART. The guest external interrupt
lines are per-HART resources, and the hypervisor can use the HGEIE,
HGEIP, and HIE CSRs to manage them.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
---
 arch/riscv/include/asm/kvm_aia.h |  10 ++
 arch/riscv/kvm/aia.c             | 241 +++++++++++++++++++++++++++++++
 arch/riscv/kvm/main.c            |   3 +-
 arch/riscv/kvm/vcpu.c            |   2 +
 4 files changed, 255 insertions(+), 1 deletion(-)
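The hunks below hand out guest external interrupt lines from a per-CPU
free bitmap in which bit 0 stays reserved, matching HGEIE/HGEIP bit 0
being hardwired to zero. As a rough user-space sketch of that
allocation discipline (the mock_* names are illustrative stand-ins for
this patch's per-CPU helpers, not kernel code):

/* cc -o hgei-demo hgei-demo.c && ./hgei-demo */
#include <stdio.h>

#define MOCK_NR_HGEI	4	/* pretend HGEIE implements 4 usable bits */

static unsigned long free_bitmap;

static void mock_hgei_init(void)
{
	/* Bits 1..MOCK_NR_HGEI set, bit 0 cleared, as in aia_hgei_init() */
	free_bitmap = ((1UL << (MOCK_NR_HGEI + 1)) - 1) & ~1UL;
}

static int mock_hgei_alloc(void)
{
	int hgei;

	if (!free_bitmap)
		return -1;			/* stands in for -ENOENT */
	hgei = __builtin_ctzl(free_bitmap);	/* lowest set bit, like __ffs() */
	free_bitmap &= ~(1UL << hgei);
	return hgei;
}

static void mock_hgei_free(int hgei)
{
	if (hgei > 0 && hgei <= MOCK_NR_HGEI)
		free_bitmap |= 1UL << hgei;
}

int main(void)
{
	mock_hgei_init();
	printf("alloc -> %d\n", mock_hgei_alloc());	/* 1 */
	printf("alloc -> %d\n", mock_hgei_alloc());	/* 2 */
	mock_hgei_free(1);
	printf("alloc -> %d\n", mock_hgei_alloc());	/* 1 again */
	return 0;
}

Running it prints 1, 2, then 1 again once line 1 has been freed, which
is the lowest-free-line reuse that kvm_riscv_aia_alloc_hgei() gets
from __ffs() on the bitmap.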
diff --git a/arch/riscv/include/asm/kvm_aia.h b/arch/riscv/include/asm/kvm_aia.h
index 1de0717112e5..0938e0cadf80 100644
--- a/arch/riscv/include/asm/kvm_aia.h
+++ b/arch/riscv/include/asm/kvm_aia.h
@@ -44,10 +44,15 @@ struct kvm_vcpu_aia {
 
 #define irqchip_in_kernel(k)		((k)->arch.aia.in_kernel)
 
+extern unsigned int kvm_riscv_aia_nr_hgei;
 DECLARE_STATIC_KEY_FALSE(kvm_riscv_aia_available);
 #define kvm_riscv_aia_available() \
 	static_branch_unlikely(&kvm_riscv_aia_available)
 
+static inline void kvm_riscv_vcpu_aia_imsic_release(struct kvm_vcpu *vcpu)
+{
+}
+
 #define KVM_RISCV_AIA_IMSIC_TOPEI	(ISELECT_MASK + 1)
 static inline int kvm_riscv_vcpu_aia_imsic_rmw(struct kvm_vcpu *vcpu,
 					       unsigned long isel,
@@ -119,6 +124,11 @@ static inline void kvm_riscv_aia_destroy_vm(struct kvm *kvm)
 {
 }
 
+int kvm_riscv_aia_alloc_hgei(int cpu, struct kvm_vcpu *owner,
+			     void __iomem **hgei_va, phys_addr_t *hgei_pa);
+void kvm_riscv_aia_free_hgei(int cpu, int hgei);
+void kvm_riscv_aia_wakeon_hgei(struct kvm_vcpu *owner, bool enable);
+
 void kvm_riscv_aia_enable(void);
 void kvm_riscv_aia_disable(void);
 int kvm_riscv_aia_init(void);
diff --git a/arch/riscv/kvm/aia.c b/arch/riscv/kvm/aia.c
index d530912f28bc..1264783e7c4d 100644
--- a/arch/riscv/kvm/aia.c
+++ b/arch/riscv/kvm/aia.c
@@ -7,11 +7,46 @@
  *	Anup Patel <apatel@ventanamicro.com>
  */
 
+#include <linux/bitops.h>
+#include <linux/irq.h>
+#include <linux/irqdomain.h>
 #include <linux/kvm_host.h>
+#include <linux/percpu.h>
+#include <linux/spinlock.h>
 #include <asm/hwcap.h>
 
+struct aia_hgei_control {
+	raw_spinlock_t lock;
+	unsigned long free_bitmap;
+	struct kvm_vcpu *owners[BITS_PER_LONG];
+};
+static DEFINE_PER_CPU(struct aia_hgei_control, aia_hgei);
+static int hgei_parent_irq;
+
+unsigned int kvm_riscv_aia_nr_hgei;
 DEFINE_STATIC_KEY_FALSE(kvm_riscv_aia_available);
 
+static int aia_find_hgei(struct kvm_vcpu *owner)
+{
+	int i, hgei;
+	unsigned long flags;
+	struct aia_hgei_control *hgctrl = this_cpu_ptr(&aia_hgei);
+
+	raw_spin_lock_irqsave(&hgctrl->lock, flags);
+
+	hgei = -1;
+	for (i = 1; i <= kvm_riscv_aia_nr_hgei; i++) {
+		if (hgctrl->owners[i] == owner) {
+			hgei = i;
+			break;
+		}
+	}
+
+	raw_spin_unlock_irqrestore(&hgctrl->lock, flags);
+
+	return hgei;
+}
+
 static void aia_set_hvictl(bool ext_irq_pending)
 {
 	unsigned long hvictl;
@@ -55,6 +90,7 @@ void kvm_riscv_vcpu_aia_sync_interrupts(struct kvm_vcpu *vcpu)
 
 bool kvm_riscv_vcpu_aia_has_interrupts(struct kvm_vcpu *vcpu, u64 mask)
 {
+	int hgei;
 	unsigned long seip;
 
 	if (!kvm_riscv_aia_available())
@@ -72,6 +108,10 @@ bool kvm_riscv_vcpu_aia_has_interrupts(struct kvm_vcpu *vcpu, u64 mask)
 	if (!kvm_riscv_aia_initialized(vcpu->kvm) || !seip)
 		return false;
 
+	hgei = aia_find_hgei(vcpu);
+	if (hgei > 0)
+		return (csr_read(CSR_HGEIP) & BIT(hgei)) ? true : false;
+
 	return false;
 }
 
@@ -343,6 +383,144 @@ int kvm_riscv_vcpu_aia_rmw_ireg(struct kvm_vcpu *vcpu, unsigned int csr_num,
 	return KVM_INSN_EXIT_TO_USER_SPACE;
 }
 
+int kvm_riscv_aia_alloc_hgei(int cpu, struct kvm_vcpu *owner,
+			     void __iomem **hgei_va, phys_addr_t *hgei_pa)
+{
+	int ret = -ENOENT;
+	unsigned long flags;
+	struct aia_hgei_control *hgctrl = per_cpu_ptr(&aia_hgei, cpu);
+
+	if (!kvm_riscv_aia_available())
+		return -ENODEV;
+	if (!hgctrl)
+		return -ENODEV;
+
+	raw_spin_lock_irqsave(&hgctrl->lock, flags);
+
+	if (hgctrl->free_bitmap) {
+		ret = __ffs(hgctrl->free_bitmap);
+		hgctrl->free_bitmap &= ~BIT(ret);
+		hgctrl->owners[ret] = owner;
+	}
+
+	raw_spin_unlock_irqrestore(&hgctrl->lock, flags);
+
+	/* TODO: To be updated later by AIA in-kernel irqchip support */
+	if (hgei_va)
+		*hgei_va = NULL;
+	if (hgei_pa)
+		*hgei_pa = 0;
+
+	return ret;
+}
+
+void kvm_riscv_aia_free_hgei(int cpu, int hgei)
+{
+	unsigned long flags;
+	struct aia_hgei_control *hgctrl = per_cpu_ptr(&aia_hgei, cpu);
+
+	if (!kvm_riscv_aia_available() || !hgctrl)
+		return;
+
+	raw_spin_lock_irqsave(&hgctrl->lock, flags);
+
+	if (hgei > 0 && hgei <= kvm_riscv_aia_nr_hgei) {
+		if (!(hgctrl->free_bitmap & BIT(hgei))) {
+			hgctrl->free_bitmap |= BIT(hgei);
+			hgctrl->owners[hgei] = NULL;
+		}
+	}
+
+	raw_spin_unlock_irqrestore(&hgctrl->lock, flags);
+}
+
+void kvm_riscv_aia_wakeon_hgei(struct kvm_vcpu *owner, bool enable)
+{
+	int hgei;
+
+	if (!kvm_riscv_aia_available())
+		return;
+
+	hgei = aia_find_hgei(owner);
+	if (hgei > 0) {
+		if (enable)
+			csr_set(CSR_HGEIE, BIT(hgei));
+		else
+			csr_clear(CSR_HGEIE, BIT(hgei));
+	}
+}
+
+static irqreturn_t hgei_interrupt(int irq, void *dev_id)
+{
+	int i;
+	unsigned long hgei_mask, flags;
+	struct aia_hgei_control *hgctrl = this_cpu_ptr(&aia_hgei);
+
+	hgei_mask = csr_read(CSR_HGEIP) & csr_read(CSR_HGEIE);
+	csr_clear(CSR_HGEIE, hgei_mask);
+
+	raw_spin_lock_irqsave(&hgctrl->lock, flags);
+
+	for_each_set_bit(i, &hgei_mask, BITS_PER_LONG) {
+		if (hgctrl->owners[i])
+			kvm_vcpu_kick(hgctrl->owners[i]);
+	}
+
+	raw_spin_unlock_irqrestore(&hgctrl->lock, flags);
+
+	return IRQ_HANDLED;
+}
+
+static int aia_hgei_init(void)
+{
+	int cpu, rc;
+	struct irq_domain *domain;
+	struct aia_hgei_control *hgctrl;
+
+	/* Initialize per-CPU guest external interrupt line management */
+	for_each_possible_cpu(cpu) {
+		hgctrl = per_cpu_ptr(&aia_hgei, cpu);
+		raw_spin_lock_init(&hgctrl->lock);
+		if (kvm_riscv_aia_nr_hgei) {
+			hgctrl->free_bitmap =
+				BIT(kvm_riscv_aia_nr_hgei + 1) - 1;
+			hgctrl->free_bitmap &= ~BIT(0);
+		} else
+			hgctrl->free_bitmap = 0;
+	}
+
+	/* Find INTC irq domain */
+	domain = irq_find_matching_fwnode(riscv_get_intc_hwnode(),
+					  DOMAIN_BUS_ANY);
+	if (!domain) {
+		kvm_err("unable to find INTC domain\n");
+		return -ENOENT;
+	}
+
+	/* Map per-CPU SGEI interrupt from INTC domain */
+	hgei_parent_irq = irq_create_mapping(domain, IRQ_S_GEXT);
+	if (!hgei_parent_irq) {
+		kvm_err("unable to map SGEI IRQ\n");
+		return -ENOMEM;
+	}
+
+	/* Request per-CPU SGEI interrupt */
+	rc = request_percpu_irq(hgei_parent_irq, hgei_interrupt,
+				"riscv-kvm", &aia_hgei);
+	if (rc) {
+		kvm_err("failed to request SGEI IRQ\n");
+		return rc;
+	}
+
+	return 0;
+}
+
+static void aia_hgei_exit(void)
+{
+	/* Free per-CPU SGEI interrupt */
+	free_percpu_irq(hgei_parent_irq, &aia_hgei);
+}
+
 void kvm_riscv_aia_enable(void)
 {
 	if (!kvm_riscv_aia_available())
@@ -357,21 +535,79 @@ void kvm_riscv_aia_enable(void)
 	csr_write(CSR_HVIPRIO1H, 0x0);
 	csr_write(CSR_HVIPRIO2H, 0x0);
 #endif
+
+	/* Enable per-CPU SGEI interrupt */
+	enable_percpu_irq(hgei_parent_irq,
+			  irq_get_trigger_type(hgei_parent_irq));
+	csr_set(CSR_HIE, BIT(IRQ_S_GEXT));
 }
 
 void kvm_riscv_aia_disable(void)
 {
+	int i;
+	unsigned long flags;
+	struct kvm_vcpu *vcpu;
+	struct aia_hgei_control *hgctrl = this_cpu_ptr(&aia_hgei);
+
 	if (!kvm_riscv_aia_available())
 		return;
 
+	/* Disable per-CPU SGEI interrupt */
+	csr_clear(CSR_HIE, BIT(IRQ_S_GEXT));
+	disable_percpu_irq(hgei_parent_irq);
+
 	aia_set_hvictl(false);
+
+	raw_spin_lock_irqsave(&hgctrl->lock, flags);
+
+	for (i = 0; i <= kvm_riscv_aia_nr_hgei; i++) {
+		vcpu = hgctrl->owners[i];
+		if (!vcpu)
+			continue;
+
+		/*
+		 * We release hgctrl->lock before notifying IMSIC
+		 * so that we don't have lock ordering issues.
+		 */
+		raw_spin_unlock_irqrestore(&hgctrl->lock, flags);
+
+		/* Notify IMSIC */
+		kvm_riscv_vcpu_aia_imsic_release(vcpu);
+
+		/*
+		 * Wakeup VCPU if it was blocked so that it can
+		 * run on other HARTs
+		 */
+		if (csr_read(CSR_HGEIE) & BIT(i)) {
+			csr_clear(CSR_HGEIE, BIT(i));
+			kvm_vcpu_kick(vcpu);
+		}
+
+		raw_spin_lock_irqsave(&hgctrl->lock, flags);
+	}
+
+	raw_spin_unlock_irqrestore(&hgctrl->lock, flags);
 }
 
 int kvm_riscv_aia_init(void)
 {
+	int rc;
+
 	if (!riscv_isa_extension_available(NULL, SxAIA))
 		return -ENODEV;
 
+	/* Figure-out number of bits in HGEIE */
+	csr_write(CSR_HGEIE, -1UL);
+	kvm_riscv_aia_nr_hgei = fls_long(csr_read(CSR_HGEIE));
+	csr_write(CSR_HGEIE, 0);
+	if (kvm_riscv_aia_nr_hgei)
+		kvm_riscv_aia_nr_hgei--;
+
+	/* Initialize guest external interrupt line management */
+	rc = aia_hgei_init();
+	if (rc)
+		return rc;
+
 	/* Enable KVM AIA support */
 	static_branch_enable(&kvm_riscv_aia_available);
@@ -380,4 +616,9 @@ int kvm_riscv_aia_init(void)
 
 void kvm_riscv_aia_exit(void)
 {
+	if (!kvm_riscv_aia_available())
+		return;
+
+	/* Cleanup the HGEI state */
+	aia_hgei_exit();
 }
diff --git a/arch/riscv/kvm/main.c b/arch/riscv/kvm/main.c
index 6396352b4e4d..b0b46f48f31e 100644
--- a/arch/riscv/kvm/main.c
+++ b/arch/riscv/kvm/main.c
@@ -116,7 +116,8 @@ static int __init riscv_kvm_init(void)
 	kvm_info("VMID %ld bits available\n", kvm_riscv_gstage_vmid_bits());
 
 	if (kvm_riscv_aia_available())
-		kvm_info("AIA available\n");
+		kvm_info("AIA available with %d guest external interrupts\n",
+			 kvm_riscv_aia_nr_hgei);
 
 	rc = kvm_init(sizeof(struct kvm_vcpu), 0, THIS_MODULE);
 	if (rc) {
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 30acf3ebdc3d..eace51dd896f 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -249,10 +249,12 @@ int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
 
 void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
 {
+	kvm_riscv_aia_wakeon_hgei(vcpu, true);
 }
 
 void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu)
 {
+	kvm_riscv_aia_wakeon_hgei(vcpu, false);
 }
 
 int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
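A note on the probe in kvm_riscv_aia_init() above: it works because
HGEIE is a WARL CSR in which only the GEILEN bits implemented by the
platform hold a written 1, and bit 0 always reads zero. A
self-contained mock of that discovery sequence (the mock_* helpers and
the GEILEN value are assumptions for illustration, not kernel or
hardware APIs):

/* cc -o hgeie-probe hgeie-probe.c && ./hgeie-probe */
#include <stdio.h>

/* Pretend the platform wired up GEILEN = 7 guest lines: bits 1..7 */
#define MOCK_HGEIE_WRITABLE	0xfeUL

static unsigned long mock_hgeie;

static void mock_csr_write(unsigned long val)
{
	/* WARL behaviour: only implemented bits retain a written 1 */
	mock_hgeie = val & MOCK_HGEIE_WRITABLE;
}

static unsigned long mock_csr_read(void)
{
	return mock_hgeie;
}

static unsigned long mock_fls_long(unsigned long x)
{
	/* Position of the highest set bit, 1-based; 0 if none set */
	return x ? 8 * sizeof(x) - __builtin_clzl(x) : 0;
}

int main(void)
{
	unsigned long nr_hgei;

	mock_csr_write(-1UL);		/* csr_write(CSR_HGEIE, -1UL) */
	nr_hgei = mock_fls_long(mock_csr_read());
	mock_csr_write(0);		/* csr_write(CSR_HGEIE, 0) */
	if (nr_hgei)
		nr_hgei--;		/* bit 0 is never usable */

	printf("guest external interrupt lines: %lu\n", nr_hgei);	/* 7 */
	return 0;
}

With GEILEN = 7 the mock prints 7, which is the count that the main.c
hunk would report at boot as "AIA available with 7 guest external
interrupts" on such a platform.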